Yana Chidiebele has been captured by an evil scientist who has burdened her with a terrible curse: With his incredible science-powers, he has made a grisly machine that detects every time Yana scratches her left butt cheek, and then horribly murders a million sweet old ladies and cute babies! Even though scratching your butt usually doesn’t seem like a very bad thing, due to these contingent facts about Yana’s imaginary world, it turns out that it is an extremely terrible thing to do. I hope you agree that, in this imaginary world, it would be wrong for Yana to scratch her left butt cheek.
But now suppose that you wake up tomorrow in the actual, real, genuine world (yes, fr), and find out that this sadistic maniac of a scientist is not just a mere figment of my twisted imagination, but is in fact flesh-and-blood-real. To make matters worse, he has cursed you, my valued reader, with the horrible burden Yana had to carry before. The fact that the mechanism is now actual, rather than imaginary, shouldn’t change the moral character of the situation. That means that even though, prior to learning these contingent empirical facts, scratching your butt seemed totally fine, it actually turns out to be incredibly wrong. The lesson here is that whether or not some action is wrong, or state of affairs bad, in a given possible world doesn’t depend on whether that world is the actual world.
Sadly, this lesson seems to have been lost on some. Without mentioning any names, I’m specifically thinking of the recent punching bag of animal-welfarist Substack, Lyman Stone, who has produced an impressive barrage of terrible arguments for not caring about shrimp. I won’t address most of his arguments here, since much digital ink has been spilt doing so elsewhere. Rather, I’ll focus on a particularly embarrassing argument he makes here. Basically, he argues that, given certain assumptions that animal welfarists would probably grant, we get the conclusion that:
> Animal welfarists likely believe, explicitly or implicitly, that 100% of human welfare gains from social progress since 1800 have been offset by lost animal welfare.
This is because humans have exploited animals on an industrial scale over the past 200-or-so years, meaning that if animals matter, this might very well outweigh the gains in human welfare. This is then supposed to be a reductio of the animal-welfarist position. I won’t challenge any of the assumptions he makes, or any of his calculations. Rather, I just think that the inference he makes is akin to insisting that it can’t be wrong to scratch your butt in the above example, and so inferring that the murders won’t happen, or that the people killed are actually morally worthless.
The reason it seems so obvious that scratching your butt right now is a-okay is that you presume that there are no negative “externalities” to this action. But once you are made aware of the extra fact that an evil scientist turns this action into a horrible act of genocide, that assumption is defeated, and you can no longer trust your initial judgement. Likewise, the reason it seems obvious that social progress has been a net good is that we usually focus only on the human part of the equation and don’t consider externalities. When we assume that there haven’t been negative externalities, it is indeed pretty obvious that social progress has been good. But realizing that human civilization has caused extreme amounts of suffering to other species defeats this assumption, and so you can no longer trust your prima facie judgement.
Or diagnosed a little more carefully, Lyman seems to make the following inference:
1. Social progress is obviously a net good.
2. If animals deserve moral consideration, then social progress is not obviously a net good.
3. Therefore, animals do not deserve moral consideration.
But this argument is obviously fallacious! Your justification for (1) is that you assume (3) is true, and so you cannot infer from (1) to (3). The problem is that we do not have direct insight into the moral valence of complicated issues in the real world, but must infer it from moral principles plus empirical data. Depending on your metaethics, you might think that we have insight into the valence of acts and situations given that we stipulate the facts in a thought experiment (e.g. that some action causes pain in a conscious agent and produces no positive outcome), from which we extrapolate more general principles (i.e. normative ethical theories). Or maybe you think we construct these principles from our attitudes, or some third thing.
Either way, the only way we can come to a moral judgement about some situation is by assuming that the non-moral facts are a certain way (e.g. that shrimp aren’t conscious). If it then turns out upon further empirical investigation that the facts are probably different than first assumed (e.g. that shrimp are conscious), then we should no longer have any confidence in our previous judgement—and we definitely shouldn’t throw out the empirical evidence or moral principles in order to preserve the moral judgement. After all, we already knew that in the possible world where, say, shrimp are conscious and suffer 1/5 as much as humans, and where eating shrimp causes them harm, eating shrimp would be wrong. Merely finding out that that world is the actual world should in no way change our attitude towards it. So it’s just completely inane to point to the surface-level appearance of goodness of social progress and infer all sorts of contentious conclusions about the moral worth of certain creatures, as the valence of social progress is epistemically downstream of the facts about the moral worth of creatures.
I actually don’t think this type of mistake is peculiar to Lyman Stone. Rather, many people seem opposed to accepting that some seemingly innocuous practice is actually bad, sooner giving up moral principles they would gladly have accepted had they been thinking about some distant possible world rather than the actual one. Unsurprisingly, this seems most common when it comes to animal welfare and mealtime, but people generally seem unwilling to acknowledge that their purchasing decisions (or lack thereof, when it comes to charity) could be deeply immoral, as they just seem so banal when performed.
On a less polemical note, friend of the blog Amos Wollen recently-ish made an argument from deontology to theism. Basically, it turns out that if deontology is right, it might be wrong ever to move, since you risk causing many people’s deaths through the butterfly effect and all that. But if theism is true, God could make it such that you wouldn’t cause thousands of deaths by shaking your head. So because it’s super implausible that it’s wrong to move at all, deontologists should be theists.
I think Richard Y Chappell correctly diagnosed this argument as making the mistake described in this post. That is, conditional on deontology, you should judge that it would be wrong to move in the possible world with these complicated causal chains. When it then turns out that the best evidence is that our world does contain these chains, you should then think that it’s wrong to move, since the moral valence of moving in some world doesn’t depend on whether that world is actual—only the content of the world matters.
Perhaps you can’t accept that moving the tiniest bit is incredibly immoral (judging from the fact that you clicked on this post, you don’t). In that case you should simply think that deontology is false (or that some step in the argument is wrong). After all, all you have found out is that the world where it’s wrong to move is the actual world, but which moral theory is true (if any) shouldn’t depend on contingent facts about the causal structure of the actual world.
In either case, the wrong inference to make is to theism, as whether it’s permissible to move in the actual world isn’t a data point. What is a data point is whether it’s permissible to move in a world with such-and-such features, and you can make inferences from that. If you discover that the actual world has such-and-such features, you can also make inferences about whether you may actually move. But you can’t know a priori that you may actually move, and so you can’t infer that the world doesn’t have such-and-such features.
This leads into a small technicality about what I’ve been saying. The thing is, conditional on your already accepting some sort of axiarchism, such as theism (or really any view on which the moral facts predict non-moral facts), you can infer that some really morally implausible possible world is not the actual world. But this inference only works with axiarchism in the background, and you can’t make the reverse inference that Amos is trying to get away with.
An analogy: You have strong reason to believe theism. Additionally, you think that theism strongly predicts that a stag burning to a crisp all alone in a forest fire would be turned into a P-zombie by God so it doesn’t suffer unnecessarily. Thus you infer that stags do turn into P-zombies in such cases. This is all well and good, given the above stipulations. But now you notice a remarkable fact about the world: Stags are turned into P-zombies when they burn in forest fires. What an incredible discovery! This is super implausible given atheism, but strongly predicted given theism, so you infer that theism is very probably true. This last step is clearly wrong, as the P-zombie fact was an inference from a background theory, and so cannot then be used to reconfirm that theory.
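The double-counting here can be made vivid with a toy Bayesian calculation (all the numbers below are illustrative assumptions of mine, not anything from the argument itself): if the P-zombie fact were observed independently, Bayes’ theorem would dramatically boost theism; but if it was merely deduced from theism, the correct update is no update at all.

```python
# Toy illustration of double-counting evidence.
# All priors/likelihoods are made-up numbers for illustration only.

prior_theism = 0.5            # P(T): prior credence in theism
p_zombie_given_theism = 0.9   # P(Z | T): theism strongly predicts the P-zombie fact
p_zombie_given_atheism = 0.01 # P(Z | ~T): very implausible on atheism

def posterior(prior, like_t, like_not_t):
    """P(T | Z) by Bayes' theorem."""
    evidence = like_t * prior + like_not_t * (1 - prior)
    return like_t * prior / evidence

# Legitimate case: Z is observed independently of the theory,
# so conditioning on it genuinely raises P(T).
p_after_real_observation = posterior(
    prior_theism, p_zombie_given_theism, p_zombie_given_atheism
)

# Illegitimate case: Z was *inferred from* theism, so "noticing" it
# carries no information beyond what the prior already contained --
# the correct posterior is just the prior.
p_after_circular_inference = prior_theism

print(round(p_after_real_observation, 3))   # ~0.989
print(p_after_circular_inference)           # 0.5
```

The gap between the two numbers is exactly the illicit boost the analogy is pointing at: treating a theory-derived "fact" as fresh data launders the prior back into itself.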
Likewise, Amos can infer, given deontology and theism, that we live in a world where we may move. But atheism, given deontology, predicts that we may not move. And since we have no direct insight into whether we may move in the actual world, an atheist should simply infer that we may not move, given deontology, rather than that theism is true.
You might think that this provides an out for Lyman: Being a Christian, he shouldn’t expect God to allow social progress to be a net negative. But not so fast! He will also need to provide a theodicy to explain why God allows humans to do incredibly bad things to each other, and it seems highly suspect to think that the best such theodicy would not also predict that we could do great harm to animals. Whatever could explain why humans are allowed to do things like the Holocaust would surely also predict that we would be allowed to do the shrimp-Holocaust.
The "literally all people are so morally bad that God will justly torture them for eternity barring some exceptional intervention but also any normal human cultural practice is presumptively moral" stance is a very odd one.
> but people generally seem unwilling to acknowledge that their purchasing decisions (or lack thereof, when it comes to charity) could be deeply immoral, as they just seem so banal when performed
This is an interesting subject. I've noticed that people - even the most politically unengaged, uncritical layperson who's never taken an ethics class in their life - seem pretty willing to accept and understand your position if you say something like: "I'm not going to buy shoes from Nike because they use sweatshops, and I'm not going to buy products from Nestle because they abuse poor people". They might even sheepishly acknowledge that doing so themselves is a kind of moral weakness. But I think if you said the same about meat, because of animal welfare, there's a lot of people who will speak out against that and stereotype you as a judgy vegan.
It seems to me that there's a ton of cultural baggage around the idea of not eating meat for ethical reasons. (For ethical reasons specifically - people who are vegetarian because they are Buddhist don't face this kind of backlash.) And I think the well of discourse on that topic has been deeply poisoned, even in more academic circles.
All that said, whether it's Nestle or meat, people are very hard to convince to *actually* give it up, no matter how many sheepish acknowledgements they make. There is something lacking in the moral motivation there.