In a recent post, fellow blogger Bentham’s Bulldog argued that even a utilitarian should worry about animals having rights, since even utilitarians aren’t certain that deontology is false, and so this gives us extra reason to be vegans. This is also something he has argued for previously. While I agree on the conclusion—that we should be vegans—I don’t think this line of argument from moral risk should move us very much.
Before getting into the weeds, I just want to add that this is nothing personal—in fact I am a big fan of Matthew’s (as you can probably tell if you have read my blog previously), and everyone reading this should go subscribe to his blog immediately if you aren’t already. But despite our almost complete convergence on most philosophical issues (owing to us both having godlike intelligences), I think he gets this one quite wrong.
The Argument from Moral Risk
Let me first try to recap his argument. Essentially, even if you are very confident in utilitarianism, you probably still have some non-trivial credence in some form of deontology, and some non-trivial credence that, given deontology, animals have rights (or that you have a duty not to kill them). Let’s say that you have a 10% credence in deontology and a 10% credence in animals having rights given deontology, meaning you have a 1% credence in animal rights (and assume that buying meat amounts to killing animals).
Matthew now tells us to use the standard procedure for making decisions under uncertainty: calculating the expected outcome. You do this by summing the values of each possible outcome multiplied by their respective probabilities. Now, for simplicity let’s say that there are three possible outcomes for action A (say, paying someone to slaughter a cow for you): (a) utilitarianism is correct, (b) deontology is correct and the cow has no rights, and (c) deontology is correct and the cow has rights. Let’s say that on (a) the outcome is +10 utils (perhaps the cow has a good life on the whole), on (b) it is 0 (since deontology doesn’t care about things without rights), and on (c) it’s the equivalent of -10000 utils, since breaking rights is really bad on deontology. Plugging the numbers in, we get:
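$$0.9 \cdot 10 + 0.09 \cdot 0 + 0.01 \cdot (-10000) = 9 + 0 - 100 = -91$$

(Here I am taking the remaining 90% of our credence to be in utilitarianism, and the 9% to be in deontology-without-cow-rights, per the numbers above.)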
So even if we are highly confident in utilitarianism, and the animals we are eating are living good lives, the expected outcome of our actions is still very low. In other words, you can be very confident that an action is not wrong, and still think that the “expected rightness” of the action is negative, meaning it would be wrong to perform it regardless—on the slim chance that it is wrong, it would be very wrong. These numbers are of course plucked out of thin air, but you get the idea.
If we are not comfortable with converting rights-violations to utils, we can also just look at the rights part of the equation. The average person eats around 7000 animals throughout their life. Given the 1% odds of animal rights above, this is equivalent, in expectation, to violating the right to life of 70 moral patients. Even if we suppose that all these 7000 animals had lives worth living (which is highly unlikely), it is still equivalent to a scenario like this:
(Deontology is true) You are a very skilled midwife, and throughout your career you help give birth to 7000 humans with lives worth living, who would not have made it otherwise. You have some… peculiar tastes, though, and so throughout your career you decided to kill and eat 70 of the babies (of course only after they have had a few months of good life—you’re no monster, after all).
Given deontology the killing and eating of babies seems to completely outweigh the good you may have done. At the same time, this is roughly what even the extremely confident utilitarian should take themselves to be doing if they eat “ethically sourced” meat.
Problems Rearing their Ugly Heads
This all sounds well and good, but I think there is a big flaw with this approach: that is not how you should calculate expected outcome under uncertainty on the normative-ethical level. To see this, let’s discuss some absurdities this method leads to.
Intertheoretical Comparison of Value
If you are like me, you were probably a bit skeptical when I converted rights-violations to utils. I of course said that we could also make the argument without doing such a conversion, but I was… well, lying (luckily my credence in Kant’s ethics is exactly 0). The thing is, when deciding which action to take, you (ideally) have to take all relevant considerations into account; I can't argue that dentists act wrongly all the time on the grounds that they cause pain to their patients, because I also have to take into account the good of what they are doing. Likewise, when considering the rightness of an action intertheoretically, I cannot say it’s wrong due to it possibly violating rights, since I don’t know to what extent utilitarian considerations outweigh it. So I will need to find some way to weigh up the values of different theories on a single scale.
But this looks like an impossible task, I think. What is the exchange rate between the pain of 10 kicks in the groin on hedonistic utilitarianism as opposed to the rights-violation of stealing someone’s phone on patient-centered deontology? You might object that some theories already do this. For example, threshold-deontology holds that rights/duties can in principle be outweighed by consequences. The problem is that these theories do this within a single theory, but we are asking for comparisons across theories. In fact this raises an even more problematic point:
Consider two threshold-deontologies. On TD1, one violation of a right to life can be outweighed by the pleasure generated by the saving of 1000 lives, whereas on TD2, one such violation can only be outweighed by the saving of 10000 lives. How do we compare the value of rights and utility on these two theories? Is the difference that pleasure is worth 10x as much on TD1, or that rights are worth only a tenth as much on TD1? It looks like there is no right answer—it doesn’t even make sense to make this sort of comparison, I think. But we need to be able to make it in order to find the expected outcome of our actions under normative uncertainty, given the model above.
Let’s say we have an action that we are considering whether to perform: kill 1 person to save 5000. We now convert rights and pleasure into theory-neutral units (TUs). Suppose first that we hold the value of pleasure constant across theories: 1 TU = 1 P, where P is the amount of pleasure produced by saving a single life. This means that a rights-violation on TD1 equals 1000 TU, and on TD2 it equals 10000 TU. We then get:
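Writing the disvalue of a rights-violation as a negative number, killing 1 to save 5000 comes out at $5000 \cdot 1 - 1000 = +4000$ TU on TD1 and $5000 \cdot 1 - 10000 = -5000$ TU on TD2. If, purely for illustration, we split our credence evenly between the two theories, the expected outcome is

$$0.5 \cdot 4000 + 0.5 \cdot (-5000) = -500 \text{ TU},$$

so we should not kill.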
Suppose now instead that we hold the value of rights-violations constant across theories, say 1 R = -1000 TU, where R is one rights-violation. That means that one life’s worth of pleasure on TD1 equals 1 TU, whereas it equals 0.1 TU on TD2. We then get:
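TD1 still gives $5000 \cdot 1 - 1000 = +4000$ TU, but TD2 now gives $5000 \cdot 0.1 - 1000 = -500$ TU. With the same illustrative 50/50 credences, the expected outcome is

$$0.5 \cdot 4000 + 0.5 \cdot (-500) = +1750 \text{ TU},$$

so we should kill.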
So we get different outcomes depending on whether we hold the value of pleasure or the value of rights constant across theories. But what reason could we possibly have for deciding which is constant across theories? This is all further complicated when we add more kinds of value into the equation.
The paper In Defence of My Favourite Theory by Johan E. Gustafsson and Olle Torpman also does a nice job outlining problems with these sorts of intertheoretical comparisons.
Decision Dominators
A perhaps more serious problem for this model is that it means that no one should be a consequentialist, almost regardless of how confident they are in the view. This is because some views are what I’ll call “decision dominators”. These completely dominate the decision-making process of anyone who has a non-zero credence in them.
One such view is rights-absolutism. On this view, consequentialist considerations cannot in principle outweigh rights-violations—rights considerations have lexical priority over consequentialist considerations. Now, this sort of view is incredibly implausible, I think, but still I think almost no one has literally zero credence in the view. If you had zero credence in it, that would mean that even if God transported you to heaven and told you, to your face, that it was correct, as well as gave you 10 supremely persuasive arguments for the view, you would not be moved one bit. It is not even that you would have to end up believing the view—but surely your credence would at least increase a little. Or if not for this exact view, then surely for some other decision dominating view. Would you really bet your entire life-savings and an eternity in hell on no decision dominating view being correct, for the possibility of winning $1? If you have a credence of zero in such a view being correct, then that should be easy money!
But a non-zero credence is all that is needed for this view to dominate your decision-making process. Since rights have absolute priority, an action having even the slightest chance of violating a right, regardless of the unimaginably good consequences it would produce, would make that action immediately disqualified, even if you were 99.9999% confident in utilitarianism.
Even apart from such extreme views, more moderate forms of deontology still give a great deal of weight to rights (for one intended killing to be permissible, it must save hundreds, thousands, if not millions of lives), meaning that even a moderate credence in deontology (say 0.05) implies that in almost all circumstances you come across, you should be acting like a deontologist, which is bad news for people who want to be utilitarians.
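To put some made-up numbers on it (using the same sort of cross-theory conversion I complained about above, which Matthew’s model needs anyway): suppose you have a 0.95 credence in utilitarianism and a 0.05 credence in a moderate deontology on which an intended killing is as bad as the loss of a million lives. Then killing one person to save N lives has an expected outcome of roughly

$$0.95 \cdot N + 0.05 \cdot (N - 1{,}000{,}000) = N - 50{,}000$$

lives’ worth of value, which only comes out positive once the killing saves more than about 50,000 lives.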
Now, Matthew says that his argument isn’t supposed to lead to taking very small risks seriously, but I just don’t see why we shouldn’t. On his model, as I read it, it follows quite straightforwardly that we should take these small probabilities into account. I don’t see a principled reason why we can discount these decision dominators on his view, meaning we should all become absolutist deontologists.
Perhaps there is just some cutoff point where we stop taking small risks seriously, but that seems horribly arbitrary and like a huge cost to the theory. Furthermore, imagine you are playing roulette on a wheel with n options, numbered 1 to n. Landing on an option results in a number of people equal to its number being killed. Now, as you increase n, the expected outcome of playing becomes worse and worse. That is, until 1/n drops below the threshold for discounting probabilities, at which point the expected outcome becomes neutral. This seems absurd! You are certain that someone will be killed, yet you should expect that no one will be killed, since each individual outcome gets discounted to a credence of 0.
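To make this concrete: without any cutoff, the expected number of deaths from one spin is

$$\frac{1}{n}(1 + 2 + \dots + n) = \frac{n+1}{2},$$

which grows without bound as $n$ increases. But once $1/n$ falls below the threshold, every individual outcome is discounted to zero and the expected outcome snaps to neutral, even though at least one death is guaranteed.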
Or imagine that you have a wheel where n is such that 1/n is below the threshold. In this case, all options do nothing, except option 1, which kills 1 person. The expected outcome of playing once is neutral. But if you play sufficiently many times, the probability of someone dying rises above the threshold, meaning the expected outcome becomes negative. So each individual spin is neutral, yet the conjunction of them is negative, which also looks absurd.
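In symbols: a single spin kills someone with probability $1/n$, which sits below the threshold, but over $m$ spins the probability that someone dies at some point is $1 - (1 - 1/n)^m$, which climbs above any fixed threshold once $m$ is large enough.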
Finally, suppose that the outcome of landing on 1 is that you lose n+1 dollars, and if you don’t land on 1, you win 1 dollar. On the cutoff view you should take this bet, even though it will on average lose you money. In fact, you should be willing to pay up to $1 to play.
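The true expected value of a single play is

$$\frac{n-1}{n} \cdot 1 + \frac{1}{n} \cdot \big(-(n+1)\big) = -\frac{2}{n},$$

which is slightly negative, whereas the cutoff view treats the loss as having probability 0 and so values the play at roughly one dollar of profit.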
Of course, there might be some way of ignoring these very low probabilities that doesn’t fall prey to these sorts of absurdities, though I’m skeptical.
A Difference Between Expected Rightness and Action-Guiding Rightness?
Or perhaps I am misinterpreting what Matthew’s argument actually is. Perhaps he wants to say that you should still perform the action that is most likely to be right—it’s just that that action might also have an incredibly negative “expected rightness”. For example, it might be that you are 99% sure that it would be right to push the man in the footbridge variation of the trolley problem, meaning you should do it, but if it’s wrong, it’s extremely wrong, meaning the overall expected rightness is negative. Two points on this:
First, I am just not sure what this “expected rightness” is even tracking at this point then. It looks to me like expected outcome is supposed to be action-guiding, telling us what is the best action to perform. Here it is no longer doing that. But what is this a measure of then? How bad I should feel about myself for doing the right thing? I just don’t know how to make sense of it.
Secondly, this would seem to undermine the point of bringing up moral risk at all. After all, I interpret Matthew as raising moral risk as an argument for becoming a vegan. But if this expected rightness does not tell us anything about what to do, then saying that eating meat has lower expected rightness than we might initially think should have no effect on how we conduct ourselves, meaning it would be no argument for veganism at all!
Now, I don’t think this is what Matthew means to argue for, but I just wanted to cover it in case I’m mistaken.
My View
I can of course criticize as much as I want, but if I don’t come up with an alternative, Matthew’s view just wins by default. Luckily, I think there are more plausible alternatives. For one, the paper mentioned earlier gives the following account:
An option 𝑥 is a morally conscientious choice for 𝑃 in 𝑆 if and only if
𝑥 is permitted by a moral theory 𝑇 such that
𝑇 is in the set 𝑈 of moral theories that are at least as credible as every moral theory for 𝑃 in 𝑆 and
𝑃 in 𝑆 has not violated 𝑇 more recently than any other moral theory in 𝑈, and
there is no option 𝑦 and no moral theory 𝑇′ such that
𝑇′ permits 𝑦 and 𝑇′ does not permit 𝑥 and
there is no moral theory 𝑇″ such that 𝑇″ is at least as credible as 𝑇′ for 𝑃 in 𝑆 and 𝑇″ permits 𝑥 and 𝑇″ does not permit 𝑦.
If all else fails, I think this account is better than the one discussed above. But I also think we can do even better, since it seems pretty contrived, built to avoid objections that I think we can avoid more simply. My account is basically this:
The intertheoretical “rightness” of an action a for an agent in a given situation is calculated by the following formula:
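$$R(a) = \sum_i P_{T_i} \cdot V_{T_i}(a)$$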
Here $P_{T_i}$ is the agent’s credence in theory $T_i$, and $V_{T_i}(a)$ is a value from 0 to 1 that $T_i$ ascribes to action $a$.
This notion of ascribed value should probably be expanded a bit. What is meant is the degree to which theory T prefers a. Say you are a utilitarian, and a1 has an expected outcome of 9 net utils while a2 has one of 1 net util. In that case $V_T$ for a1 is 0.9, and $V_T$ for a2 is 0.1. If T is rights absolutism, and a1 kills someone intentionally while a2 produces a million utils, then $V_T$ for a1 is 0 and $V_T$ for a2 is 1. And if T is rights absolutism, and a1 kills two people while a2 kills one person, then $V_T$ for a1 is 1/3 and $V_T$ for a2 is 2/3.
With this model, the rightnesses of all possible actions available to an agent at a given time add up to 1, we get a measure of the rightness of each action, and we are able to deliberate about actions intertheoretically. Importantly, this model avoids having to compare values across theories, since we only ever need the degree of preference within a theory. Furthermore, we avoid the problem of decision dominators, because no theory can have an influence greater than your credence in it. A way to look at it is that normative-ethical credences come before descriptive credences.
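If it helps, here is a minimal sketch of how the calculation might be run, with made-up theories, credences, and preference numbers; the normalization step (dividing each theory’s raw preferences by their sum) is just one way of getting within-theory values that sum to 1.

```python
# Minimal sketch of the model above, with made-up numbers.
# Each theory reports raw degrees of preference over the available actions;
# these are normalized *within* that theory (so they sum to 1), then weighted
# by the agent's credence in the theory. No value ever crosses theory boundaries.

credences = {"utilitarianism": 0.8, "deontology": 0.2}  # credences in the theories

raw_preferences = {                                     # within-theory preferences
    "utilitarianism": {"a1": 9, "a2": 1},               # a1 yields 9 net utils, a2 yields 1
    "deontology":     {"a1": 0, "a2": 1},               # a1 violates a right, a2 does not
}

def rightness(action: str) -> float:
    """Intertheoretical rightness: credence-weighted sum of normalized within-theory values."""
    total = 0.0
    for theory, credence in credences.items():
        prefs = raw_preferences[theory]
        value = prefs[action] / sum(prefs.values())     # the theory's V for this action
        total += credence * value
    return total

for a in ("a1", "a2"):
    print(a, round(rightness(a), 3))
# Prints: a1 0.72, a2 0.28; the 80% credence in utilitarianism carries the day,
# and the deontological worry can never outweigh more than its own 20% credence.
```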
With this view of the model in mind, it doesn’t just look like a contrived model to avoid the problems from before—it also makes intuitive sense, I think. After all, what is it that we are calculating when calculating expected outcome? Well, it's the value of an outcome, multiplied by its probability. But this value is theory-relative, so it makes sense that it should not be able to cross normative-theoretical boundaries in probability space.
I also think this model naturally avoids the objections raised to the theory in the paper mentioned earlier. This article is already long enough, and I’m tired, so I don’t want to go into it now, but I’m pretty certain it does (trust me, bro). I will mention one of the objections, though:
Imagine that you are almost equally confident in T1 and T2, though slightly more confident in T1. You are faced with the choice between 2 actions. T1 ever so slightly prefers a1 to a2, but T2 has an extreme preference for a2 over a1. We can model it something like this:
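With some illustrative numbers of my own (chosen only to fit that description): say $P_{T_1} = 0.51$ and $P_{T_2} = 0.49$, with $V_{T_1}(a_1) = 0.51$, $V_{T_1}(a_2) = 0.49$, $V_{T_2}(a_1) = 0.01$ and $V_{T_2}(a_2) = 0.99$. Then

$$R(a_1) = 0.51 \cdot 0.51 + 0.49 \cdot 0.01 \approx 0.27, \qquad R(a_2) = 0.51 \cdot 0.49 + 0.49 \cdot 0.99 \approx 0.73.$$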
Intuitively, we should choose a2, because a1 is just so much worse on T2, and you are almost neutral between the theories. But on the view from the paper mentioned, where “my favourite theory” automatically wins, we should choose a1. My model gives the “right” answer, though.
And of course most importantly in this context: this model means that utilitarians should rarely if ever care about rights, assuming they are pretty confident in utilitarianism, meaning the argument from moral risk shouldn’t do much to persuade utilitarians to veganism (though they should of course be vegans for other reasons). Now, if you are already a deontologist and uncertain about whether animals have rights, then you should buy arguments from moral risk, since we are here talking about descriptive uncertainty rather than normative uncertainty, meaning standard expected-outcome reasoning applies.
Conclusion
I hope I haven’t sounded too mean in this article, as that is certainly not my intention, and I hope I have given a fair treatment to Matthew’s views! I am very fond of him and his stuff, and he is a big inspiration for my writing, as well as the primary reason why I currently have more subscribers than family members (thank you)! But these sorts of arguments from moral risk have been bothering me for some time, so I thought it was about time I said something about it—no wrong take shall go unpunished!