For a few years I have been in a dogmatic threshold-deontological slumber, but over the past 6-ish months I’ve been reconsidering this commitment, and am finding myself more and more attracted to some sort of consequentialism.1 Here I’ll try to report some of the stuff that has made me reconsider things, and made me more sympathetic to Big Utility.
There are of course many, nay myriad, nay manifold, arguments to be given back and forth; and ink has been spilt by the bucketful doing exactly this. While there will be a bit of the standard type of argumentation here as well, I would really like to focus more on the deeper reason why I think consequentialism might very well be true: it gets morality fundamentally right in a way that other views don’t and can’t.
Getting the Boring Stuff out of the Way
I haven’t even said what consequentialism is yet, though, and because you’re so damn stupid, I suppose I will have to explain it to you. Very well. I quite like Philip Pettit’s characterization in his entry on consequentialism in Blackwell’s A Companion to Ethics. The idea is that on consequentialism, the right action is always the action that best promotes the good, whereas for non-consequentialist views it will sometimes be right to honor the good rather than promote it. For example, a non-consequentialist might say that you shouldn’t murder one person to save five, even though it would be better for the five to survive than for the one. The reason is that by murdering someone, you’re not honoring what’s good, even if doing so results in more good overall.
This is kinda vibes-based, but here’s a more precise definition of consequentialism: out of some set of possible actions, it is right to take (one of) the action(s) that brings about the best consequences in expectation.2 Specifically, I mean maximizing consequentialism: the best consequences are those containing the most good, so you should take the action with the highest expected value. I might additionally be tempted to say that rightness is on a spectrum rather than binary, and that actions are right in proportion to the good they’re expected to produce. But that’s a small detail; summa summarum: out of any two actions, you should morally prefer the one that is expected to produce the most good.
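For the formally inclined, here’s a minimal way of writing that down. The notation is just mine, a sketch rather than anything canonical: A ranges over the available actions, O over possible outcomes, P(O | A) is the probability of outcome O given that A is performed, and V is whatever value function the correct axiology supplies.

\[
\mathrm{EV}(A) \;=\; \sum_{O} P(O \mid A)\, V(O),
\qquad
A \text{ is right} \iff A \in \arg\max_{A'} \mathrm{EV}(A').
\]

On the spectrum version I floated, you would instead rank actions by EV and count them as more or less right accordingly, rather than drawing a hard line at the maximum.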
To fill out this theory we also need a theory of what the good is. I’ll discuss this a bit later, but for now I’ll spoil the answer (read at your own risk): I tentatively think that the good is the satisfaction of desires.
Getting Morality Right
We’re then back to my central claim: that consequentialism gets morality right. What I’m getting at here is that morality is essentially about moral subjects: the reason I should act morally is that I should care about other conscious beings. I think this should be pretty intuitive. If you hit me in the face, or don’t subscribe *cough cough*, the reason it’s wrong is that someone (me) was seriously harmed or had their interests thwarted; it doesn’t have anything to do with duties, rules, or anything else over and above that.
But any theory that isn’t consequentialist will necessarily end up sometimes favoring other considerations over the interests of subjects. For example, a deontology will sometimes tell you not to do what would be in people’s interests because doing so would violate some duty. In that case, you are weighing duties over the interests of subjects.
“How can I know that any non-consequentialist theory will necessarily weigh other things over the interests of moral subjects?” you ask with typical disdain and incredulity. Well, we can simply consider a consequentialism that identifies the good with the interests of moral subjects. Any deviation from this view will involve weighing something higher than these interests, since it will sometimes require that you let them be thwarted in favor of some other consideration. But by definition any non-consequentialist view will have to deviate from this view, and must then sometimes weigh something over these interests.
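To make the structure of that explicit (again, just my own bookkeeping, not a knock-down proof), write I(a, c) for how well action a serves the interests of the moral subjects in case c:

\[
\begin{aligned}
&C_I \ (\text{interest-consequentialism}): \text{ in any case } c, \text{ it is right to take some } a \in \arg\max_{a'} I(a', c).\\
&\text{If a theory } T \text{ is non-consequentialist, its verdicts must deviate from } C_I\text{'s in some case } c,\\
&\quad\text{i.e. } T \text{ requires an action } a \notin \arg\max_{a'} I(a', c).\\
&\text{So in } c, \text{ whatever } T \text{ cites in favor of } a \text{ is being weighed over the subjects' interests.}
\end{aligned}
\]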
This fundamental attitude of sometimes letting the interests of subjects be outweighed just seems to fly in the face of morality to me. When asked why you didn’t take some series of actions that would have made everyone better off, saying “I would have been violating a duty” or “It would not have been virtuous” doesn’t sound like a moral answer; it sounds like an egoistic consideration. Likewise, when considering the footbridge variation of the trolley problem, it wouldn’t make any difference to the fat man—nor to any of the other people in the case—whether the reason he was run over is that he was pushed, or that the trolley was diverted to the track he was tied to. That means that the only way it could be wrong to push, but not to switch, is if you care about something other than the people involved.
In general, when a non-consequentialist theory explains why it recommends a different course of action than consequentialism does, it can never cite the fact that this is what the subjects in the situation would or should care about. The subjects in a moral situation don’t care whether their suffering or death is the result of an action that’s vicious or violated a right—or at least, if they did, consequentialism would take that into account. Rather, they care about what’s actually happening to them. Of course you can cite the interests of the person whose rights have been violated, but then the rights-violation must count for more than the interests themselves, since otherwise something like deontology would simply recommend the same action as consequentialism. And while you can cite the interests of the person being wronged, you should also cite the interests of all the others, and these will collectively point towards the consequentialist action. If the fat man on the footbridge fell onto the track himself, it would surely be wrong to pull him off, exactly because the other five have a strong interest in not being killed. But no subject in the situation (including the fat man himself) cares whether he was pulled off the track or whether he was never pushed to begin with.
Another perspective on this point is that you should always morally hope that everyone else acts in accordance with consequentialism, assuming the good isn’t agent-relative—that is, as long as the axiology is agent-neutral, even if normativity isn’t.3 Egoism is an example of a view where the good is agent-relative, since the good is what is good for me, and who “me” refers to varies from person to person. But something like threshold deontology will probably have an agent-neutral axiology, and simply claim that the pursuit of the good is subject to agent-relative constraints.
I mean, what does it mean for something to be good? To me it seems synonymous with being valuable. Additionally, it seems like you should X-hope for something just in case it is X-valuable. For example, you should prudentially hope that you win the lottery just in case it would be prudentially valuable/good for you if that happened. But if this is right, then it just straightforwardly means that you should always morally hope that moral value is promoted, which just is to say that you should hope that everyone else acts in accordance with consequentialism.
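Spelled out, the little argument here is just this (my own way of laying it out, nothing fancy):

\[
\begin{aligned}
&\text{(1) For any kind of value } X\text{: you should } X\text{-hope that } p \text{ just in case } p \text{ is } X\text{-valuable.}\\
&\text{(2) Being morally valuable just is being morally good.}\\
&\text{(3) So you should morally hope that } p \text{ just in case } p \text{ is morally good.}\\
&\text{(4) So, of any two ways things could go, you should morally hope for the better one, i.e. that}\\
&\quad\ \text{people act as consequentialism recommends (given an agent-neutral axiology).}
\end{aligned}
\]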
While it would ultimately be wrong for you (on, say, deontology) to push the fat man off the bridge to save the five, you should still morally hope that he trips all by himself and falls onto the track, stopping the trolley by accident. Consequently, you should hope that someone less principled than you comes around and pushes him off the bridge for you. You should of course deem their action very wrong and condemn it, because they’ve been a very bad boy! Nevertheless, it was better that it happened, and so you should be morally glad that it did!
Apart from revealing a weird sort of tension in non-consequentialist views, this also just strengthens my feeling that these views get morality wrong in a way consequentialism doesn’t. When you should hope that someone else acts wrongly and does the dirty work for you, your theory starts to feel like it’s more about “rule worship” and keeping your hands and conscience clean than about actually doing what is right.
Considering these points also sort of deflates the “utility vessel” parody of consequentialism. You’ll often hear people complain that consequentialists ignore people and only care about the utility they can experience. But I think the above should make it clear that consequentialism stems exactly from caring about people, and their interests, rather than caring about an abstract global utility function. In fact, it seems that by not being a consequentialist, you end up ignoring the people themselves in favor of other considerations to a much greater extent.
Making Morality Easy
Another thing I find appealing about consequentialism is how it gives easy answers to many otherwise difficult questions: What is the right theory of punishment? Whatever is best. How strong should property rights and free speech be? Whatever level is best. Under which conditions is consent legitimate? Who cares?! Just do what’s best!
This of course doesn’t mean that morality is actually easy on consequentialism. There are many complicated issues around figuring out what things lead to what consequences, and how we should value those consequences in reality. On top of this, there are issues such as aggregation of value, future people, and desert, where consequentialism doesn’t always give a straightforward answer (though there are often answers that are more natural than others). Nevertheless consequentialism is undeniably extremely streamlined and straightforward on the theoretical level, compared to most, if not all, other views—and this is very appealing to me.
At the same time, this can look like something of an oversight on the theory’s part. I mean, these sorts of issues really feel difficult. It can seem more like foolhardiness than explanatory power when the theory can settle, on momentary reflection, issues that surely require careful and nuanced consideration.
Desire, Pleasure and Objective Lists
As mentioned previously, there’s still the question of what is good. Fret no more; your questions will be answered here! There are generally three approaches to this4: hedonism (good = pleasure), desire satisfaction (good = satisfaction of desires), and objective list (good = some list of different things). Hedonism seems like the most natural view here, and I think it might be the most historically popular (though my evidence here is a gut feeling). There is also a lot of appeal to it: pleasure is obviously a good feeling, and pain a bad feeling, and simply identifying good and bad with these makes for a simple and elegant theory, which captures what we want to capture. On top of that, it captures the idea that something can only be bad for me if I’m actually aware of it.
But I’m not very attracted to hedonism for a couple of reasons. Firstly, I don’t think that pleasure and pain are the simple, unified phenomena the English language might make us think they are. When I compare the experience of eating a PB&J sandwich with that of having a good conversation with my girlfriend, there seems to be very little in common between the two. Nevertheless, both are immensely pleasurable and certainly contribute to my well-being. So I’m just very skeptical that the word “pleasure” picks out a single phenomenon that we can base an axiology on. We could of course just say that pleasure is a multiply instantiable property of many different types of mental states. But when I consider all the different examples of pleasurable experiences, what they seem to have in common is not some intrinsic property, but rather just that I want them to keep going, or something like that. This also makes better sense of cases where someone enjoys a painful experience. For example, I might actively enjoy the pain of working out, because I know that it means it’s working or something (idk, I hate working out)—not just wish to work through the pain, but actively desire to have it. If I had that same experience while sitting here writing, I would very much not enjoy it. Likewise, I might enjoy having hot wax dripped on me in certain unspecified contexts, even though that is a decidedly painful experience. All of this seems to make sense if it’s the desire for an experience that matters.
And especially importantly, it just seems like things can actually be bad for me, even if I’m not aware of them. For example, if everyone around me were philosophical zombies, that would be bad for me, because I want to have relationships with other conscious beings; and if I feed a Muslim haram meat without them knowing, or throw an old man’s carefully crafted testament in the trash right after he dies, it seems like I’ve harmed these people, even though they are not aware of it.
Mainly though, it’s not hedonism; it’s me—I’m in love with someone else: desire satisfaction theory. This is the view that what’s good for someone is having their desires satisfied, and what’s bad is having them thwarted. As should be clear from the above two paragraphs, I just think this elegantly captures the intuitions I have about well-being, and a desire seems to have a much better shot at being a genuine unified phenomenon than pleasure.
I think it’s obvious that it shouldn’t just be your present desires that count, but all the desires you have over time. For example, suppose (per impossibile) that you currently want to watch YouTube rather than read Wonder and Aporia. If you were to read Wonder and Aporia, that would obviously make you super smart and sexy, letting you satisfy other desires, such as becoming a tenured professor and having loads of hot sex later down the line. For that reason, you should obviously keep reading Wonder and Aporia (and subscribe), even though that thwarts your current desire.
Now, I don’t like something like informed desire theory, where the desires that matter are those that I would have under full information. Firstly, it’s not obvious to me that there is a fact of the matter as to what desires I would have. Secondly, whether or not it’s good for me to eat an ice cream or have a good friend doesn’t seem to depend on whether I would want those things in a different possible world, but only on what my actual desires are. Thirdly, it doesn’t seem particularly strange to me that if someone genuinely only cared about counting blades of grass, and couldn’t have stronger desires for other things by changing their ways, then that would really be the best thing for them. Lastly, if we start making restrictions on which desires to count and which not to, then it just seems like it’s because we have another axiology at the back of our minds, which is really what is guiding our judgements about different desires.
This leads nicely into objective list theories. I don’t really have much against these views, other than that they just seem unnecessary, as desire satisfaction captures what needs to be captured, and does so in a simpler way than objective list theories. The biggest worry I have with the desire satisfaction view might be “unjust desires,” i.e., cases where people have desires for things that they shouldn’t. For example, if a pedophile is jerking off to some horrific child porn and really enjoying it, it seems like what’s happening isn’t particularly good—or at least not as good as if they were watching something (let’s say) milder. Desire satisfactionism doesn’t seem able to capture this intuition, which suggests that there’s something wrong with the view. This pushes me more in the direction of objective list theory. Apart from that, I think there might be some problems with implicit/latent desires, desires about the future, etc., though overall I still tentatively lean towards desire satisfactionism.
Why I’m not Quite Sure
Still, I’m not fully comfortable calling myself a committed consequentialist. Maybe I just have to let these considerations settle in my mind, but I also feel certain things pulling me in the opposite direction.
Perhaps the biggest—and also the most common—is that consequentialism just seems to get certain cases very badly wrong. Especially compelling to me is the case of the electrician (I don’t remember where this is from; edit: it’s from Scanlon, thank you Amos). Basically, an electrician is repairing the mast of a TV station that is broadcasting a football game to millions of people, in stormy weather. During the final dramatic minutes of the game, lightning strikes the backup antenna, destroying it, meaning the game will no longer be broadcast to the viewers. Given the number of viewers, and how strongly they want to see the game, you know that they will all be so infuriated if the broadcast stops that the badness will outweigh the positive value the electrician could, by any realistic measure, hope to have for the rest of her life. Because of this, you decide to turn on the main antenna. This painfully electrocutes the electrician to death, but her body is conductive enough to complete the circuit, so you can let the current run through her for the rest of the game.
It just seems obvious to me that you would be doing something horrible here, not something right! The good old organ transplant case gives me a similarly extreme reaction; it just can’t be right to steal those organs! Though I think I might, to some degree, be able to weaken these intuitions. With the electrician case, I think it might just be that I have a hard time grasping the numbers involved. It’s not just that it seems wrong for me to turn on the antenna; it also seems like it would be a bad thing if I did it—I shouldn’t hope that someone else does it. This suggests to me that I’m simply not able to fully comprehend the facts that I have stipulated into the case, which seems more like a problem with me than with the theory. Alternatively, it might suggest that the axiology I’m working with is wrong, but not that consequentialism itself is.
As for cases like the transplant one, my intuitions generally weaken the more I really consider the case. Specifically, when I imagine being each of the people involved and how much I would hope to get an organ, and when I consider the distress of the families of each of the people who will die if the organs are not transplanted, and so on, it all starts to feel less obviously wrong.
Additionally, it seems like you can make a sort of error theory for these intuitions: people are generally terrible at figuring out the consequences of their actions, and so you will do a better job if you just follow simple rules of thumb. You can fuck things up really badly if you go around murdering people for the “greater good,” and you’re unlikely to actually do anything particularly good with it. All that being said, I still can’t shake the very strong intuition that these cases are just obviously horribly wrong, which makes me reluctant to fully lean into consequentialism.
Another thing that I think consequentialism gets wrong is supererogation. It just appears false that you’re acting wrongly if you’re not acting wholly optimally all the time—surely it’s permissible to sometimes be suboptimal. I think that people should donate much of their money to effective charities. Nevertheless, it’s surely still permissible to sometimes buy a bag of coffee, or to pursue the job you want, even though that hurts your ability to donate effectively. But this goes strictly against the idea of agent-neutrality I emphasized earlier, as you’re favoring your own interests over those of others.
Though the type of agent-relativity supererogation presents doesn’t seem as objectionable to me as other versions. The reason is that the claim isn’t that it’s right to be suboptimal—everyone understands that suboptimality means not caring as much about moral subjects as you might. Rather the claim is just that it’s not strictly speaking wrong or impermissible to be suboptimal. This might suggest that we should think of supererogation, not as part of our moral theory, but as part of countervailing prudential considerations. Combining this with the caveat I added at the start that rightness is a spectrum might make it all things considered permissible to act suboptimally, even if it’s strictly speaking morally wrong. To some degree this just feels like a fancy wordplaying way of saying that you can be egoistic, though, and doesn’t capture the idea that it isn’t morally wrong to be suboptimal sometimes. And to the extent that consequentialism can’t capture this, I think it’s a strike against the view.
Conclusion
While I have obviously been giving arguments throughout this post, it is really intended more as a report of my views on these issues, taking you through my thinking process, than as a case for some conclusion. As I said, there are plenty of arguments to be had both ways, but I hope these broader considerations offer something that a straightforward argument-by-argument case doesn’t.
I’m still not sure where I stand. I guess that in whatever sense a normative ethical theory might be said to be correct, I’m around 60% sure that consequentialism is it, though I don’t think you can really give exact numbers to these things.
1. The title says “utilitarian” because I suspect that’s more eye-catching.
2. A small detail: you might define it in terms of the action that actually produces more good, rather than more good in expectation. I choose the latter because I don’t think it makes sense to say someone acted wrongly if they did what was expected to be best but which happened to turn out badly, or vice versa.
3. In fact, even if value is agent-relative, you should hope that everyone else acts like a consequentialist, just relative to your value, not their own. For example, if you think it’s worse if your family is hurt than if others’ families are, you should simply morally hope that others abandon what it would be right for them to do, and instead act as consequentialists for your family. I just think this speaks to the implausibility of agent-relative moral value, though.
4. I will here be focusing on well-being (what is good for subjects), though you might think things like beauty have impersonal value.