I think average utilitarianism is a strong candidate for the worst view in population ethics (perhaps barely losing to torture-maximizing consequentialism)—it has little to nothing going for it, and has very big and obvious problems without clear solutions. The impetus for my writing this is this short piece by J. Mikael Olsson, where he argues in favor of the view, as well as a discussion we had in the comments. Now, while I think that his position on this point is extremely wrong, he seems like a smart guy, and writes short, interesting pieces, so I definitely recommend checking out his blog and subscribing!
This post ended up being pretty long, and it would certainly be very long if I included all the points I’m intending to make. For this reason I’ll do a little salami slicing, and split it into two parts (as the astute of you might have guessed from the title).
While utilitarians (and consequentialists generally) obviously have the biggest hard-on for consequences, most (probably all) plausible ethical theories care to at least some degree about consequences. So I think this is relevant for everyone, as there will have to be some theory of what constitutes good consequences. I will here be contrasting average utilitarianism (AU) with total utilitarianism (TU). Total utilitarianism is roughly the view that the value of a world (or state of affairs or whatever) is equal to the total amount of utility in that world—whatever utility is cashed out in terms of. Average utilitarianism, on the other hand, can be characterized in at least three different ways:1
1. The value of a life is the sum-total utility of that life, and the value of a world is the average value of the lives in that world.
2. The value of a world is the average value of the person-time-slices of that world.
3. The value of a life is the average value of the moments in that life, and the value of a world is the average value of the lives in that world.
These are of course only axiological claims, but seeing as both TU and AU are consequentialist theories (at least taken on their own), this also gives us an account of right action, since consequentialist theories simply claim that the right action is the one that brings about the best state of affairs.2
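To keep the three readings straight, here is a minimal sketch in Python of how I’m thinking of them (my own toy formalization and numbers, not anything from Pressman’s paper), treating a world as a list of lives and a life as a list of momentary utilities:

```python
# A toy formalization of the four views discussed here (my own sketch):
# a world is a list of lives; a life is a list of momentary utilities.

def tu(world):
    """Total utilitarianism: the sum of all momentary utilities."""
    return sum(sum(life) for life in world)

def au1(world):
    """AU version 1: the average of the lives' lifetime totals."""
    return sum(sum(life) for life in world) / len(world)

def au2(world):
    """AU version 2: the average over all person-time-slices."""
    moments = [u for life in world for u in life]
    return sum(moments) / len(moments)

def au3(world):
    """AU version 3: the average of the lives' average moments."""
    return sum(sum(life) / len(life) for life in world) / len(world)

# The versions can disagree: a long mediocre life plus a short blissful one.
world = [[1, 1, 1, 1], [10]]
print(tu(world), au1(world), au2(world), au3(world))  # 14, 7.0, 2.8, 5.5
```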
Why Be an Average-Utilitarian?
There are essentially two arguments in favor of AU (or at least for AU being more plausible than TU): the intuition that we should value the utility of people, rather than valuing the abstract utility function of the world; and the avoidance of the repugnant conclusion. I find neither of these particularly convincing.
Let’s start with the repugnant conclusion, as I am unpredictable and don’t care that I mentioned it last in the preceding paragraph! If you’re one of those total losers who has no idea what population ethics is all about, first off: sigh!3 But secondly: the repugnant conclusion is essentially the idea that for any possible world containing a population of extremely happy people, there is another possible world, which is at least as good, consisting of a much larger number of people with lives barely worth living. This follows from pretty innocuous and plausible assumptions, and more importantly for our purposes, it follows straightforwardly from TU, but not from AU.
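To see in miniature how the conclusion drops out of TU but not AU (the numbers here are of course invented for illustration):

```python
# The repugnant conclusion in miniature (made-up numbers).
happy_world = [100] * 10            # 10 people, each extremely happy
repugnant_world = [0.01] * 100_001  # 100,001 lives barely worth living

# TU: 1000 vs ~1000.01 -> the huge barely-happy world is (just) better.
print(sum(happy_world), sum(repugnant_world))

# AU: 100.0 vs 0.01 -> the small happy world wins by a landslide.
print(sum(happy_world) / len(happy_world),
      sum(repugnant_world) / len(repugnant_world))
```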
Judging from the name, you can probably guess what Parfit’s (the guy who first discovered the conclusion (afaik)) attitude towards it was, and he is not alone. I do also find it pretty implausible on the face of it, though I don’t think it is as implausible upon further reflection. First off, I think we are easily misled when hearing the words “barely worth living”. If you’re like me, you probably imagine people in extreme poverty, fighting for their next meal; or some poor soul who thinks coffee tastes bad (oh the horror!). But these are not, on utilitarian considerations, lives worth living, but rather decidedly terrible lives. For a life to be barely worth living, its pain and pleasure4 will have to be balanced, with a slight tip in favor of the positive, meaning it is very close to neutral. Try to imagine the last time you were in a neutral state (that means no hunger, no need to pee, no back pain, fully caffeinated, etc.); it’s actually not bad at all (the particularly quick among you may have realized that that is part of the meaning of “neutral”). In fact, it’s very rare that there is nothing at all bothering you, not even an itch or a stone in your shoe. A life barely worth living is equivalent to this perfectly neutral state all throughout, and then with a small bonus at some point, perhaps the pleasure of subscribing to Wonder and Aporia.
It’s far from clear that many of the lives lived by people today, even in the first world, are this good on purely hedonistic grounds; 20 years of chronic back pain requires a whole lot of tasty falafels and reading my blog to outweigh (upwards of 3 articles!). In fact, I doubt that most of the moments I am awake throughout a day, taken by themselves, are worth living in purely hedonistic terms. Even disregarding whether that’s the case, it should at least be very clear that a life barely worth living is, well, worth living. This strongly weakens my intuitive judgement against the repugnant conclusion, especially when combined with the fact that it is very hard to comprehend large numbers (if asked to imagine 1 million people, your mental image probably looks about the same as when you imagine 1 billion people, even though the latter is literally 1000 times greater; hell, you can probably barely imagine 1000 people, much less the experiences of all of them).
It should then also be clear how many resources it actually takes to sustain a life worth living. You need food, drinks (including coffee, of course), clothes, housing, entertainment, even friends (if you’re one of those types). That means that utilitarians shouldn’t go around shooting out babies willy-nilly, as that would probably lead to lives not worth living. Instead, the best way to optimize happiness is probably by having only a few children. This, then, plausibly explains some of the instinctual reaction we have against the repugnant conclusion too: it is almost always better in practice to create few very good lives than to create many very slightly good lives, as there is an upfront resource-cost to reaching neutrality in a life, before you can begin to “add on” positive goodness.
Finally, the alternatives to the repugnant conclusion all have much more “repugnant” implications than this, including AU, as I will attempt to draw out here and in the next post. Furthermore, I also just think that the repugnant conclusion is more plausible than the negation of any of the assumptions necessary to establish it. Michael Huemer has a nice paper arguing to this effect.
Now as for the intuitive appeal of AU over TU, I can kind of sympathize with it. After all, it looks like TU cares little about people; rather, it cares about the abstract sum-total of utility, and people are mere vessels for this. There may be something to this, but it is important to remember that the reason people care about utility is that it is good for people to have it, and so we should make sure that as many people as possible get to enjoy as much of it as possible. And besides, once we look at some of the implications of AU, it suddenly looks much more like AU actually doesn’t care very much for people either; AU doesn’t care that people enjoy as much utility as possible. Instead it cares about allowing the lucky few who get to exist to have as much of it as possible, and not allowing any more people to exist if that would be a hindrance to this. This may be a bit of an uncharitable way to characterize the view, but I think it underlines how weak of an appeal this is in favor of AU, when you can so easily get into a headspace of seeing AU as being as callous and uncaring as TU (if not more so).
I think one way to make the intuitive appeal of TU over AU clearer is to imagine a modified version of something like an original position. Now, you don’t need to be a Rawlsian or contractarian to appreciate this point; it is simply meant as a sort of intuition pump. Usually with the original position, we imagine that people are choosing principles about how to structure society from behind a veil of ignorance, where they don’t know anything about themselves, the contingencies of society, etc. But usually we also imagine that all the parties in the original position already know that they will be born. It’s not clear to me that this is the best way to think about it. Imagine instead that you are behind the veil of ignorance, and that you also don’t know whether you will be (or have been) born once the veil is lifted. What sort of principles should you hope people come to agree to? It looks very clear to me that we should at least hope for TU over AU: while the people under AU might be happier, I would have to be very lucky to be one of them. This seems to me the more correct way to conceive of an original position. Any of us could surely have not been born, and if our lives are worth living, we are lucky to have been. By only focusing on the people who are sure they will be born, we are blind to the contingency of our existence and don’t appreciate how lucky we are to be here to experience it.
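A quick toy calculation brings this out (all numbers invented): say there are 1,000 possible people behind the veil, and only some of them are actually born.

```python
# Expected utility behind the modified veil of ignorance (made-up numbers).
POSSIBLE_PEOPLE = 1000  # you are one of these, but may never be born

au_world = [100] * 10   # AU-favored: 10 lives of bliss (average 100)
tu_world = [10] * 500   # TU-favored: 500 decent lives (average 10)

def expected_utility(world):
    """Expected utility for a random possible person; never existing counts as 0."""
    return sum(world) / POSSIBLE_PEOPLE

print(expected_utility(au_world))  # 1.0 -> gorgeous average, but you almost surely don't exist
print(expected_utility(tu_world))  # 5.0 -> modest average, but you probably get to live it
```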
Alternatively, we can imagine living through every life that will ever exist. The better world would presumably be the one it would be better for you to live through in this way. But then it’s just completely clear to me that total utilitarianism wins: would you prefer to live two years of bliss, or to live two years of bliss and then one further year of bliss? Obviously the latter! The motivation for average utilitarianism is that it is better for the people who actually exist, but I think this is just “existential snobbery”, like how being selfish is just “personal snobbery”. An experience is no less valuable because it hasn't yet come about, just like it isn't less valuable because it happens to someone other than you. Taking the universal perspective I suggest makes this much clearer, and makes it apparent that average utilitarianism just seemingly arbitrarily (and perhaps almost egoistically) restricts its scope to those (un)fortunate enough to have been born.
So I think the motivation in favor of AU is very weak. But even if the motivation were very strong, I still think we should not subscribe to AU, due to the extremely implausible consequences it has.
Should Average Utilitarians Kill Themselves? [THIS IS NOT AN ENCOURAGEMENT TO DO SO!!!]
But seriously, should they, according to their own theory? Here the tripartite distinction from above becomes important:
1. The value of a life is the sum-total utility of that life, and the value of a world is the average value of the lives in that world.
2. The value of a world is the average value of the person-time-slices of that world.
3. The value of a life is the average value of the moments in that life, and the value of a world is the average value of the lives in that world.
Version 1 doesn’t tell you to kill yourself in any cases where TU wouldn’t tell you to do the same (i.e. when your life will be worse than nothing in the future), but versions 2 and 3 do. On version 3, you should kill yourself if you expect that the rest of your life will be worse than the life you have lived so far. This means that it recommends we kill ourselves when we reach about 70 and our happiness begins to decline (or perhaps even in our 20’s, as life-satisfaction generally tends to decline from there, reaching a low point in the late 40’s before climbing again). But that just seems absurd to me! Surely I should want to continue living for as long as my life is… well, worth living. It doesn't matter whether the rest of my life will be on average worse than what I have previously lived, if it is still good; if I receive the Nobel Prize, marry my now-girlfriend, and have the best ice cream ever, all at an early age, I shouldn't then immediately kill myself just because the rest of my life will not be as good as it has been so far, even if still very good.
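The arithmetic (again with invented numbers) makes the absurdity vivid:

```python
# AU version 3: should you keep living a good-but-below-average remainder?
# All numbers are made up for illustration.
great_years = [90] * 30  # Nobel Prize, marriage, the best ice cream ever
good_years = [60] * 50   # still very much worth living

die_now = sum(great_years) / len(great_years)  # 90.0
full_life = sum(great_years + good_years) / (len(great_years) + len(good_years))  # 71.25

# Version 3 ranks the truncated life higher (90.0 > 71.25), so it tells you
# to quit while you're ahead, despite 50 good years left on the table.
print(die_now, full_life)
```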
And things are even worse for theory 2. Here you don’t even have to bother reflecting on your own life so far, as you should kill yourself if you are worse off than the average person. Does the neighbor have a nicer car than you? Well, there is only one way out of the situation, and I’m not sure you’re gonna like it! That’s of course absolutely bonkers!! There is no way that my reasons for going on living could in any way depend on whether someone on the other side of the globe (or even on another planet) has a sweet wife and a lot of subscribers on their Substack; the reason I have to keep on living is that my life is good.
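And the corresponding arithmetic for version 2 (numbers invented, neighbor’s car optional):

```python
# AU version 2: a good life that is below the world's time-slice average.
# Made-up numbers for illustration.
others = [80] * 99  # 99 time-slices belonging to happier people
you = [50]          # your perfectly decent time-slice

with_you = (sum(others) + sum(you)) / (len(others) + len(you))  # 79.7
without_you = sum(others) / len(others)                         # 80.0

# Version 2 ranks the world without your (good!) time-slice higher,
# 80.0 > 79.7, purely because other people are doing better than you.
print(with_you, without_you)
```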
And it’s not just that you should kill yourself on these views; why not do the right thing and help others along? See someone who is miserable after losing their job? Better cut the losses and slip a little extra something *wink, wink* into their morning coffee. Does someone have it worse than you? So much the worse for them; they can’t just go around ruining the average utility function for the rest of us! If you haven’t caught on: I think these are absolutely unacceptable consequences, and any moral theory that has this result should not even be considered an option!5
All of this may of course be taken as evidence in favor of version 1, which is surely also the most plausible version of AU for the reasons listed above. But this means that the only even remotely salvageable version of AU is one that relies on TU for individual lives. I think it should be quite clear that there is at least a very strong prima facie tension in this. If I shouldn’t count the pleasure of people who only potentially exist, why should I count the pleasure of person-moments that only potentially exist? Suppose I have two options: A) live 10 years of 10 utility each, or B) live 1 year of 90 utility. Here this version of AU should choose A. But we can now raise a very familiar sentiment: why should we care about merely potential person-moments? We should surely focus on improving the moments that we are sure we will have, rather than sacrificing utility for mere potentialities! This of course mirrors the original motivation for AU, and it’s hard for me to see a principled reason for potentiality-snobbery about persons while still caring about potential moments. It just appears very strange to me that we should have stronger reason to realize a potential good period of life only because it is appended to another period of life, rather than standing on its own. This is maybe not a very different sentiment from the one I raised in the previous section, but I hope raising it in this connection helps make its appeal clearer.
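For the record, the arithmetic behind the A/B example:

```python
# Version 1 values a life by its lifetime total, so it counts merely
# potential future moments just as TU counts merely potential people.
life_a = [10] * 10  # 10 years at 10 utility -> total 100, average 10
life_b = [90]       # 1 year at 90 utility   -> total 90,  average 90

print(sum(life_a), sum(life_b))  # 100 > 90: version 1 (like TU) picks A
# Someone who averaged over moments would instead pick B (90 > 10),
# which is exactly the AU-style move, one level down.
```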
This version of AU of course also leads to the repugnant conclusion for individuals: for any finite, extremely good life, there will always be a life at least as good consisting only of extremely many moments barely worth living. It seems like much of the intuitive repugnance of the original repugnant conclusion is preserved here, but average utilitarians have to swallow it, unless they would rather begin chopping up poor people (and people who don’t like coffee). That should at least make you reconsider how big of a cost the original repugnant conclusion is, since you are already committed to one version of it.
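The individual-level version can be spelled out the same way as before (invented numbers again):

```python
# The repugnant conclusion for individuals under version 1 (made-up numbers):
# any finite wonderful life is matched by a long enough barely-positive one.
wonderful_life = [100] * 80    # 80 years of bliss -> total 8000
drab_life = [0.01] * 800_001   # 800,001 barely-positive moments -> ~8000.01

print(sum(drab_life) > sum(wonderful_life))  # True: version 1 prefers the drab life
```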
These last considerations are not knock-down arguments against this version of AU, but there are naturally also several other problems with it, which I will be raising in the next post. So if you thought this post was good (or even just barely worth reading), then I'm sure the next one will blow your proverbial socks off! And if not, then it's too late: you already read it 😈!! In any case, subscribe so you don’t miss it (you will also increase average utility, as I will be very happy)!
I owe this distinction to Michael Pressman’s 2015 paper “A Defence of Average Utilitarianism”, which I also draw on at some other points in this post and the next.
Non-consequentialist theories that include considerations of consequences will of course not accept this last move of translating goodness directly into rightness of action. But as we are here concerned with showing how bad AU is, that is beside the point.
🙄
This should of course strictly speaking be “positive and negative utility”, since we need to take non-hedonistic versions into account, but that’s just too clunky to write—sue me!
I think I should also mention J. Mikael Olsson’s response to this, since his post was the original inspiration. As I understood him (he can of course correct me here), his response was that we should only count people who exist in all options we are considering. So we shouldn’t consider the person who will be killed, and so we have no reason to snuff them out.
I think there are many problems with this, but the biggest/most obvious one is probably something like this:
Suppose you have two buttons, A and B. By pressing A, you create a person who will suffer in excruciating hellfire for a billion years. By pressing B, you create a different person who will live in perfect bliss for a billion years.
On Olsson’s view, we should here be neutral between the two options (if you got an old lollipop for pressing A, you would have more reason to do so), which is patently absurd! Moreover, suppose you do press A. Then at the exact moment you press the button, you suddenly acquire an extremely strong preference that you had not pressed it. Worse yet, you knew that you would end up having this preference before pressing the button, but despite this you had no reason not to press it. That is even more absurd, I think.
The "only consider people in all options we're considering" response gets worse. If it's meant to respond to the claims that we should kill existing people on average utilitarianism, then that means we no longer consider people to "exist" once they're dead. So whenever one of our options is to kill someone, we should not consider that person in our moral calculus. In other words, this view implies that murder is never wrong!
There seems to be a sort of isomorphism between how one chooses actions under TU vs AU and how one favors hypotheses about existence under SIA vs SSA. In TU and SIA, we are looking for higher totals (whether of utility or of people), and in AU and SSA we are looking for higher proportions (whether populations with a higher proportion of utility or of "me-ness"). It is as if we have the analogy: utility is to TU/AU as self is to SIA/SSA. Apologies if this seems obvious and trivial, as I have only recently become interested in "autistic philosophy".