While this is a sequel to my previous post, reading that post is not required in order to grow your brain by reading this one (although doing so is of course strongly encouraged—along with subscribing, liking all my posts, and writing me into your will). To fill in the lazy among you, last time we established—without a shadow of a doubt—that the positive arguments in favor of average utilitarianism (cool kids abbreviate it AU) are very weak, and that two of the three most obvious ways of formulating the view have the consequence that you should kill yourself if you’re poor or if your life is not as good as it used to be (roughly), as well as do others a favor and help them meet the same fate. Total utilitarianism (TU) doesn’t have these consequences. The last remaining version of AU, then, holds that:
The value of a life is the sum-total utility of that life, and the value of a world is the average value of the lives in that world.
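In symbols (my own shorthand, not a quotation from anyone): if a world contains n lives with lifetime utility totals v1, v2, …, vn, then this version of AU says the value of the world is (v1 + v2 + … + vn)/n, while TU says it is simply v1 + v2 + … + vn. Note that when n is held fixed, dividing by n never changes which world comes out on top, which is why the two theories will only come apart over how many people there are.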
Taken by itself, it also claims that the right action is the one that brings about the most valuable world, seeing as it’s utilitarian (though it can be combined with other principles, such as prima facie rights and duties). Now that everyone is on the same page, let’s see why this theory too is completely unacceptable.
When Should You Do the Dirty?
AU and TU are equivalent in cases with the same number of people, so the only way of adjudicating between the two is to look at cases containing different numbers of people. There are two ways to bring about different numbers of people: killing people and bringing new people into existence (let me know if you know a third way). Since both TU and this version of AU value individual lives in the same way, they agree on when you should kill people (setting aside effects on others). So the only type of action where the two differ is bringing new people into existence. The natural question, then, is of course which theory has the best account of when you should bring new people into existence.
TU answers: you should bring a new person into existence when it would be better for that person to exist than not (all else being equal). AU answers: well… umm… ehh… so, I guess… (all else being equal). Ok, this is of course a bit of an uncharitable way of putting it; what AU really says is that you should bring a new person into existence when that person will have a life at least as good as the average life. The problem is that this requires you to know how good the average life is, and I think it should be very obvious that that is not an easy thing to find out.1
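For the record, here is why the threshold is exactly the current average (a quick back-of-the-envelope derivation using the shorthand from above): if there are currently n lives with average value m, adding one more life with value v makes the new average (n·m + v)/(n + 1), and a little algebra shows that this is at least m exactly when v ≥ m. So on AU a new life “pays its way” only if it at least matches the existing average, which is why you suddenly need to know what that average is.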
The problem here is twofold: first, you have to settle which lives count—all lives ever, current lives, current and future lives, current and past lives, human lives, mammal lives, all conscious existences (does God, if he exists, count here?), etc.; second, having decided on a scope, you have to actually determine the average quality of all the lives within that scope before you can make a decision. But it’s not just this. Beyond the epistemological problem, there is also the problem that AU gives absurd answers as to whether you should have a child, even when we stipulate all the descriptive facts.
Suppose you choose the most natural scope: all lives of all time. This leads to the Egyptology problem:
You are sure that you would be a very good parent, and if you had a child, it would have a very good life—very much worth living. Furthermore, it would not negatively affect anyone else; it might even somewhat positively affect people, since everyone would be so happy to see that cute wittle baby, who will grow up to be nice to everyone! Suppose you know all this. Just before you are about to do the dirty with your partner, you see the front page of the latest edition of Popular-Science-Magazine™: “Egyptologists Find Hitherto Undiscovered Civilization - You Will Not Believe How Happy They Were!” Turns out they were very happy—so happy, in fact, that everything points to the average happiness of history being way greater than that of your child-to-be. Being the good average-utilitarian that you are, you slap on your chastity belt, pray for God to relieve you of your lustful thoughts, and refrain from having a child.
This seems like an absolutely insane reaction to that information! Why should the amount of tasty [Egyptian food] the average Egyptian had 5000 years ago in any way affect your reason for having a child now? It has literally no influence on the quality of your child’s life, or anyone else’s, so it just makes no sense that it should give you any less reason to have a child!
Likewise, suppose that the excavation gives very strong evidence that the Egyptians had horrible lives—much worse than not living at all. In that case, it might be better to have a child whose life is worse than not living than to have no child at all, since that life would still be better than the overall average. But that just seems horrible! Surely it is not good to create a person with a life worse than not living—one in which they wish to die all throughout, for example—just because it would not be as terrible as the average life.
To avoid this, we may restrict the scope to exclude past lives. But this just kicks the can down the road. Suppose that instead of reading about Egyptians in the far past, you read about aliens in a galaxy far, far away—also with extraordinarily fabulous lives! Here again AU tells you to pop your anti-Viagra pill and take a long cold shower to decrease the size of your member (if you are so endowed), so you don’t create a (not so) happy little accident with your partner. Again, this seems completely unreasonable! News about the quality of life of aliens you will never interact with should just not have any effect on your sex life (unless you are into that stuff).
One tempting solution here might be to change the scope of the theory such that it only includes people who you are going to affect or interact with, or something to that effect. This too looks completely implausible, though, as it gives you strong reason to only interact with people who live very long happy lives, which obviously makes no sense—you are not acting more rightly by only interacting with people who are already happy.
It may be that I’m just too stupid to think of a better way to define the scope of the theory, but it really looks to me like no way of drawing a line results in anything tenable. Even assuming that all of these complications are not problems, however, I just find the very underlying idea completely implausible. Why should the goodness of bringing about a new existence depend on the quality of other lives? Surely the only relevant criterion for whether it is good to bring someone into existence (all else being equal) is how good it would be for them.
Benign Addition
Another problem for AU is that it rejects the principle of benign addition, which is clearly true! “What is benign addition?”, I hear you asking. Well, for God’s sake, don’t you think I was just about to tell you that?!
Anyway, benign addition comes from the Huemer paper I linked in my previous post.2 It’s basically the principle that, of the worlds A and B below, B is at least as good as A:
A: 10, x, x, x
B: 15, 5, 5, 5
Here each column represents a person, the numbers represent the welfare level of that person in each world, and the x’s represent no person existing. Now, in B the first person is better off than they were in world A, and each of the other people is gifted a life well worth living. Each person prefers world B to world A, no one is harmed—life good! Surely, if anything is plausible, it’s that it’s good to make everyone better off while making no one worse off (if only there was a name for this). But AU doesn’t accept this and instead judges B to be worse than A, since the average utility of world A (10) is greater than that of world B (7.5). But that just flies in the face of all my moral intuitions! Again, if anything is not bad, surely a Pareto improvement falls under this (at the very, very least in finite cases)! One way to weaken this intuition may be to consider this alternative:
A: 10, x, x, x
B: 5, 5, 5, 15
In this case, not everyone is made better off, since the first person has lower utility in B than in A, and so at least one person will object on egoistic grounds. This certainly makes the intuition behind benign addition weaker—an average utilitarian will probably have the intuition that we shouldn’t hurt an already existing person here in order to benefit merely potential people. I think this just results from the AU-believer not having consistent judgements, however. Consider these two cases:
A: 10, 5, 5, 5
B: 15, 5, 5, 5
vs.
A: 10, 5, 5, 5
B: 5, 5, 5, 15
The first case looks analogous to the original benign addition, and the second looks analogous to our modified version. Here, too, I think we are less comfortable with the second version than with the first and—by my lights—to the same degree as in the benign-addition pair. But both TU and AU tell us to treat these last two cases exactly the same. What seems to be going on is that our deontological intuitions are firing: you ought not harm one person to benefit another. AU doesn’t care about that, though! AU cares about maximizing the average utility function. It is simply illegitimate to use that intuition as an argument against benign addition when the theory doesn’t actually care about it at all!
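Just to make the bookkeeping explicit (same numbers as above, nothing hidden): in the original benign-addition pair, the total goes from 10 to 30 while the average goes from 10 to 7.5, so TU ranks B above A and AU ranks it below. And in the two four-person cases just given, A has a total of 25 and an average of 6.25, while both B-worlds have a total of 30 and an average of 7.5, since “15, 5, 5, 5” and “5, 5, 5, 15” contain exactly the same numbers. That is why neither theory can tell the two cases apart.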
Now, it might of course turn out that we care about deontological constraints or something like them, but that is orthogonal to the dispute here—we are looking at what AU judges to be the best world, that is, at the axiological question of which world is better, rather than the normative question of which world we ought to bring about. And since both AU and TU, as axiologies, can be combined with deontological constraints, considerations about such constraints should play no role in adjudicating between the two. Also, the point is not necessarily that the second version in each pair isn’t worse to bring about, but rather that our intuition that it’s worse should not make us any less in favor of benign addition—and consequently no less hostile to AU.
Two Hells
Perhaps the most popular objection to AU is the twin hells objection—and for good reason. It basically goes like this:
We have two worlds, A and B (as always).
World A contains a single person suffering in hellfire (burned alive, skin ripped off, toes put in a blender, no coffee break—you know the drill) for 1 billion years.
World B contains 1 trillion people suffering the exact same fate, except for one, who gets a 1-second break at some point.
All the people here wish at every moment that they could just die and be done with it, and seriously regret having come into existence. Now, TU tells us that world B is far, far, far worse than world A, whereas AU tells us that world A is slightly worse than world B. I think TU just straightforwardly gets the right answer here!
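To put toy numbers on it (mine, purely for illustration): suppose each tortured life comes out at -1,000,000 in total, and the one-second break bumps the lucky person up to -999,999. Then TU scores A at -1,000,000 and B at roughly -1,000,000 × 1 trillion, so B is unimaginably worse; AU scores A at -1,000,000 and B at a hair above -1,000,000, since the single break nudges the average up by about one trillionth of a unit. So AU really does prefer the trillion-victim hell, if only barely.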
To make this more visceral, imagine that you are the person in A, suffering immensely. One day, as a demon is peeling off your toenails with a rusty butter knife, you glance past a mountain that you have never looked behind before, and—*gasp*—it turns out you were never in world A after all. Instead you find 999,999,999,999 other people being tortured, meaning you were in B all this time. As you gaze across the landscape, soaking in the pure horror of it all, you catch a glimpse of the one lucky person having a second-long break from the torture, before having her eyes pulled out with a pair of tweezers. Seeing this brief moment of not-quite-as-badness, you breathe a sigh of relief!—what you initially thought was a horrific discovery turns out to have been a blessing, and the world is in fact not as bad as you thought just a day before.
I hope I am not alone in thinking that this is a completely misguided attitude to have towards this discovery. For all your existence, you have had no greater wish than to be snuffed out so that you could stop experiencing this excruciating suffering—the last thing you should feel is relief at finding out that there are billions upon billions suffering the exact same fate as you, each (with the very slight exception of one) having just as strong a wish to shuffle off this mortal coil as you do. None of the people in the scenario could care less about the average utility, and literally everyone would prefer that the average utility were lower. What consolation is it to you, as you are wishing to be dead, that the average utility function takes a higher value than it would in the alternative case?
The same might be said about TU—what consolation is it to you, as the trolley is hurtling towards you, that the total utility function has a higher value in this case than in the alternative? But with TU there is at least someone who is consoled, namely the five people who get to live to see another day. While you don’t benefit prudentially from it, you can still recognize that someone does—that it’s for the greater good. With AU not even this is the case. The lucky person who got the 1-second break will not, as they are lying there, branding iron in their ass, be thinking “Thank God that I exist! Now the average utility function is higher!”; no, they’ll be thinking “if only someone would have the mercy of killing me as quickly as possible—it would have been better had I never been”. And this is the person who is supposed to be the lucky one, the person who got the long end of the stick, the one for whose sake all this suffering is justified on AU—and even this person wishes for world A.
I think AU is the sort of theory that sounds nice and plausible enough the first time you hear it. Why shouldn’t we, after all, try to make people as well off as possible? To ignore real people’s needs for the needs of merely potential ones is surely immoral. But while TU is often accused of being the theory that worships abstract functions rather than people, I think the examples from this post and the last have drawn out how clearly it is AU that only cares about utility functions. To prefer, again and again, the option that literally no one in the whole world would prefer, just so some function can reach its maximum value, is surely the epitome of not caring about people! So the notion that it’s AU that really cares about the interests of people is just completely misguided. This makes it look to me like there is actually no good reason to favor AU, and tons and tons of very strong reasons against it. This is why, as I said in the last part, I think that AU is a strong contender for the worst theory of population ethics, and I don’t think anyone should consider it a viable option.
You may of course argue that TU also requires you to know how good the new life will be, which is itself very hard. This is certainly true, but AU has this problem too, on top of the problem of knowing the average utility of the universe.
I won’t link it here, since I am evil and self-centered (among other things). So look at my previous post to find it (or google “In Defence of Repugnance”, if you don’t care about my feelings).