In section WSYDTD you miss what seems to me to be the most obvious option: compare the future with or without this putative person. If this child raises the AU of future folks, then it's going to raise the AU of all folks /relative to what it would be without it/. And if it lowers future AU then it lowers ditto. If it leaves the future AU unchanged then have it only if the future AU is higher than the past AU.
Still impossible to figure out, but not obviously morally wrong.
I actually mention something like this in a footnote to part 1 (understandable if you missed it). I am lazy, so I'll just copy what I wrote there:
The biggest/most obvious one is probably something like this:
Suppose you have two buttons, A and B. By pressing A, you create a person who will suffer in excruciating hellfire for a billion years. By pressing B, you create a different person who will live in perfect bliss for a billion years. [And suppose they will affect the welfare of no one else in either case].
On [this] view, we should here be neutral between the two options (if you got an old lollipop for pressing A, you would have more reason to do so), which is patently absurd! Moreover, suppose you do press A. Then at the exact moment you press the button, you suddenly get an extremely strong preference that you had not pressed the button. Worse yet, you knew that you would end up having this preference before pressing the button, but despite this you had no reason not to press it. That is even more absurd, I think.
That only applies to the versions where only people who exist in every alternative get counted. Straight AU of people in each world works with what I described.
I may be misunderstanding your proposal. If you mean that we should *not* include the person who is about to be created in our considerations, then we get the problem I just described, it seems.
But if you *do* count that person, then you will surely be right in creating them IFF they will have average or above average welfare: in the world where they don't exist, the average utility will be X. Unless they have X or more utility in their life, they will lower average utility (remember we stipulated that they affect no other person), and if they have more, they will raise it, meaning we should create them. But that just runs straight into the problems from the initial article.
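To make the arithmetic behind that explicit (a small derivation of my own; X is the average utility of the N people who exist either way, and u is the new person's lifetime welfare, with no one else affected, as stipulated):

```latex
\frac{NX + u}{N + 1} > X \iff u > X,
\qquad
\frac{NX + u}{N + 1} < X \iff u < X .
```

So, with no effects on others, AU recommends creation exactly when the new person's welfare is at least the pre-existing average.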
I assume I am missing something, so it might help if you give an example of how you would figure out whether to create a given person--showing your calculation work, if you will :)
I wrote: > To be clear, I'm talking about the section where the AU proponent is deciding whether to have a child. <
To be even more clear, in THAT section there was no assumption that the child would have no effect on anyone else -- nor should there be such an assumption.
To be clear, I'm talking about the section where the AU proponent is deciding whether to have a child.
Also, you provided three different ways of calculating the average. I'm just going to go with option 1: (total utility of all relevant people) divided by (total number of relevant people). The math is the same for all three versions, but the words used to describe the numbers differ.
First let's do the math for your A/B example -- to show that you got that one right, and to demonstrate the way the math works.
In your A/B example, we can push button A and create person 'a' who suffers enormously, or we can push button B and create person 'b' who anti-suffers enormously. In your example, the only other difference is that if you push button A you get a tiny lollipop -- but there are no other changes to anyone's utility. Let's set the amount of positive utility for the lollipop to 0.01, the total DIS-utility of 'a' to a gajillion, and the total utility of 'b' to a bazillion.
Let T be the total utility of all people other than a or b, and N be the number of people other than a and b.
- The average utility of the A-world is (T + 0.01 - 1 gajillion) / (N + 1).
- The average utility of the B-world is (T + 1 bazillion) / (N + 1).
Clearly the average utility of the A-world is MUCH worse than that of the B-world. That's why any NORMAL person would prefer the B-world.
But under J. Mikael Olsson's rule where only people in both worlds count, those are not the *important* averages. Under his rule, the numbers we're supposed to use ignore both a and b -- because each is in only one world. But everyone else is in both worlds, so he says we should compare:
- The *relevant* average utility of the A-world is (T + 0.01) / N.
- The *relevant* average utility of the B-world is T / N.
So under *his* rule, world A is preferred. Thus we conclude (as you did, only without the explicit math) that his view is deeply counter-intuitive!
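If it helps, here is a minimal sketch of the two comparisons, with stand-in numbers of my own choosing (T = 500, N = 100, and 1e9 doing duty for both "a gajillion" and "a bazillion"):

```python
# Stand-in numbers for the A/B button example.
T, N = 500.0, 100          # total utility and head count of everyone other than a and b
LOLLIPOP = 0.01            # the button-presser's bonus in the A-world
HELL = BLISS = 1e9         # magnitude of a's suffering and of b's bliss

# All-people-count rule: a and b are included in their respective worlds.
avg_A = (T + LOLLIPOP - HELL) / (N + 1)
avg_B = (T + BLISS) / (N + 1)
print(avg_A < avg_B)       # True: the A-world is vastly worse, as any normal person expects

# Only-people-in-both-worlds rule: a and b are ignored, so only the lollipop differs.
print((T + LOLLIPOP) / N > T / N)   # True: this rule prefers the A-world
```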
So from here on in we ignore the rule that only people in both worlds count.
----
Now, you wrote:
> AU really says: you should bring a new person into existence when that person will have a life that is at least as good as the average life. <
That's not precisely correct. The precise claim is /You should bring a new person into existence when that person will raise the average utility above what it would be without that person./
You ignored the effect this new person would have on other lives. For example, the effect they have on the lives of their children and grandchildren. If the new person has a below-average life but has multiple descendants with above-average lives, then this person may still cause a general rise in the average utility.
So, let's look at how an AU couple should reason about having a child -- under the all-people-count rule.
The choice here is between having a child and not having a child. Note that there's no change to the past regardless of which choice they make, so they are deciding between two futures: F0 (no child) and F1 (a child). They want to choose the better future, by which they mean the future with the greater average utility. Let T0 and N0 be the total utility and number of people in future F0, and T1 and N1 be the corresponding values for future F1. The couple want to maximize the average utility of the future:
- the average utility of F0 is A0 = T0 / N0
- the average utility of F1 is A1 = T1 / N1
If A1 > A0, they choose to have a child; if A1 < A0 they choose not to; otherwise it's optional.
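As a minimal sketch (the function name is mine, not anything from the article), that rule is just:

```python
def should_have_child(T0, N0, T1, N1):
    """Future-only average-utilitarian decision.

    T0, N0: total utility and number of people in F0 (no child).
    T1, N1: total utility and number of people in F1 (a child).
    Returns True (have the child), False (don't), or None (optional).
    """
    A0, A1 = T0 / N0, T1 / N1
    if A1 > A0:
        return True
    if A1 < A0:
        return False
    return None

# e.g. should_have_child(200, 100, 115, 50) -> True, since 115/50 = 2.3 > 200/100 = 2.0
```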
----
Now I will note a complication. I said that the past doesn't change, so the people are choosing a future and thus want to maximize its value. I think *that* is the rational way for the AU couple to decide.
But...
... if we include the past with the future (as you mostly did), then the calculation changes a bit. IF that's how the AU couple views the situation, then they'll want to consider the average over P + F0 versus the average over P + F1 (where P is the past up to their present). That is, they'll need to calculate
- the average utility of P+F0 = P0 = (T + T0) / (N + N0)
- the average utility of P+F1 = P1 = (T + T1) / (N + N1)
(where T and N are the total utility and number of people for the past).
These numbers can lead to a different decision than the earlier ones. For example, let
- T = 50
- N = 100
- T0 = 200
- N0 = 100
- T1 = 115
- N1 = 50
Then
- A0 = 200 / 100 = 2.0
- A1 = 115 / 50 = 2.3
- P0 = (50 + 200) / (100 + 100) = 250 / 200 = 1.25
- P1 = (50 + 115) / (100 + 50) = 165 / 150 = 1.1
Here A1 > A0, but P0 > P1. Even tho' the future is better with a child than without, the total history of the world is better without the child. (F0 is "bigger" than F1, so its average, tho' smaller than F1's, does more to move the overall average.)
The math is the same if, instead of adding past to future, we add a distant (non-interacting) civilization. (T and N would then be the total utility and number of people in that distant civilization.)
I think (and I think you'll agree) that the distant, non-interacting civilization should have no bearing on our couple's choice.
I think (but I don't know whether you agree) that the past should have no bearing on our couple's choice.
In any case, like the three ways of calculating the average, these are differing options for the AU couple, and for AU reasoners in general.
Thank you for that thorough response! :)
I think your way of calculating was actually what I had in mind in the original article; I was simply very imprecise with my wording (which is on me, of course). I do actually mention offhandedly that the child will make others happy in the example, but I did very little to address that in more detail (again, my bad!). I definitely agree that AU should consider effects on other people too in procreative decisions (as should any theory); I was simply intending to focus on the person being created specifically, as that makes the reasoning easier and the implausibility more apparent--but I think the conclusions generalize to cases where you consider the effects on other people too.
For a simplified example, suppose that there exist 4 people with utility 10 each (very good lives), and they are considering whether to bring another person into existence. They have good reason to believe that this person would have a welfare level of 5 throughout their life (still pretty good), and would make the other four better off by 1 util each. We then get:
A0 = 40/4 = 10
A1 = (44 + 5)/5 = 9.8
This means that AU tells us not to bring about a happy person who positively affects everyone around them.
We can then imagine that these four people exist in a larger world with 96 people with equally good lives on average (10 utils). Assume that the created person would still only positively affect the four people with 1 util each. We then get:
A0 = 1000/100 = 10
A1 = 1009/101 ≈ 9.99
Again, we should not bring about a happy person who would only positively affect the world. Now suppose that the 96 other people are actually not quite as happy, with average utility of only 5 each. We then get:
A0 = 520/100 = 5.2
A1 = 529/101 ≈ 5.24
We now get the result that we *should* create the person, even though nothing about the person, nor their effect on others, changed. The only thing that changed was the average welfare of people who are in no way affected by the action (we may suppose that they live on the other side of the globe). This seems pretty absurd to me!
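Here is a small sketch reproducing those three cases (the helper and its parameter names are mine):

```python
def averages(bystander_total, bystander_count, child_welfare=5, boost=4):
    """Average utility without (A0) and with (A1) the new person.

    The four nearby people start at 10 utils each and gain `boost` utils in total
    (1 util each) if the person is created; the bystanders are unaffected.
    """
    T0 = 4 * 10 + bystander_total
    N0 = 4 + bystander_count
    A0 = T0 / N0
    A1 = (T0 + boost + child_welfare) / (N0 + 1)
    return A0, A1

print(averages(0, 0))         # (10.0, 9.8)    -> don't create
print(averages(96 * 10, 96))  # (10.0, ~9.99)  -> don't create
print(averages(96 * 5, 96))   # (5.2, ~5.24)   -> create
```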
If you think that it's too implausible that the person would only ever affect 4 people, we can suppose that they will affect all people, making person 1 better off by 1 util, person 2 better off by 1/2 utils, person 3 by 1/4 utils, and more generally person n by 1/2^(n-1) utils. Thus the total positive impact would be close to 2 utils. This, by the way, seems (very, very roughly) like a pretty reasonable model for how we actually affect the whole world--we affect everyone, but the further we are from someone, the closer our effect gets to 0.
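And a quick check, on my own reading of the numbers (the child still has welfare 5, and the ~2 utils of benefit are spread geometrically over the 100 existing people), that the flip above survives this spread-out model:

```python
# Benefit to person n decays geometrically with "distance": 1, 1/2, 1/4, ...
spread = sum(1 / 2 ** (n - 1) for n in range(1, 101))   # ~2 utils in total
child = 5

for bystander_avg in (10, 5):        # average welfare of the 96 faraway people
    T0 = 4 * 10 + 96 * bystander_avg
    A0 = T0 / 100
    A1 = (T0 + spread + child) / 101
    print(bystander_avg, round(A0, 3), round(A1, 3), A1 > A0)
# 10 -> A0 = 10.0, A1 ~ 9.970, don't create
#  5 -> A0 = 5.2,  A1 ~ 5.218, create
```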
As an aside, I don't agree with you that we can't focus on the individual in isolation and still get something meaningful out of it; it simply looks like AU gets things wrong in that it can be bad to create a person with a life very much worth living, or good to create a person with a decidedly terrible life, *all else being equal*. If we need an actual case, we can for example consider whether it would be good to create an infertile hermit with a life very much worth living--though below average for the world as a whole. TU answers yes (which seems correct) and AU answers no (which seems wrong). Likewise with a miserable hermit in an even more miserable world--there AU says we should create them, though plausibly we should not. But this last part is not so important, and is really a minor methodological dispute, I think.
----
As for whether to consider past lives, I agree that these shouldn't count--I simply started with the Egyptology example because it is the most prevalent in the literature. I likewise agree that the welfare of distant civilizations shouldn't affect our procreative choices. But it seems much harder to avoid the latter on AU. After all, the alien civilization exists and will exist in the future. We may say that we should only count beings that we will interact with, but this has the very strange implication that we sometimes shouldn't interact with aliens, even if doing so would be better for everyone.
Suppose that the total future welfare of the alien civilization (TA) is 50 and its population (NA) is 50. On Earth, the total welfare (TE) is 100, and the population (NE) is also 50. Suppose that if we make contact with the aliens, both civilizations' total welfare increases by 10. We then get:
AN = 100/50 = 2
AC = (110 + 60)/(50 + 50) = 1.7
Where AN is the average given no contact, and AC is the average given contact. This would mean that we have strong reason not to interact with the aliens, even though doing so would be better for both civilizations. This seems very implausible to me, but the alternative of distant, undiscovered civilizations affecting the goodness of our procreative choices seems equally absurd. There may of course be a third option that avoids all these issues, but I have a very hard time seeing it (though I am of course open to suggestions if you have any).
Furthermore, it seems hard to define "interact" in a non-arbitrary way that wouldn't also count photons from Earth reaching the alien planet as interacting, which they presumably would have been doing for many years.
This can of course also be relevant to procreative decisions--we can just translate your example (as you also suggest), with the averages worked out in the sketch below:
- TA = 50
- NA = 100
- TE0 = 200
- NE0 = 100
- TE1 = 115
- NE1 = 50
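Working the translation through (a sketch; the labels follow the list above):

```python
TA, NA = 50, 100      # the distant alien civilization, unaffected by the decision
TE0, NE0 = 200, 100   # Earth's future without the child
TE1, NE1 = 115, 50    # Earth's future with the child

print(TE0 / NE0, TE1 / NE1)          # 2.0 vs 2.3  -> counting Earth alone: have the child
print((TA + TE0) / (NA + NE0),       # 1.25
      (TA + TE1) / (NA + NE1))       # 1.1         -> counting the aliens too: don't
```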
So in general it just looks like any version of AU will either have the pretty absurd implication that our procreative decisions should depend on the welfare of outside people who are almost or completely unaffected by the decision, or have to accept that we shouldn't contact aliens even when doing so results in a Pareto improvement.
> The only thing that changed was the average welfare of people who are in no way affected by the action (we may suppose that they live on the other side of the globe). This seems pretty absurd to me! <
I agree. I wasn't so much defending AU as encouraging better criticism -- which you have now provided. Hooray!
Having said that, tho', there is an error in your alien contact story. You say that AN and AC are the No-contact and Contact average utilities, but you use different rules for what utility to count. For AN you use only the Earth numbers, while for AC you use both Earth and Alien.
The AU proponent would either be concerned about the aliens' utility, or not concerned.
- If they are concerned, then AN should include the aliens' numbers (TA and NA) along with the Earth numbers (TE and NE), giving AN = 1.5 (vs. your 2.0).
- If they are not concerned, then AC should include only the Earth numbers, giving AC = 2.2 (vs. your 1.7).
Either way, contact is preferred (1.5 vs 1.7, or 2.0 vs 2.2) -- as should be expected, given that contact adds 10 utils at each location.
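A small sketch of that corrected comparison (the variable names are mine):

```python
TE, NE = 100, 50      # Earth: total future welfare and population
TA, NA = 50, 50       # aliens: total future welfare and population
GAIN = 10             # what each civilization gains from contact

# Consistent rule 1: count both civilizations in both scenarios.
AN_both = (TE + TA) / (NE + NA)                  # 1.5
AC_both = (TE + GAIN + TA + GAIN) / (NE + NA)    # 1.7

# Consistent rule 2: count only Earth in both scenarios.
AN_earth = TE / NE                               # 2.0
AC_earth = (TE + GAIN) / NE                      # 2.2

print(AC_both > AN_both, AC_earth > AN_earth)    # True True: contact wins either way
```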
So that particular criticism fails, but given all the other criticism you've provided I don't think it's a problem for your anti-AU position.
What I don't get about these utilitarian calculations is how they treat (or don't treat) the effects of acclimatization. For example, in the Twin Hells experiment, if the supposedly "noxious" nonstop stimulus were all you ever knew, then in what sense could it be noxious at all? It seems it would just be "information". If so, then it doesn't seem obvious that momentarily discontinuing the stimulus--if it could even be considered a stimulus, given its nonstop/constant presence--would be seen as an improvement; it seems more like it would reduce to being roughly equivalent to blinking.
In horror movies, the first couple of "jump scares" can be fun, but if jump scares are all the movie has to offer, then it becomes something of a chore to remain engaged.
Well, acclimatization is taken into account in that we don't care about the stimulus itself, but how bad it is for you. So if you are only half as bothered the second time someone stabs you, then that second stab is only half as bad on utilitarian grounds. So presumably for the hells to continue being as bad, the torture would have to continually increase in intensity, but we can just stipulate that this is the case. This seems like the best account to me, at least.
A few comments made here: https://justiceandhedonism.substack.com/p/average-utilitarianism-a-restatement