16 Comments

I wasn't intending to give a general theory about intertheoretic comparisons of rightness, in part because I don't think there is a general theory, and I share many of your concerns. The principle is just that if there's a sizeable chance that something is super duper wrong--about as bad as being the most prolific serial killer in history--you shouldn't do it absent very strong reasons.

Even on your model, the argument goes through. A rights-believing deontologist will have most of their theory's points allocated to not being as bad as a mass serial killer, and thus to not eating animals.

author

Ah okay, I can sympathize somewhat with the idea of giving a model-neutral argument, though that makes the inference a lot weaker. I also think that even if you don't have an explicit model, many of the sorts of considerations that count in favor of the moral risk argument also make the objections I listed salient. I think this is because, even if we're not explicitly using any specific model, we are subconsciously using something like an expected-outcome calculation in our thinking, which of course leads to all these problems.

I agree that the argument goes through for deontologists, since we are here talking about descriptive uncertainty, i.e., whether animals have rights. But as I read you, you were also intending the argument to work for people like yourself who are quite confident in some sort of consequentialism/utilitarianism. But the argument doesn't go through for these sorts of people--or at least it should have very little persuasive power: If your credences are something like 80/20 on utilitarianism/deontology, and VT for buying meat is 0.6 on utilitarianism and 0.1 on deontology, you still get a rightness of 0.5 for buying meat. So even if deontology *strongly* prefers that you don't buy meat (say, you are almost certain that animals have rights given deontology), utilitarianism still accounts for most of the total, meaning these considerations of moral risk are only relevant if you are already just about on the fence about eating meat.
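To make the arithmetic explicit, here is a minimal sketch; the credences and VT values are just the illustrative ones above:

# Illustrative numbers only: 80/20 credences, VT(buy meat) = 0.6 / 0.1
credences = {"utilitarianism": 0.8, "deontology": 0.2}
value_of_buying_meat = {"utilitarianism": 0.6, "deontology": 0.1}

# R(a) = sum over theories T of P(T) * VT(a)
R_buy = sum(credences[t] * value_of_buying_meat[t] for t in credences)
print(round(R_buy, 2))  # 0.5 -- and 0.48 of that comes from the utilitarian term alone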

Of course, I think utilitarianism strongly prefers not eating meat (at least in most circumstances), but that's beside the point.

I am working under the assumption that I have grossly misunderstood your model, but here is how I see the numbers panning out for a hypothetical T2-believing utilitarian non-vegetarian Alan confronted with T1:

Alan: "Well, if T1 says a1 is horrible and equivalent to killing 1,000 people, and a2 is the relatively mundane act of abstaining from meat, then I might as well set the values

VT1 of a1 = 0 and VT1 of a2 = 1,

because there is a Huge difference between horrible and mundane, and these values represent the maximal allowable difference on a scale ranging from 0 to 1.

Also, since a1 and a2 are both mundane under T2, it follows that I might as well set the values

VT2 of a1 = .5 and VT2 of a2 = .5.

That is, given that "1 = mundane - horrible", I might as well say "mundane = mundane" (in other words, it is difficult to conceive of any minuscule epsilon > 0 that would meaningfully separate the mundane from the mundane, when a value of just 1 already separates the mundane from the horrible).

Now, T2 has been a staple belief of mine, and T1 is somewhat believable but probably bullshit, so I should put

PT1 = .1 and PT2 = .9.

Therefore, the Rightness calculation is as follows:

R of a1 = (.1 x 0) + (.9 x .5) = .45

R of a2 = (.1 x 1) + (.9 x .5) = .55.

Now, even though a1 meets the criteria for conscientiousness, as a1 is permissible under the more probable theory T2, I am nonetheless taken aback by the following:

(R of a2) - (R of a1) = .1,

which is 10% of the difference between the mundane act of abstaining from meat and the truly horrific act of killing 1,000 people. How shall I translate this calculation? The salient interpretation seems to be something like this:

"Choosing a2 over a1 is equivalent to saving 100 lives."

Because I am a mathematically-minded utilitarian, I attach a ton of credence to this fancy R scale, and a T3 emerges, such that:

1. T3 = "Actions should be determined by the Rightness metric"

2. a1 is not permissible under T3

3. T3 is even more probable than T2,

and I am left to conclude that a1 is no longer conscientious."

So, unless the model has mechanisms that prevent the emergence of new theories, I see no way around this kind of issue.
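For concreteness, here is Alan's arithmetic in one place, as I understand it (a rough sketch; the lives conversion just uses his stipulation that the full 0-to-1 scale spans the gap between the horrible and the mundane):

# Alan's illustrative numbers from above
P_T1, P_T2 = 0.1, 0.9
V_T1 = {"a1": 0.0, "a2": 1.0}   # T1: eating meat is horrible, abstaining is mundane
V_T2 = {"a1": 0.5, "a2": 0.5}   # T2: both acts are mundane

R = {a: P_T1 * V_T1[a] + P_T2 * V_T2[a] for a in ("a1", "a2")}   # roughly {'a1': 0.45, 'a2': 0.55}
gap = R["a2"] - R["a1"]                                          # roughly 0.1

# One unit of the scale was stipulated to equal the horrible/mundane gap (1,000 lives),
# so a 0.1 difference reads as about 100 lives.
print(round(gap * 1000))  # 100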

author

I guess my response to this sort of worry would be that normative-ethical theories cannot reference the rightness metric, since it is "meta-theoretical."

That pragmatic response, if true, certainly slices through such objections. But we must imagine Alan, with his intuitive gears grinding, asking himself about the hypothetical implications of such a response being refuted every time he sits down to a steak dinner. Although this is not a threat to his conscientiousness as formally defined, it does threaten his conscientiousness in a more "colloquial" (and shall we say...pragmatic?) sense.

Even though we all certainly agree that Mathematics and Formalism are oppressive forms of White Male Imperialism that must be excluded from any productive discourse, let us attempt to construct a mathematical formalization of a Bulldog-type objection that constitutes a robust refutation of your claim (loosely paraphrased, "Bulldog's Moral Risk Argument need not be taken seriously by utilitarians") using the very model you introduced to demonstrate the claim:

We introduce the following variables:

T1 = "Many many animals eaten over the course of a lifetime is equivalent to many humans eaten"

T2 = "Many many animals eaten over the course of a lifetime is equivalent to zero humans eaten"

a1 = "eat meat"

a2 = "do not eat meat",

and we suppose the following:

1. PT1 = very low

2. PT2 = very high

3. VT1 of a1 = very very bad.

4. VT2 of a1 = VT1 of a2 = VT2 of a2 = permissible

Yet, mere permissibility is inconsequential to the model's definition of Rightness (that is, R of a1 and R of a2), given the weighting mechanism of R and the significance of "very very" compared to "very".

Thus:

5. The resulting ratio R of a1 to R of a2 implies "a2 is very very favorable to a1"

However,

6. Since PT2 is very very much higher than PT1 and a1 is permissible under T2, it follows from the definition introduced in the article that a1 can be done conscientiously.

Unfortunately, however, the very nature of the introduced model gives rise to a new T3, which we define as follows:

7. T3 = "Actions should be assigned values based on R"

As a result:

8. VT3 of a1 = very very bad

Moreover, given the weighting mechanisms of PTn and the exceptionally high credence utilitarians give to bullshit mathematical models, we have the following:

9. PT3 = exceptionally high > PT2 > PT1

Also, from Points 5, 7, and 8, we have:

10. T3 does not permit a1

11. Therefore, a1 can no longer be done conscientiously under the model

12. Therefore, Bulldog's Moral Risk Argument does indeed need to be taken seriously by utilitarians
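To see the weighting structure that Points 1-5 appeal to, here is one way to cash out the qualitative labels with entirely made-up numbers (a sketch of my own, not part of the article's model):

# All numbers invented; they just stand in for "very low", "very high", etc.
P_T1, P_T2 = 0.01, 0.99
V_T1 = {"eat meat": 0.0, "abstain": 1.0}    # T1: a lifetime of meat ~ many humans eaten
V_T2 = {"eat meat": 0.5, "abstain": 0.5}    # T2: both options are permissible/mundane

R = {a: P_T1 * V_T1[a] + P_T2 * V_T2[a] for a in V_T1}
print(R)                               # roughly {'eat meat': 0.495, 'abstain': 0.505}
print(R["abstain"] - R["eat meat"])    # ~0.01 -- the whole gap is supplied by the improbable T1

If the scale's unit is pegged to "many humans eaten," then even that small gap reads as a non-trivial number of human-equivalents, which, if I read Points 7-10 correctly, is the quantity T3 seizes on.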

Sep 19 · Liked by Silas Abrahamsen

Hey, I'm curious, are there any philosophers who think that objective list utilitarianism and prima facie rights can go together? Or are those two ideas totally incompatible?

author

I may not be the best authority here, but depending on how you work it out, it certainly seems compatible.

For example, you might have an objective list consequentialism, where the badness of someone being killed is greater than the negative welfare (or other negative values) resulting from death. Likewise, the badness of your car being stolen is greater than the amount of negative welfare (or other negative values) it produces.
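A minimal sketch of the sort of structure I mean (the weights and names are just arbitrary placeholders):

# Toy objective-list value function: a killing or a theft is worse than the
# welfare loss it causes, by a fixed extra amount (placeholder weights).
KILLING_EXTRA, THEFT_EXTRA = 100, 5

def outcome_value(net_welfare, killings=0, thefts=0):
    return net_welfare - KILLING_EXTRA * killings - THEFT_EXTRA * thefts

# A killing that costs 30 units of welfare is worse than a non-killing
# outcome that costs the same 30 units of welfare:
print(outcome_value(-30, killings=1))  # -130
print(outcome_value(-30))              # -30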

The only point where it becomes a problem is when you start introducing things like a DDA (doctrine of doing and allowing) or agent-relative duties, e.g., if you have a stronger reason against actively killing someone than for stopping someone else from killing someone.

But off the top of my head I am not aware of anyone holding a view like this, though someone probably does.

Sep 20 · Liked by Silas Abrahamsen

I might be one of those.

Just round all 99% certainties about philosophical frameworks up to 100% to avoid "stupid stuff getting utilitarianly multiplied out of the stupid zone."

This one little move is probably a smarter play than any quantized calculation of rights & pleasure & whatnot that anyone has ever done. And you're allowed to make it; commitment beyond certainty is a crucial part of coping with the human condition.

author

For one, this doesn't fix the problems I mention for anyone with less than 0.99 credence in any moral theory (which I think is almost everyone).

Secondly, it looks horribly arbitrary. Why 99% rather than 98% or 99.99% or 127/128? The only reason it doesn't seem arbitrary is due to the counting system we have.

Thirdly, this just looks completely irrational. It doesn't seem like someone who has a 99% credence in utilitarianism should be willing to bet an eternity in hell on a 1/1000 chance of getting $1, but under your suggestion they should be.
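To make that third point concrete, here is a rough sketch with invented magnitudes (the structure, not the exact numbers, is what matters; I'm assuming the remaining 1% of credence goes to a view on which losing the bet really does mean hell):

P_OTHER_VIEW = 0.01               # honest credence in the hell-positing view
HELL = -1e12                      # stand-in for an unboundedly bad outcome
WIN_CHANCE, PRIZE = 1 / 1000, 1   # 1/1000 chance of $1

honest_ev  = (1 - P_OTHER_VIEW) * WIN_CHANCE * PRIZE + P_OTHER_VIEW * HELL
rounded_ev = 1.0 * WIN_CHANCE * PRIZE + 0.0 * HELL   # credence rounded up to 1

print(honest_ev)   # about -10,000,000,000: clearly don't take the bet
print(rounded_ev)  # 0.001: the round-up heuristic treats the bet as free money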

Sep 21 · Liked by Silas Abrahamsen

"99" isn't to be taken literally; it's just a stand-in for being almost totally confident. The silliness of choosing 99 as the threshold is comparable to the silliness of thinking we can quantify a confidence level in a philosophical theory using a scalar. (Both have silliness quotient of exactly 341.3... give or take.)

Wendy tells Peter that if he does not attend Mass each Sunday and holy day of obligation (without a dispensation), he will be gravely sinning and shall go to hell for all eternity. The round-up heuristic allows Peter's near-total confidence that her assertion is wrong to shut it down before it multiplies by its own absurdity. Lending tiny credences to wild suggestions with arbitrarily high awful or rewarding consequences (as high as *imagined*) is something you *can* opt to do, but it certainly isn't a foregone conclusion that it's *rational* to do so. I suspect it is goofy to do so, hence our sanity heuristic.

author

I agree that it is very optimistic to think we can represent our actual credences with exact scalars--even if God could in principle know my exact credence in some proposition, I simply don't have the degree of introspective precision to know what it is. But I still think it is the best we can do to model this sort of stuff (though I agree I was maybe a bit too pedantic about the exact cutoff point).

I am not too sure about the sort of Pascal-style example you raise. If I were given the following bet:

If the external world exists, you get a lollipop, and if it doesn't, you get sent to hell right now for all eternity.

I wouldn't take it. Furthermore, I don't think I should. Likewise with any belief I am extremely confident in.

I think the reason we are less ready to take Pascal's wager in general is that the result is so far in the future, but if we make the wager vivid and close enough, I think it starts to have more intuitive pull.

Still, I am not completely certain what to say about these sorts of arbitrarily large risks, but it doesn't seem completely crazy to me to just accept that we should take them seriously. That at least seems more plausible to me than that we can just ignore them completely.

Pascal's Wager should be seen as a parable showing why it is silly to give any credence to imagined scenarios whose implications are *designed to insanely multiply the rationality of some imagined choice*: because doing so rationalizes innumerable mutually exclusive courses of action, like a broken, gushing water main.

Some people see this and go, "Get the buckets!" (analogous to the endless "symmetry breaker" response game). Instead of doing that, a game which shall not and cannot end, we can just use an epistemic heuristic that doesn't give the microphone to wildly contrived proposals with absurd epistemic conditions.

The first approach is loudness (whereby we get nonsense about utility monsters and stance-independent value points and proving creators with beth numbers).

The second is quietism. We can *opt* (all heuristics are bootstrapped) to call the absurd "absurd" right when it sprouts, and then we don't have to face a yard overgrown with weedy, endlessly diverting absurda.

I've been thinking about this recently as it applies to utilitarians who are uncertain about which aggregation principle is correct. For example, let's say you have credence p in total utilitarianism and 1-p in average utilitarianism. The naive way to calculate the expected utility of an action is p*(total utility)+(1-p)*(average utility), but total utility will pretty much always dominate that calculation, even if you're very confident in average utilitarianism, since there are just so many people. That seems wrong--someone who believes average utilitarianism with high confidence should be making most of their decisions based on average utilitarianism.
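For example, borrowing the 10-billion-person world from the next paragraph and only 10% credence in total utilitarianism (a sketch; the magnitudes are just for illustration):

# Naive mixture: p * (total utility) + (1 - p) * (average utility)
p_total = 0.1
population, utils_per_person = 10_000_000_000, 1000

total_utility = population * utils_per_person   # about 10^13
average_utility = utils_per_person              # about 10^3

naive_ev = p_total * total_utility + (1 - p_total) * average_utility
print(naive_ev)   # ~1e12 -- the total term swamps the average term even at 10% credence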

Your theory can accommodate this intuition, but I'm worried it can't accommodate a different intuition. Let's say we have a world of 10 billion people, each with 1000 utils. We are considering creating another person with U utils, where 0<U<1000. Total utilitarianism says you should do this, and average utilitarianism says you shouldn't. Since the alternative produces zero aggregate utility in either case, both theories assign a value of 1 to the better option and 0 to the inferior option, regardless of the value of U. But surely it should make some difference to your action whether U is 1 or 999. In the former case, creating the new person is barely good on total utilitarianism, but (relative to other values of U) very bad on average utilitarianism. In the latter case, it's very good on total utilitarianism and barely bad on average utilitarianism. If I had, say, 51% confidence in average utilitarianism, it seems like I should still want to create the new person if U is 999, but not if U is 1.
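A quick sketch of that worry, under what I take to be the 0-to-1 normalization with only the two options on the table:

# With only two options, each theory gives 1 to its preferred option and 0 to the
# other, so the value of U drops out entirely.
p_avg = 0.51                          # 51% credence in average utilitarianism
p_total = 1 - p_avg

def rightness_of_creating(U):         # 0 < U < 1000
    V_total = 1                       # total utilitarianism: creating adds U > 0 utils
    V_avg = 0                         # average utilitarianism: creating lowers the average
    return p_total * V_total + p_avg * V_avg

print(rightness_of_creating(1), rightness_of_creating(999))   # 0.49 0.49 -- no difference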

The first formalization seems to define conscientiousness as acting according to the principles we believe and not contrary to other principles we find believable. If a utilitarian formulated a default pro-carnivore theory T based on the abstract nonsense typical of utilitarians, then it seems an alternative T' that demonstrates "pro-carnivore diets have an expected value of serial murder" could indeed threaten his conscientiousness, because expected values reign so supremely in utilitarian epistemology that it is difficult to imagine how he might find a pro-carnivore theory T" more believable than T' without utterly refuting T' altogether.
