To continue the proud tradition here at Wonder and Aporia of writing introductions that discourage readers from reading my posts, I’ll start off by saying that the topic here isn’t as sexy as it might sound from the title. I sadly won’t be suggesting that you go around advocating for crazy causes with protest signs, all Westboro Baptist Church-like. Instead I’ll be defending the position called “fanaticism” in decision theory—though real ones will find this topic equally riveting. This position is basically:
Fanaticism: For any finite amount of value x and any non-zero probability p, there is a finite amount of value x+, such that an action that has probability p of bringing about x+ value (and is otherwise neutral) is preferable to an action that brings about x value with certainty (or high probability, for that matter).
Or in plain English: For any action you could take that you’re sure will be very valuable, you could in principle be presented with an option that has a very low probability of producing something even more valuable, which you should prefer. For example, if you’re sure that you could save 100 lives, fanaticism holds that there is some larger number of lives such that you should prefer a 0.000000000000001% chance of saving that many lives.
Fanaticism can be applied both to morality and to prudential rationality, and I think it’s right in both cases. It also doesn’t entail expected value theory, though expected value theory plus unbounded value (i.e. the view that there is no maximum or minimum to the amount of (dis)value that could be instantiated) does entail fanaticism. I do tend to think that this sort of expected value theory is correct, and I’ll be defending fanaticism in the form of this view in much of this post, though you don’t need to subscribe to expected value theory to be a fellow fanatic.
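To spell out the entailment (just plugging the definitions above into the expected value calculation, nothing new):

$$\text{given any } x \text{ and } p > 0, \text{ unboundedness lets us pick } x^{+} > \tfrac{x}{p}, \text{ so that } p \cdot x^{+} > x,$$

i.e. the gamble on x+ has strictly higher expected value than the certainty of x, which is exactly the preference fanaticism demands.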
Against Alternatives
I generally think that unbounded expected value theory, and thus fanaticism, is pretty plausible on the face of it (though there are of course some obviously implausible implications). Hence I think a plausible defense of the theory simply consists in answering objections—and in answering the objections, you hopefully come to see that fanaticism actually gets the right results, and so should be accepted. But before doing that, I think it’s worth looking at a couple of obvious alternatives, and why they fail.
Bounded Value
As mentioned, expected value theory plus unbounded value entails fanaticism. In fact, I think fanaticism is pretty much indefensible given bounded value. Remember, fanaticism requires that for certainty of any amount of value, there is always a preferable option that gives you only some probability of some greater amount of value. But imagine that the certain amount of value is the maximum amount of value (or, if there is only an asymptotic bound, some value arbitrarily close to the bound). In that case it’s absurd to think that there could be any preferable option with a 0.00000001 probability of any amount of value within the bound.
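To see why, suppose for concreteness that value is capped at some bound B (B is just my label for the supposed maximum):

$$p \cdot x^{+} \;\le\; p \cdot B \;<\; x \qquad \text{whenever } p < \tfrac{x}{B},$$

so once the sure thing x is close enough to the bound, no sufficiently improbable gamble can beat it, whatever value within the bound it dangles.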
Luckily for aspiring fanatics, it’s also super implausible that value is bounded. It might have some degree of plausibility for individuals, since additional instrumentally valuable things generally contribute diminishing marginal intrinsic value—it might just be that you can only experience so much value within a given time-frame, and your life only has a finite length. But for things like moral value, this is a lot less plausible, unless you are an average utilitarian, which you definitely shouldn’t be. After all, adding a new happy person is just as good the first time you do it as the 143782nd time you do it—or at the very least, it doesn’t stop being good to add new happy people at some point, if it was good to begin with, all else being equal.
But even for individual lives, this isn’t very plausible. After all, there’s a possibility that the future is infinite, and there’s a possibility—no matter how small—that you could continue to live forever, perhaps by God placing you in heaven (or hell if you aren’t subscribed).
If this is right, then the bounded value view would have to hold that it would at some point stop being good for you to experience good things.1 But that’s just super implausible! If it’s good for me to enjoy an ice cream when I’m 5 years old, it’s also good for me when I’m 3412 years old (of course ignoring things like whether I’ve become bored with the experience). And remember, all it takes is that it’s epistemically possible that you could live forever. So long as you don’t literally have a credence of 0 in the possibility that you’ll live forever, this is a problem for this view.
So I think there is little to no hope for accepting that value is bounded, both for morality and prudence.
Discounting Small Probabilities
The most obvious way of avoiding fanaticism is simply to count outcomes with some sufficiently small probability as if they have zero probability. Sadly, I also think this is a terrible option. The main problem with the view is that it requires some principled way of individuating possible outcomes—something I previously pointed out in my post on why Bentham’s Bulldog is wrong about moral risk.
Say that the threshold for discounting probabilities is 1/n (and say that 1/n < 0.1). Suppose now that you have a bag with balls numbered 1-10. You are offered the chance to play a game where you draw a random ball from the bag and win an amount of dollars equal to the number on the ball drawn, and this game has a buy-in of $1. It seems pretty obvious that you should play the game, since you will at worst win back what you paid.
But suppose now that more balls are added, such that the bag now contains balls numbered 1-n, and the rules remain the same. You now have a chance of winning even more money, with no drawbacks, so you should obviously play this game too. But not so fast! The chance of drawing ball 1 is 1/n, and likewise for every other ball. This means that if you ignore probabilities of 1/n or lower, you should think that you have a 0% chance of winning any amount of money, meaning you should expect to lose money from playing, and so shouldn’t do it. But at the same time you have a 100% chance of winning at least $1 and a greater than 1/n chance of winning much more than $1, meaning you should play. So you both should and should not play—a straightforward contradiction!
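Here’s a minimal sketch of the contradiction in code, assuming a simple “treat probabilities at or below the threshold as zero” rule, with n = 1000 (the function name and the numbers are just illustrative):

```python
from fractions import Fraction

def discounted_ev(outcomes, threshold):
    """Expected value, except outcomes at or below the threshold probability count as impossible."""
    return sum(p * v for p, v in outcomes if p > threshold)

n = 1000
threshold = Fraction(1, n)  # discount probabilities of 1/n or lower

# Fine-grained description: one outcome per ball, each with probability 1/n.
fine = [(Fraction(1, n), value) for value in range(1, n + 1)]

# Coarse description: the single outcome "win at least $1", which has probability 1.
coarse = [(Fraction(1), 1)]

print(discounted_ev(fine, threshold))    # 0 -> "you can't win anything, so don't pay the $1 buy-in"
print(discounted_ev(coarse, threshold))  # 1 -> "you're guaranteed at least the buy-in back, so play"
```

Same bag, same discounting rule, opposite verdicts, depending only on how finely the outcomes are sliced up.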
What is missing here is a principled way of individuating outcomes, since the contradiction arises from considering options at different levels of detail. The most obvious way to do this is to individuate at the most detailed level. The problem with this is that it would get you the result that you should not play the improved lottery from above. But this is just completely absurd! I mean, you just straight up increased your chances of winning a lot of money—how in the world could it be right that you shouldn’t do this when you should play the worse lottery?
On top of this, it would probably mean that you should assign a 0% probability to anything ever happening in the real world, since things can happen in many different ways. For example, if you flip a coin, there’ll be a 50/50 chance of it landing heads vs. tails. But this is not quite true, since it could actually land heads and land 5 cm from the edge of the table, land heads and land 5.0001 cm from the edge of the table, etc. If you individuate the options at the maximum level of detail, there’ll be a very low chance of any particular thing happening, and so you should round that down to a 0% chance of anything happening.
But if you instead individuate on a different level, you’ll run into problems of arbitrariness. Why should I consider outcomes in groups of 3 rather than 4? Or why should I only consider it an outcome that the coin lands heads, and not consider it an outcome that it lands heads and lands on the table? Any way of individuating outcomes will be extremely arbitrary, and you might even have to come up with a new rule for every new decision you consider. Apart from this, you’ll just be missing out on a lot of detail; it won’t even be an option for you that you win $9 in the lottery from above—only that you win more than $5 and less than $10, or whatever. But surely you should consider it at least a possibility that some particular thing happens, even if it’s improbable that it would. I just think the option of discounting low probabilities is hopelessly terrible and should be rejected out of hand.
A General Argument
I actually think you can show that if we accept unbounded value, any non-fanatical view will have to bite some pretty bad bullets. Remember, we are considering two options: certainty of value x, or probability p of some greater value x+. Fanaticism would choose the latter, and non-fanaticism the former. I will additionally stipulate that p*x+ > x, since not even expected value theorists would be fanatics if this weren’t the case—I don’t intend to defend a fanaticism where choosing the risky option has a lower expected outcome.
Imagine now that you have two roulette wheels, and you can bet once on one of them. The first simply has a single possible outcome: you get value x. The second has 1/p slots,2 and if you bet on the right slot, you get x+ value. So you either have a 100% chance of getting x value, or a p chance of getting x+ value. By definition, a non-fanatic should prefer betting on the first wheel, and a fanatic should prefer the second.
Now instead imagine that you can place 1/p bets before the wheels are spun. Now the fanatic has a clever strategy: they place their 1st bet on slot 1 on the risky wheel, their 2nd bet on slot 2, …, and their (1/p)th bet on slot 1/p. This gives them a 100% chance of winning x+ value. Someone who placed all their bets on the safe wheel will instead have a 100% chance of winning x/p value. But since we stipulated that p*x+ > x, it must also be that x+ > x/p. Thus it is strictly better to bet according to the fanatic strategy than to place all bets on the safe wheel.
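To put some made-up numbers on it: say p = 1/100, x = 1 unit of value, and x+ = 200 units, so that p·x+ = 2 > 1 = x. Over the 100 bets:

$$\text{all bets on the safe wheel: } \tfrac{1}{p} \cdot x = 100 \cdot 1 = 100, \qquad \text{fanatic strategy: } x^{+} = 200,$$

and in general x+ > x/p follows immediately from the stipulation that p·x+ > x.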
But no single bet had different odds than it did when you could only bet once, yet betting non-fanatically each time is strictly worse than betting fanatically each time. Since I assume that non-fanatics won’t accept that you should choose the strictly worse set of bets, they must accept that whether or not you should bet on the risky wheel depends on whether you have already bet on it, or whether you will get the option to do so again.
But that looks very implausible. Whether you have bet before or will bet again changes nothing about your current bet. The probability that the ball will land in slot 7 isn’t at all affected by whether you have already bet on 6. But then whether or not you have already bet on 6 shouldn’t make any difference to whether you bet on 7. At each step in the process, the non-fanatic can raise any anti-fanatical argument they want against betting on the risky option. I think the arguments will be as compelling here as they were in the single-bet case, yet here it’s clear that accepting them is strictly worse.
I suspect that implicitly thinking in terms of money may make non-fanatical answers look more reasonable here, due to diminishing marginal value: of course you shouldn’t risk it in the first case, because there’s a big risk of winning nothing, and that outweighs the small chance of winning big. But when you can win big with certainty, you should obviously do so. While this makes sense for money, we are here talking in terms of value, and value doesn’t have diminishing marginal value.
A better analogy, then, is human lives, since the value of a life doesn’t diminish based on how many already exist. Imagine then that there are a gazillion people about to be crushed by a very scary human-crushing-device *gulp*. Each time you bet, you can either have certainty of saving x lives, or have a p chance of saving x+ lives, through betting on the respective wheels. Additionally, each time you bet, it’s new people you’re potentially saving, so no one gets two chances at being saved.
The non-fanatic will choose the certainty of saving x people if they can only bet once, though if they can bet 1/p times, even they should agree that covering every slot on the risky wheel, and thereby saving x+ people for certain, is the better strategy. For this change of strategy to make sense, it must make some difference whether you’re betting on the risky option once or doing it several times. But when deciding between the risky option and the safe option in any particular case, it doesn’t make any difference whether that choice is part of a sequence or a single choice; no one who’s affected by that choice is affected by previous or future choices, and these are by definition the only people being affected by that choice. Thus it shouldn’t make any difference to your choice whether you’re betting once or doing it several times. And so since it’s preferable to choose the risky option many times, and since it never makes any difference for any individual choice whether or not it’s part of a sequence of choices, it’s just straightforward that it’s also preferable to choose the risky option when you can only bet once.
This is just a bare-bones case against non-fanaticism, though I hope it will show that non-fanaticism is pretty unappealing, so that while fanaticism has some implausible implications, non-fanaticism is a pretty big bullet to bite.3
Objections to Fanaticism
But it’s not all sunshine and roses for the fanatic, and there are many convincing counterarguments. I think these objections are hard, and I don’t have easy solutions. But common to most (maybe all) good objections I’ve encountered to fanaticism or expected value theory is that they involve infinity, which ruins everything anyways. My approach will then simply be to show that non-fanatical answers will also have to bite big bullets—we’re all screwed.
Pascal’s Mugging
One of the most initially compelling counterexamples to fanaticism—though also one of the least persuasive IMO—is Pascal’s mugging. The story basically goes like this:
Pascal is taking his daily 17th-century bus commute when a weird-looking stranger wearing a strange wizard-style hat approaches him.
Stranger: Greetings, dear friend! Today you have been blessed by Lady Fortune. Before you is standing the great Econimus Maximus, wizard of commerce! Fate has been smiling on you, and all you have to do to catch the moment is hand over your wallet this instant. If you do this I swear to God, Allah, and my mother’s grave, that I will give you the double, nay the triple, back tomorrow. The chance to invest is now!
Pascal: I’m not stupid. I know you’re just gonna take the money and never come back with it.
S: I understand your skepticism, friend, and I see that it seems unlikely that I would return the money, given all you know. So let me raise the stakes for you: for every franc you give me, I’ll give you a thousand in return tomorrow.
P: I’ve already caught onto you. No matter how much you offer me, I know you’re lying.
S: I know who you are, and I know you’re a fanatic. That means that if I offer you enough, at some point you will have to accept my offer, no matter how small the probability is—anything else would be irrational. Name your price so we can get this over with.
Obviously no one in their right mind should accept the offer, no matter how high it is, since the person is obviously lying. Yet he makes a quite compelling argument on the fanatic’s behalf: given the definition of fanaticism, you should prefer to take the bargain if the potential payout is high enough, no matter how unlikely it is that the person is telling the truth.
The problem with this argument is that it assumes that the probability stays the same each time the stranger raises his offer. But that is just not the case. Say you judge the probability that the person is telling the truth, when they offer to double what you give them, to be 1/1000. In that case they just need to offer to pay you back 1001 times over for it to be preferable. The thing is, the probability doesn’t stay put. When they increase their offer, the probability that they’re telling the truth drops—it’s much less likely that they’re telling the truth now that they’re suddenly willing to pay you back more than a thousandfold. In general, the more money the person offers you, the lower the probability that they’re telling the truth, such that the probability is always lower than what it would need to be for accepting to be preferable.
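To put the point a bit more precisely (a rough sketch, writing q(R) for your credence that an offer to repay R times over will actually be honored, and w for the wallet’s value): handing over the wallet has higher expected value only if

$$q(R) \cdot R \cdot w \;>\; w, \quad \text{i.e. } q(R) > \tfrac{1}{R},$$

and the reply is that q(R) plausibly falls faster than 1/R, so the product never climbs above the break-even point no matter how extravagant the offer gets.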
How can you know that the probability is always lower? Well, I don’t have statistics on it, but I think it’s pretty obvious that in a world where people accepted Pascal’s muggings, it would be the muggers that would earn money, and the muggees that would lose it.
One interesting variation is if the mugger offers you infinite utility (say an eternity in heaven) if you hand over the wallet. Surely it’s not epistemically impossible that they’re telling the truth, and so the expected value of giving the wallet suddenly becomes positive infinity. To this I would simply say that the probability that you get an eternity in heaven for handing over the wallet in this case likely isn’t greater than if you just gave it to any random person. This will obviously depend on your prior commitments, but most people will probably think that there’s a non-zero chance that there’s a God who rewards good actions in heaven. Now, as I see it, the chance that such a God exists is much greater than the chance that this person is telling you the truth, so it’s not really much of an increase in your chances of going to heaven that they’re making this offer. In fact it might hurt your chances, depending on your prior commitments, since God would perhaps reward selfless actions more than selfish ones, and giving the wallet for a chance of infinite utility might count as a selfish action.
St. Petersburg Paradox
Another common argument against fanaticism is the St. Petersburg Paradox. Or, well, it’s more an argument against expected value theory, and I think there are possible versions of fanaticism that avoid it. Still, it’s certainly worth discussing. I think this is a very formidable objection, and it’s one of the considerations that I think count most strongly against fanaticism.
It basically goes like this: the St. Petersburg game is a game where you flip a coin until you get tails. The stake begins at 2 units of value (say, days in heaven). For each heads you land, the stake doubles, and once you get tails, the stake is paid out to you. So if you land tails on the first flip, you receive 2v (with v implicitly understood as units of value going forward); if you land one heads, you get 4v; two heads, 8v; and so on. Additionally, there’s no limit to the number of heads you can potentially land in a game. As it turns out, the expected value of this game is infinite:
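$$E \;=\; \sum_{k=1}^{\infty} \left(\tfrac{1}{2}\right)^{k} \cdot 2^{k}v \;=\; \sum_{k=1}^{\infty} v \;=\; \infty$$

(The kth term is the probability of getting the first tails on flip k times the payout of 2^k v; every term contributes v, so the sum diverges.)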
This means that you should be willing to pay anything—endure any amount of suffering—to get the chance to play this game once, according to expected value theory. This is a bit of a problem since you obviously shouldn’t do that. You’ll most likely be winning around 2-8v, and so it seems absolutely absurd to risk anything to play this game.
I think this is a very strong argument, and I’m honestly not sure what the best solution is for the fanatic. Still, I don’t think any theory gets out of it unscathed, and so I don’t think it’s too worrying.
Imagine a variation of the game where, instead of having potentially infinite flips, you lose the game and win nothing if you land more than 2 heads in a row. In this case you have a 1/2 chance of winning 2v, 1/4 of winning 4v, 1/8 of winning 8v, and 1/8 of winning nothing. How much should you pay to play this game? Here you should obviously be willing to bet up to 3v. If you bet less than that, you’ll be gaining on average. Now we extend the limit to 3 heads, in which case you should bet up to 4v. I suspect you can all predict where this is going. Each time the limit is increased by one heads, the amount of value you should be willing to bet increases by 1v. If you don’t accept this rule, then we can run arguments like those discussed in the section above against your position; you’ll probably have to find some way of individuating outcomes in a non-arbitrary way that doesn’t lead to absurd results.
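Spelling out the pattern, for the version capped at k heads in a row (so the (k+1)th heads loses everything):

$$E_k \;=\; \sum_{i=1}^{k+1} \left(\tfrac{1}{2}\right)^{i} \cdot 2^{i}v \;+\; \left(\tfrac{1}{2}\right)^{k+1} \cdot 0 \;=\; (k+1)\,v,$$

which gives 3v for the 2-heads cap, 4v for the 3-heads cap, and so on: one extra v of expected value per extra allowed heads.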
But it looks even worse for you if you do accept this extrapolation but don’t accept that you should be willing to bet without limit on the original game. That would of course mean that you should at most bet some finite amount of value, n, to play the infinite St. Petersburg game. Consider now the game where the limit is n heads. According to the rule above, you should be willing to bet n+1 value to play this game. But the infinite St. Petersburg game is simply a straightforward improvement on this game, since it has all the same potential outcomes plus additional positive outcomes. Thus you would need to hold that strictly improving a game could make it worse. This is just completely absurd to me.
A way to weaken the intuition against fanaticism might be to consider, descriptively, the expected money you would win, rather than how much you should be willing to bet. Here the answer is pretty clear. If a bookie were to offer to let you play this game over and over, then as the number of times you play tends towards infinity, the amount of money you’d win per game also tends towards infinity, regardless of how high the buy-in was. But there’s no difference between consecutive rounds, so the expected winnings per round are infinite. Thus you should be willing to bet any amount of money to play this game once, if your goal is simply to maximize money.
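A quick way to see the descriptive point is to simulate it (a toy sketch; the game counts are arbitrary):

```python
import random

def st_petersburg() -> int:
    """One St. Petersburg game: the stake starts at 2 and doubles for every heads before the first tails."""
    stake = 2
    while random.random() < 0.5:  # heads with probability 1/2
        stake *= 2
    return stake

# The average payout per game keeps growing as you play more games,
# which is the descriptive face of the infinite expectation.
for games in (10**2, 10**4, 10**6):
    average = sum(st_petersburg() for _ in range(games)) / games
    print(f"{games} games: average payout ~ {average:.1f}")
```

The average keeps creeping upward roughly like the logarithm of the number of games played, so it never settles down to any finite figure.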
This seems highly unintuitive—how in the world could I expect to win infinitely off of a game where I win $2-8 most of the time, even if the buy-in is $751235985933001753? Nevertheless, that’s just how it is, and the fact that it’s unintuitive speaks to how unreliable our probabilistic intuitions are in certain cases, not to it actually being wrong. If you switch out money for value, then the intuition that it can’t be worth it stays the same. But as we saw with the money, this intuition is unreliable, and so it might just be that we’re wrong in thinking that it’s not worth it, because we’re bad at imagining large numbers or something.
Still, I’m not gonna pretend that this doesn’t keep me awake at night, and if I were offered this game, I probably wouldn’t play it if the buy-in was 1 trillion years in hell. Maybe that’s because I’m irrationally risk-averse, or because I’m secretly not a fanatic. Still, I think other positions have equally big problems, and I think it might just be because infinity screws things up.
Pascal’s Wager
This last one isn’t really originally an argument against fanaticism, but an argument from fanaticism to belief in God. Still, many people find the conclusion very counterintuitive, which tells against fanaticism. The idea is basically that you can choose between believing in God and not believing in God. Now, if you don’t believe and God doesn’t exist, you might get some finite benefit, through not having to spend time worshipping and whatever, but if he does exist, then you run a big risk of ending up in hell, causing you negative infinite value. If you do believe in God, and God doesn’t exist, you perhaps suffer a minor cost, but if God does exist, you likely end up in heaven, netting you infinite positive value. So the expected value of believing in God is infinitely greater than the expected value of not believing, and so you should believe in God.
There are obviously a lot of background assumptions here about beliefs and the nature of God, but let’s just grant those for now. If we do that, then the argument follows pretty straightforwardly from fanaticism: if there is always some finite value that could outweigh a certain gain, for any arbitrarily small probability of attaining it, then obviously infinite positive value could do that as well. So if we want to avoid the argument, we should reject fanaticism.
Yet I think an argument similar to the one from the St. Petersburg paradox can be used here. Say that instead of God giving you infinite days in heaven, he gives you 1000 days for believing. If this is right, then there is presumably some probability of God’s existence, p, at which you think it would be rational to try to believe in him. Now we increase the amount to 2000 days. Here p would presumably be lower. As you can guess, we continue increasing the number of days until p reaches your actual credence in theism. The number of days needed for this will be finite. But since an infinite heaven is better than a finite one, you would have to accept that increasing the value of an option, while holding its probability fixed, can make it less preferable, which is obviously absurd (IMO). Alternatively, you would need to say that p could never reach your actual credence in theism. For one, this would commit you to all the strange betting behavior discussed in the first part. Secondly, even if p has some asymptotic limit, you would need to have an incredibly small credence in theism for that limit to stay above your actual credence.
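In rough symbols (my own toy framing, with c for the cost of believing and v for the value of a day in heaven): believing beats not believing, by expected value, when

$$p \cdot d \cdot v \;>\; c \quad\Longleftrightarrow\quad p \;>\; \tfrac{c}{d\,v},$$

and since the threshold c/(dv) shrinks toward 0 as the number of days d grows, some finite d pushes it below any non-zero credence in theism.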
This argument (as well as the St. Petersburg one) of course looks a bit question-begging, since it assumes that there is a finite value that could make it worth believing in God here, so it obviously won’t be convincing to a non-fanatic. But the way I see the dialectical thrust of this argument is that it shows how avoiding these paradoxes isn’t without cost. That is, when considering these arguments, we might think we are in a sort of neutral position: we are just looking at the arguments, and when we see that fanaticism leads to these strange consequences, we can at least reject that, and then hold out hope for finding a good alternative in the future. But we aren’t in this sort of neutral position—as soon as you reject fanaticism, you commit yourself to accepting what looks like obviously irrational betting behavior in other situations. And when it’s clear that everyone has to bite some very hard bullets, it suddenly looks like fanaticism isn’t at such a big disadvantage. As I already mentioned, I think it’s especially telling that fanaticism only seems to really yield “wrong” results once we consider infinite cases.
As for the wager itself, I think the so-called “many Gods objection” can help us see how we should think about it. There isn’t just one theory that has a chance of being true and that would potentially give you infinite positive utility—there are probably infinitely many. Thus I think the lesson of Pascal’s wager is that given expected value theory, you should always take the action that has the greatest chance of leading to (the greatest) infinite value. Now, in real life I think you’ll usually have no idea what this is, and so you’re probably best off just thinking in terms of finite value. But if you do have reason to think some action maximizes the chance of infinite value, all things considered, you should probably take that.
Conclusion
I think I can rightly end this post off with a quote from Winning Cash-chill,4 that you’ve probably heard 53271 times before: “fanaticism is the worst option in decision theory, except for all the others.” I’m of course not gonna deny that you need to bite some pretty tough bullets to accept fanaticism. Especially in infinite cases, I think you can get some pretty nasty paradoxes. But any theory will have to bite some big bullets at some point, it seems, and so that isn’t really surprising. And when it comes to finite cases, I think fanaticism is far superior to other options, which gives me some hope that it’s the right choice in infinite cases as well—and in any case, I think it’s a substantial enough conclusion that fanaticism is right in finite cases.
But so what if fanaticism is correct? Well, I think it should make you much more open to donating to charities like the Shrimp Welfare Project, even if you’re very confident that shrimp aren’t conscious. Likewise, it should make you more open to charities that work to decrease the risk of catastrophes. But most importantly, you can impress all the [insert preferred gender] with your ironclad arguments for the right decision theory.
1. Or if there’s some asymptotic bound, things would start having arbitrarily little value, which I think is basically as implausible.
2. If 1/p doesn’t come out to a whole number, I trust you have the creativity to imagine a scenario where the probabilities work out the same.
3. A more extensive and rigorous case is laid out by Hayden Wilkinson in this paper.
4. I’m sorry, I couldn’t come up with anything better :(