> Bounded values [...] discounting small probabilities
Actually, there are other ways to deal with it beyond those two. For example, using a probabilistic model: https://bobjacobs.substack.com/p/resolving-moral-uncertainty-with
This theory is about moral uncertainty, but structurally you can just find-and-replace your way into it being about rationality. (One of these days I'm going to actually publish it as a paper instead of leaving it hanging on the internet. I actually now reject my own rejection of my own theory: classic philosopher.)
> How can you know that the probability is always lower? Well, I don’t have statistics on it, but I think it’s pretty obvious that in a world where people accepted Pascal’s muggings, it would be the muggers that would earn money, and the muggees that would lose it.
This is more so descriptive evidence that humans aren't bayesians than a defense of bayesianism. Maybe someone has found a way to do the same move within bayesianism since I last checked, but I couldn't find one a couple of years ago. If there isn't one, the argument still stands, and might even be *strengthened*, depending on whether or not you put any stock in evolutionary arguments.
That was a super interesting read! I'm not sure I quite see how you would translate the approach into a decision theory about descriptive uncertainty. Is the idea that instead of representing theories with points, you represent possible outcomes as points, and weight the "center" point towards an outcome in proportion to its probability? If that is the case then it just seems like it would be equivalent to expected value theory, or expected value theory except that you sometimes randomly take another action (if you allow for random deviation from the center).
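To spell out the equivalence I have in mind (this is just my guess at the construction, assuming the "center" is the probability-weighted centroid of the outcome points): if an action A can lead to outcomes with values x_i, each with probability p_i, then

$$\text{center}(A) \;=\; \sum_i p_i\, x_i \;=\; \mathbb{E}[\text{value}(A)],$$

so picking whichever action has the best center is just maximizing expected value; the only real difference would come from allowing random deviation from that center.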
But even if it ends up being different, it then just seems like my general argument against fanaticism would apply, though maybe I'm missing something?
As for the second point, I'm not quite sure I understand what you're saying. What I was arguing was that the fact that iterating Pascal's muggings would tend to win the mugger money, if the muggee accepts the bargain, just is evidence that the expected value of taking the bargain is negative.
On an unrelated side note, I have tentatively developed an approach to moral uncertainty (which I have found out is pretty similar to, though not equivalent to, Ted Lockhart's view). I think it avoids a lot of problems with fanaticism (which I think is a problem for uncertainty about moral theories), intertheoretic comparisons of value, individuation of theories, etc. The main problem is that it requires that theories can assign scalar values to options, which can make it difficult to account for theories that only assign ordinal values to choices, and which use terms like "permissible" and "impermissible"--though I think that those issues are solvable, and they're at least also issues for the CR view you develop.
I formulate it here (only the "my view" section is really relevant), though I also don't just want to keep throwing links to old posts in your face:
https://open.substack.com/pub/wonderandaporia/p/contra-benthams-bulldog-on-moral?utm_source=share&utm_medium=android&r=1l11lq
> If that is the case then it just seems like it would be equivalent to expected value theory
Yes, if there's one option with infinite value, Convex randomization will (functionally) always pick that. Or, as I phrased it in the post:
> CR has a problem with fanaticism, although to a lesser extent than MEC. If a theory says option A has 10^1000 choice-worthiness, R won't always land on option A but it will be biased towards it. Similarly, if a theory says option A has an infinite amount of choice worthiness, R won't literally always land on A, but will practically always land on A.
However, the same is not true for Runoff randomization and sortition: they can withstand large payoffs, and even infinite payoffs. Now, I rejected these in the post because I wasn't a fan of using credences/bayesianism, but since writing that I have developed some solutions to problems with bayesianism (https://bobjacobs.substack.com/p/solutions-to-problems-with-bayesianism). The biggest of these is similar to Vanessa Kosoy's infra-bayesianism, which I swear I developed independently, and my solution can handle some extra problems. So now I like both of them again, but conversely I've started to like CR less since I realized it's non-monotonic. (Which is not as big of a deal for individuals as it is for groups, but still. And yes, I am constantly see-sawing between whether my ideas are the shining beacons of brilliance born from the mind of an undiscovered genius who will set the intellectual world alight, or peepeepoopoo stoopid ideas born from the mind of a peepeepoopoo stoopid man.)
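To make the contrast concrete, here's a toy sketch. The numbers and rule definitions are simplified stand-ins of my own, not the exact formulations from the post: "CR-style" here picks an option with probability proportional to its credence-weighted choice-worthiness, and "sortition-style" picks a theory with probability equal to your credence in it and then takes that theory's top option.

```python
import random

# Toy comparison: why a huge payoff hijacks a CR-style rule but not a
# sortition-style rule. All numbers and rule definitions here are my own
# simplified stand-ins, not the exact formulations from the post.
credences = {"theory_1": 0.999, "theory_2": 0.001}
choice_worthiness = {
    "theory_1": {"A": 1.0, "B": 0.0},
    "theory_2": {"A": 0.0, "B": 1e300},  # stand-in for a 10^1000-style payoff
}
options = ["A", "B"]

def cr_style_pick():
    # Pick an option with probability proportional to its credence-weighted
    # choice-worthiness: the astronomical payoff swamps everything else.
    weights = [sum(credences[t] * choice_worthiness[t][o] for t in credences)
               for o in options]
    return random.choices(options, weights=weights)[0]

def sortition_style_pick():
    # Pick a theory with probability equal to your credence in it, then do
    # what it recommends: theory_2 never gets more than 0.1% of the draws,
    # no matter how large its payoff is.
    theory = random.choices(list(credences), weights=list(credences.values()))[0]
    return max(choice_worthiness[theory], key=choice_worthiness[theory].get)

print(sum(cr_style_pick() == "B" for _ in range(10_000)))         # ~10000
print(sum(sortition_style_pick() == "B" for _ in range(10_000)))  # ~10
```

The astronomical payoff buys theory_2 almost every draw under the first rule, but never more than its 0.1% share under the second.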
This is also technically not the only solution to fanaticism I’ve developed. I have one that’s interesting because not only does it not need cardinal evaluations, it doesn’t even need *ordinal* evaluations. However, I’ve not written it down, since the consequentialists think in terms of cardinality, the deontologists think in terms of ordinality, and the virtue ethicists (who reject both and would theoretically appreciate it) are not interested in formalisms, moral uncertainty, or substack, so there’s literally no audience for it. If you happen to know a virtue ethicist who’s into math, normative uncertainty, and substack… run! That’s a disguised alien!
> What I was arguing was that the fact that iterating Pascal's muggings would tend to win the mugger money, if the muggee accepts the bargain, just is evidence that the expected value of taking the bargain is negative.
Ah yes, this is called a ‘Dutch book’ argument, which btw continues the larger trend of the English language naming every other nasty concept after the Dutch: Dutch bookie, double Dutch, Dutch disease… and as a Dutchie I’d have to ask: *The fuck did we do?!* ⁽ ᴸᵒʷᵉʳ ʸᵒᵘʳ ʰᵃⁿᵈ ᴵⁿᵈᵒⁿᵉˢᶦᵃ, ᴵ ʷᵃˢⁿ'ᵗ ᵗᵃˡᵏᶦⁿᵍ ᵗᵒ ʸᵒᵘ ⁾
This is indeed an excellent argument against taking the bargain; however, I was talking about the updating:
> the more money the person offers you, the lower the probability they’re telling the truth, such that the probability is always lower than what it would need to be for it to be preferable to accept.
I’m not a statistician, so I don’t know if that’s actually true, but according to my source (a half-remembered podcast I listened to several years ago, because I’m a proper academic) the updating doesn’t keep up with the increase in money, and the reason humans don’t fall for it is that we have another mechanism. If you don't find my source convincing for some unfathomable reason, I will begrudgingly accept you questioning my academic excellence.
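Just to make explicit what "keeping up" would have to mean here (my own back-of-the-envelope framing, not the podcast's): if the mugger asks for a stake s and promises a payoff N, which you believe with credence p(N), then accepting has expected value

$$\mathbb{E}[\text{accept}] \;=\; p(N)\,N - s,$$

which stays negative only if p(N) < s/N, i.e. only if your credence shrinks at least as fast as 1/N as the promised payoff grows. The half-remembered claim is that the Bayesian update by itself doesn't shrink that fast, so something else has to be doing the work.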
> though I think that those issues are solvable, and they're at least also issues for the CR view you develop
Yes, but not for runoff and sortition.
> I formulate it here (only the "my view" section is really relevant), though I also don't just want to keep throwing links to old posts in your face
No problem, I’ll just keep throwing links right back. Hey kid, you want to hear my obscure theory on the numeral system in Homer’s Odyssey? https://substack.com/@bobjacobs/note/c-93863940?
Hmm, I guess it just seems to me like, to the extent that your view doesn't fit with fanaticism, it just chooses inferiorly (though obviously a fanatic like me would say that). I think there will always be a correct option (or several tied) for you to choose in some situation of uncertainty (at least given well-defined probabilities etc.), and so the correct theory should just choose that each time. So either your view will sometimes randomly choose the wrong option, or it will be susceptible to the general argument against non-fanatical theories (though I may not be understanding your views well enough).
As for the Pascal's mugging case, I'm a bit skeptical that you could find some general measure for how to update, since it will depend on the facial expression, tone of voice, and so on, of the mugger. So I must sadly report that I'm not sure I believe that your source shows the expected value would at some point be positive :(
This is just reiterating, but it just seems clear that the expected value of taking the mugger's offer would (almost) always be negative. Why? Because if you went around taking the offers of muggers, the amount of money you would lose would tend towards infinity as the number of muggings tends towards infinity. And if this is the case, then it just must be the case that your credence should decrease such that the expected value is always negative.
Some interesting posts that you link, I'll have to look through those at some point!