The "literally all people are so morally bad that God will justly torture them for eternity barring some exceptional intervention but also any normal human cultural practice is presumptively moral" stance is a very odd one.
> but people generally seem unwilling to acknowledge that their purchasing decisions (or lack thereof, when it comes to charity) could be deeply immoral, as they just seem so banal when performed
This is an interesting subject. I've noticed that people - even the most politically unengaged, uncritical layperson who's never taken an ethics class in their life - seem pretty willing to accept and understand your position if you say something like: "I'm not going to buy shoes from Nike because they use sweatshops, and I'm not going to buy products from Nestle because they abuse poor people". They might even sheepishly acknowledge that doing so themselves is a kind of moral weakness. But I think if you said the same about meat, because of animal welfare, there are a lot of people who would speak out against that and stereotype you as a judgy vegan.
It seems to me that there's a ton of cultural baggage around the idea of not eating meat for ethical reasons. (For ethical reasons specifically - people who are vegetarian because they are Buddhist don't face this kind of backlash.) And I think the well of discourse on that topic has been deeply poisoned, even in more academic circles.
All that said, whether it's Nestle or meat, people are very hard to convince to *actually* give it up, no matter how many sheepish acknowledgements they make. There is something lacking in the moral motivation there.
> “The problem is that we do not have direct insight into the moral valence of complicated issues in the real world, but must infer it from moral principles plus empirical data.”
Your argument seems to rule out the possibility of synthetic a priori moral knowledge. Maybe we know a priori that social progress is a net good. Further, social progress being a net good implies that animals deserve comparatively little moral consideration. This may allow us to conclude that non-human animals have much less moral status than humans.
I think this comes down to “one man’s ponens is another man’s tollens.” In your article, you draw an analogy to the thought that scratching your butt does not seem wrong. But if we know that doing so causes thousands of deaths, it is wrong to do so. The anti-effective altruist places more confidence in her judgment that social progress is a net good than that it isn’t. One difference between these cases is that we have uncertainty about cases in the actual world. Given uncertainty, the anti-EA should frame the argument as a plausibility judgment. Which is more plausible: that social progress is overall good or that it is not? You think we shouldn’t answer affirmatively because moral judgments require empirical knowledge. But I think we might invoke the synthetic a priori in response.
I don't think it disallows synthetic a priori moral knowledge in any sense in which it's plausible we have it. We can know a priori which moral principles are correct, or what the moral valence of some state of affairs or action would be, were it actual.
What it disallows is a priori knowledge of whether some *actual* state of affairs is good or bad. But obviously we don't have that! If you had literally never looked at the world, having no idea what it is like, or even whether there is any life in it at all, you couldn't be very sure whether the actual states of affairs are good or bad. Instead you start with a priori moral principles, gather empirical evidence about the actual state of the world, and then come to a judgement about its actual valence.
Actually, it seems like it's *your* view that makes us get moral knowledge empirically, not mine. After all, on the view you propose, we might start out thinking moral principle P is true. But upon looking at the world, we find that it deems something we thought was good at first glance to actually be bad, and hence we revise P. That is, we decide our moral principles partly on whether they line up with the empirical evidence we expect to find. That just seems totally wrong!
"Yana Chidiebele has been captured by an evil scientist who has burdened her with a terrible curse: With his incredible science-powers, he has made a grisly machine that detects every time Yana scratches her left butt cheek, and then horribly murders a million sweet old ladies and cute babies! Even though scratching your butt usually doesn’t seem like a very bad thing, due to these contingent facts about Yana’s imaginary world, it turns out that it is an extremely terrible thing to do. I hope you agree that, in this imaginary world, it would be wrong for Yana to scratch her left butt cheek."
But would it then be your fault that they get killed, or the evil scientist's?
Think of it this way: If someone is shooting at you from a human shield, and you shoot and kill this human shield, is it your fault that they're dead, or the person who is shooting at you?
Similarly, expecting people not to scratch their left butt cheek for their entire lives is just implausible. It's similar to the argument that pro-choicers make about sex: that expecting people to abstain from PIV sex with all fertile and potentially fertile partners for their entire lives (or until menopause) is completely implausible, and that therefore abortion should be acceptable even if the fetus could suffer, though then we'd also have a duty to minimize fetal suffering as part of the abortion process.
Similarly, God could have made all of us vegans, but he didn't. He could have also made all animals vegans, but he didn't. I don't think that there is any human civilization that has actually evolved to be vegan. There are some Indian castes that might be vegetarian, I think, but not vegan. This raises the question to what extent humans are biologically suited for veganism without meat substitutes like artificial lab-grown meat, which BTW I fully support along with banning all factory farming.