I like your framing of the setup of the Darwinian dilemma, but I don't think your objection holds much weight...
First of all, epistemic normativity (the idea that we stance-independently must believe certain things about the truth) isn't really an idea that many people take seriously and is therefore worth dropping. We don't need to think that one needs to believe true things stance-independently for one to be motivated to deny falsehoods. (Note: this is a form of a companions-in-guilt argument, but I think it at least works better with respect to deliberative indispensability -- see Enoch, though I forget where.)
This becomes even clearer if you are a type of pragmatist for whom having true beliefs is only important as it relates to getting more of the stuff you like (preferences, hedonic states, whatever).
Secondly, with respect to truth in general (though not epistemic normativity -- this seems like an important clarification), it does seem like evolution would select for us believing (some type of) true things, as this is relevant for survival; therefore, we would likely have access to at least some kinds of facts about the truth. This would explain, for instance, why we know about apples (as higher-level objects that we can interact with) and not particles (which, according to many, are more true in some sense). This would not, however, apply to the moral facts (only, imo, under non-tautological definitions).
I actually explicitly distance myself from framing the argument in terms of epistemic normativity (see section 3, paragraph 3), which is why I frame the argument in terms of the probabilities of different theories.
I also think I address your second objection later in section 3. While evolution *does* select for certain truths, it doesn't select for the truth of D, since D cannot have made a difference to evolutionary fitness/history. This is compatible with our still knowing about apples.
If that's the case, then where would self-defeat or skepticism come into the picture?
A more general concern I'd have is that natural selection can select for belief-forming processes or learning mechanisms which we can then employ to devise epistemic principles. It doesn't have to directly select for particular beliefs or epistemic principles. For instance, we can figure out how to build boats, but that doesn't mean we have genes for boat building. The same can be true of epistemic and moral practices. The issue for me would be that epistemic practices permit feedback and corroboration from the world; it's not clear if or how moral beliefs do so.
The skepticism comes from us being unable to say that less parsimonious theories are less likely. If this is right, then I can simply accept that there is no connection between the moral facts and my beliefs, and still strongly believe moral realism, and there would be no reason from my wanting true beliefs to think I shouldn't do this. So the Darwinian dilemma itself would no longer successfully undermine moral realism.
As for the suggestion you give, I think that might work for many types of beliefs, such as mathematical beliefs. But I'm not sure why we should think it would work for a principle of parsimony. After all, as I say, we should expect creatures created by evolution to accept some principle of parsimony. At the same time, the truth or falsity of the principle could have no influence on evolutionary history: whether or not some extra entity exists that has no effect on our observations couldn't, even in principle, affect our evolutionary history (since any such effect could be observed). So even if it's false that a theory is less likely if it postulates more entities, we would still believe the principle of parsimony that we actually do.
It also seems like we come to understand certain things outside of evolution (i.e., through modern science). It seems we can derive the general idea that simpler theories are epistemically better from there (or on similar Bayesian terms).
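(As an aside, the Bayesian version of this point can be made concrete with a toy calculation -- my own illustration, nothing from the post: a theory with an extra free parameter has to spread its prior over more possibilities, so it assigns less probability to any particular observation, and conditionalizing on the data then favors the simpler theory.)

```python
# Toy Bayesian "Occam factor": two theories are both consistent with
# what we observe, but the complex one carries an extra free parameter.
# Simple theory: predicts outcome 3 outright.
# Complex theory: an extra parameter k in {0,...,9} (uniform prior)
# predicts outcome k.

observed = 3

# Likelihood of the observation under each theory
p_obs_simple = 1.0  # predicts the observed outcome with certainty
p_obs_complex = sum(0.1 * (1.0 if k == observed else 0.0) for k in range(10))

# Equal priors over the two theories, then conditionalize on the observation
prior = 0.5
posterior_simple = (prior * p_obs_simple
                    / (prior * p_obs_simple + prior * p_obs_complex))

print(round(p_obs_complex, 2))     # 0.1: the free parameter dilutes the prediction
print(round(posterior_simple, 2))  # 0.91: the simpler theory is favored
```

On this picture the complex theory isn't penalized for being "less likely a priori"; it just predicts less sharply, which is one way to get a preference for simplicity out of probability theory alone.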
My argument, if successful, would show that if Darwinian dilemmas work, they also undermine scientific reasoning.
Undermine scientific reasoning how, exactly?
Well, if we no longer have reason to favor simpler theories over more complex ones, we can just stuff whatever we want into our theories, so long as they are consistent with our observations, and the more complex theory would be no less likely. I think this would make a mess of any attempt at scientific theorizing.
This seems straightforwardly wrong, or at least to be vastly underestimating the resources available to an anti-realist position.
Briefly: Scientific reasoning doesn't need any general principle that "simpler theories are more likely to be true," and there isn't any good reason to accept such a principle. (It could easily be wrong, for precisely the reasons you have outlined here!) Science just needs to prefer simpler theories. Pragmatic considerations offer an entirely adequate justification for such a preference. (e.g. simpler theories impose less cognitive overhead). "Truth" doesn't need to be in the picture at all.
Sure, if you're a scientific anti-realist. I'm happy to accept that if you don't conceive of science as an attempt to develop true theories, but just useful theories, then you get the principle of parsimony very easily.
But even then, for any domain that *does* attempt to get at truth, the principle of parsimony is no longer safe. And this is the case in the debate between realists and anti-realists in metaethics (or at least, I think I'm trying to get at the true view there, and think there is one, which is all I need for the argument to be successful for me). This is crucial because this is the place where the Darwinian dilemma--an argument that relies on parsimony--is made. Thus the Darwinian dilemma undermines itself, and it's just an orthogonal question whether it undermines science too -- if, that is, scientific reasoning is based on having access to some kind of truth rather than pragmatic usefulness.
>"naturalist realists don't escape the problem"

Your argument for this point is rather terse, and it's not clear to me exactly how it is supposed to work. In particular, it's not clear what sort of modal machinery you're employing.
Following standard assumptions regarding metaphysical modality, P=N entails □(P=N), so the claim that our knowledge of P=N is insensitive (i.e. "if P had been identical to M rather than N, this would have made no causal difference to our beliefs about P") seems to involve a counterpossible conditional, and it's not obvious why a sensitivity requirement for moral knowledge would require such knowledge to be robust to counterpossible scenarios. (By analogy: we can have knowledge of mathematics, even if we would have come to the same mathematical beliefs in an [impossible] world where 2+2=5.)
Alternatively, and perhaps more clearly: What prevents your argument in this paragraph from being a general argument for the insensitivity of all of our natural knowledge?
(Williamson, IIRC, dislikes the sensitivity criterion for precisely this reason: he thinks it leads too easily to a very broad skepticism.)
Interesting point, though I simply think it's a question of epistemic possibility and probability, rather than metaphysical. After all, non-naturalism, if true, is plausibly also necessarily true, and the moral facts are necessarily what they are. So if that gets the naturalist out of the argument, it would also get the non-naturalist out of the argument.
As for whether it leads to skepticism about all natural knowledge, I guess I'm not quite sure what you mean here. Could you clarify what you mean by natural knowledge? If it's knowledge of natural kinds or something, then I don't think natural kinds exist, so I don't think it would be a problem for me.
"What feature of epistemic principles makes these graspable, which doesn’t also allow moral truths to be grasped? "
The problem with the "evolutionary debunking" argument is its belief that morality is something physical that our own physicality could grasp only by coincidence. But morality isn't physical, it's conceptual. Specifically, morality is that set of principles that parties could not reasonably reject, under conditions where they can only offer reasons, rather than express power. They are graspable through our comprehension and acceptance of reasons that would guide our conduct toward one another. We could even look to the market for guidance and see what terms people accept under fairly equal bargaining positions, to get a sense of the actual principles these hypothetical parties would agree to.
I've linked some posts below that may be helpful. I've found morality a bit stale, as I view the matter as settled. But I'm happy to dive back into it if you have any good counters.
https://neonomos.substack.com/p/what-is-morality
https://neonomos.substack.com/p/the-social-contract-part-1-why-is
Sorry for the late reply! I read the articles linked, and they're definitely interesting, though I think I disagree with almost everything you say, lol (but that's beside the point).
I don't think your position escapes the Darwinian dilemma. The problem is that you still need to know that what is morally right is what parties cannot reasonably reject. But a world where this isn't the case would look exactly the same, so you have no justification for thinking it's the case.
I suspect you will object that it's just sort of trivial because *if* you want to respect people's desires etc. etc. (whatever the justification for the moral facts being what you think they are), *then* you should order society such that parties would rationally agree to it being ordered that way. The problem is that an anti-realist can say the exact same thing. What you need for your view is also that it's *objectively correct* that we should respect people's desires, etc. etc., rather than, say, do what is against everyone's desires. But this claim isn't trivial in the sense you want it to be.
An anti-realist can still accept that hypothetical imperatives are objectively correct, they just can't say that some antecedent in the hypothetical imperative is to be preferred over another antecedent. This is exactly what moral realism requires, but one antecedent being more correct than another is a belief that the Darwinian dilemma defeats.
I suspect I might have misunderstood your position to some degree, so tell me why I'm wrong.
Thanks, I'll address each of your points below for clarity.
>>The problem is that you still need to know that what is morally right is what parties cannot reasonably reject. But a world where this isn't the case would look exactly the same, so you have no justification for thinking it's the case.
The agreement is hypothetical - a counterfactual, purely conceptual. We use basic counterfactuals all the time to understand truth, do we not? The counterfactual doesn't happen in our world, but in one where we imagine it to be - a fictional world that will be our guide in this one. And if there is *any* agreement that holds *independent* moral weight, it would be the "reasonable agreement behind the veil of ignorance" world (but I'm open to hearing about other, more just, possible worlds). So the principles created in that world would be the universal principles of morality.
Just as we can imagine the concept of a perfect circle without one ever having been drawn, we can imagine the concept of a reasonable agreement, despite it never having been created (and likely never will be).
>>An anti-realist can still accept that hypothetical imperatives are objectively correct, they just can't say that some antecedent in the hypothetical imperative is to be preferred over another antecedent. This is exactly what moral realism requires, but one antecedent being more correct than another is a belief that the Darwinian dilemma defeats.
I agree 100%. No antecedents are more correct than any others. You would only value morality to the extent you value freedom, and value reason, and value reason over freedom. Reason, therefore, can justifiably bind freedom. "Morality" is that set of principles that can bind freedom. BUT the anti-realist is correct in that if you don't have those values of freedom and reason, then morality is N/A. You gotta care about reason and freedom, in that order, to get morality.
See the below link on this point. Even just asking "why should I be moral" is accepting reason and freedom, in that order.
https://neonomos.substack.com/p/why-should-i-be-moral
That was a lot of typing, and as you may be able to tell from my post frequency, I don't like a lot of typing, so I'd prefer not to type another long response. If you have any further questions, please chat me and we can discuss live. But I'm fine with receiving only a written response here and noodling on it on my own.
To your first point, I think you might have misunderstood my objection. I am not here objecting to a hypothetical agreement being a ground for morality/what the content of morality would be IF moral realism is true. Rather my objection is this:
For your theory to be a realist theory it doesn't just need to claim "a society following such and such rules will best promote freedom and reason" or whatever. Rather you need the further claim that "promoting freedom and reason is morally good."
The first claim might very well be true--that is just a first-order moral issue. But that is irrelevant to realism/anti-realism. What you need is the second claim. But consider what the world would look like if the second claim were false, or if "hindering freedom and reason is morally good" were true. Well, the world would look exactly the same, since our evolutionary history and thinking processes would not be affected by it either way. Thus the debunking argument goes through.
As for your second point, it might make me worry that you are not actually a realist at all. After all, if no hypothetical imperative is more correct than another, then what do we mean when we say that there are objectively correct moral principles for action?
That said, I am sort of sympathetic to a moral realism with only hypothetical imperatives. But for such a theory to work, you still need a hypothetical imperative of the form "if you want to act morally, then you should do X." But now the problem is knowing what "X" is. What you need here is the claim "doing X is morally right." But this is simply the problem from before--you have no way of knowing what the content of X is, since evolutionary history would have been exactly the same, regardless of what X is.
I think it would definitely be fun to chat about this at some point!
This has nothing to do with this article (sorry), but how do we make sense of moral subjectivism (the view that some moral propositions are true, though their truth is stance-dependent), especially as conceptually distinct from error theory? How can a proposition be stance-dependently true?
To take an example for which subjectivism is supposed to be more plausible, let X be: "vanilla ice cream is better than chocolate." It is true that I *prefer* vanilla, but that doesn't mean X is true *for me*, unless we define "better" in such a way that X is a true description of my preferences. But this is just error theory: moral propositions are false, but people have preferences across states of affairs, and there can exist true propositions describing said preferences.
Or another example: Y = "Z is funny." When someone says Y is subjectively true, it seems like they're saying something like: Z is disposed to cause me to laugh or become amused (or perhaps Z is disposed to cause most people to laugh or become amused). Again, this is just error theory about funniness.
Unless subjectivism is just error theory (in which case why do we need a distinct concept?), it seems just as incoherent to me as general truth relativism.
That's an interesting question. It's definitely hard to make sense of, though one way could perhaps be to think that moral terms are indexicals.
For example, when I say "murder is wrong" it would be like saying "I like ice cream". This is only true when it aligns with facts from the perspective of the person saying it. "Murder is wrong" would then be true IFF the subjectivist conditions for moral truth are satisfied by the person saying/thinking it (or the group they're a part of or whatever).
In that sense it would be very similar to moral naturalism, but the difference is that rather than moral properties being identical with certain objective properties, they would be identical with certain indexical properties. So saying something is "wrong" wouldn't even make sense from "the view from nowhere," just like saying something is "here" doesn't make sense from that perspective.
The moral facts could of course still be described using stance independent language. You might say that "murder is wrong from Silas' perspective" or "Silas disapproves of murder". But that wouldn't be the same as saying "murder is wrong," just like saying "the ball is here" is not the same as saying "the ball is where Silas is".
That's at least my attempt at a steelman, but I'm not sure this is what a subjectivist would want to say.
This makes sense, though I still feel like the error theory neatly accommodates this. The error theorist will say: the stance-independent proposition "murder is wrong" is false, while the indexical statement "murder is wrong" is true, so long as it accords with the true preferences of the person/group saying/thinking it.
Maybe subjectivism requires conditions for moral truth that must be satisfied by the person/group, but are not congruent to her/their preferences. But I have no idea what those conditions could be.
Or maybe we could say that subjectivism and error theory (and non-cognitivism, actually) all describe the same language-neutral metaethical beliefs, and differ only in their linguistic content: they have different descriptions of what a statement like "murder is wrong" expresses.
I don't think an error theorist would be able to capture the subjectivist notion of wrongness. A subjectivist would want to say that there really *is* a fact of the matter as to whether some things are right or wrong, it simply depends on perspectives.
The error theorist can of course say that there are truths of the form "murder satisfies [insert subjectivist conditions for wrongness] for me". But that is not the same as saying that murder is actually wrong (from my POV).
It might help to make an analogy to naturalist realism. The error theorist can also say "murder satisfies [insert naturalist conditions for wrongness]". But of course error theory isn't equivalent to naturalist realism. This is because the naturalist thinks that that also makes murder *really wrong* in some substantive way. I think a parallel story holds for subjectivism.
I do think error theory and non-cognitivism do have the same picture of the world, and only differ in linguistic questions, but I don't think the same is true for subjectivism.
I simply can't grasp how if there is a fact of the matter about whether X obtains, it could depend on perspectives; on this reading, moral subjectivism seems like a special case of truth relativism, which I cannot make sense of.
On the other hand, using your analogy to naturalist realism - if murder is *really wrong* because both I and it satisfy [insert conditions about my subjective attitudes towards it], then this is just a special form of moral realism!
Well I suppose they would say that whether X obtains depends on perspectives because X is an indexical. It's not very strange that whether "I am happy" or "it's raining here" can depend on perspectives, and I don't think it requires truth relativism.
When I say "murder is wrong," on subjectivism I'm saying something like "I disapprove of murder." This clearly depends on who is saying it, and is true or false. The subjectivist would then make the further claim that it's *really true* that when I disapprove of something it's actually wrong. But again that just means that "wrong" is an indexical. Murder is *really wrong* from my perspective, and *really permissible* from Jeffrey Dahmer's perspective.
But I agree that this theory is sort of mysterious, and I really think it is gonna depend on what you say about indexicals (something I haven't read/thought very much about).
Let's say we think all normative realism is debunked by evolutionary considerations. How does that lead to a skeptical scenario? All it would entail is that we should reject normative realism, but rejecting normative realism doesn't strike me as leading to self-defeat or to skepticism. What am I missing?
I like your framing of the setup of the darwinian dilemma, but I don’t think your objection holds much weight...
First of all, epistemic normatively (the idea that we stance independently must believe certain things about the truth) isn’t really an idea that many people take seriously and is therefore worth dropping. We don’t need to think that one needs to believe true things stance independently for one to be motivated to deny falsehoods. (Note: this is a form of a companions in guilt argument, but I think it at least works better with respect to deliberative indespensibility -- see enoch: though, I forget where)
One way to think this even more is if you are a type of pragmatist where having true beliefs is only important as it relates to getting more of the stuff you like (preferences, hedonic states, whatever).
Secondly, with respect to truth in general (though not epistemic normativity, this seems like an important clarification), it does seem like evolution would select for us believing (some type of) true things as this is relevant for survival (therefore, we would likely have access to at least some kinds of facts about the truth). This would explain, however, why we know about apples (as a higher level object that we can interact with, for instance, and not particles (which, according to many, are more true in some sense). This would not, for instance, apply to the moral facts (only, imo, under non tautological definitions)
I actually explicitly distance myself from framing the argument in terms of epistemic normativity (see section 3 paragraph 3), hence why I frame the argument in terms of probabilities of different theories.
I also think I adress your second objection later in section 3. While evolution *does* select for certain truths, it doesn't select for the truth of D, since D cannot have made a difference to evolutionary fitness/history. This is compatible with our still knowing about apples.
If that's the case where would self-defeat or skepticism come into the picture, then?
A more general concern I'd have is that natural selection can select for belief-forming processes or learning mechanisms which we can then employ to devise epistemic principles. It doesn't have to directly select for particular beliefs or epistemic principles. For instance, we can figure out how to build boats, but that doesn't mean we have genes for boat building. The same can be true of epistemic and moral practices. The issue for me would be that epistemic practices permit feedback and corroboration from the world; it's not clear if or how moral beliefs do so.
The skepticism comes from us being unable to say that less parsimonious theories are less likely. If this is right, then I can simply accept that there is no connection between the moral facts and my beliefs, and still strongly believe moral realism, and there would be no reason from my wanting true beliefs to think I shouldn't do this. So the Darwinian dilemma itself would no longer successfully undermine moral realism.
As for the suggestion you give, I think that might work for many types of beliefs, such as mathematical beliefs. But I'm not sure why we should think it would work for a principle of parsimony. After all, as I say, we should expect creatures created by evolution to accept some principle of parsimony. At the same time, the truth or falsity of the principle could have no influence on evolutionary history, since whether or not some extra entity exists that has no effect on our observations, in principle couldn't affect our evolutionary history (since such an effect could be observed). So even if it's false that a theory is less likely if it postulates more entities, we would still believe the principle of parsimony that we actually do.
It also seems like we come to understand certain things outside of evolution (I.e. modern science). It seems we can derive the general idea that simpler things are epistemically better from there (or on similar Bayesian terms).
My argument, if successful, would show that if Darwinian dilemmas work, they also undermine scientific reasoning.
Undermine scientific reasoning how, exactly?
Well, if we no longer have reason to favor simpler theories over more complex ones, we can just stuff whatever we want into our theories, so long as they are consistent with our observations, and the more complex theory would be no less likely. I think this would make a mess of any attempt of scientific theorizing.
This seems straightforwardly wrong, or at least to be vastly underestimating the resources available to an anti-realist position.
Briefly: Scientific reasoning doesn't need any general principle that "simpler theories are more likely to be true," and there isn't any good reason to accept such a principle. (It could easily be wrong, for precisely the reasons you have outlined here!) Science just needs to prefer simpler theories. Pragmatic considerations offer an entirely adequate justification for such a preference. (e.g. simpler theories impose less cognitive overhead). "Truth" doesn't need to be in the picture at all.
Sure, if you're a scientific anti-realist. I'm happy to accept that if you don't conceive of science as an attempt to develop true theories, but just useful theories, then you get the principle of parsimony very easily.
But even then, for any domain that *does* attempt to get at truth, the principle of parsimony is no longer safe. And this is the case in the debate between realists and anti-realists in metaethics (or at least, I think I'm trying to get at the true view there, and think there is one, which is all I need for the argument to be successful for me). This is crucial because this is the place where the Darwinian dilemma--an argument that relies on parsimony--is made. Thus the Darwinian dilemma undermines itself, and it's just an orthogonal question whether it undermines science too.
If that reasoning is based on having access to some kind of truth rather than pragmatical usefulness
>"naturalist realists don’t escape the problem"
Your argument for this point is rather terse, and it's not clear to me exactly how it is supposed to work. In particular, it's not clear what sort of modal machinery you're employing.
Following standard assumptions regarding metaphysical modality, P=N entails [](P=N), so the claim that our knowledge of P=N is insensitive (i.e. "if P had been identical to M rather than N, this would have made no causal difference to our beliefs about P") seems to involve a counterpossible conditional, and it's not obvious why a sensitivity requirement for moral knowledge would require such knowledge to be robust to counterpossible scenarios. (By analogy: we can have knowledge of mathematics, even if we would have come to the same mathematical beliefs in an [impossible] world where 2+2=5.)
Alternatively, and perhaps more clearly: What prevents your argument in this paragraph from being a general argument for the insensitivity of all of our natural knowledge?
(Williamson, IIRC, dislikes the sensitivity criterion for precisely this reason: he thinks it leads too easily to a very broad skepticism.)
Interesting point, though I simply think it's a question of epistemic possibility and probability, rather than metaphysical. After all, non-naturalism, if true, is plausibly also necessarily true, and the moral facts are necessarily what they are. So if that gets the naturalist out of the argument, it would also get the non-naturalist out of the argument.
As for whether it leads to skepticism about all natural knowledge, I guess I'm not quite sure what you mean here. Could you clarify what you mean by natural knowledge? If its knowledge of natural kinds or something, then I don't think natural kinds exist, so I don't think it would be a problem for me.
"What feature of epistemic principles makes these graspable, which doesn’t also allow moral truths to be grasped? "
The problem with the "evolutionary debunking" argument is its belief that morality is something physical that our own physicality could grasp only by coincidence. But morality isn't physical, it's conceptual. Specifically, morality is those set of principles that parties could not reasonably reject, under conditions where they can only offer reasons, rather than express power. They are graspable through our comprehension and acceptable of reasons that would guide our conduct to one another. We could even look to the market for guidance and see what terms people accept under fairly equal bargaining positions, to get a sense of the actual principles these hypothetical parties would agree to.
I've linked some posts below that may be helpful. I've found morality a bit stale, as I view the matter as settled. But I'm happy to dive back into it if you have any good counters.
https://neonomos.substack.com/p/what-is-morality
https://neonomos.substack.com/p/the-social-contract-part-1-why-is
Sorry for the late reply! I read the articles linked, and it's definitely interesting, though I think I disagree with almost everything you say, lol (but that's besides the point).
I don't think your position escapes the Darwinian dilemma. The problem is that you still need to know that what is morally right is what parties cannot reasonably reject. But a world where this isn't the case would look exactly the same, so you have no justification for thinking it's the case.
I suspect you will object that it's just sort of trivial because *if* you want to respect people's desires etc. etc. (whatever the justification for the moral facts being what you think they are), *then* you should order society such that parties would rationally agree to it being ordered that way. The problem is that an anti-realist can say the exact same thing. What you need for your view is also that it's *objectively correct* that we should respect people's desires, etc. etc., rather than, say, do what is against everyone's desires. But this claim isn't trivial in the sense you want it to be.
An anti-realist can still accept that hypothetical imperatives are objectively correct, they just can't say that some antecedent in the hypothetical imperative is to be preferred over another antecedent. This is exactly what moral realism requires, but one antecedent being more correct than another is a belief that the Darwinian dilemma defeats.
I suspect I might have misunderstood your position to some degree, so tell me why I'm wrong.
Thanks, I'll address each of your points below for clarity
>>The problem is that you still need to know that what is morally right is what parties cannot reasonably reject. But a world where this isn't the case would look exactly the same, so you have no justification for thinking it's the case.
The agreement is hypothetical - a counterfactual, purely conceptual. We use basic counter-factuals all the time to understand truth do we not? Yet the counter-factual doesn't happen in our world, but one where we imagine it to be - a fictional world that will be our guide in this one. And if there is *any* agreement that holds *independent* moral weight, it would be the "reasonable agreement behind the veil of ignorance" world (but I'm open to hearing about other, more just, possible worlds). So the principles created in that world would be the universal principles of morality.
Just as we can imagine the concept of a perfect circle without one ever having been drawn, we can imagine the concept of a reasonable agreement, despite it never having been created(and likely never will)
>>An anti-realist can still accept that hypothetical imperatives are objectively correct, they just can't say that some antecedent in the hypothetical imperative is to be preferred over another antecedent. This is exactly what moral realism requires, but one antecedent being more correct than another is a belief that the Darwinian dilemma defeats.
I agree 100%. No antecedents are more correct than any others. You would only value morality to the extent that you value freedom and reason, and value reason over freedom. Reason, therefore, can justifiably bind freedom. "Morality" is the set of principles that can bind freedom. BUT the anti-realist is correct that if you don't hold those values of freedom and reason, then morality is N/A. You've got to care about reason and freedom, in that order, to get morality.
See the link below on this point. Even just asking "why should I be moral?" is accepting reason and freedom, in that order.
https://neonomos.substack.com/p/why-should-i-be-moral
That was a lot of typing, and as you may be able to tell from my post frequency, I don't like a lot of typing, so I'd prefer not to write another long response. If you have any further questions, please chat me and we can discuss live. But I'm also fine with receiving only a written response here and noodling on it on my own.
To your first point, I think you might have misunderstood my objection. I am not here objecting to a hypothetical agreement being a ground for morality/what the content of morality would be IF moral realism is true. Rather my objection is this:
For your theory to be a realist theory it doesn't just need to claim "a society following such and such rules will best promote freedom and reason" or whatever. Rather you need the further claim that "promoting freedom and reason is morally good."
The first claim might very well be true--that is just a first-order moral issue. But that is irrelevant to realism/anti-realism. What you need is the second claim. But consider what the world would look like if the second claim were false, or if "hindering freedom and reason is morally good" were true. Well, the world would look exactly the same, since our evolutionary history and thinking processes would not be affected by it either way. Thus the debunking argument goes through.
As for your second point, it might make me worry that you are not actually a realist at all. After all, if no hypothetical imperative is more correct than another, then what do we mean when we say that there are objectively correct moral principles for action?
Although I am sort of sympathetic to a moral realism with only hypothetical imperatives. But for such a theory to work, you still need a hypothetical imperative of the form "if you want to act morally, then you should do X." But now the problem is knowing what "X" is. What you need here is the claim "doing X is morally right." But this is simply the problem from before--you have no way of knowing what the content of X is, since evolutionary history would have been exactly the same, regardless of what X is.
I think it would definitely be fun to chat about this at some point!
This has nothing to do with this article (sorry), but how do we make sense of moral subjectivism (the view that some moral propositions are true, though their truth is stance-dependent), especially as conceptually distinct from error theory? How can a proposition be stance-dependently true?
To take an example for which subjectivism is supposed to be more plausible, let X be: "vanilla ice cream is better than chocolate." It is true that I *prefer* vanilla, but that doesn't mean X is true *for me*, unless we define "better" in such a way that X is a true description of my preferences. But this is just error theory: moral propositions are false, but people have preferences across states of affairs, and there can exist true propositions describing said preferences.
Or another example: Y = "Z is funny." When someone says Y is subjectively true, it seems like they're saying something like: Z is disposed to cause me to laugh or become amused (or perhaps Z is disposed to cause most people to laugh or become amused). Again, this is just error theory about funniness.
Unless subjectivism is just error theory (in which case why do we need a distinct concept?), it seems just as incoherent to me as general truth relativism.
That's an interesting question. It's definitely hard to make sense of, though one way could perhaps be to think that moral terms are indexicals.
For example, when I say "murder is wrong" it would be like saying "I like ice cream". This is only true when it aligns with facts from the perspective of the person saying it. "Murder is wrong" would then be true IFF the subjectivist conditions for moral truth are satisfied by the person saying/thinking it (or the group they're a part of or whatever).
In that sense it would be very similar to moral naturalism, but the difference is that rather than moral properties being identical with certain objective properties, they would be identical with certain indexical properties. So saying something is "wrong" wouldn't even make sense from "the view from nowhere," just like saying something is "here" doesn't make sense from that perspective.
The moral facts could of course still be described using stance independent language. You might say that "murder is wrong from Silas' perspective" or "Silas disapproves of murder". But that wouldn't be the same as saying "murder is wrong," just like saying "the ball is here" is not the same as saying "the ball is where Silas is".
That's at least my attempt at a steelman, but I'm not sure this is what a subjectivist would want to say.
This makes sense, though I still feel like the error theory neatly accommodates this. The error theorist will say: the stance-independent proposition "murder is wrong" is false, while the indexical statement "murder is wrong" is true, so long as it accords with the true preferences of the person/group saying/thinking it.
Maybe subjectivism requires conditions for moral truth that must be satisfied by the person/group, but are not congruent to her/their preferences. But I have no idea what those conditions could be.
Or maybe we could say that subjectivism and error theory (and non-cognitivism actually) all describe the same language-neutral metaethical beliefs, and differ only in their linguistic content: they have different descriptions of what a statement like "murder is wrong" expresses.
I don't think an error theorist would be able to capture the subjectivist notion of wrongness. A subjectivist would want to say that there really *is* a fact of the matter as to whether some things are right or wrong, it simply depends on perspectives.
The error theorist can of course say that there are truths of the form "murder satisfies [insert subjectivist conditions for wrongness] for me". But that is not the same as saying that murder is actually wrong (from my POV).
It might help to make an analogy to naturalist realism. The error theory can also say "murder satisfies [insert naturalist conditions for wrongness]". But of course error theory isn't equivalent to naturalist realism. This is because the naturalist thinks that that also makes murder *really wrong* in some substantive way. I think a parallel story holds for subjectivism.
I do think error theory and non-cognitivism do have the same picture of the world, and only differ in linguistic questions, but I don't think the same is true for subjectivism.
I simply can't grasp how if there is a fact of the matter about whether X obtains, it could depend on perspectives; on this reading, moral subjectivism seems like a special case of truth relativism, which I cannot make sense of.
On the other hand, using your analogy to naturalist realism - if murder is *really wrong* because both I and it satisfy [insert conditions about my subjective attitudes towards it], then this is just a special form of moral realism!
Well I suppose they would say that whether X obtains depends on perspectives because X is an indexical. It's not very strange that whether "I am happy" or "it's raining here" can depend on perspectives, and I don't think it requires truth relativism.
When I say "murder is wrong," on subjectivism I'm saying something like "I disapprove of murder." This clearly depends on who is saying it, and is true or false. The subjectivist would then make the further claim that it's *really true* that when I disapprove of something it's actually wrong. But again that just means that "wrong" is an indexical. Murder is *really wrong* from my perspective, and *really permissible* from Jeffrey Dahmer's perspective.
But I agree that this theory is sort of mysterious, and I really think it is gonna depend on what you say about indexicals (something I haven't read/thought very much about).
How is that error theory? The subjectivist doesn't think moral propositions are false. They think their truth status is subjective.
Subjectivism is not error theory. I'm not sure how you're arriving at that conclusion.
Let's say we think all normative realism is debunked by evolutionary considerations. How does that lead to a skeptical scenario? All it would entail is that we should reject normative realism, but rejecting normative realism doesn't strike me as leading to self-defeat or to skepticism. What am I missing?