I think there are ways your views on metaethics can affect your views on normative ethics that aren't addressed here. The flaw in the argument is that it assumes everyone has some set of fundamental moral commitments that can't be changed by argument, unless they are shown to be inconsistent with other moral commitments. But that's not actually true - your views on metaethics could very easily affect what your baseline moral commitments are. Any theory about what moral facts are is probably also going to tell you a lot about what the contents of those moral facts are. In the article, you mentioned cultural relativism as an example of a metaethical view that also affects what you believe is right (since it implies that surveys could be used to determine what's right), but I think this is a much more general feature of almost all metaethical views.
There's also the problem that, if your moral commitments are shown to be inconsistent, you have to decide which way to update them. If there's no objective fact of the matter as to which view is correct, then you'll probably just update them in whatever way is most convenient to you - why not? - and you won't need to worry about having a weird and ad-hoc collection of moral commitments, with a bunch of unjustified exceptions. After all, your set of commitments isn't a theory about what's actually true, so you don't need to worry about factors like Occam's razor. You can just make them consistent in whatever way you feel like (unless you hold to something like societal relativism, but that affects your object-level moral views in other ways and also allows them to be ad-hoc as long as society follows ad-hoc rules, which it does).
On the other hand, moral realists are much more likely to have a theory as to exactly what makes an action right or wrong, since they actually believe there is a fact of the matter about this, and Occam's razor tells them the true theory is likely to be fairly simple. They're not going to accept new moral rules or ad-hoc exceptions if they don't think there's epistemic justification, and since convenience isn't a guide to truth, they won't change their moral beliefs to make them more convenient.
Good points! I'm not sure that you would just update your preferences in whichever way is most convenient if you're an anti-realist. Surely you would do so in whichever way fits best with the rest of your preferences (ideally at least)--otherwise you'd be choosing the preferences that you prefer less.
Now you raise the point that you're gonna have reasons from simplicity and whatnot on realism that you don't on anti-realism--so you can just gerrymander your preferences to be extremely complex but fit whatever is convenient. But I don't think this works on two counts:
First, I don't think people generally have specific scenarios or things as the objects of their preferences, but broader features. E.g. I don't care about chickens *per se*; I care about suffering, and chickens can instantiate that. If so, then you can't just gerrymander your preferences about specific scenarios (like whether I prefer to eat meat while animals are harmed) without it fitting much less with your overall preferences than a simpler set would.
Second, I think we have "higher-order" preferences about hypocrisy, consistency, etc. I dislike people who just make random exceptions to otherwise consistent preferences of theirs, and I want to be the kind of person who doesn't do that. And that will make me not want to just make exceptions when convenient.
More generally, whether or not realism is true, we have this phenomenon "moral beliefs" and we have to give some account of where they come from. I think the best account (at least for anti-realists) will be that they somehow derive from our basic evaluative tendencies. That is, we are sort of disposed to like or dislike certain things (perhaps from evolution), and this manifests in us judging (and perhaps believing) that certain things are good and bad, and right or wrong. If that is right then our moral beliefs will inevitably fit with what we're ultimately disposed to have corresponding attitudes towards, and so we should expect these attitudes to remain roughly the same, even if we take away the metaethical "superstructure" of counting them for true beliefs.
I just wanted to reply to a sliver of this comment. To your point that some anti-realist positions offer no clear way to update one's set of moral beliefs when inconsistencies are highlighted within it: on some meta-ethical views it isn't required to pursue a coherent set of moral beliefs at all, and so, depending on the psychology of your interlocutors, you may not be able to appeal to the inconsistencies in their moral beliefs to prompt them to change them. Suppose Hume turned out to be correct about the nature of moral judgement and its origins in the passions. Suppose also you're debating whether or not you should be vegan with someone whose moral psychology is best explained in Humean terms. Without their independently desiring consistency amongst their moral beliefs, you lack the ability to persuade them into adopting veganism by appealing to what they might already believe about, say, sentience conferring moral worth to patients, or that animal cruelty is wrong and that the conditions in which farm animals live and are killed are cruel.
I think you're conflating preference inconsistencies and non-ethical belief inconsistencies in the case of Antoinette.
Suppose moral anti-realism is true. Then there's nothing inconsistent if Antoinette prefers (sentient creatures harmed & I eat chicken) > (sentient creatures not harmed) > (sentient creatures harmed & no chicken for me). It would only be inconsistent if Antoinette prefers (sentient creatures are not harmed) > (sentient creatures are harmed & I eat chicken), but she still eats the chicken. But pointing out this inconsistency is not in the domain of ethical reasoning! The operative question becomes whether or not chickens are sentient creatures.
On anti-realism, it appears any so-called preference inconsistency will ultimately reflect a mistaken belief about the natural world, leaving no space for ethical reasoning.
I don't think I quite understand what you're trying to say? Just like it's not inconsistent for her to have the former preferences, it would not be inconsistent if the moral facts were: (sentient creatures harmed & Antoinette eats chicken) > (sentient creatures not harmed) > (sentient creatures harmed & no chicken for Antoinette).
It *would* be inconsistent if the moral facts were: (sentient creatures not harmed) > (sentient creatures harmed & Antoinette eats chicken), and then it was right to eat the chicken--just like it would be inconsistent for Antoinette to have these preferences and then prefer to eat the chicken.
So it just seems like the parity holds. But I feel like I'm not understanding what you're getting at?
Sorry, let me try to clarify by using general terms. Relinda the realist uses her moral intuitions to derive principles A and B, where principles here are mappings of situations to actions, or constraints on actions. Wanda the wise posits hypothetical situation H, in which A entails X and B entails ~X. Clearly, Wanda has demonstrated to Relinda that at least one of A or B is false, and Relinda must revise her beliefs about the correct moral principles.
I don't see the parity with Antoinette the anti-realist. She doesn't believe in principles; she just acts according to her situation-specific preferences in all situations. Perhaps the analogy is that Antoinette claims to act according to meta-preferences (heuristics?) A and B, which Wanda the wise can show to entail different actions in some situation H? So what? Antoinette can just continue to apply A and B to all situations ~H, and do whatever she wants if H obtains. Wanda's reasoning can't force Antoinette to revise her meta-preferences, because Antoinette can always apply a meta-meta-preference whenever they conflict.
What I'm trying to say is that ethical reasoning proceeds by showing where multiple ethical intuitions are in tension, forcing a revision to one's beliefs. Preferences don't work that way, because preferences, by construction, can't be in tension: you either prefer one thing or the other. What you could do is demonstrate to someone that their heuristic is wrong, because their mapping of actions to outcomes is wrong, but that's just non-ethical reasoning.
Perhaps "preferences" is a bad choice of word--that maybe suggests something like the all-considered judgement you come to in a particular case. Maybe "what she cares about" or just "evaluative judgements" or something would be better (what exactly it is, is probably gonna come down to the specifics of the anti-realist theory in question).
I generally don't think we care about specific things in specific situations. Like, it's not that I care about this chicken but not that one; rather, I care about suffering and whatnot (maybe I care about the one, but that'll be due to some other general factor I care about, like relationships or affectional value). What I'm imagining is that Antoinette cares about X and, say, doesn't care about Y, but those are inconsistent (like caring about causing animal suffering, but not caring about whether you buy eggs). In that case those seem inconsistent, if this caring is in the same sense. She can't decide whether or not she cares about buying eggs until she weeds out the inconsistency.
Regarding conflicts in situations that never obtain, I'm not sure I agree. Sure, if A and B never actually conflict then it'll never be relevant, so follow whichever you want! The problem is that these things usually also disagree in actual cases (e.g. whether I care about doing vs. allowing or not). I'm not sure that ethicists very often think about theories that would never be relevant in any actual cases.
But if I have two "carings" that conflict in actual cases, figuring out which one best fits with what I care about generally will involve also looking at what I care about in hypothetical cases, since those can help tease out my "carings".
Final note: The realist can also overall prefer to eat the chicken, even if she thinks that it's morally wrong. If motivational externalism is right, then that's easy, but even if internalism is right, she can overall prefer eating it. In fact it looks like it'll be basically like the antirealist who cares about suffering, but overall prefers to eat it.
I don’t exactly disagree, but I think you’re assuming that the anti-realist must view a kind of principled consistency/coherence as a constraint in the same way the realist does. I think these accounts get trickier than you suggest: https://joecarlsmith.com/2023/02/16/why-should-ethical-anti-realists-do-ethics/ (although in his second piece in this series, he concludes that there is a compelling case for anti-realists to curve-fit anyway).
I’m not sure I buy this. I will hear an anti-realist claim that they don’t care about something, and they’ll take that as good evidence for some ethical belief of theirs being right. But of course such appeals look very different under a realist lens, in that they don’t necessarily count as any good evidence for some ethical view.
Well I think that if they genuinely don't care about it, there's nothing you can say to them--but likewise if someone said they genuinely didn't believe that something was wrong. But in many cases both will care about/believe other things that should make them care about this thing as well.
There are a million things I could say here, since I think this question exists at the intersection of so many important issues and you have to do a ton of work to even make it clear exactly what's being asked, but restricting myself to one point: I generally agree with you that the existence or non-existence of moral facts doesn't play any major role in how we actually deliberate in a descriptive sense. But I don't think much follows from that, just like nothing much follows from the fact that I could still ask for and get good first-order directions to a certain city from an idealist who doesn't actually believe the external world exists. A better question is whether or not those discourse practices we're going to engage in no matter what are *justified* in the absence of moral facts, and I'd contend they aren't - there are many features of moral discourse that I just don't think make any sense if the subject in question isn't a set of objective facts. So while it can be a neutral, descriptive fact that people will speak and act a certain way regardless of whether moral facts exist, there might be second-level normative truths about whether or not they *should* talk that way given the beliefs they hold. And I think that's what moral realists really tend to care about.
But of course what complicates things is that anti-realists who are anti-realists about normativity more broadly don't think those justificatory standards actually exist in the first place, so of course they don't see any issue - there's no "right" way to engage based on your subjective preferences, so they can engage however they want. So in some sense I think someone who is an anti-realist across the board can say metaethics doesn't matter and remain internally consistent (although I'm not sure that's a very high bar since a lot of crazy stuff would be consistent with that view). On the other hand, if you're the type of anti-realist who accepts other sorts of normative constraints on rationality, judgment, etc then I think the existence or non-existence of moral facts *should* matter to how you debate, because I think those other constraints do set limits on what you're "allowed" to say based on what sort of entities you accept to back your statements up. The fact that those anti-realists disagree is important pragmatically, because it means appeals to those facts won't work, but that's not a unique problem for moral realism - every person who believes X is justified by Y will try different avenues to get X when they're talking with someone who doesn't accept Y, right? If I'm talking to a flat earther who doesn't accept NASA data as a justification for something, then I'm not going to bring it up in a discussion, but that doesn't imply that NASA data has nothing at all to do with the topic at hand or that it's actually indispensable. (I'm not comparing anti-realists to flat earthers, haha, just trying to give an obvious example).
So yeah, that's my take here: It's true that moral discourse in real life doesn't hinge on the existence or non-existence of moral facts, and it makes sense that that's all anti-realists would care about. But if you think moral facts do exist, then they obviously play an essential justificatory role for the foundation of that discourse, and the mere fact that the general form and structure of the discourse would remain unchanged in a descriptive sense shouldn't be a big issue for you. I bet I would believe Donald Trump was a bad president even if a lot of the facts about his actions were totally inverted, just because my personal subjective animus towards him is so strong, but in that case my judgments would be objectively wrong rather than objectively right, and that should matter!
I pretty much think I agree with you! And you're right, I don't think I've made the scope and specifics of what I'm claiming quite clear enough--that would probably take some work. Also to be clear, this isn't supposed to be an argument for anti-realism. I think I lean more towards realism, though I've become more agnostic lately.
Although surely, even if all of this is right, it still matters deeply for various kinds of *ethical decision making*, right? I.e. if I’m deciding how much, or whether, to donate my money to charity, on moral realism I should (ideally) not do any weighting with my desires for my personal well-being, and think about things strictly impersonally (or whatever my theory prescribes). But on anti-realism, it seems like I should donate only to the extent that my preferences for, e.g., there being no suffering outweigh my personal preferences for my life being great.
Metaethics is not divorced from preferences. Being a moral realist can have the real psychological effect of causing one to have a stronger preference against things one thinks are wrong. Similarly, the opposite can happen to those who were moral realists but shift to moral anti-realist positions.
It can also affect what preferences one adopts. We have some influence over whether or not we are willing to be swayed by moral arguments, or how much we care about consistency, and because the moral realist thinks these things matter objectively they can be more motivated to find the actual right answers to these questions.
Finally, when a particular moral belief causes us psychological tension, the moral realist who holds it to be true may be willing to accept that tension, whereas the anti-realist might just choose to adopt different preferences or beliefs instead.
Although I agree that metaethics mostly doesn't matter for ethics, it seems to me that there are some important debates in normative ethics where metaethics may play a significant role. For example, Guy Kahane argues – convincingly, in my view – that evolutionary debunking arguments against deontology (not to be confused with more global evolutionary debunking arguments in metaethics) work only if we assume a robust form of moral realism. This probably has some practical importance as well – after all, it sometimes matters whether we're deontologists or not.
But it strikes me as a case where metaethics matters in ethical discussions: namely, in determining which argumentative moves are legitimate.