Kant's Ethics Sucks, but in an Interesting Way
I wanted to make a pun with his name for the title, but I just Kant do it.
What if I told you that you could derive the categorical impermissibility of masturbation right from the armchair? Pretty sweet deal, right? Well, you need look no further than the 18th-century German philosopher Immanuel Kant.
A lot of people (perhaps rightfully) dismiss Kant's moral philosophy due to counterexamples involving Nazis and axe-murderers. However, Kant's project was actually very interesting and ambitious (if ultimately unsuccessful), and we're doing ourselves a disservice if we don't look at what he was trying to do—in fact, if his project were successful, I'm not sure that intuitive counterexamples would be particularly problematic for him. Let us then look at what he was trying to do and whether it works (spoiler: probably not).1
The Project In (very rough) Outline
Kant famously starts the Groundwork by arguing that the only unconditionally good thing is a good will. Everything else can be bad in some circumstances (intelligence can be used for evil, and happiness might befall bad people). This is sort of a bad start, as he doesn't really provide any particularly compelling argument to this effect (in my humble opinion). Yet I also don't think it is as crucial to the project as it might appear.
Anyhow, let’s grant Kant this point. He then goes on to outline some distinctions and assumptions for his project, which I’ll recap at breakneck speed:
1. All actions derive from determinations of the will.
2. The will must be determined according to a maxim.
3. Maxims are one of two types of principle for action:
   - Maxims are subjective principles for individuals (e.g. I will subscribe to Wonder and Aporia).
   - Practical laws are objective principles applying to everyone (e.g. rational agents ought to act on universalizable maxims).
4. Practical laws can be expressed as imperatives, of which there are two kinds:
   - Hypothetical imperatives depend on antecedent conditions (if you want to be happy, then subscribe to Wonder and Aporia).
   - Categorical imperatives are unconditional, universal, and necessary (don't murder).
5. Moral laws are the subset of practical laws that can be expressed as categorical imperatives.
This last step is crucial, and should be somewhat intuitive (though not uncontroversial). Kant's argument for this is transcendental in nature—as so many things are for Kant: we find ourselves with the datum from moral experience that we often feel the pull of a duty that is independent of our contingent circumstances and desires. He then works back to the condition for this, which he takes to be a duty that is unconditional and universal, i.e. one that can be expressed as a categorical imperative.
Hence we get the conclusion that moral law is whatever can be expressed as a categorical imperative. And it's here that the really fascinating part comes in! Kant believes that we can derive what the content must be simply from the form of a categorical imperative. After all, I must be able to know the content of a categorical imperative simply from thinking about it, since, unlike with hypothetical imperatives, there is no condition, and so no extra information that needs to be added before I know what to do.
I really want to stress this, as I think it's a super interesting idea: we can derive the content of morality formally, simply from considering the concept of what a duty is—no messy intuition pumps and counterexamples. This is incredibly big if true, and very ambitious even if it doesn't succeed (as I doubt it does).
So what is the content? It's actually pretty straightforward. Since categorical imperatives are universal, the only constraint can be that whatever the imperative commands is something that can be universally commanded. Hence we get the formulation:
Act only in accordance with that maxim through which you can at the same time will that it become a universal law.
In other words: Something can be issued as a categorical imperative (and is thus a moral law) just in case it can coherently be universally willed. This looks pretty straightforward!
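To make the test a bit more explicit, here's one schematic way of putting it (my gloss and notation, not anything Kant himself uses): write $U(M)$ for the universalized counterpart of a maxim $M$, i.e. the situation where everyone acts on $M$. Then the test says:

$$M \text{ is a moral law} \iff \text{one can coherently will } U(M).$$

As we'll see, everything hangs on what "coherently" amounts to here.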
Kant goes on to give other formulations he takes to be equivalent (though it’s not quite clear exactly in what sense they’re supposed to be equivalent), but I’ll limit my scope here to the universalizability formulation.
Where Things Get Complicated
Yet I don’t believe it works, sadly. (I’ll grant all the background assumptions about what human actions are and whatnot for the sake of the argument.)
What Type of Coherence?
Firstly, it's unclear whether the derivation of the universalizability formulation goes through. Kant distinguishes two kinds of contradictions:
1. A contradiction in conception (you can't even conceive of the thing happening).
2. A contradiction in will (you can't rationally will that it happen).
We can plausibly allow that the argument goes through on the first type of contradiction. I mean, if it’s literally impossible that everyone follow some command, surely it cannot be a universal law for everyone. Yet Kant needs the second type of contradiction to get the things he wants: “Kill yourself” could conceivably be followed by everyone, even if it’d suck!2
He needs the idea of necessary ends—ends that any rational being must have. If it’s right that there are these, then he can argue that if some universalized maxim would undermine a necessary end, then willing it would undermine what you should rationally will.
However, now we just seem to be arguing in a circle! We were supposed to get the content of morality from considering the form of categorical imperatives. But now we're invoking the idea of what ends a rational agent must have—which is just a roundabout way of saying what ends we ought to have. That seems to be smuggling the content in through the back door! Put crudely, we need to know what we should do before we can use the categorical imperative to figure out what we should do.
Individuation
Perhaps the biggest issue (at least with this formulation of the CI) is that it leaves open how specific we are about our maxims. When Lyman Stone is at my door knocking, asking if I am hiding any shrimp in the attic, I might act on a number of increasingly specific maxims:
1. I will lie.
2. I will lie when doing so saves shrimp.
3. I will lie when doing so saves shrimp from being painfully executed and eaten by Lyman Stone in July 2025.
Perhaps 1 is not universalizable, but 2 might very well be, and it certainly seems like 3 is! This seems to lead to contradiction, though: if we choose 1, the CI tells me that I should not lie, but if we choose 3, it tells me that I may lie. What we need is some way of specifying which way of individuating maxims is correct.
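In the schematic notation from before (again my gloss, not Kant's): writing $M_1$, $M_2$, $M_3$ for the three maxims above, the very same act (my lie at the door) falls under all three. So if willing $U(M_1)$ is incoherent while willing $U(M_3)$ is fine, the test renders the act impermissible under one description and permissible under another, and nothing in the formula itself privileges either description.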
Kant sadly doesn't give us much guidance here. The best bet seems to be that maxims should be individuated by the ends or real reasons the agent has for following them. So if my end here is to deceive, I choose 1; if it is to save shrimp, I choose 2; and if I just want to stick it to Lyman Stone in this situation, I choose 3.
There are a couple of problems here, though. Firstly, it doesn’t seem to get the conclusions that Kant wants. We surely don’t get universal prohibitions on lying, as my ends in lying might often allow universalizable maxims (like saving shrimp).
Secondly—and in line with this—it looks like we just get back to the issue from before: Won’t my maxim then always be universalizable, as universalizing it will (presumably) achieve my end? It looks like we need some substantive idea of necessary ends again.
Revisiting Ends
It seems we can’t avoid the humanity formula after all, then, as his move to get to it involves an argument for what ends we ought to have. Morality cannot rest on contingent ends, Kant argues, and so there must be something of inherent value. This must be rational nature (defined as the ability to act according to maxims), as that is the precondition for morality to begin with. That is, it would be self-defeating for rational nature to have an end that undermines rational nature.
This might also answer the first objection: We’re not actually going in a circle, as we can derive the necessary ends from what rational nature could possibly have as an end, and so we get the apparently problematic assumptions for free. Thus a lot hinges on this point.
I must say, I just don’t really see the inference working. I feel like I have been sort of charitable in allowing certain leaps, but I mean, really? Suppose that there were a categorical imperative telling you to have some end that sometimes treated rational natures as mere means. This would, by definition, apply to everyone, and so the idea, I take it, is that we necessarily get trouble when everyone follows it.
But surely not! Consider the end of allowing Pareto improvements. This means that you might adopt the maxim "I will lie to you at time t, such that you never find out, are never harmed, and I save a life by it, etc. etc. (fill in anything you want)" [where "you" means me and "I" means you]. You following this maxim would use me as a mere means to an end, yet why shouldn't I be able to will it? In fact, I do will it! What is the problem supposed to be here? This looks like a counterexample to the inference he is making.
What About Counterexamples?
At the start I mentioned that intuitive counterexamples won't work if Kant's project actually gets off the ground. It should be clear by now why this is the case: we don't figure out what we ought to do by considering how plausible certain judgements are in certain cases, but simply by considering the form of the moral law.
Yet intuitive counterexamples still might matter—it's simply a matter of attacking more foundational parts of his project. Remember, the project rests on a transcendental inference from our moral experience to a moral law. However, if we consider cases like the axe-murderer (or shrimp-murderer) at the door, we don't have a moral experience of being compelled to tell the truth—if anything, we feel compelled to lie!
We might then make the following reductio:
1. We are compelled by a moral law.
2. If we are compelled by a moral law, then we should not lie to save a life [from Kant's whole argument].
3. But we should lie to save a life!
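Just to make the shape of the argument explicit (my rendering, not anything in Kant): let $p$ = "we are compelled by a moral law" and $q$ = "we should lie to save a life". The premises are then

$$p, \qquad p \rightarrow \neg q, \qquad q,$$

which are jointly inconsistent: the first two entail $\neg q$, contradicting the third. So at least one of them has to go.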
Both (3) and (1) are supported by our moral experience, and it’s then a question of which of the three premises is least plausible—which will probably not be (3).
So counterexamples still are important, but for a quite different reason than in ordinary ethical theorizing.
Epilogue: Should Kant Be a Utilitarian?
This is just a kind of shower thought I had while writing this: assuming we accept that morality is specified by categorical imperatives, it seems like the best candidate form for the imperative to take would be some kind of consequentialism.
Why think this? This echoes some things I've written here and here, but it basically comes down to agent-relativity. The idea is that if a theory is agent-relative (in the sense that your theory-given aims vary depending on who you are), then it is very likely to be collectively self-defeating3—i.e. everyone's following it hinders the achievement of those very aims.
But remember, a categorical imperative must apply to everyone, and so we surely can’t will that such self-defeating theories become universal law! The easiest way to escape this seems to be adopting some kind of agent-neutral theory, and it’s hard to have this without it being consequentialist.4 Hence it seems the natural conclusion is consequentialism!
Now, I don’t think we get all the way to consequentialism, nor to any particular kind of consequentialism. What I think it really shows is that Kant might have been a little too ambitious. Simply building from what a categorical imperative is underdetermines your moral theory, and you have to smuggle in a lot of assumptions and make some suspicious inferences to get the conclusions you want.
Still, history of philosophy isn’t necessarily interesting because it’s right, but because it attempts some very interesting and ambitious projects, and this Kant certainly does!
1. Yes, this post capitalizes on the work I have done throughout the recent exam season (all top grades, btw (thank you for asking)). I might write more posts based on work I already did to prep for the exams. For the same reason, don't expect pristine scholarly work; I apologize for any inaccuracies.
2. Well, he uses both types: a contradiction in conception leads to a perfect duty, and a contradiction in will leads to an imperfect duty. Funnily enough, he thinks suicide (from self-love) leads to a contradiction in conception, but it's hard for me to see how his argument could work in a way that doesn't also show that it is impossible for any single person ever to commit suicide out of self-love, in which case the conclusion is moot.
3. Agent-relative theories won't necessarily be self-defeating. For example, Parfit (from whom I'm stealing a lot here) suggested that agent-relative commonsense morality can be made non-self-defeating by adding a clause that you will cooperate when conflicting agent-relative aims would otherwise lead to self-defeat.
4. I actually think all agent-neutral theories are consequentialist if we're loose enough with how we define "consequentialism." Take some agent-neutral theory. By the definition I gave, this means that there is a theory-given aim common to everyone. Now, let's just define an action as having good consequences to the extent that it helps achieve this theory-given aim, and there we go!
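To put the construction slightly more formally (my notation, and only a sketch): suppose theory $T$ is agent-neutral, so it gives every agent one common aim $A$. Define $V(a)$ as the degree to which action $a$ helps achieve $A$. Then $T$'s instruction to any agent is just "do whatever maximizes $V$", which is a consequentialist theory under the loose definition just given.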
There are probably some counterexamples to this I’m too lazy to think of though.