If you’ve ever spoken with an insufferable, mildly drunk philosophy undergrad at a bar, or thought about starting a podcast, you probably know what skepticism is: “Dude, like, how do you even know you have hands, bro?” In general, skepticism denies knowledge about a certain domain, though the most philosophically interesting form might be external world skepticism, which denies that we have knowledge about… well, the external world.
Arguments for external world skepticism usually run along the following lines: Some skeptical hypothesis (SH) is constructed, which explains all the things that make us think there is an external world, but without there actually being such a world (or at least it being wildly different from what it appears). Now, since we don’t know that all SH’s are false, we can’t know that the real world hypothesis (RWH) is true, and so cannot know that there is an external world. With this goes all our knowledge about ordinary external objects such as tables and chairs (does anyone know why these are the go-to example of ordinary objects?). Obviously there have been many responses to this style of argument, such as denying the closure of knowledge, claiming that it is impossible to formulate skeptical hypotheses, or giving Moorean “proofs.” My preferred option is simply to say that we can know that the SH’s are false.
This is not a particularly original insight, but the idea is simply that RWH is the best explanation of our total evidence. So while we cannot be absolutely sure that no SH is true, we can be sure enough. In good ol’ philosopher style, I want to frame the argument in terms of a dilemma. For any SH, either:
SH is about as complex as RWH, but doesn’t predict our evidence as well
or
SH predicts our evidence as well as RWH, but is much more complex.
Let’s look at the first horn first. What does RWH say? Well, as a general formulation, it says something along the lines of: there is a world of physical objects, and these cause our sensory experiences to give us accurate perceptions. The general form of SH will depend on which hypothesis we have in mind, but something like, e.g., a brain-in-a-vat hypothesis will, as a general formulation, be: you are a brain in a vat, and this brain is stimulated to have experiences. On these general formulations, it seems like the two hypotheses are roughly equally simple. They each postulate some sort of world and some sort of mechanism for our experiences, and neither is obviously more a priori implausible than the other. But consider now what each theory predicts. The set of experiences compatible with RWH is dominated by experiences that appear like a 3D environment, with objects that persist through time, coherence between different senses, etc. SH will include all these experiences as possibilities, but it will equally predict many more besides, including experiences of white noise, incoherent senses, glitching of the world, and so on. That is, the set of experiences I am currently having is a much smaller proportion of the experiences compatible with SH than of those compatible with RWH. And since it’s not clear that SH in this general formulation strongly favors my current set of experiences the way RWH does, SH is very strongly disconfirmed by them.1
Putting it much more briefly: if SH were true, you should expect incoherent white noise, and if RWH were true, you should expect something along the lines of your current experiences. So my current experiences make RWH much more likely than SH.
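To make the shape of this argument concrete, here is a toy Bayesian sketch in Python. The numbers (a 50/50 prior, the two likelihoods) are entirely made up for illustration; nothing in the argument depends on them:

```python
# Toy Bayesian model of the first horn, with made-up numbers.
# RWH concentrates its probability on coherent, world-like experience
# streams; a generic SH spreads its probability over a vastly larger
# space (white noise, glitches, etc.), so it assigns our actual
# experiences only a tiny likelihood.

def posterior_rwh(prior_rwh, like_rwh, like_sh):
    """Posterior probability of RWH in a two-hypothesis comparison."""
    prior_sh = 1 - prior_rwh
    return (prior_rwh * like_rwh) / (prior_rwh * like_rwh + prior_sh * like_sh)

# Even granting the skeptic a generous 50/50 prior, the likelihoods swamp it:
p = posterior_rwh(prior_rwh=0.5, like_rwh=0.9, like_sh=1e-6)
print(round(p, 6))  # 0.999999
```

The exact numbers don’t matter; the point is only that when one hypothesis spreads its predictions thinly over a huge space of possible experiences, observing coherent experiences pushes the posterior overwhelmingly toward the hypothesis that concentrated on them.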
This leads into the second horn. After all, SH will not just say that you are, say, a brain in a vat, but that you’re a brain in a vat created by scientists with certain intentions, etc. In adding these auxiliary assumptions, we reduce the likelihood of incoherent experiences given SH, making our current experiences better predicted. Still, the hypothesis is very general, and many specifications of it are incompatible with our current experiences—for example, that I am a brain in a vat, and the scientists responsible want me to have an experience as of being on a rollercoaster, and are successful. So while we’re at it, we might as well restrict the hypotheses under consideration to those that strongly predict the entirety of our current experiences. For RWH, those will roughly be the specifications that entail that the physical objects that appear to be around me really are there, and that these cause my perceptions—though the farther things are from my immediate perception, the less it will need to specify about them.2
SH has a lot more leeway. Focusing on brain-in-a-vat hypotheses, it might be that the scientists simulate a physical world and stimulate my brain in accordance with it; or that they manually program the sequences of electrical impulses sent to my brain; or that they wanted me to have the experiences of Donald Trump, but were very bad programmers; or… What the SH’s we consider here have in common is simply that they are so precisely specified that they either entail, or with very high probability predict, that we have our current experiences—and that these are produced in a non-veridical manner. The question then becomes whether RWH makes the prediction in a simpler way, or whether some SH can do the job better.
One tempting thought is that SH’s win out on parsimony grounds; we might think that it is simpler to postulate my experiences than to postulate some external object causing my experiences, since the latter involves more entities. I don’t think this is the case, though. The reason is that an external object gives a compact and unified explanation for the structure of our experiences—both synchronically and diachronically—so if we actually were to spell out each theory in full, the one simply postulating my experiences would be a lot more complex. An analogy may help: you are designing a videogame consisting of a player being able to walk around a room with a cup in it. You might produce the patterns of color on the monitor in one of two ways. You could manually script the color of each pixel at each time, and fill in how these colors are to change depending on the player input over time. This would obviously be an enormous undertaking!3 Alternatively, you could define the geometry of the room, cup, and player; specify some rules for how the player moves given inputs, as well as how colors are drawn on the screen given the position and angle of the player, etc. While this would still be pretty demanding to do from scratch, it would clearly be much, much simpler than manually programming in every color of every pixel. So while simply stipulating the experiences without the corresponding external objects does postulate fewer entities, it is way more complex overall.
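The gap in description length can be made vivid with a toy computation—the 50×50 “room” and the particular rendering rule below are invented purely for illustration:

```python
# Toy version of the video-game analogy: a compact generative rule
# versus an explicit pixel-by-pixel listing of the very same scene.

W, H = 50, 50

# "Structured-world-style" description: one short rule that generates
# every pixel (a bright disc on a dark background).
procedural_spec = "pixel(x, y) = 255 if (x - 25)**2 + (y - 25)**2 < 100 else 0"

def pixel(x, y):
    return 255 if (x - 25) ** 2 + (y - 25) ** 2 < 100 else 0

# "Bare-experiences-style" description: enumerate all 2,500 values.
explicit_listing = ",".join(str(pixel(x, y)) for y in range(H) for x in range(W))

print(len(procedural_spec))   # a few dozen characters
print(len(explicit_listing))  # several thousand characters
```

And this is a single static 50×50 frame; once the scene changes over time and responds to input, the explicit listing blows up further while the rule barely grows—mirroring why stipulating raw experiences is the more complex theory overall.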
This means that all the best candidate hypotheses will have to include the structure of the physical universe described by RWH in their description—be this in the intentions of an evil demon, or in the programming of a computer—since these structures explain our experiences in the simplest way. In doing so, they will all be at least as complex as RWH. But while RWH stops here, SH’s will also have to specify whatever thing realizes this structure, which only adds complexity. For example, a brain-in-a-vat hypothesis will need to claim that there are scientists, a vat, etc., AND that the scientists stimulate your brain as if it were in RWH. So all the information of RWH is contained in SH, and then some.
We might think that agential SH’s avoid this problem because we only need to stipulate an agent, and then we get the details of the world for free, due to its voluntary action—for example, we only need to stipulate an evil demon, and since it freely chooses to deceive us, we get the appearance as of the world for free. But this just lands us straight back on the first horn. After all, the demon might have any number of intentions, and the set of experiences it could cause us to have is much larger than the set of experiences RWH might cause, so without some auxiliary hypothesis that favors our experiences, it is strongly disconfirmed by our experiences. Adding auxiliary hypotheses—such as that the demon desires to make us think that RWH is true—will just decrease the prior instead, so either way it is worse than RWH.
Another worry is that even if each individual skeptical hypothesis is less likely than RWH, there are just so many potential skeptical hypotheses, that we should overall suspect that we’re in a skeptical scenario, even if we don’t know which one. I don’t think this is right. If we have one simple hypothesis that explains some evidence, and many, many alternative complex theories that explain the same evidence, we should generally prefer the single simple theory to the disjunction of all the complex theories. For example, if we find Jones’ DNA on the murder weapon, and have eyewitnesses saying he was the murderer, we should think that it’s more likely that he’s the murderer than not, even though there are infinitely many hypotheses that explain the evidence without him doing it.
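Here is a toy way to see why the disjunction doesn’t win—assuming, purely for illustration, a prior that halves with every extra bit of description length, one simple hypothesis can outweigh even a million complex rivals combined:

```python
# Toy model: priors shrink exponentially with description length
# (prior proportional to 2**-length, as in simplicity-weighted priors).

def prior(length_bits):
    return 2.0 ** (-length_bits)

# One simple hypothesis, describable in 10 bits...
simple = prior(10)

# ...versus a million rivals, each needing at least 40 bits to specify.
rivals = 1_000_000 * prior(40)

print(simple > rivals)  # True
```

The particular numbers and the 2**-length prior are stand-ins; the structural point is just that an exponential complexity penalty can dominate any merely large number of disjuncts.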
There is one type of external-world skepticism that this doesn’t defeat, namely those skeptical hypotheses on which we can no longer trust our mathematical, logical, abductive, etc. reasoning. Descartes’ classic evil demon is such an example. The problem is that if I cannot trust that 2+2=4, or that modus ponens is valid in classical logic, then I can no longer trust the argument I make against the skeptical hypothesis—if the skeptical hypothesis were true, I would believe the argument to be sound, even if it were totally incoherent.
One potential response is that these hypotheses cannot be coherently formulated or taken seriously. After all, we will need to use logical connectives to specify the content of the skeptical scenario (e.g. there exists an evil demon, and if it desires to give you belief X, you will have belief X). So in considering the hypothesis, we are presupposing its falsehood, meaning we can’t really think coherently about it. For that reason I also don’t think it’s a vice of the response that it cannot rule out these SH’s—literally no reasoning could.
Inference to the best explanation also doesn’t rule out what we might call “serious” SH’s, which are those that are actual contenders for viable theories of the world. One such hypothesis is the simulation hypothesis, which—while probably false by my lights—does have serious arguments in favor of it. Exactly because it actually does the hard work of defending itself in the arena of worldviews, it isn’t easily ruled out by inference to the best explanation.
These caveats actually just give me a stronger sense that this is the correct answer to skepticism. The fact that it explains why we can be confident we aren’t in strange skeptical scenarios that are obviously false, while it cannot by itself rule out skeptical hypotheses that we shouldn’t expect to be able to discard, is pretty compelling evidence that it is right. Anyways, that’s all I have to say.
1. Strictly speaking, I think the hypotheses, formulated as generally as they are here, are too vague to make any predictions—so the discussion only really makes sense when we get to the second horn.
2. That the universe is very old might also be included here.
3. When I was a kid, I actually thought this was how they made video games, lol.
I don't buy your claim that RWH is somehow more parsimonious than SH. It's easier to build a video game than it is a universe. With RWH you also need to explain both epistemology and ontology, whereas for SH you just need epistemology and don't need to make ontological claims.
My sense is that many people bring up SHs not to convince you that they're more plausible than RWHs, but to show that they have non-trivial plausibility, thereby ruining our ability to have certain knowledge of the external world. Best explanation arguments help us find the most plausible theories, but they don't reduce opposing theories' plausibility to zero.
I'm not myself a skeptic, though. It seems to me like the best way to tackle this problem is to argue against the notion that certain knowledge is only possible if all opposing theories have zero (or comically low) probability.
PS: The Descartes evil demon counterargument doesn't work. It could be that you do not need logical connectives to formulate the SH, and the demon is just tricking you into thinking that you do.