How Many Thoughts Can Fit In Your Brain?
A concise argument against infinitism in epistemology
The following is an argument against epistemological infinitism, which I thought up during a lecture on justification. Infinitism holds that justification is structured as infinite regresses, rather than as circular chains (coherentism) or as finite chains terminating in non-inferentially justified beliefs (foundationalism). So for a belief to be justified, there must be an infinite, non-repeating chain of reasons accessible to you, each reason justifying the next, up to the belief in question. This position has been very unpopular historically, though it has some contemporary defenders.
My argument turns out to be a version of the finite-mind objection, which isn't particularly groundbreaking, though I think this formulation is somewhat distinctive, and more forceful than the usual ones. Here I try to state and defend it as concisely as possible:
The Argument
The first step is that not just any arbitrarily small difference in the state of your brain (or whatever else is responsible for your thinking) could amount to a difference in your thinking. It may not be immediately obvious what I'm saying here.
Basically: take any state of the brain. Some thinking will be associated with this state. The idea is that to have had different thinking, the state of the brain would have to change by some amount, and in every case there is a lower bound (even if a vague one) on the amount needed. So you could not make the change arbitrarily small and still have a different thought.
I think this is highly plausible. To deny it, one must hold that some change could be relevant no matter how small. But then it might be that, say, a difference of a Planck length in the position of a particle in your brain (or an equally minute difference in the quantum fields or whatever) could make a difference to which thought the brain realizes. Not just that: differences smaller still, smaller than what physics can currently describe, must matter as well. Surely not!
If the above is right, then the states of the brain relevant to thinking (call them thought-states) can be described discretely. The second step is that when considering which possible thought-states count as "available" for justification, there is some finite bound on how different a state may be from your current brain state.
Take a reference frame with its origin fixed inside your brain, and draw a sphere with a radius of N meters around the origin (it doesn't matter how big N is, so long as it's finite; it could be a googolplex or whatever). The idea is then that the relevant thought-states must fit within this sphere.
But if the thought-states can be described discretely, and there is only a finite space within which they matter, then there are only finitely many possible thought-states for any brain within a given period of time.
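As a toy illustration (mine, not part of the argument, and the numbers are made up): if a bounded region coarse-grains into finitely many distinguishable sites, each taking one of finitely many discrete states, then the number of possible configurations is astronomically large but still finite:

```python
def possible_thought_states(cells: int, states_per_cell: int) -> int:
    """Count the distinct configurations of a discrete, bounded system."""
    return states_per_cell ** cells

# Even with a generous number of sites, the count is finite, so an
# infinite non-repeating chain of distinct reasons cannot fit inside it.
huge_but_finite = possible_thought_states(cells=10**6, states_per_cell=2)
print(huge_but_finite.bit_length())  # 1000001 bits: vast, but finite
```

The specific coarse-graining is hypothetical; the point is only that "discrete states in a bounded region" entails a finite count.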
Infinitism claims that for a belief to be justified, there must be an infinite chain of reasons available to justify that belief—and these reasons cannot appear more than once, lest the infinitist become a coherentist. But per the above, there are only finitely many possible thoughts a brain could realize, and so there must come a point in the chain after which none of the further reasons are possible for you to think about.
Let’s then finally add a principle: For a reason to be available to you for justification, it must at least be possible for you to think about it.
It then follows that none of your beliefs can be justified on infinitist grounds.
Objections
Accessibility
An infinitist would likely first attack the notion of accessibility I'm employing: perhaps accessibility doesn't require in-principle possibility but only meets some much weaker criterion.
The best candidate I can think of is that for a reason to be accessible, we must simply have faculties with the powers to formulate the reason in principle, even if we could never actually do so. That is, I have the faculty to multiply numbers, so even if I could not possibly represent and multiply some two numbers in my head, I have capacities that could, in some general sense, perform the operation.
But I mean, surely I don't have some abstract power like this; I don't, for instance, have faculties with the power to solve the game of Connect Four without any assistance whatsoever. I do, of course, have the ability to perform various mathematical operations. Still, it's not that I have some abstract number-manipulating power. Rather, I have neurons that can fire in certain patterns given certain inputs, and these can get the right results in many possible cases; in many other cases, however, my brain is simply too limited to represent the things involved at all. So even under a weaker definition, it just does not seem like we could possibly have infinitely many accessible reasons.
Thinking Over Time
My argument also seems to assume that a reason must be thought at a single instant, but that's surely wrong! Reasons are often arguments, and if an argument is long enough, it's impossible for me to conceptualize it all at once. Still, it's certainly accessible for justification.
But even taking time into consideration, things look no better: for any finite amount of time, the number of thoughts you could possibly have is still finite. Our life on Earth also lasts only a finite amount of time, and if some reason would take me at least Graham's number of years to think (as must be the case on infinitism), it's certainly not accessible! I'd be dead a million-billion times over before I had even begun conceptualizing it.
Non-Physical Minds
I've been talking as if we are just our brains or something, but what if we're non-physical? Then you can't just quantify possible thoughts by quantifying brain states. However, even if our minds are non-physical, whatever metric we describe them with is still surely discretely describable for our purposes.
I mean when I think “Socrates is a man” now, and then think it again now, my experiences differ ever so slightly. Yet it’d be ridiculous to say that I have given two distinct reasons for why Socrates is mortal, and that justifying the one with the other wouldn’t be circular. If that’s so, there’s some limit to how little thoughts can differ from each other and still count as separate reasons.
Perhaps there’s an afterlife where we get infinitely enhanced cognitive capacities, but similarly to the last objection: If I have to die, go to heaven, and receive drastically different cognitive capacities from God before I can justify my belief, the reason given is definitely not accessible to me!
Externalism
Finally, it seems like I’ve been presupposing internalism about reasons. But perhaps reasons depend on what goes on outside our heads, in which case simply focusing on the brain won’t do.
To accommodate this, we just look at the system you're a part of as a whole: we can simply expand the sphere that was initially meant to include your brain so that it includes the external environment as well. Perhaps we make it a trillion light-years in radius or something, such that there'd surely be no way anything outside it could affect your reasons. It still looks very plausible that arbitrarily small differences won't result in different reasons, and so the argument still goes through.
I don’t know how decisive this argument is, but it feels pretty compelling to me.
Comments
I'm not an infinitist, but I think there's a possible counterargument here: Finite minds can *represent* infinite structures via finite rules, in the same way that we can finitely define an infinite sequence like:
“For all n ∈ ℕ, Rₙ = ‘I believe Rₙ₊₁ is a good reason for Rₙ’”
Even though we can’t write out every member of the infinite chain, we can formulate the *schema* of the chain, just like the function f(n) = n + 1 defines an infinite sequence of numbers with a finite rule.
You might object that this is similar to the number-multiplying case, and I think in some instances it will be, but in others not so much. I indeed can't multiply any two numbers. However, I can add 1 to any number, so if the chain has a structure like that, I could do it. Also, and maybe this is too meta, but perhaps being able to multiply any two numbers is unnecessary; maybe all you need is to be able to write a mathematical proof that any two numbers are multipliable (or something similar for epistemology).
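The point about finite rules describing infinite structures can be sketched concretely. Here is a Python generator (the phrasing of each reason is just my paraphrase of the schema above): a finitely specified rule that yields members of an infinite chain on demand, even though only finitely many members could ever actually be produced:

```python
from itertools import islice

def reason_chain():
    """A finite rule that generates the schema R_n one member at a time."""
    n = 1
    while True:
        yield f"R{n}: I believe R{n + 1} is a good reason for R{n}"
        n += 1

# The rule itself is finite; the chain it describes is not.
for reason in islice(reason_chain(), 2):
    print(reason)
```

Whether possessing such a rule counts as having the reasons "accessible" is, of course, exactly what the original argument disputes.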
I guess I don't feel the plausibility of the principle that changes to the mind have a finite lower bound. It seems like we can think there is no finite lower bound to the changes that can be made to thought-states, but that the smaller the change made (say, in your neurons or what have you), the smaller the change in the thought-state will be. So thought X looks almost identical from the inside to thought Y, where Y is X with some Planck-level difference, but there is some minute alteration (maybe even one you don't notice). I just don't see why that's super implausible.