20 Comments
Bob Jacobs

I'm not an infinitist, but I think there's a possible counterargument here: Finite minds can *represent* infinite structures via finite rules, in the same way that we can finitely define an infinite sequence like:

"For all n ∈ ℕ, Rₙ = ‘I believe Rₙ₊₁ is a good reason for Rₙ’”

Even though we can't write out every member of the infinite chain, we can formulate the *schema* of the chain. Just like how the function f(n) = n + 1 defines an infinite sequence of numbers with a finite rule.
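
(A minimal sketch, just to make the schema idea concrete--the function names are illustrative, not from the original comment: a single finite rule can generate every member of the chain on demand.)

```python
import itertools

def nth_reason(n: int) -> str:
    """The finite rule: the n-th entry of the chain R_0, R_1, R_2, ..."""
    return f"I believe R_{n + 1} is a good reason for R_{n}"

def reason_chain():
    """Lazily generate the infinite chain; no member is ever written out in advance."""
    n = 0
    while True:
        yield nth_reason(n)
        n += 1

# The rule is finite, yet it determines every member of the chain:
print(list(itertools.islice(reason_chain(), 3)))
# ['I believe R_1 is a good reason for R_0',
#  'I believe R_2 is a good reason for R_1',
#  'I believe R_3 is a good reason for R_2']
```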

You might object that this is similar to the number-multiplying case, and I think in some instances it will be, but in others not so much. It's true that I can't multiply any two arbitrary numbers. However, I can add 1 to any number, so if the chain has a structure like that, I could do it. Also, and maybe this is too meta, but maybe being able to multiply any two numbers is unnecessary; maybe all you need is to be able to write a mathematical proof that any two numbers are multipliable (or something similar for epistemology).

Silas Abrahamsen

That's an interesting reply! I'm not sure exactly how best to respond, though I feel there's something wrong.

One thing might be that it's not clear whether that's actually giving you infinite entries on your list of reasons, or just a single one. Like, if I can support my belief in P with a rule that describes how I would respond if you asked "why" and then how I'd respond if you asked "why" to my answer there, etc., then it just seems like my justification lies in the rule, not in each member of the chain.

It's hard to give an example, as I don't really even know what an infinite chain of justification would look like. But maybe something like this is close enough to be analogous: If you ask me to explain how I know there's an external world, and I argue that all the infinitely many possible alternative hypotheses would either predict my experiences less well or have a lower prior, it seems like my justification comes from the argument itself, not from hypothesis A being worse than the real-world hypothesis, hypothesis B also being worse, etc. ad infinitum.

Also it's not obvious that simply having the rule is strong enough for accessibility in the relevant sense. If I can give you a rule for how I'd respond, but then once you ask me for my reason number 159837382, my head would turn into a black hole if I tried to even think about it, I'm not sure I can use that reason for justification.

A final worry might be that the chain of justification is rarely ever going to be so straightforward. If you keep asking me "why" I doubt we'd ever get to a point where my answers simply followed a predictable pattern forever, so that you didn't need to ask me in order to be sure what I would answer. But that's maybe a separate argument.

Bob Jacobs

Hmmm, well, think about moral epistemology. If you are uncertain about morality--if there are many different moral theories that seem plausible to you--you'd need a theory of "moral uncertainty": a rule that helps you select between these different theories.

Say you are convinced that "maximizing expected choiceworthiness" (a version of maximizing expected value) is the best procedure: picking the theory with the highest expected value. Great, now you have a procedure to pick the best moral theory, but how did you pick that theory of moral uncertainty? There are also other theories of moral uncertainty, say "randomization" (picking a theory at random). You need a theory of how to pick a theory of how to pick a moral theory... and then how did you pick *that* theory? This is an infinite regress problem.

However, if you think 'Maximizing Expected Choiceworthiness' is a great way to select theories, and the rival theory of 'randomization' isn't, you can just say: to select my theory of moral uncertainty I choose MEC over randomization. To pick the theory by which I pick theories, I still use MEC (applied to the first level, so "meta-MEC") over randomization (or "meta-randomization"). To select *that* theory I choose meta-meta-MEC over meta-meta-randomization, then meta-meta-meta-MEC... hey, I think I can write this down as a function: for every additional level, add one layer of meta-ness to the MEC.
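
(As a rough illustration--a sketch with made-up names, not something taken from the moral-uncertainty literature--that function fits in a couple of lines; one finite rule generates the whole tower.)

```python
def theory_at_level(n: int) -> str:
    """At every level of the regress, the same finite rule picks MEC."""
    return "meta-" * n + "MEC"

print(theory_at_level(0))  # MEC
print(theory_at_level(3))  # meta-meta-meta-MEC
```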

Now we have infinitism without a cognitive overload.

Silas Abrahamsen

I actually have a post in the works on this exact type of regress problem, lol (though what I say there probably isn't too relevant here).

I guess I would still think here that my justification for believing some choice is right isn't that MEC tells me, and then meta-MEC tells me to pick MEC, and meta-meta-MEC... etc. Rather my justification is that an infinity of orders of meta-meta-...-meta-MECs would advise me to make this choice. So it's really the rule that "for any order, believe MEC at that order" that is doing the justification.

And my justification for following this rule is simply gonna be some arguments in favor of MEC (something about dominance, Dutch books, etc.).

Bob Jacobs

Isn't this still a version of infinitism, since you need an infinite chain rather than a ground truth or a loop? E.g. say you see utilitarianism as a "brute fact" or "self-justified"; then you have a foundationalist model of moral epistemology. If you have a loop (e.g. utilitarianism is the best because MEC selected it, and MEC I'll defend on utilitarian grounds), then you have a coherentist moral epistemology. But if you have an infinitely recurring chain of MECs you'd have, I would presume, an infinitist moral epistemology.

Silas Abrahamsen

I don't think it is. Perhaps an analogous example: Suppose I think that the world is held up by an infinite stack of turtles. Then turtle 1 is held up by turtle 2, etc. But I can still believe this and be a foundationalist: I believe the rule that "for any turtle n, turtle n+1 holds it up." When you ask me about any particular turtle, I will cite the turtle below it--so the ontological chain is infinite.

But my epistemic justification chain isn't: The reason for believing this about any particular turtle is my reason for believing the whole stack to be infinite (perhaps God told me, or I made an inductive inference from looking at many of the turtles, or whatnot). And so I might believe in this infinite chain, and still have my reasons terminate in some foundational beliefs.

Now you're more familiar with the normative uncertainty literature than I am, but I take it that the idea isn't generally that my reason for believing in some lower-order theory is some higher-order one. Rather the higher-order theory is supposed to justify the lower-order one in some normative sense (it's right/rational for me to do X because I have a high credence in utilitarianism and my favourite theory is correct, and meta-my favourite theory is correct, etc.)

The actual way I justify believing in MEC, or utilitarianism, or my favourite theory, or whatever, is that I consider certain cases (like ethical dilemmas) or structural features (we want dominance or non-fanaticism). But this looks a lot more like how I might justify my belief in an infinite stack of turtles--and not like I justify each layer with the layer above.

Bob Jacobs

The turtle is disanalogous to recursive MEC, because turtles are concrete objects, which (moral) epistemology is not about, whereas MEC is a theory, which (moral) epistemology is about. So if I believe the universe is filled with an infinite sequence of Higgs bosons, that doesn't automatically make me an infinitist. I could e.g. work at CERN within a standard physics framework (non-infinitist). What would something need to be, to be infinitism? According to Wikipedia:

"Infinitism is the view that knowledge may be justified by an infinite chain of reasons. [...] Traditional theories of justification (foundationalism and coherentism) and indeed some philosophers consider an infinite regress not to be a valid justification. In their view, if A is justified by B, B by C, and so forth, then either

The chain must end with a link that requires no independent justification (a foundation),

The chain must come around in a circle in some finite number of steps (the belief may be justified by its coherence), or

Our beliefs must not be justified after all (as is posited by philosophical skeptics)."

In other words, it's the *reasons*, not the objects. For infinitism the chain runs R₀ ← R₁ ← R₂ ← … (no circle, no endpoint), which is what I sketched out with my tower of MECs: every reason is itself supported by another reason, and no metaₙ-MEC is treated as self-evident, because each is justified *only* by the next level. The schema does not replace the chain, it *describes* it. Now what you say about dominance is true, but that's *regular* epistemology, not *moral* epistemology. So the tower of MECs is an infinitist moral epistemology, not an infinitist (regular) epistemology (since it doesn't talk about justification *in general*).

Kaiser Basileus

Deontology, consequentialism, and virtue ethics are each insufficient. What is needed is a meta-ethic of priorities to govern the rest.

Kaiser Basileus

"Why" questions are always either "How?", which is explicitly empirical (scientific), or "From what intent / to what end?"

Tower of Babble

I guess I don't feel the plausibility of the principle about changes to the mind having a finite lower bound. It seems like we can think there is no finite lower bound to the changes that can be made to thought states, but that the smaller the change made (say, in your neurons or what have you), the smaller the change in the thought state will be. So thought X looks almost identical from the inside to thought Y, where Y is X with some Planck-level difference, but there is some minute alteration (maybe even one you don't notice). I just don't see why that's super implausible.

Silas Abrahamsen

Perhaps, but I think there are two sorts of things to say to that:

1) It just seems like there are gonna be some plausible natural breaking points where small differences make a difference to your thoughts (like whether a neuron fires, or whether some chemical reaction proceeds or not).

2) Even if our thoughts are continuous, strictly speaking, I think arbitrarily small differences aren't going to matter for justification. Copying my own example: If I think "Socrates is mortal" now and then think "Socrates is mortal" now, those two thoughts were somewhat different, and the results of different brain states. But they're plausibly not different enough to count as distinct entries on a list of reasons--so using the one to support the other would still be circular. Much less if my brain were in literally identical states when I thought the two thoughts, except for the minute position of a single particle or whatever.

Tower of Babble

For 1 it just doesn't strike me as that plausible, but maybe I'm just not considering the case properly.

For 2, this seems right to me but I'm not sure why we would think that the small differences *couldn't* make a difference wrt the content of the thought, unless we appeal to the broader principle you defend but that I reject from 1.

Silas Abrahamsen

I'm not sure I quite understand what you mean for 2? Is the idea that what I said is right in certain cases, but in some cases any arbitrarily small difference in the thought is sufficient to count as a new distinct reason?

Tower of Babble

So for 2 the idea is that I don't see why an arbitrarily small change in a thought could not constitute a new reason. I think your principle provides a reason why that could not happen; that's why 2 only comes into play if you are also dubious of said principle.

Joe James

Seems like you're arguing that infinitism collapses into circularity (not that your argument is circular, but that this thing they call infinitism is actually just circularity if you look at it under a microscope). Because there's a base amount of brain power/space needed for cognition that cannot be reduced. I'm 70% sure I'm misreading that though (or at least this idea popped into my brain while reading).

Silas Abrahamsen

Well, I'm not exactly trying to diagnose what infinitists should think the structure of justification is. The point is simply that it certainly can't be infinite chains--at least if any beliefs are justified.

Kaiser Basileus

For me, about .88 at a time. Before I get a good handle on it, I'm already off to the next one.
