An intuitive view of what arguments are supposed to do is that they rationally convince us of new things, leading us closer to the Truth™. I think this is to some extent correct, but it may also lead us to be too optimistic about their power. In reality I don’t think they can always adjudicate disputes, and they can never rationally convince us of anything we’re not already implicitly committed to.
Some parts of this post are the sorts of ideas where things mostly fit together in the end. For this reason there are some points where I’ll argue from a supposition that I’ll later reject—to go all Wittgensteinian on your ass, I could call it throwing down the ladder I just climbed. Anyway, the thing I’ll be arguing (at least to start off with) is that there will be many cases of disputes where no amount of argumentation could rationally convince one interlocutor to agree with the other—that is, they’d always be rational in disagreeing.
This is in some respects similar to Nathan Ormond’s post on Australian person Graham Oppy’s views about arguments. Likewise, some of the impetus for this was a brief exchange we had about intuitions on his post on the topic.
Deductive Arguments
Suppose that you give me a successful deductive argument, by which I mean one that will require me to change my view about something. Such an argument will have premises that I accept and a conclusion I don’t (and will be valid, obviously):
1. Wonder and Aporia is the best blog on the internet (accept)
2. You should subscribe to the best blog on the internet (accept)
3. Therefore, you should subscribe to Wonder and Aporia (deny)
In the face of this, I’ll have to revise at least one of my beliefs, since the premises cannot all be true while the conclusion is false, and so I’ve rationally been convinced to revise a belief (presumably 3 in this case).
Notice what happened here: there were some of my beliefs that were inconsistent, and thus I needed to revise at least one of them to resolve the inconsistency. Note further that this is the only type of case where I should be rationally convinced by a deductive argument—in all other cases, I either already accept the conclusion or I don’t accept at least one of the premises, and can thus reject the argument. This means that the only way I could ever be convinced of something new by a deductive argument is if some of my beliefs are logically inconsistent.
I think it’s too simplistic to just talk about beliefs and assent here. We don’t just hold binary attitudes of belief and disbelief towards propositions. Rather we have varying degrees of credence in propositions. Thus we can’t just talk about believing the premises and not believing the conclusion in a successful argument—we have certain credences in the premises and the conclusion. More specifically, an argument will be successful on me if my credence in the conjunction of the premises is higher than my credence in the conclusion.
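To put the point in notation the post doesn’t itself use: if an argument from premises $P_1, \dots, P_n$ to conclusion $C$ is valid, the conclusion is true whenever all the premises are, so a coherent credence function $\Pr$ has to satisfy

$$\Pr(C) \geq \Pr(P_1 \wedge P_2 \wedge \dots \wedge P_n),$$

and the argument is successful on me exactly when my credences violate this inequality.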
Let’s now imagine that I have infinite brainpower (not too hard to do). In that case, I’ll realize all the entailments of all my beliefs/credences, and so be able to root out any inconsistencies. When this is done, you’ll no longer be able to present me with a successful deductive argument, since there will be no deductively valid argument where I have a higher credence in the conjunction of the premises than in the conclusion. When this is done, my credences will be coherent, and so I’ll call this web of credences my coherent web of credences.
But what does this have to do with how things actually are? Well, when a puny mortal like me with limited brainpower is presented with a successful deductive argument, I’ll resolve the inconsistency, and in doing so I’ll (hopefully) move one step closer to my coherent web of credences. More importantly, considering a deductive argument can never rationally move me to a place where I can’t rationally reach that coherent web of credences, since rationally considering arguments only moves me closer to having consistent credences. This means that there’s always in principle a(n infinitely long) route from my current web of credences, to my coherent web of credences, through considering a bunch of deductive arguments—namely the route I would take if I suddenly acquired infinite brainpower—and the end destination, the coherent web of credences I would hold, is never changed by my considering an argument.
Crucially, there seems to be no guarantee that my coherent web of credences will be the same as yours, and mine could very well give a very low credence to a proposition we’re discussing, while yours could give a very high one. This would mean that any rational discussion about that proposition is doomed to intractable disagreement, such that it would not be rational for either of us to end up agreeing with the other, and the only way for us to reach agreement would be for at least one of us to be irrational.
I have here been talking as if there’s only a single coherent web of credences that it’d be rational for you to end up having, but it’s not at all clear that this is the case. What this requires is that there’s only a single way it would be rational for you to update your credences in the face of a successful argument. A reason to doubt this may be if we think that there is no rational way to update your credences when you realize that you’ve been inconsistent. This could for example be because epistemic rationality simply means having a consistent set of credences (as well as correctly updating on new evidence). I’d say that this makes rationality a pretty useless term, since no one is, ever has been, or could ever hope to be rational. But to avoid getting bogged down by definitions, I’ll coin the new term, “schmational,” which refers to the epistemically correct way to update your credences to resolve inconsistencies. That is, rationality is a property of webs of credences (namely those that are consistent (perhaps including further conditions we’ll get to)), and changes to these in the face of new evidence. Schmationality, on the other hand, regards the changes to webs of credences when they are not yet consistent (and absent new evidence).
It seems pretty clear to me that there is such a thing as schmationality. For example, in the face of the argument from before, it seems like I would be messing up very badly if I changed my web of credences to have a credence of 1 in some arbitrary proposition like “there exists a red ball” and set all my other credences to the minimums consistent with this, given my observations. But if there’s no such thing as schmationality, there would be nothing wrong with this, and I would in fact end up being rational.
I think a plausible contender for schmationality, then, is a sort of “path of least resistance,” meaning that you should change your credences as little as possible to resolve the inconsistency. This will not simply involve raising your credence in the conclusion and lowering your credences in the premises, such that they’re consistent and the change is minimal. This is because each of the propositions featuring in the argument will also feature in all sorts of other arguments—that is, they’ll have connections with other parts of your web of credences. So you should really minimize the change to your entire web of credences, which will not always be what minimizes change for the propositions in the argument. It isn’t important here that this is the correct account of schmationality, as long as there is such a thing.
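As a toy illustration (my own sketch with made-up numbers, not anything from the post): imagine the web is just two numbers, a credence in the conjunction of the premises of a valid argument and a credence in its conclusion, where coherence requires the latter to be at least the former. The “path of least resistance” would then be the nearest pair of credences satisfying that constraint; the real proposal would of course minimize change over the entire web, not just these two numbers.

```python
# Toy "path of least resistance" sketch (an illustrative assumption, not the
# post's own formalism): find the coherent credences closest to the current
# ones, where coherence for a valid argument means
# credence(conclusion) >= credence(conjunction of premises).
import numpy as np
from scipy.optimize import minimize

def least_resistance_update(premises_conj: float, conclusion: float) -> np.ndarray:
    """Return the nearest (premises_conj, conclusion) pair that is coherent."""
    x0 = np.array([premises_conj, conclusion])

    def total_change(x):
        return np.sum((x - x0) ** 2)  # how far the new credences move from the old

    coherence = {"type": "ineq", "fun": lambda x: x[1] - x[0]}  # P(C) - P(premises) >= 0
    bounds = [(0.0, 1.0), (0.0, 1.0)]
    return minimize(total_change, x0, bounds=bounds, constraints=[coherence]).x

# Credence 0.9 in the conjunction of the premises but only 0.4 in the conclusion:
print(least_resistance_update(0.9, 0.4))  # roughly [0.65, 0.65]
```

In this two-proposition toy the answer happens to split the difference between premises and conclusion; with a whole web, the minimal overall change can land somewhere else entirely, which is the point above.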
There might still be several different coherent webs of credences I could schmationally reach from any given inconsistent set of credences on this account,1 and so the assumption from before that there’s only a single such web is false. But it’s still true that if a web of credences would be ischmational for me to reach at some point, it will always be ischmational for me to reach afterwards (except if my credences are changed through non-rational means). After all, if a web was previously ischmational for me to reach, that would mean that it would require an ischmational step to get there. But a schmational step is not an ischmational step (in case you were wondering), and so schmationally resolving an inconsistency can only take away from the set of coherent webs of credences you could schmationally reach, not add to it.
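In shorthand (again mine, not the post’s): if $S_t$ is the set of coherent webs of credences I could still schmationally reach after $t$ updates, then a schmational step never adds options, so $S_{t+1} \subseteq S_t$. That is just the claim above: schmationally resolving an inconsistency can shrink the set of reachable coherent webs, but never grow it (barring non-rational changes to my credences).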
How does this change the thesis in question? I think it somewhat widens the possibility for agreement, since there are now more possibilities for it through schmational updating. But there’s still no clear guarantee that the sets of coherent webs of credences available to each person will converge on any given proposition—especially if the people in question have spent a lot of time thinking about the questions, and thus narrowed down the possibilities.
A revised thesis will then be something like: in some (perhaps many) cases, two people will have webs of credences such that there is no schmational way for them to reach coherent webs of credences where they agree about some proposition in dispute. Furthermore there will be many more ways the two parties will be schmational in ending up disagreeing, and in which they could have no schmational objection to each other’s behavior.
I still think this is pretty radical in that it will (probably) often be the case that you could never schmationally reach agreement with someone. In fact this simply leaves room for more potential rational disagreement than if there was only ever one coherent web of credences available to a person—even if it also leaves room for more potential rational agreement—and makes it harder for you to convince me of something if I don’t want to be convinced.
As a side note, this doesn’t turn on there being some correct account of schmationality. If there is no correct account, then I’m never schmational in updating my credences, and so the above is trivially true. More generally it would make any disagreement between parties where they are not both already perfectly consistent trivial, since there could be no schmational objection to any way of updating credences in the face of inconsistency.
It should be clear by now that the title of this post is, let’s say, not quite accurate. Obviously it’s possible to rationally (or schmationally) convince people of things they didn’t already believe, since most (probably all) people have inconsistent credences in some areas. But there will be many cases where you cannot convince someone of something you’re rational in believing, no matter what arguments you give, and even more cases where someone can be rational in denying it no matter what.
Inductive and Abductive Arguments
This only covers deductive arguments though, so what should we say about inductive and abductive arguments? Well, I think we can simply translate these into deductive arguments. For example, I might consider the inductive argument:
1. Every previous post from Wonder and Aporia has been amazing.
2. Therefore the next post from Wonder and Aporia will be amazing.
This is obviously not deductively valid, though given some background assumptions about induction, I will assign some pretty high probability X to 2, given 1. But now I can simply reformulate the argument to capture this:
1. Every previous post from Wonder and Aporia has been amazing.
2. If every previous post from Wonder and Aporia has been amazing, then there is a probability of X that the next will be amazing.
3. Therefore there is a probability of X that the next post from Wonder and Aporia will be amazing.
Whatever background assumptions you have about induction will be captured in 2. You might of course not be 100% certain of a specific view about induction. Suppose that you are 50/50 between induction skepticism and induction working (and say these are the only options). The former corresponds to premise 2.1 where X=0.5 and the latter to premise 2.2 where X=0.99. In this case, you will simply have a credence of 0.5 that X=0.5 and a credence of 0.5 that X=0.99, meaning your overall credence that the next post will be amazing is 0.745.
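Spelling out that number (just the arithmetic implied above):

$$\Pr(\text{next post amazing}) = 0.5 \times 0.5 + 0.5 \times 0.99 = 0.25 + 0.495 = 0.745.$$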
This does not mean that the overall argument leads you to accept 3 with an X=0.745, but simply that you will be 50/50 between 3.1 where X=0.5 and 3.2 where X=0.99. There is then a separate proposition, A, that the next post will be amazing, in which you’ll have a credence of 0.745. Thus 3 should not simply be treated as the proposition, A, with a credence of X assigned to it. Rather it should be treated as a separate proposition, B. Though it must hold that P(A|B)=X, so they’re obviously not wholly unrelated.
With this in place, it looks like we can fit inductive arguments into the framework from before, and so the same conclusions hold. What about abduction? I think the exact same thing can be said here. For example:
1. Jones’ fingerprints were found on the murder weapon and he has been recorded on CCTV at the crime scene.
2. If Jones’ fingerprints were found on the murder weapon and he has been recorded on CCTV at the crime scene, then there’s a probability X that he’s the murderer.
3. Therefore there’s a probability X that Jones is the murderer.
This just exactly parallels induction, and 2 captures your views about theoretical virtue, how the observations support the hypothesis, etc. So neither type of argument changes our thesis.
New Observations
This has all been under the supposition that you don’t get new evidence, but what about if I make a new observation? I think this is the only case where you can actually meaningfully change your views in a rational way. I mean, new observations should often affect your credences, and since you haven’t made them yet, it’s just trivial that they could change your views. How you update on any given observation will still be fully determined, holding your prior web of credences fixed, since the way you update (assuming you update rationally) is wholly determined by your priors. But again, you can’t know beforehand which observations you’ll make.
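One standard way to cash out “wholly determined by your priors” is Bayesian conditionalization (a familiar formula, not something argued for here): upon observing $E$, your new credence in a hypothesis $H$ is

$$\Pr_{\text{new}}(H) = \Pr(H \mid E) = \frac{\Pr(E \mid H)\,\Pr(H)}{\Pr(E)},$$

so, holding the prior web fixed, there is nothing left to decide once the observation comes in; the only open question is which observations you will in fact make.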
In any case, many philosophical disputes will not be able to be settled through new data. Whether or not universals exist, utilitarianism is true, or there’s an external world cannot be adjudicated through experience, and so what I have been arguing at least applies to these sorts of questions.2
Foundationalism, Coherentism, and Phenomenal Conservatism (all the -isms)
The sort of approach to thinking about our beliefs that I’ve laid out here may actually provide a nice framing for the disagreement between foundationalists and coherentists about justification. Coherentism roughly holds that beliefs are justified by belonging to a coherent system of beliefs. Foundationalism roughly holds that beliefs are justified by being grounded in foundational beliefs. There are many different species of these positions, but this is the general outline. It then looks like we can point to the following difference between the two: coherentism doesn’t give a criterion for favoring any coherent web of credences over any other, whereas foundationalism does.3
This foundationalist criterion for choosing between systems will then be the foundational form of justification. For example, a strong foundationalist might have the criterion that we should only accept webs of credences that assign a credence of 1 to certain propositions, e.g. “I exist.” There’s a bit of a mystery here, since the foundationalist criterion is supposed to choose between webs, but it will surely itself be a belief which will belong to a web. This is sort of right, but the idea is that foundational beliefs are, well, foundational, and so while our other credences should obviously cohere with them, we can hold them fixed. Now, foundationalism isn’t accepted by everyone, so there can obviously be some doubt about it. How could a foundationalist criterion then be right, if there can be doubt about it? It’s here important to distinguish the use of the criterion from belief in the theory. If some sort of foundationalism is true, then we are justified in using the foundationalist criterion for choosing between webs of credences—whether or not we’re ourselves foundationalists—but the foundationalist theory will itself be a proposition further into the web of credences which we can doubt. In other words we don’t need to accept the correct theory of justification to be rational in using it.
I think that coherentism is a perfectly fine fallback if there’s no good foundationalist candidate, but I think it doesn’t track as well with what we (or at least I) would want from our concept of justification as foundationalism does. If we can give a good way to adjudicate between webs of credences and make our theory more strict, then that seems preferable. One reason to think this might be that the goal of epistemology is getting true beliefs and avoiding false beliefs, or something to that effect, and so the stricter the restrictions rationality puts on our credences, the fewer opportunities you have for getting false views of the world if you’re rational.
One candidate here (and one I tend to lean towards) is phenomenal conservatism (PC)—the view that seemings provide defeasible justification for their content. Which coherent web of credences should you hold? PC tells you to hold whichever seems right. More specifically, you should hold the web that best coheres with your seemings, disregarding those seemings that have defeaters in that web. What is the alternative to this? Well, it’s that you shouldn’t hold whichever seems right (or, being more careful with the scope of the not-operator here: “it’s not the case that you should believe whichever seems right”). But when given the choice between believing something that seems right and something that doesn’t seem right, I just think you’re going wrong in choosing the latter, unless you have some independent considerations.
We may think that we can’t actually choose between coherent webs of credences, since we just start with an incoherent web, and then schmationally update it until it’s coherent, meaning there is no point where we decide between webs and choose the most intuitive one—in fact the one we currently have will probably be the one we find most intuitive.
Firstly, we should remember that schmationality is only the correct procedure for removing inconsistencies, but that doesn’t necessarily mean that there can’t also be a further criterion for choosing between consistent webs. Secondly I think we should distinguish seemings from credences. I may be able to work out a coherent set of credences, but seemings are separate from these, and even if I end up with a coherent web, I might come to consider another one that’s more in line with my seemings. In this case I think I should favor the latter. If it’s true that whichever view I actually hold must also be the one that seems most right, then I guess we just trivially arrive at PC. But I don’t think that’s actually the case, since many non-rational factors, like wanting something to be true, might cloud our judgement and make us pick another coherent web of credences than the one that actually seems right.4
There is something of a stream in philosophy, including fellow substackers Lance Bush and Nathan Ormond, that questions whether there really is such a thing as a seeming, and is in general very skeptical of seemings-based epistemology. I think there is an unhelpful tendency to simply handwave and ignore this sort of criticism, especially here on Substack, so I’ll at least attempt to give an account here.
I think that seemings are a distinct—one may even say sui generis—sort of mental state that have something like propositions as their content. Given the nature of mental states, I can’t really fully explain it, but mostly gesture at it, and describe it with reference to other mental states. One way to do this is to hopefully trigger an example in you. I think a good one is the Müller-Lyer illusion:

[Müller-Lyer illusion: two horizontal lines of equal length, one with outward-pointing arrowheads and one with inward-pointing arrowheads.]
When you look at these lines, it probably appears/seems to you that the top one is longer than the bottom one. Spoiler alert: they’re the same length. Even though you probably believe this, they will still appear to be different lengths. What’s happening here? It’s not that you actually believe that the lines are different lengths. Likewise I don’t think what’s happening is simply the same as being disposed to believe the lines to be different lengths; if I had previously seen the lines, and didn’t realize that they were the same length, I would now be disposed to believe them to be different lengths, even if I’m not currently looking at them or thinking about them. But the phenomenon in question only happens when I’m actually looking at or considering the lines—again, it’s a sort of mental state. What I think this is, is a seeming.
The scariest sort of seemings are so-called “intellectual seemings,” or intuitions. I think these are simply the equivalent of the above, but for considering propositions, rather than looking at lines. When I consider the following:
1. It’s wrong to pull the skin off of cats for fun.
2. A simpler theory is preferable to a more complex one, all else being equal.
3. It cannot be vague how many things exist at a given time.
4. Subscribing to Wonder and Aporia is a categorical moral duty.
They all just seem true in the same way that the above lines seem to be different lengths. Just like with the lines, I can obviously find out that they are wrong through other considerations (except for 4), but they will probably still seem true, even if I come to believe they’re false. Maybe you still just don’t find these seemings when you introspect, and if that’s the case then the best I can do is simply to give you more examples or give more comparisons, but I certainly think I have the sorts of attitudes I’m describing, and it seems (sorry) to be the same kind of attitude in the case with the lines as in the case with propositions.
To tie it back, if I consider something like 3 and it strongly seems true, I think something has gone wrong if I’m 50/50 between it and its negation, if I haven’t considered any other things relevant to its truth. I would seem to be making a mistake or be acting irrationally if I said “yes, I have a very strong intuition that it’s true, and I have no reason to doubt it, but I’m just as confident that it’s true as that it’s false.” You can obviously have some higher-order doubt, given that it’s a very abstract area and you expect that many things could be said either way that you haven’t considered, and this may partly provide a defeater, but I just think you’re doing something wrong if you aren’t at all moved to believe something when it seems very strongly to be the case, and there aren’t defeaters.
If this is right, then I think it’s clear how we can get to PC. When considering coherent webs of credences, you should be moved to be more confident in the web that on balance seems right to you (of course taking account of defeaters), and since you can make no argument for or against any given coherent web of credences, and since you’ve already taken account of defeaters, it looks like there is no reason not to follow these seemings. I don’t think this necessarily gets us all the way to PC, but it at least provides some reason in favor of it by my lights. This is of course a very large discussion, so I won’t say more here, but I might write more on it in the future.
As a completely unrelated side note, all of this (i.e. the stuff about disagreement) also takes some of the sting out of arguments from disagreement against moral realism, since genuine disagreement in ethics will be exactly similar to other kinds of disagreement: two people will have different coherent webs of credences, and they can say nothing to convince each other. I think that the path from our ordinary, everyday commitments to our fundamental commitments is probably just shorter in ethics than in other cases, making the intractability of disagreement more obvious, even though it isn’t different in kind.
Conclusion
All this time I’ve been talking as if we’re all number-crunching credence-robots, who go around assigning precise real-number values to our credences in propositions. Just to be clear, I don’t actually think we’re like that—I probably couldn’t tell you my credence in most things with more accuracy than 0.3 or something. But even though we aren’t living epistemology-calculators, I still think the best approach is to first look at what an ideal rational agent would do, and then try to translate that into what we should actually do.
Maybe all of what I’ve been saying is super trivial, or maybe it's obviously super wrong—it’s honestly hard for me to tell. But I think this is a neat way to think about the structure of our beliefs anyway.
1. I have not actually worked this out, but I assume that there will be many cases where there are several different options that are equally good. If that’s not true, then so much better for me.
2. I should add here that agreement/disagreement can provide some higher-order evidence in pretty much any dispute, and so no question will be completely free from the influence of experience.
3. I don’t think this will capture the entirety of the debate, as some coherentists might want to include more in the notion of coherence than mere consistency of credences like what I’ve described here.
4. Just to be clear, I don’t think the way this would actually work would be by you having two coherent webs of credences revealed to you in full by your galaxy-sized brain, and then picking one. More realistically, you would come to consider some proposition and realize that it seems right, and this would prompt you to adjust your credence in that proposition (as well as related propositions). This might just sound like making a new observation, but I don’t think we should treat seemings like observations, since the evidential value of an observation will depend entirely on the relevant priors, whereas that wouldn’t be the case for seemings—they simply confer justification, unless you have a defeater.