When I am bored (or sitting on the toilet (sorry, Filippa)), I sometimes spend my time answering questions on r/AskPhilosophy. One type of question that pops up surprisingly often is something of the form: “What fallacy is this?”, “What would you name this fallacy?”, “Why is X not a fallacy?”, etc.1 This sort of question, and the general attitude to reasoning behind it, rubs me the wrong way, and I think it is symptomatic of some things that are wrong with the way many people reason.
Formal and Informal Fallacies
To start, it is worth making the distinction between formal and informal fallacies. Formal fallacies are common mistakes in deductive reasoning. Affirming the consequent is an example of this: it consists in accepting an argument like the following as valid:
If P then Q
Q
Therefore P
This is obviously not a valid argument. I don’t really have a problem with formal fallacies, though I think it is perhaps unnecessary to give them individual names instead of just calling them all “the not-a-deductively-valid-argument fallacy”, but to each their own. The real problem I have is with informal fallacies. These are a bit more vague, but they are basically common ways of reasoning that are generally bad. Some examples are ad hominem, appeal to authority and the sunk cost fallacy.
So What’s so Bad about Informal Fallacies?
Well, my problem with them is basically that they limit thinking by putting reasoning in boxes. To illustrate this point, I would like to take a perhaps uncharitably picked example. In a video that is rather embarrassing for him, philosopher/apologist William Lane Craig is asked to respond to something like the argument from disagreement. He responds by saying that the argument commits the genetic fallacy: “Trying to invalidate a position by showing how a person came to hold it”. But this misses the point! The reason that religious disagreement is considered a problem is not that people think the truth of a view depends on how people come to hold it; rather, the problem is that if God existed, we would expect him to want people to have a relationship with him, involving true beliefs about his nature. The fact that there is widespread disagreement about religion and the nature of God therefore counts against God’s existence. This nuance, and thus the argument which is actually being made, is completely glossed over when the pattern-recognizing primate brain sees a sentence which resembles something it has once read in “Uncritical Thinking 101”, under the section “fallacies”. (Just for the record, I don't think WLC is stupid, though in certain contexts (like short-form apologetics content) he ends up saying stuff which is less than fortunate.)
I think this is symptomatic of the broader problems with fallacy-based reasoning: Fallacies are essentially occasionally useful shorthands for actual reasoning. For example, the ad hominem fallacy correctly points out that the truth of a statement is independent of who says it (ignoring indexical statements). But that doesn't mean that we can't be more or less justified in believing a statement based on who says it. For example, if the CEO of a tobacco company says “smoking doesn't cause lung cancer”, you should be a lot less confident that what they say is true than if an independent medical researcher says the same thing, even though taking that into account would technically be committing the ad hominem fallacy. The reason is that there are relevant factors in this case (namely incentives and expertise) which affect the reliability of the people saying the statements, and thus affect the credence you should have in the statement depending on who says it. Blindly applying fallacies would completely miss this (though I doubt any real person would miss it in this case).
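To make this concrete, here is a minimal Bayesian sketch of how the same assertion can warrant different credences depending on the speaker. The reliability numbers are entirely made up for illustration; the only point is that who says something can rationally matter even though the truth of the statement does not depend on who says it.

```python
# A minimal sketch: the same assertion moves your credence by different amounts
# depending on how likely each speaker would be to assert it if it were false.
# All probabilities below are invented purely for illustration.

def posterior(prior, p_assert_if_true, p_assert_if_false):
    """P(claim is true | this speaker asserts it), via Bayes' theorem."""
    p_assert = p_assert_if_true * prior + p_assert_if_false * (1 - prior)
    return p_assert_if_true * prior / p_assert

prior = 0.01  # prior credence that smoking doesn't cause lung cancer

# An independent medical researcher would very rarely assert this if it were false.
researcher = posterior(prior, p_assert_if_true=0.9, p_assert_if_false=0.01)

# A tobacco CEO has strong incentives to assert it whether or not it is true.
ceo = posterior(prior, p_assert_if_true=0.9, p_assert_if_false=0.8)

print(f"Credence after researcher's testimony: {researcher:.3f}")  # ~0.476
print(f"Credence after CEO's testimony:        {ceo:.3f}")         # ~0.011
```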
Reasoning using fallacies is basically like learning to speak Chinese through one of those CDs where a voice says a sentence in English and then repeats it in Chinese: You learn to imitate the language, but you don't actually learn the language. Similarly, when you reason by using fallacies, you don't actually learn how to think; you just memorize a list of labels, without assessing whether everything which may fit under a given label is actually bad reasoning, or whether all examples of bad reasoning can be neatly labeled.
I think the questions I mentioned at the beginning of this post are the worst symptom I have yet seen of this disease affecting the minds of internet reasoners. The fact that you feel the need to find boxes into which you can neatly put any argument which is unpersuasive, to the point where you cannot conceive of a bad argument which doesn't have a name on the Wikipedia entry for fallacies, just betrays how much people are relying on shorthands and shortcuts in reasoning instead of actually thinking things through (though my above-average-IQ readers would surely not do such a thing).
To put a bow on this post, I thought it would be fun to look at a couple of supposed fallacies and see why they don’t work. I have already touched on the ad hominem fallacy, so I will take some others here:
Argument from Ignorance
The “fallacy” of an argument from ignorance is that of believing something is true due to a lack of evidence against it. It can be summed up in the aphorism “absence of evidence isn't evidence of absence”, often attributed to Carl Sagan (the sage of lazy epistemology). There is just one problem with this: It is demonstrably false. From Bayes’ Theorem, you can prove that for B to be evidence of A, ¬B must be evidence against A, assuming evidence for A is simply something which makes A more likely.2 Now take A to be “Jones killed Jill” and B to be “Jones’ fingerprints have been observed on the murder weapon”. Since B is clearly evidence for A, ¬B (absence of evidence) must be evidence against A (that is, evidence of absence). So absence of evidence literally just is evidence of absence - and it always is. Now, it is not necessarily going to be enough evidence to make us think that Jones is innocent. Perhaps we found his DNA on the murder weapon instead, which gives very strong evidence that overpowers the small amount of evidence from not finding fingerprints. But if we found no evidence at all for Jones having committed the crime, then we are in fact justified in believing that he did not commit the crime. So that is at least one case where we are justified in believing something (that Jones did not kill Jill) simply due to a lack of evidence to the contrary. There are of course many cases where one would not be justified in believing something merely due to a lack of evidence against it - for example, that a teapot is orbiting Jupiter, or that you are not rationally required to subscribe to Wonder and Aporia.
So what distinguishes cases where you are justified in accepting an argument from ignorance from those where you aren’t? Well, it basically just comes down to the prior probability of the proposition in question. The prior probability of Jones not having killed Jill is quite high (there are billions of possible perpetrators, after all), and so you are justified in believing it in the absence of evidence to the contrary. Likewise, the prior probability of a teapot orbiting Jupiter is very low, meaning you are not justified in believing it in the absence of evidence against it. This still holds if you are a subjective Bayesian - you will simply think that there is no objective fact of the matter as to whether someone is justified in accepting an argument from ignorance; it is just up to how they choose to set their priors.
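If you like, the same point can be made numerically. Here is a rough sketch with numbers invented purely for illustration, modelling “evidence against” as a single thing we search for and either find or don't find:

```python
# A rough sketch of the point above: finding no evidence against A supports A
# in both cases, but whether you end up justified in believing A is driven
# almost entirely by A's prior. All numbers are invented for illustration.

def posterior_given_no_counterevidence(prior, p_counter_if_true, p_counter_if_false):
    """P(A | we looked for evidence against A and found none), via Bayes' theorem."""
    p_none = (1 - p_counter_if_true) * prior + (1 - p_counter_if_false) * (1 - prior)
    return (1 - p_counter_if_true) * prior / p_none

# A1: "Jones did not kill Jill". Very high prior; if it were false, a search
# would quite likely have turned up fingerprints, DNA, etc.
jones = posterior_given_no_counterevidence(
    prior=1 - 1e-9, p_counter_if_true=0.001, p_counter_if_false=0.5)

# A2: "A teapot orbits Jupiter". Tiny prior; no search we could actually run
# would have turned up evidence against it anyway, so finding none barely moves us.
teapot = posterior_given_no_counterevidence(
    prior=1e-12, p_counter_if_true=1e-4, p_counter_if_false=1e-4)

print(f"P(Jones did not do it | no counterevidence): {jones:.10f}")  # ~1: justified
print(f"P(teapot orbits Jupiter | no counterevidence): {teapot:.2e}")  # ~1e-12: not justified
```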
Argumentum ad Populum
Another common “fallacy” is the argumentum ad populum or bandwagon fallacy, which is the fallacy of believing something is good or true simply because a lot of people think so. The problem is again that this is very often not fallacious. For one, we can use the trick used above: If almost no one believed that vaccines were effective, for example, that should surely make me less confident that they were. But that means that the fact that many people believe them to be effective necessarily must provide at least some evidence that they are. In fact, it often provides very strong evidence. I have never overseen a medical trial for vaccines. In fact, I don't think I have ever spoken directly with someone who has. All I have to go on is that a lot of people say that they are effective. And yet I am very confident that vaccines are effective, and I think that I am justified in this. The reason is that testimony can provide very good evidence - especially when it is the testimony of large groups. Other beliefs I (and probably you) hold quite strongly due to the testimony of large groups are: that star constellations are different in the southern hemisphere, that durians smell very bad, that the Empire State Building is 381m tall (or 4.16 football fields for the Americans), etc. Are all these beliefs fallacious? Surely not! I can of course get defeaters for these beliefs. Perhaps it turns out that all the measurement devices used to measure the Empire State Building have been calibrated wrongly. Or maybe it turns out that all the people who believe something got their belief from a single source which I find out is unreliable. Or I might look at all or most of the arguments used to justify a commonly held belief and find out that they are very bad. But these are special circumstances, and absent defeaters I certainly have strong reason to believe such commonly held beliefs. Calling this a fallacy seems wrongheaded, then.
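To see how quickly the testimony of large groups can add up, here is a toy calculation. The per-witness likelihood ratio and the prior are invented for illustration, and it assumes the testimonies are independent, which real testimony rarely fully is:

```python
# Toy calculation: even weakly reliable testimonies add up fast if independent.
# The likelihood ratio per witness and the prior are invented for illustration.

def posterior_from_testimony(prior, likelihood_ratio, n_witnesses):
    """Posterior probability after n independent testimonies, via the odds form of Bayes."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio ** n_witnesses
    return posterior_odds / (1 + posterior_odds)

prior = 0.01           # start out fairly sceptical
likelihood_ratio = 2   # each witness is only twice as likely to say it if it's true

for n in (1, 5, 10, 20):
    credence = posterior_from_testimony(prior, likelihood_ratio, n)
    print(f"{n:>2} witnesses -> credence {credence:.4f}")
# 1 -> 0.0198, 5 -> 0.2443, 10 -> 0.9118, 20 -> 0.9999
```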
Continuum Fallacy
Some fallacies also just assume controversial philosophical positions. The continuum fallacy states that it is fallacious to reject a claim or to deny a relevant distinction between two states because there exists a continuum of intermediate possibilities. Alright, pack it up boys! You can stop arguing about nihilism about vagueness - Wikipedia says it’s a fallacy! What? No! You can't just settle a philosophical dispute by putting a label on one of the positions and calling it a fallacy - that is not how you reason well.
So What Can We Learn from All This?
I think most fallacy charges only come out as trivially correct if the reasoning they target is interpreted in an extreme way. Of course you can't be certain that something is true because you have not observed evidence to the contrary, and you can't be certain that something is true because many people say it is. But you also can't be certain that there is a screen in front of you (perhaps you are hallucinating) or that the moon isn't made of cheese (have you tasted it?). So if that is how we decide whether something is fallacious, then any beliefs you have about any matters of fact (as Hume would say), except perhaps that there is thought happening, are fallacious. That would be a strange way to characterize fallacies. But you are often sufficiently justified in believing something based on “fallacious” reasoning.
Or perhaps I am misunderstanding the purpose of fallacies? Maybe they are not supposed to be universally applicable, but only applicable in certain cases. But if that is the case, then it is just no longer clear to me what the purpose is. Surely fallacies are supposed to be classifications of bad reasoning, but if some of the stuff that falls under a category of bad reasoning is not bad reasoning, then that category is no longer useful. Are we poor reasoners supposed to actually think about arguments, even after we have found a fallacy which we can label them with?! Then why not just think, and not appeal to fallacies at all? There simply seems to be no purpose served by using fallacies - perhaps other than being able to put some rhetorical flair in your dismissal of an argument, while making you worse at thinking for yourself.
I do think that there are some informal fallacies that make sense to use. For example, an argument which begs the question will always be logically valid (the premises cannot be true without the conclusion being true, since the conclusion is one of the premises), but an argument which begs the question is also never persuasive, and so it makes sense to have “begging the question” as a fallacy. Likewise, I also think that equivocation is always bad, since using a word in different senses in different parts of an argument does not make for a good argument. Though I think it may be a mistake to classify this as an informal fallacy anyway.3
There are also some fallacies, such as the straw man fallacy and the red herring fallacy, which have little to do with proper reasoning and more to do with how to conduct yourself in a discussion. So it wouldn't really make sense to critique them on their epistemological merits (though it also doesn't make sense to count them as fallacies, given how “fallacy” is usually defined).
So it looks as though, on pretty much any reasonable interpretation of what a fallacy is, it just doesn't make sense to use fallacies for reasoning. Instead you can just actually think, without coming up with cool-sounding names for vague, useless groupings of supposedly bad methods of reasoning. I will add that my subscribers, and really most of the people reading this, probably don't need to hear this. But judging from the types of questions mentioned at the beginning, someone sure does; I hope some of those people end up finding this somehow.
To see this phenomenon, just search “fallacy” on the subreddit.
We start with Bayes’ Theorem: P(A|B)=P(A)*P(B|A)/P(B)
B is evidence for A IFF P(A|B)>P(A) (definition). This is the case IFF P(B|A)/P(B)>1, which, in turn, is the case IFF P(B|A)>P(B). For the same reason, ¬B is evidence against A IFF P(¬B|A)<P(¬B).
Now, it must be the case that P(¬B)=1-P(B). Likewise, it must hold that P(¬B|A)=1-P(B|A), since B either must obtain or not obtain, exhausting the probability space.
Now assume B is evidence for A, so that P(B|A)>P(B). Then 1-P(B|A)<1-P(B), and substituting gives P(¬B|A)<P(¬B), which means that ¬B is evidence against A. Since every step was a biconditional, the converse holds as well: ¬B is evidence against A IFF B is evidence for A. (QED or something)
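If you would rather check this numerically than algebraically, here is a quick sketch that samples random joint distributions over A and B and confirms that the two inequalities always go together:

```python
# A quick numerical companion to the derivation above: for randomly sampled
# joint distributions over A and B, B raises the probability of A exactly
# when ¬B lowers it.
import random

def random_joint():
    """A random joint distribution over the four cells A∧B, A∧¬B, ¬A∧B, ¬A∧¬B."""
    xs = [random.random() for _ in range(4)]
    total = sum(xs)
    return [x / total for x in xs]

for _ in range(10_000):
    p_ab, p_anb, p_nab, p_nanb = random_joint()
    p_a = p_ab + p_anb                       # P(A)
    p_a_given_b = p_ab / (p_ab + p_nab)      # P(A|B)
    p_a_given_nb = p_anb / (p_anb + p_nanb)  # P(A|¬B)
    # "B is evidence for A" iff P(A|B) > P(A); check that this coincides with
    # "¬B is evidence against A", i.e. P(A|¬B) < P(A).
    assert (p_a_given_b > p_a) == (p_a_given_nb < p_a)

print("No counterexamples found in 10,000 random distributions.")
```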
After all, arguments deal in propositions, not in English-language symbols. So if we take the argument:
All feathers are light
No light is dark
Therefore no feather is dark
It should actually be formalized as:
∀x(P(x)→Q(x))
¬∃x(R(x)∧S(x))
∴¬∃x(P(x)∧S(x))
But this argument is just invalid. While “all feathers are light” (in the sense of weight) and “all feathers are light” (in the visual sense) both use the same symbols, they do not express the same proposition, and so the argument:
All feathers are light (first sense)
Therefore all feathers are light (second sense)
Appears to be valid if we just look at the English symbols. But it is actually invalid when translated into propositions:
P
∴Q
So equivocating just makes arguments invalid, meaning it is really a formal fallacy.
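For the sceptical reader, here is a small brute-force sketch (restricted to a one-element domain, which is enough) confirming that the properly formalized argument has a countermodel:

```python
# A brute-force countermodel search for the formalization above. One object is
# enough, so each predicate is just a truth value for that object.
# P = is a feather, Q = light (in weight), R = light (in colour), S = dark.
from itertools import product

for P, Q, R, S in product([False, True], repeat=4):
    premise1 = (not P) or Q       # ∀x(P(x)→Q(x)) on a one-element domain
    premise2 = not (R and S)      # ¬∃x(R(x)∧S(x))
    conclusion = not (P and S)    # ¬∃x(P(x)∧S(x))
    if premise1 and premise2 and not conclusion:
        print(f"Countermodel: P={P}, Q={Q}, R={R}, S={S}")
# Prints one countermodel: an object that is a feather, light in weight and
# dark, but not light in colour, makes both premises true and the conclusion false.
```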
I might be mistaken here, but my understanding of informal logical fallacies is that they're useful for judging whether one's own reasoning is sound (should one actually care about such things), and not intended for deriving that a particular argument's conclusion is false. The position held can be true despite faulty reasoning.
As someone who has taught this stuff many times, the points you are making here are well taken (I particularly liked the discussion of how absence of evidence is evidence of absence). But I think you are overlooking some considerations that make learning and using fallacies useful. It has to do with, as you said, pattern recognition, and also with rhetoric.
I explain fallacies to my students as arguments that can be rhetorically persuasive despite being weak (in the technical, inductive sense). An argument can be persuasive out of proportion to what its effect on your credence rationally should be. Fallacies are important to know because there are certain patterns in the way people who don’t actually have evidence on their side, or who don’t know what they are talking about, can be effective rhetorically. I teach my students that almost every kind of fallacy has a structure that can in some cases be strong, but they are often used despite being weak.
Take ad hominem. Sure, there are times when knowing a person’s character can be important to evaluating their argument. But often it is just very minimally relevant, and it is generally much more useful to pay attention to what they are saying and judge it on the merits. And once you can recognise the extent to which, say, the average political opinion column is composed of a lengthy string of snide dismissals and insults of the opposing leader, substituting that for any real consideration of what they really stand for, you have learned something useful.