One of the central assumptions of mainstream economics has been that people make rational choices. As a challenge to this assumption, Nobel prize-winning behavioural economist Daniel Kahneman gives an example in which some Americans were offered a choice of insurance against their own death in a terrorist attack while on a trip to Europe, or insurance that would cover death of any kind on the trip. People were willing to pay more for the former insurance, even though ‘death of any kind’ includes ‘death in a terrorist attack’.
This is an instance of the Conjunction Fallacy, which is based on the false assumption that specific conditions are more probable than general ones. This fallacy usually stems from thinking the choices are alternatives, rather than members of the same set.
The logical form of this fallacy is:
Premise: X is a subset of Y.
Conclusion: Therefore, X is more probable than Y.
The probability of a conjunction is never greater than the probability of its conjuncts. In other words, the probability of two things being true can never be greater than the probability of one of them being true, since in order for both to be true, each must be true. However, when people are asked to compare the probabilities of a conjunction and one of its conjuncts, they sometimes judge that the conjunction is more likely than one of its conjuncts. This seems to happen when the conjunction suggests a scenario that is more easily imagined than the conjunct alone.
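This inequality is easy to check with a quick simulation. The sketch below uses invented probabilities purely for illustration; the point is only that the count of deaths-in-a-terrorist-attack can never exceed the count of deaths of any kind:

```python
import random

# Simulate trips; each trip may end in death, and some deaths are caused
# by terrorist attacks. The probabilities here are assumptions, not data.
random.seed(0)
trials = 100_000
death_any = 0      # conjunct: death of any kind
death_terror = 0   # conjunction: death AND caused by a terrorist attack

for _ in range(trials):
    died = random.random() < 0.01            # assumed P(death) = 1%
    terror = died and random.random() < 0.05 # assumed 5% of deaths are attacks
    death_any += died
    death_terror += terror

# The conjunction can never occur more often than either conjunct alone.
assert death_terror <= death_any
print(death_terror / trials, death_any / trials)
```

Whatever probabilities you substitute, the assertion holds: every trip counted in `death_terror` is also counted in `death_any`, so the subset can never be more probable than the set.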
Interestingly, Kahneman discovered in earlier experiments that statistical sophistication made little difference in the rates at which people committed the conjunction fallacy. This suggests that it is not enough to teach probability theory alone, but that people need to learn directly about the conjunction fallacy in order to counteract the strong psychological effect of imaginability.
Most of us would like to think scientific debate does not operate like the comments section of online news articles. These are frequently characterised by inflexibility, truculence and expostulation. Scientists are generally a little more civil, but sometimes not by much!
There is a more fundamental issue here than politeness, though. Science has a reputation as an arbiter of fact above and beyond just personal opinion or bias. The term “scientific method” suggests there exists an agreed upon procedure for processing evidence which, while not infallible, is at least impartial.
So when even the most respected scientists can arrive at different deeply held convictions when presented with the same evidence, it undermines the perceived impartiality of the scientific method. It demonstrates that science involves an element of subjective or personal judgement.
Yet personal judgements are not mere occasional intruders on science, they are a necessary part of almost every step of reasoning about evidence.
Among the judgements scientists make on a daily basis are: what evidence is relevant to a particular question; what answers are admissible a priori; which answer the evidence supports; what standard of evidence is required (since “extraordinary claims require extraordinary evidence”); and whether the evidence is sufficient to justify belief.
Another judgement scientists make is whether the predictions of a model are sufficiently reliable to justify committing resources to a course of action.
We do not have universally agreed procedures for making any of these judgements. This should come as no surprise. Evidence is something experienced by persons, and a person is thus essential to relating evidence to the abstractions of a scientific theory.
This is true regardless of how directly the objects of a theory are experienced – whether we observe a bird in flight or its shadow on the ground – ultimately it is the unique neuronal configurations of an individual brain that determine how what we perceive influences what we believe.
Induction, falsification and probability
Nevertheless, we can ask: are there forms of reasoning about evidence that do not depend on personal judgement?
Induction is the act of generalising from particulars. It interprets a pattern observed in specific data in terms of a law governing a wider scope.
But induction, like any form of reasoning about evidence, demands personal judgement. Patterns observed in data invariably admit multiple alternative generalisations. And which generalisation is appropriate, if any, may come down to taste.
Many of the points of contention between Richard Dawkins and the late Stephen Jay Gould can be seen in this light. For example, Gould thought Dawkins too eager to attribute evolved traits to the action of natural selection in cases where contingent survival provides an alternative, and (to Gould) preferable, explanation.
One important statement of the problem of induction was made by 18th-century philosopher David Hume. He noted the only available justification for inductive reasoning is that it works well in practice. But this itself is an inductive argument, and thus “taking that for granted, which is the very point in question”.
Karl Popper wanted science to be based on the deductive reasoning of falsificationism rather than the inductive reasoning of verificationism.
Hume thought we had to accept this circularity, but philosopher of science Karl Popper rejected induction entirely. Popper argued that evidence can only falsify a theory, never verify it. Scientific theories are thus only ever working hypotheses that have withstood attempts at falsification.
This characterisation of science has not prevailed, mainly because science has not historically proceeded in this manner, nor does it today. Thomas Kuhn observed that:
No process yet disclosed by the historical study of scientific development at all resembles the methodological stereotype of falsification by direct comparison with nature.
Scientists cherish their theories, having invested so much of their personal resources in them. So when a seemingly contradictory datum emerges, they are inclined to make minor adjustments rather than reject core tenets. As physicist Max Planck observed (before Popper or Kuhn):
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.
Falsification also ignores the relationship between science and engineering. Technology stakes human lives and personal resources on the reliability of scientific theories. We could not do this without strong belief in their adequacy. Engineers thus demand more from science than a working hypothesis.
Some philosophers of science look to probabilistic reasoning to place science above personal judgement. Prominent proponents of such approaches include Elliott Sober and Edwin Thompson Jaynes. By these accounts one can compare competing scientific theories in terms of the likelihood of the observed evidence under each.
However, probabilistic reasoning does not remove personal judgement from science. Rather, it channels it into the design of models. A model, in this sense, is a mathematical representation of the probabilistic relationships between theory and evidence.
As someone who designs such models for a living, I can tell you the process relies heavily on personal judgement. There are no universally applicable procedures for model construction. Consequently, the point at issue in scientific controversies may be precisely how to model the relationship between theory and evidence.
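As a minimal sketch of what comparing theories by likelihood looks like in practice, suppose two rival models of a coin are compared against some observed tosses. The data and the two candidate bias values below are invented for illustration; choosing them is exactly the kind of modelling judgement discussed above:

```python
from math import comb

# Hypothetical evidence: 62 heads in 100 tosses (made-up data).
n, k = 100, 62

def likelihood(p):
    """Probability of observing exactly k heads in n tosses if P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two competing 'theories' of the coin, expressed as probabilistic models.
fair = likelihood(0.5)
biased = likelihood(0.6)

# The likelihood ratio says how much better one model accounts for the evidence.
ratio = biased / fair
print(f"likelihood ratio (biased vs fair): {ratio:.2f}")
```

Note that the comparison itself is mechanical, but someone had to decide that 0.5 and 0.6 were the hypotheses worth comparing, and that a binomial model was the right representation of the evidence. That is where personal judgement enters.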
What is (and isn’t) special about science
Does acknowledging the role played by personal judgement erode our confidence in science as a special means of acquiring knowledge? It does, if what we thought was special about science is that it removes the personal element from the search for truth.
As scientists – or as defenders of science – we must guard against the desire to dominate our interlocutors by ascribing to science a higher authority than it plausibly possesses. Many of us have experienced the frustration of seeing science ignored or distorted in arguments about climate change or vaccinations to name just two.
But we do science no favours by misrepresenting its claim to authority; instead we create a monster. A misplaced faith in science can be, and has been, used as a political weapon to manipulate populations and impose ideologies.
Instead we need to explain science in terms that non-scientists can understand, so that factors that have influenced our judgements can influence theirs.
It is appropriate that non-scientists subordinate their judgements to those of experts, but this deference must be earned. The reputation of an individual scientist for integrity and quality of research is thus crucial in public discussions of science.
I believe science is special, and deserves the role of arbiter that society accords it. But its specialness does not derive from a unique mode of reasoning.
Rather it is the minutiae of science that make it special: the collection of lab protocols, recording practices, publication and peer review standards and many others. These have evolved over centuries under constant pressure to produce useful and reliable knowledge.
Thus, by a kind of natural selection, science has acquired a remarkable capacity to reveal truth. Science continues to evolve, so that what is special about science today might not be what will be special about it tomorrow.
So how much faith should you put in the conclusions of scientists? Judge for yourself!
Suppose you’re on a game show, and you’re given the choice of three doors: behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
Players initially have a 2/3 chance of picking a goat. Those who swap always end up with the opposite of their original choice, so those who swap have a 2/3 chance of winning the car, while players who stick have only a 1/3 chance. The solution rests on the premise that the host knows which door hides the car and intentionally reveals a goat. If the player initially selected the door hiding the car (a 1-in-3 chance), both remaining doors hide goats, the host may open either at random, and switching loses. If the player initially selected a door hiding a goat (a 2-in-3 chance), the host’s choice is no longer random, as he is forced to reveal the only other goat, and switching wins for sure.
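The 1/3-versus-2/3 analysis can be checked by simulation. This is a minimal sketch; the door labels, seed and trial count are arbitrary:

```python
import random

def play(switch, rng):
    """One round of the game; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # The switcher takes the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(1)
trials = 100_000
stick = sum(play(False, rng) for _ in range(trials)) / trials
swap = sum(play(True, rng) for _ in range(trials)) / trials
print(f"stick: {stick:.3f}, switch: {swap:.3f}")  # ≈ 1/3 vs ≈ 2/3
```

Running enough rounds, the stick strategy converges to about 0.333 and the switch strategy to about 0.667, as the argument predicts.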
The word ‘logic’ is not easy to define, because it has slightly different meanings in various applications ranging from philosophy to mathematics to computer science. In philosophy, logic’s main concern is with the validity or cogency of arguments. The essential difference between informal logic and formal logic is that informal logic uses natural language, whereas formal logic (also known as symbolic logic) is more complex and uses mathematical symbols to overcome the frequent ambiguity or imprecision of natural language.
So what is an argument? In everyday life, we use the word ‘argument’ to mean a verbal dispute or disagreement (which is actually a clash between two or more arguments put forward by different people). This is not the way the word is usually used in philosophical logic, where arguments are those statements a person makes in the attempt to convince someone of something, or to present reasons for accepting a given conclusion. In this sense, an argument consists of statements or propositions, called its premises, from which a conclusion is claimed to follow (in the case of a deductive argument) or be inferred (in the case of an inductive argument). Deductive conclusions usually begin with a word like ‘therefore’, ‘thus’, ‘so’ or ‘it follows that’.
A good argument is one that has two virtues: good form and all true premises. Arguments can be deductive, inductive or abductive. A deductive argument with valid form and true premises is said to be sound. An inductive argument based on strong evidence is said to be cogent. The term ‘good argument’ covers all three of these types of arguments.
A valid argument is a deductive argument where the conclusion necessarily follows from the premises, because of the logical structure of the argument. That is, if the premises are true, then the conclusion must also be true. Conversely, an invalid argument is one where the conclusion does not logically follow from the premises. However, the validity or invalidity of an argument must be clearly distinguished from the truth or falsity of its premises. It is possible for the conclusion of a valid argument to be true, even though one or more of its premises are false. For example, consider the following argument:
Premise 1: Napoleon was German
Premise 2: All Germans are Europeans
Conclusion: Therefore, Napoleon was European
The conclusion that Napoleon was European is true, even though Premise 1 is false. This argument is valid because of its logical structure, not because its premises and conclusion are all true (which they are not). Even if the premises and conclusion were all true, it wouldn’t necessarily mean that the argument was valid. If an argument has true premises and its form is valid, then its conclusion must be true.
Deductive logic is essentially about consistency. The rules of logic are not arbitrary, like the rules for a game of chess. They exist to avoid internal contradictions within an argument. For example, if we have an argument with the following premises:
Premise 1: Napoleon was either German or French
Premise 2: Napoleon was not German
The conclusion cannot logically be “Therefore, Napoleon was German” because that would directly contradict Premise 2. So the logical conclusion can only be: “Therefore, Napoleon was French”, not because we know that it happens to be true, but because it is the only possible conclusion if both the premises are true. This is admittedly a simple and self-evident example, but similar reasoning applies to more complex arguments where the rules of logic are not so self-evident. In summary, the rules of logic exist because breaking the rules would entail internal contradictions within the argument.
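For propositional arguments like this one, validity can even be checked mechanically: an argument is valid exactly when no assignment of truth values makes all the premises true and the conclusion false. The encoding below is my own illustration of the Napoleon example:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument is valid iff no truth assignment makes all
    premises true while the conclusion is false."""
    # G = 'Napoleon was German', F = 'Napoleon was French'.
    for G, F in product([True, False], repeat=2):
        if all(p(G, F) for p in premises) and not conclusion(G, F):
            return False
    return True

premises = [
    lambda G, F: G or F,   # Premise 1: Napoleon was either German or French
    lambda G, F: not G,    # Premise 2: Napoleon was not German
]

print(valid(premises, lambda G, F: F))  # 'Therefore, French' -> True
print(valid(premises, lambda G, F: G))  # 'Therefore, German' -> False
```

The only assignment that makes both premises true is G false and F true, which is why ‘Napoleon was French’ is the sole valid conclusion and ‘Napoleon was German’ is ruled out as a contradiction of Premise 2.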
An inductive argument is one where the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a sound deductive argument is supposed to be certain, the conclusion of a cogent inductive argument is supposed to be probable, based upon the evidence given. An example of an inductive argument is:
Premise 1: Almost all people are taller than 26 inches
Premise 2: George is a person
Conclusion: Therefore, George is almost certainly taller than 26 inches
Whilst an inductive argument based on strong evidence can be cogent, there is some dispute amongst philosophers as to the reliability of induction as a scientific method. For example, by the problem of induction, no number of confirming observations can verify a universal generalization, such as ‘All swans are white’, yet it is logically possible to falsify it by observing a single black swan.
Abduction may be described as an “inference to the best explanation”, and whilst not as reliable as deduction or induction, it can still be a useful form of reasoning. For example, a typical abductive reasoning process used by doctors in diagnosis might be: “this set of symptoms could be caused by illnesses X, Y or Z. If I ask some more questions or conduct some tests I can rule out X and Y, so it must be Z.”
Incidentally, the doctor is the one who is doing the abduction here, not the patient. By accepting the doctor’s diagnosis, the patient is using inductive reasoning that the doctor has a sufficiently high probability of being right that it is rational to accept the diagnosis. This is actually an acceptable form of the Argument from Authority (only the deductive form is fallacious).
The Argument from Authority is often misunderstood to be a fallacy in all cases, when this is not necessarily so. The argument becomes a fallacy only when used deductively, or where there is insufficient inductive strength to support the conclusion of the argument.
The most general form of the deductive fallacy is:
Premise 1: Source A says that statement p is true.
Premise 2: Source A is authoritative.
Conclusion: Therefore, statement p is true.
Even when the source is authoritative, this argument is still deductively invalid because the premises can be true, and the conclusion false (i.e. an authoritative claim can turn out to be false). This fallacy is known as ‘Appeal to Authority’.
Although reliable authorities are correct in judgments related to their area of expertise more often than laypersons, they can occasionally come to the wrong judgments through error, bias or dishonesty. Thus, the argument from authority is at best a probabilistic inductive argument rather than a deductive argument for establishing facts with certainty. Nevertheless, the probability sometimes can be very high – enough to qualify as a convincing cogent argument. For example, astrophysicists tell us that black holes exist. The rest of us are in no position to either verify or refute this claim. It is rational to accept the claim as being true, unless and until the claim is shown to be false by future astrophysicists (the first of whom would probably win a Nobel Prize for doing so). An alternative explanation that astrophysicists are engaged in a worldwide conspiracy to deceive us all would be implausible and irrational.
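The probabilistic character of the argument from authority can be made explicit with Bayes’ theorem. All the numbers below are assumptions chosen for illustration, not measured expert reliabilities:

```python
# Bayes' rule applied to expert testimony.
prior = 0.5               # assumed prior probability that the claim is true
p_assert_if_true = 0.95   # assumed chance the expert asserts it when true
p_assert_if_false = 0.05  # assumed chance the expert asserts it when false

# P(claim true | expert asserts it), by Bayes' theorem:
posterior = (p_assert_if_true * prior) / (
    p_assert_if_true * prior + p_assert_if_false * (1 - prior)
)
print(f"{posterior:.3f}")  # 0.950
```

With these assumed reliabilities, a single expert assertion lifts a 50/50 prior to 95% – high enough for a cogent inductive argument, but still short of deductive certainty, which is exactly the distinction the fallacy turns on.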
“…if an overwhelming majority of experts say something is true, then any sensible non-expert should assume that they are probably right.” 
Thus there is no fallacy entailed in arguing that the advice of an expert in his or her field should be accepted as true, at least for the time being, unless and until it is effectively refuted. A fallacy only arises when it is claimed or implied that the expert is infallible and that therefore his or her advice must be true as a deductive argument, rather than as a matter of probability. Criticisms of cogent arguments from authority can actually be a rejection of expertise, which is a fallacy of its own.
The Argument from Authority is sometimes mistakenly confused with the citation of references, when done to provide published evidence in support of the point the advocate is trying to make. In these cases, the advocate is not just appealing to the authority of the author, but providing the source of evidence so that readers can check the evidence themselves if they wish. Such citations of evidence are not only acceptable reasoning, but are necessary to avoid plagiarism.
Expert opinion can also constitute evidence and is often accepted as such by the courts. For example, if you describe your symptoms to your doctor and he or she provides an opinion that you have a certain illness, that opinion is evidence that you have that illness. It is not necessary for your doctor to cite references when giving you his or her expert opinion, let alone convince you with a cogent argument. In some cases, expert opinion can carry sufficient inductive strength on its own.