Tag Archives: informal logic

The presentism fallacy

by Tim Harding

In recent times, there has been a trend in the popular media towards viewing past events and people through the prism of present-day attitudes. This trend is manifested in attempts to ‘cancel’ the past, including by silencing discussions, banning books, tearing down statues and so on.

In historical and literary analysis, presentism is a pejorative term for the introduction of present-day ideas and perspectives into depictions or interpretations of the past. Some modern historians seek to avoid presentism in their work because they consider it a form of cultural bias, and believe it creates a distorted understanding of their subject matter. The practice of presentism is regarded by some as a common informal fallacy when writing about the past.

Presentism also fails to take into account that those involved in historical events did not enjoy the benefit of hindsight that informs our present perspective. Yet to fully understand an historical event, we must view it not only with the benefit of hindsight, but also in the more limited context of its own times.

To avoid the fallacy of presentism, orthodox historians restrict themselves to describing what happened and why, attempting to refrain from using language that passes judgment. For example, in analysing the history of slavery, it is more useful to study the attitudes and circumstances of the past that led to slavery than to apply present-day attitudes that simply condemn slavery without further analysis. One theory is that African-American slavery began for economic reasons, and that racism arose afterwards as an attempt to justify the practice rather than being its prime cause. Such a theory would be overlooked by presentism.

This fallacy should not be confused with philosophical presentism, which is a technical position in ontology (the study of existence) that only the present exists.


Filed under Reblogs

Appeal to Ignorance

 

by Tim Harding

The scope of the Appeal to Ignorance fallacy (Argumentum ad Ignorantiam in Latin) is more limited than its title would suggest. In the specific context of this fallacy, the word ignorance represents ‘a lack of contrary evidence’ rather than a lack of education or knowledge. The fallacy title was likely coined by the philosopher John Locke in the late 17th century.

In informal logic, this fallacy asserts that a proposition is true because it has not yet been shown to be false, or that a proposition is false because it has not yet been shown to be true. This is a type of false dichotomy, in that it excludes the possibility that there has been insufficient investigation to determine whether the proposition is true or false. In other words, ‘absence of evidence is not evidence of absence.’

In rhetorical debates, appeals to ignorance are sometimes used in an attempt to shift the burden of proof.  A typical example is as follows: ‘In spite of all the talk, not a single flying saucer report has been authenticated. We may assume, therefore, there are no such things as flying saucers.’ An absurd but logically equivalent example is: ‘Although NASA has shown that the surface of the Moon is not made of green cheese, it has not conclusively demonstrated that the Moon’s core is not made of it; therefore, the Moon’s core is made of green cheese.’

This fallacy is a potential trap that empiricists need to be wary of falling into. We cannot prove the non-existence of anything, so the burden of proof lies with those who claim the existence of something, rather than with those who doubt it. We should therefore always remain open to the possibility of new evidence in support of a claim, even if no such evidence has ever been found.


Filed under Logical fallacies

Argument to moderation

Argument to moderation (Latin: argumentum ad temperantiam) is an informal fallacy which asserts that the truth can be found as a compromise between two opposite positions.  It is also known as the argument from middle ground, false compromise, grey fallacy and the golden mean fallacy.  It is effectively an inverse false dilemma, discarding both of two opposites in favour of a middle position. It is related to, but different from, the false balance fallacy.

An individual demonstrating this fallacy implies that the positions being considered represent extremes of a continuum of opinions, that such extremes are always wrong, and the middle ground is always correct.  This is not necessarily the case.

The form of the fallacy goes like this:

Premise: There is a choice to make between doing X or doing Y.

Conclusion: Therefore, the answer is somewhere between X and Y.

This argument is invalid because the conclusion does not logically follow from the premise.  Sometimes only X or Y is right or true, with no middle ground possible.

To give an example of this fallacy:

‘The fact that one is confronted with an individual who strongly argues that slavery is wrong and another who argues equally strongly that slavery is perfectly legitimate in no way suggests that the truth must be somewhere in the middle.’[1]

Another example is:

‘You say the sky is blue, while I say the sky is red. Therefore, the best solution is to compromise and agree that the sky is purple.’

This fallacy is sometimes used in rhetorical debates to undermine an opponent’s position.  All one must do is present yet another, radically opposed position, and the middle-ground compromise will be forced closer to that position.  In pragmatic politics, this is part of the basis behind the Overton window theory.

In US politics this fallacy is known as ‘High Broderism’ after David Broder, a columnist and reporter for the Washington Post who insisted, against all reason, that the best policy was always the middle ground between the Republicans and the Democrats.

Related to this fallacy is design by committee, which is a disparaging term used to describe a project that has many designers involved but no unifying plan or vision, often resulting in a negotiated compromise; as illustrated by the aphorism ‘A camel is a horse designed by a committee’. The point is that a negotiated compromise is not necessarily true, right or even the optimal outcome. This does not mean that a negotiated compromise may not be appropriate in some cases.

References

[1] Susan T. Gardner (2009). Thinking Your Way to Freedom: A Guide to Owning Your Own Practical Reasoning. Temple University Press.



Filed under Logical fallacies

Common sense fallacy

by Tim Harding

The American writer H L Mencken once said “There is always a well-known solution to every human problem — neat, plausible, and wrong.” He was referring to ‘common sense’, which can be superficially plausible and sometimes right, but often wrong.

The Common Sense Fallacy (or ‘Appeal to Common Sense’) is superficially similar to the Argument from Popularity and/or  the Argument from Tradition. However, it differs from these fallacies by not necessarily relying on popularity or tradition.

Instead, common sense relies on the vague notion of ‘obviousness’, which means something like ‘what we perceive from personal experience’ or ‘what we should know without having had to learn.’ In other words, common sense is not necessarily supported by evidence or reasoning. As such, beliefs based on common sense are unreliable.  The fallacy lies in giving too much weight to common sense in drawing conclusions, at the expense of evidence and reasoning.

In some ways, scientific methods have been developed to avoid the errors that can result from common sense. For instance, common sense used to tell us that the Earth is flat and that the Sun revolves around the Earth – because that is the way things appear to us without scientific investigation.  Another example of ‘common sense’ is that the world appears to have been designed, so therefore there must have been a designer.

Einstein’s theories of relativity were initially resisted, even by the scientific community, because they defied common sense.  They seemed to belong more in the realm of science fiction than reality, until they were later verified by scientific observations.  Our modern Global Positioning System (GPS) now uses Einstein’s relativity theories.  This initial resistance may have led Einstein to later say that “Common sense is nothing more than a deposit of prejudices laid down by the mind before you reach eighteen”.



Filed under Logical fallacies

Are you a poor logician? Logically, you might never know

The Conversation

By Stephan Lewandowsky, University of Bristol and Richard Pancost, University of Bristol

This is the second article in a series, How we make decisions, which explores our decision-making processes. How well do we consider all factors involved in a decision, and what helps and what holds us back?


It is an unfortunate paradox: if you’re bad at something, you probably also lack the skills to assess your own performance. And if you don’t know much about a topic, you’re unlikely to be aware of the scope of your own ignorance.

Type any keyword into a scientific search engine and a staggering number of published articles appears. “Climate change” yields 238,000 hits; “tobacco lung cancer” returns 14,500; and even the largely unloved “Arion ater” has earned a respectable 245 publications.

Experts are keenly aware of the vastness of the knowledge landscape in their fields. Ask any scholar and they will likely acknowledge how little they know relative to what is knowable – a realisation that may date back to Confucius.

Here is the catch: to know how much more there is to know requires knowledge to begin with. If you start without knowledge, you also do not know what you are missing out on.

This paradox gives rise to a famous result in experimental psychology known as the Dunning-Kruger effect. Named after Justin Kruger and David Dunning, it refers to a study they published in 1999. They showed that the more poorly people actually performed, the more they over-estimated their own performance.

People whose logical ability was in the bottom 12% (so that 88 out of 100 people performed better than they did) judged their own performance to be among the top third of the distribution. Conversely, the outstanding logicians who outperformed 86% of their peers judged themselves to be merely in the top quarter (roughly) of the distribution, thereby underestimating their performance.

John Cleese argues that this effect is responsible not only for Hollywood but also for the actions of some mainstream media.

Ignorance is associated with exaggerated confidence in one’s abilities, whereas experts are unduly tentative about their performance. This basic finding has been replicated numerous times in many different circumstances. There is very little doubt about its status as a fundamental aspect of human behaviour.

Confidence and credibility

Here is the next catch: in the eyes of others, what matters most to judge a person’s credibility is their confidence. Research into the credibility of expert witnesses has identified the expert’s projected confidence as the most important determinant in judged credibility. Nearly half of people’s judgements of credibility can be explained on the basis of how confident the expert appears — more than on the basis of any other variable.

Does this mean that the poorest-performing — and hence most over-confident — expert is believed more than the top performer whose displayed confidence may be a little more tentative? This rather discomforting possibility cannot be ruled out on the basis of existing data.

But even short of this extreme possibility, the data on confidence and expert credibility give rise to another concern. In contested arenas, such as climate change, the Dunning-Kruger effect and its flow-on consequences can distort public perceptions of the true scientific state of affairs.

To illustrate, there is an overwhelming scientific consensus that greenhouse gas emissions from our economic activities are altering the Earth’s climate. This consensus is expressed in more than 95% of the scientific literature and is shared by a similar fraction (97-98%) of publishing experts in the area. In the present context, it is relevant that research has found that the “relative climate expertise and scientific prominence” of the few dissenting researchers “are substantially below that of the convinced researchers”.

Guess who, then, would be expected to appear particularly confident when they are invited to expound their views on TV, owing to the media’s failure to recognise (false) balance as (actual) bias? Yes, it’s the contrarian blogger who is paired with a climate expert in “debating” climate science and who thinks that hot brick buildings contribute to global warming.

‘I’m not an expert, but…’

How should actual experts — those who publish in the peer-reviewed literature in their area of expertise — deal with the problems that arise from Dunning-Kruger, the media’s failure to recognise “balance” as bias, and the fact that the public uses projected confidence as a cue for credibility?

Speaker of the US House of Representatives John Boehner admitted earlier this year he wasn’t qualified to comment on climate change.

We suggest two steps based on research findings.

The first focuses on the fact of a pervasive scientific consensus on climate change. As one of us has shown, the public’s perception of that consensus is pivotal in determining their acceptance of the scientific facts.

When people recognise that scientists agree on the climate problem, they too accept the existence of the problem. It is for this reason that Ed Maibach and colleagues, from the Center for Climate Change Communication at George Mason University, have recently called on climate scientists to set the record straight and inform the public that there is a scientific consensus that human-caused climate change is happening.

One might object that “setting the record straight” constitutes advocacy. We do not agree; sharing knowledge is not advocacy and, by extension, neither is sharing the strong consensus behind that knowledge. In the case of climate change, it simply informs the public of a fact that is widely misrepresented in the media.

The public has a right to know that there is a scientific consensus on climate change. How the public uses that knowledge is up to them. The line to advocacy would be crossed only if scientists articulated specific policy recommendations on the basis of that consensus.

The second step to introducing accurate scientific knowledge into public debates and decision-making pertains precisely to the boundary between scientific advice and advocacy. This is a nuanced issue, but some empirical evidence in a natural-resource management context suggests that the public wants scientists to do more than just analyse data and leave policy decisions to others.

Instead, the public wants scientists to work closely with managers and others to integrate scientific results into management decisions. This opinion appears to be equally shared by all stakeholders, from scientists to managers and interest groups.

Advocacy or understanding?

In a recent article, we wrote that “the only unequivocal tool for minimising climate change uncertainty is to decrease our greenhouse gas emissions”. Does this constitute advocacy, as portrayed by some commenters?

It is not. Our statement is analogous to arguing that “the only unequivocal tool for minimising your risk of lung cancer is to quit smoking”. Both statements are true. Both identify a link between a scientific consensus and a personal or political action.

Neither statement, however, advocates any specific response. After all, a smoker may gladly accept the risk of lung cancer if the enjoyment of tobacco outweighs the spectre of premature death — but the smoker must make an informed decision based on the scientific consensus on tobacco.

Likewise, the global public may decide to continue with business as usual, gladly accepting the risk to their children and grandchildren – but they should do so in full knowledge of the risks that arise from the existing scientific consensus on climate change.

Some scientists do advocate for specific policies, especially if their careers have evolved beyond simply conducting science and if they have taken new or additional roles in policy or leadership.

Most of us, however, carefully limit our statements to scientific evidence. In those cases, it is vital that we challenge spurious accusations of advocacy, because such claims serve to marginalise the voices of experts.

Portraying the simple sharing of scientific knowledge with the public as an act of advocacy has the pernicious effect of silencing scientists or removing their expert opinion from public debate. The consequence is that scientific evidence is lost to the public and is lost to the democratic process.

But in one specific way we are advocates. We advocate that our leaders recognise and understand the evidence.

We believe that sober policy decisions on climate change cannot be made when politicians claim that they are not scientists while also erroneously claiming that there is no scientific consensus.

We advocate that our leaders are morally obligated to make and justify their decisions in light of the best available scientific, social and economic understanding.



Stephan Lewandowsky receives funding from the Royal Society, from the World University Network (WUN), and from the ‘Great Western 4’ (GW4) consortium of English universities.

Richard Pancost receives funding from RCUK, the EU and the Leverhulme Trust.

This article was originally published on The Conversation. (Republished with permission). Read the original article.


Filed under Reblogs

The Red Herring Fallacy

The idiom ‘red herring’ is used to refer to something that misleads or distracts from the relevant or important issue.  The expression is mainly used to assert that an argument is not relevant to the issue being discussed.

A red herring fallacy is an error in logic where a proposition is, or is intended to be, misleading in order to invite irrelevant or false inferences. It covers any inference based on spurious arguments, offered to make up for a lack of genuine arguments or to quietly change the subject of the discussion.  In this way, a red herring is as much a debating tactic as it is a logical fallacy.  It is a fallacy of distraction, committed when someone attempts to divert an arguer from the issue at hand by introducing another topic.  Such arguments have the following form:

Topic A is under discussion.

Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).

Topic A is abandoned.

This sort of reasoning is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim.

For instance, ‘I’m entitled to my opinion’ or ‘I have a right to my opinion’ is a common declaration in rhetoric or debate that can be made at some point in a discussion. Whether one has a particular entitlement or right is irrelevant to whether one’s opinion is true or false. To assert the existence of the right is a failure to assert any justification for the opinion.

As an informal fallacy, the red herring falls into a broad class of relevance fallacies. Unlike the strawman fallacy, which is premised on a distortion of the other party’s position, the red herring is a seemingly plausible, though ultimately irrelevant, diversionary tactic.  According to the Oxford English Dictionary, a red herring may be intentional or unintentional – it does not necessarily mean a conscious intent to mislead.


Conventional wisdom has long supposed the origin of the idiom ‘red herring’ to be the use of a kipper (a strong-smelling smoked fish) to train hounds to follow a scent, or to divert them from the correct route when hunting. However, modern linguistic research suggests that the term was probably invented in 1807 by the English polemicist William Cobbett, referring to an occasion on which he had supposedly used a kipper to divert hounds from chasing a hare; it was never an actual practice of hunters.  The phrase was later borrowed to provide a formal name for the logical fallacy and the associated literary device.

Although Cobbett most famously mentioned it, he was not the first to consider red herring for scenting hounds; an earlier reference occurs in the pamphlet ‘Nashe’s Lenten Stuffe’, published in 1599 by the Elizabethan writer Thomas Nashe, in which he says ‘Next, to draw on hounds to a scent, to a red herring skin there is nothing comparable’.



Filed under Logical fallacies

Boy/girl hair solution

They both lied.

The child with the black hair is the girl, and the child with the white hair is the boy.

(If only one had lied, the children would both be boys or both be girls, contradicting the puzzle’s premise that one is a boy and one is a girl.)
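For readers who like to verify this kind of case analysis mechanically, here is a minimal Python sketch (illustrative only, not part of the original puzzle) that enumerates all four possible sex assignments and keeps only those consistent with the puzzle’s conditions:

```python
from itertools import product

# Brute-force check: enumerate every possible sex assignment
# for the black-haired and white-haired children.
for black, white in product(["boy", "girl"], repeat=2):
    black_truthful = (black == "boy")   # black-haired child said "I am a boy"
    white_truthful = (white == "girl")  # white-haired child said "I am a girl"
    liars = (not black_truthful) + (not white_truthful)
    # The puzzle states the pair is one boy and one girl, and at least one lied.
    if {black, white} == {"boy", "girl"} and liars >= 1:
        print(f"black-haired child: {black}, white-haired child: {white} "
              f"({liars} liar(s))")
```

The loop prints a single surviving assignment: a black-haired girl and a white-haired boy, with both children lying. The cases with exactly one liar are precisely the same-sex assignments, which the one-boy-one-girl condition rules out.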

Leave a comment

Filed under Puzzles

Fallacies of composition and division

The Fallacy of Composition arises when one infers that something is true of the whole from the fact that it is true of some part of the whole.  Conversely, the Fallacy of Division occurs when one infers that something true for the whole must also be true of all or some of its parts.  Both fallacies were described by Aristotle in Sophistical Refutations.

Fallacy of composition

The logical form of the Fallacy of Composition is:

     Premise 1: A is part of B

     Premise 2: A has property X

     Conclusion: Therefore, B has property X.

Two examples of this fallacy are:

  • If someone stands up out of his seat at a baseball game, he can see better.  Therefore, if everyone stands up they can all see better.

  • If a runner runs faster, she can win the race.  Therefore, if all the runners run faster, they can all win the race.

Athletic competitions are examples of zero-sum games, wherein the winner wins by preventing all other competitors from winning.

Another example of this fallacy is:

Sodium (Na) and chlorine (Cl) are both dangerous to humans. Therefore any combination of sodium and chlorine, such as common table salt (NaCl), will be dangerous to humans.
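The pattern is easy to see with a toy numeric illustration (our example, not from the original post): a property that holds for every part can fail for the whole.

```python
parts = [3, 4, 5]

# Composition: each part has the property 'less than 10'...
print(all(p < 10 for p in parts))  # True

# ...but the whole (here, the sum of the parts) does not: 3 + 4 + 5 = 12.
print(sum(parts) < 10)             # False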

This fallacy is often confused with the fallacy of faulty generalisation, in which an unwarranted inference is made from a statement about a sample to a statement about the population from which it is drawn.

In economics, the Paradox of Thrift is a notable fallacy of composition that is central to Keynesian economics.  Division of labour is another economic example, in which overall productivity can greatly increase when individual workers specialise in doing different jobs.

In a Tragedy of the Commons, an individual can profit by consuming a larger share of a common, shared resource such as fish from the sea; but if too many individuals seek to consume more, they can destroy the resource.

In the Free Rider Problem, an individual can benefit by failing to pay when consuming a share of a public good; but if there are too many such ‘free riders’, eventually there will be no ‘ride’ for anyone.

Fallacy of division

The Fallacy of Division is the converse of the Fallacy of Composition.  The logical form of the Fallacy of Division is:

      Premise 1: A is part of B

      Premise 2: B has property X

      Conclusion: Therefore, A has property X.

An example of the fallacy of division is:

A Boeing 747 can fly unaided across the ocean.

A Boeing 747 has jet engines.

Therefore, one of its jet engines can fly unaided across the ocean.
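The same toy numeric illustration works in reverse (again our example, not from the original post): a property of the whole need not hold for any of its parts.

```python
parts = [3, 4, 5]

# Division: the whole (the sum) has the property 'greater than 10'...
print(sum(parts) > 10)             # True: 3 + 4 + 5 = 12

# ...but no individual part has it.
print(any(p > 10 for p in parts))  # False
```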



Filed under Logical fallacies

Boy/girl hair puzzle

A boy and a girl are chatting.

“I am a boy”, said the child with black hair.

“I am a girl”, said the child with white hair.

At least one of them lied. What colour hair does the boy have?


Filed under Puzzles

Post Office Burglary solution

Solution: Derek was the culprit.

Looking at Brian’s statements: if Charles had been the culprit, Brian would have been lying in his first statement, which would make his second statement true. That would mean it was both Charles and Alan, so it can’t be Charles.

It follows that Derek was lying in his first statement, which makes his second statement true; therefore it can’t be Alan.

So Eric’s second statement must be false, meaning his first statement was true: therefore it was Derek.


Filed under Puzzles