Tag Archives: informal logic

Argument to moderation

Argument to moderation (Latin: argumentum ad temperantiam) is an informal fallacy which asserts that the truth can be found as a compromise between two opposite positions.  It is also known as the argument from middle ground, false compromise, the grey fallacy and the golden mean fallacy.  It is effectively an inverse false dilemma, discarding both of two opposites in favour of a middle position. It is related to, but different from, the false balance fallacy.

An individual demonstrating this fallacy implies that the positions being considered represent extremes of a continuum of opinions, that such extremes are always wrong, and the middle ground is always correct.  This is not necessarily the case.

The form of the fallacy goes like this:

Premise: There is a choice to make between doing X and doing Y.

Conclusion: Therefore, the answer is somewhere between X and Y.

This argument is invalid because the conclusion does not logically follow from the premise.  Sometimes only X or Y is right or true, with no middle ground possible.

To give an example of this fallacy:

‘The fact that one is confronted with an individual who strongly argues that slavery is wrong and another who argues equally strongly that slavery is perfectly legitimate in no way suggests that the truth must be somewhere in the middle.’[1]

Another example is:

’You say the sky is blue, while I say the sky is red. Therefore, the best solution is to compromise and agree that the sky is purple.’

This fallacy is sometimes used in rhetorical debates to undermine an opponent’s position.  All one must do is present yet another, radically opposed position, and the middle-ground compromise will be forced closer to that position.  In pragmatic politics, this is part of the basis behind the Overton window theory.

In US politics this fallacy is known as ‘High Broderism’ after David Broder, a columnist and reporter for the Washington Post who insisted, against all reason, that the best policy was always the middle ground between the Republicans and the Democrats.

Related to this fallacy is design by committee, which is a disparaging term used to describe a project that has many designers involved but no unifying plan or vision, often resulting in a negotiated compromise; as illustrated by the aphorism ‘A camel is a horse designed by a committee’. The point is that a negotiated compromise is not necessarily true, right or even the optimal outcome. This does not mean that a negotiated compromise may not be appropriate in some cases.

References

[1] Susan T. Gardner (2009). Thinking Your Way to Freedom: A Guide to Owning Your Own Practical Reasoning. Temple University Press.


Common sense fallacy

by Tim Harding

The American writer H. L. Mencken once said “There is always a well-known solution to every human problem — neat, plausible, and wrong.” He was referring to ‘common sense’, which can be superficially plausible and sometimes right, but often wrong.

The Common Sense Fallacy (or ‘Appeal to Common Sense’) is somewhat related to the Argument from Popularity and/or the Argument from Tradition. However, it differs from these fallacies by not necessarily relying on popularity or tradition.

Instead, common sense relies on the vague notion of ‘obviousness’, which means something like ‘what we perceive from personal experience’ or ‘what we should know without having to learn it’. In other words, common sense is not necessarily supported by evidence or reasoning. As such, beliefs based on common sense are unreliable.  The fallacy lies in giving too much weight to common sense in drawing conclusions, at the expense of evidence and reasoning.

In some ways, scientific methods have been developed to avoid the errors that can result from common sense. For instance, common sense used to tell us that the Earth is flat and that the Sun revolves around the Earth – because that is the way things appear to us without scientific investigation.  Another example of ‘common sense’ is that the world appears to have been designed, so therefore there must have been a designer.

Einstein’s theories of relativity were initially resisted, even by the scientific community, because they defied common sense.  They seemed to belong more in the realm of science fiction than reality, until they were later verified by scientific observations.  The modern Global Positioning System (GPS) now depends on relativistic corrections to work accurately.  This initial resistance may have led Einstein to remark that ‘Common sense is nothing more than a deposit of prejudices laid down by the mind before you reach eighteen’.


Are you a poor logician? Logically, you might never know

The Conversation

By Stephan Lewandowsky, University of Bristol and Richard Pancost, University of Bristol

This is the second article in a series, How we make decisions, which explores our decision-making processes. How well do we consider all factors involved in a decision, and what helps and what holds us back?


It is an unfortunate paradox: if you’re bad at something, you probably also lack the skills to assess your own performance. And if you don’t know much about a topic, you’re unlikely to be aware of the scope of your own ignorance.

Type any keyword into a scientific search engine and a staggering number of published articles appears. “Climate change” yields 238,000 hits; “tobacco lung cancer” returns 14,500; and even the largely unloved “Arion ater” has earned a respectable 245 publications.

Experts are keenly aware of the vastness of the knowledge landscape in their fields. Ask any scholar and they will likely acknowledge how little they know relative to what is knowable – a realisation that may date back to Confucius.

Here is the catch: to know how much more there is to know requires knowledge to begin with. If you start without knowledge, you also do not know what you are missing out on.

This paradox gives rise to a famous result in experimental psychology known as the Dunning-Kruger effect. Named after Justin Kruger and David Dunning, it refers to a study they published in 1999. They showed that the more poorly people actually performed, the more they over-estimated their own performance.

People whose logical ability was in the bottom 12% (so that 88 out of 100 people performed better than they did) judged their own performance to be among the top third of the distribution. Conversely, the outstanding logicians who outperformed 86% of their peers judged themselves to be merely in the top quarter (roughly) of the distribution, thereby underestimating their performance.

John Cleese argues that this effect is responsible not only for Hollywood but also for the actions of some mainstream media.

Ignorance is associated with exaggerated confidence in one’s abilities, whereas experts are unduly tentative about their performance. This basic finding has been replicated numerous times in many different circumstances. There is very little doubt about its status as a fundamental aspect of human behaviour.

Confidence and credibility

Here is the next catch: in the eyes of others, what matters most to judge a person’s credibility is their confidence. Research into the credibility of expert witnesses has identified the expert’s projected confidence as the most important determinant in judged credibility. Nearly half of people’s judgements of credibility can be explained on the basis of how confident the expert appears — more than on the basis of any other variable.

Does this mean that the poorest-performing — and hence most over-confident — expert is believed more than the top performer whose displayed confidence may be a little more tentative? This rather discomforting possibility cannot be ruled out on the basis of existing data.

But even short of this extreme possibility, the data on confidence and expert credibility give rise to another concern. In contested arenas, such as climate change, the Dunning-Kruger effect and its flow-on consequences can distort public perceptions of the true scientific state of affairs.

To illustrate, there is an overwhelming scientific consensus that greenhouse gas emissions from our economic activities are altering the Earth’s climate. This consensus is expressed in more than 95% of the scientific literature and it is shared by a similar fraction (97–98%) of publishing experts in the area. In the present context, it is relevant that research has found that the “relative climate expertise and scientific prominence” of the few dissenting researchers “are substantially below that of the convinced researchers”.

Guess who, then, would be expected to appear particularly confident when they are invited to expound their views on TV, owing to the media’s failure to recognise (false) balance as (actual) bias? Yes, it’s the contrarian blogger who is paired with a climate expert in “debating” climate science and who thinks that hot brick buildings contribute to global warming.

‘I’m not an expert, but…’

How should actual experts — those who publish in the peer-reviewed literature in their area of expertise — deal with the problems that arise from Dunning-Kruger, the media’s failure to recognise “balance” as bias, and the fact that the public uses projected confidence as a cue for credibility?

Speaker of the US House of Representatives John Boehner admitted earlier this year he wasn’t qualified to comment on climate change.

We suggest two steps based on research findings.

The first focuses on the fact of a pervasive scientific consensus on climate change. As one of us has shown, the public’s perception of that consensus is pivotal in determining their acceptance of the scientific facts.

When people recognise that scientists agree on the climate problem, they too accept the existence of the problem. It is for this reason that Ed Maibach and colleagues, from the Centre for Climate Change Communication at George Mason University, have recently called on climate scientists to set the record straight and inform the public that there is a scientific consensus that human-caused climate change is happening.

One might object that “setting the record straight” constitutes advocacy. We do not agree; sharing knowledge is not advocacy and, by extension, neither is sharing the strong consensus behind that knowledge. In the case of climate change, it simply informs the public of a fact that is widely misrepresented in the media.

The public has a right to know that there is a scientific consensus on climate change. How the public uses that knowledge is up to them. The line to advocacy would be crossed only if scientists articulated specific policy recommendations on the basis of that consensus.

The second step to introducing accurate scientific knowledge into public debates and decision-making pertains precisely to the boundary between scientific advice and advocacy. This is a nuanced issue, but some empirical evidence in a natural-resource management context suggests that the public wants scientists to do more than just analyse data and leave policy decisions to others.

Instead, the public wants scientists to work closely with managers and others to integrate scientific results into management decisions. This opinion appears to be equally shared by all stakeholders, from scientists to managers and interest groups.

Advocacy or understanding?

In a recent article, we wrote that “the only unequivocal tool for minimising climate change uncertainty is to decrease our greenhouse gas emissions”. Does this constitute advocacy, as portrayed by some commenters?

It is not. Our statement is analogous to arguing that “the only unequivocal tool for minimising your risk of lung cancer is to quit smoking”. Both statements are true. Both identify a link between a scientific consensus and a personal or political action.

Neither statement, however, advocates any specific response. After all, a smoker may gladly accept the risk of lung cancer if the enjoyment of tobacco outweighs the spectre of premature death — but the smoker must make an informed decision based on the scientific consensus on tobacco.

Likewise, the global public may decide to continue with business as usual, gladly accepting the risk to their children and grandchildren – but they should do so in full knowledge of the risks that arise from the existing scientific consensus on climate change.

Some scientists do advocate for specific policies, especially if their careers have evolved beyond simply conducting science and if they have taken new or additional roles in policy or leadership.

Most of us, however, carefully limit our statements to scientific evidence. In those cases, it is vital that we challenge spurious accusations of advocacy, because such claims serve to marginalise the voices of experts.

Portraying the simple sharing of scientific knowledge with the public as an act of advocacy has the pernicious effect of silencing scientists or removing their expert opinion from public debate. The consequence is that scientific evidence is lost to the public and is lost to the democratic process.

But in one specific way we are advocates. We advocate that our leaders recognise and understand the evidence.

We believe that sober policy decisions on climate change cannot be made when politicians claim that they are not scientists while also erroneously claiming that there is no scientific consensus.

We advocate that our leaders are morally obligated to make and justify their decisions in light of the best available scientific, social and economic understanding.



Stephan Lewandowsky receives funding from the Royal Society, from the World University Network (WUN), and from the ‘Great Western 4’ (GW4) consortium of English universities.

Richard Pancost receives funding from RCUK, the EU and the Leverhulme Trust.

This article was originally published on The Conversation. (Republished with permission). Read the original article.


The Red Herring Fallacy

The idiom ‘red herring’ is used to refer to something that misleads or distracts from the relevant or important issue.  The expression is mainly used to assert that an argument is not relevant to the issue being discussed.

A red herring fallacy is an error in logic where a proposition is, or is intended to be, misleading in order to make irrelevant or false inferences. It covers any inference based on spurious arguments, introduced to make up for a lack of real arguments or to quietly change the subject of the discussion.  In this way, a red herring is as much a debating tactic as it is a logical fallacy.  It is a fallacy of distraction, and is committed when a listener attempts to divert an arguer from his argument by introducing another topic.  Such arguments have the following form:

Topic A is under discussion.

Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).

Topic A is abandoned.

This sort of reasoning is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim.

For instance, ‘I’m entitled to my opinion’ or ‘I have a right to my opinion’ is a common declaration in rhetoric or debate that can be made at some point in a discussion. Whether one has a particular entitlement or right is irrelevant to whether one’s assertion is true or false. Asserting the existence of such a right provides no justification for the opinion itself.

As an informal fallacy, the red herring falls into a broad class of relevance fallacies. Unlike the strawman fallacy, which is premised on a distortion of the other party’s position, the red herring is a seemingly plausible, though ultimately irrelevant, diversionary tactic.  According to the Oxford English Dictionary, a red herring may be intentional or unintentional – it does not necessarily mean a conscious intent to mislead.

Source: Wikimedia Commons

Conventional wisdom has long supposed the origin of the idiom ‘red herring’ to be the use of a kipper (a strong-smelling smoked fish) to train hounds to follow a scent, or to divert them from the correct route when hunting. However, modern linguistic research suggests that the term was probably invented in 1807 by the English polemicist William Cobbett, referring to one occasion on which he had supposedly used a kipper to divert hounds from chasing a hare, and that this was never an actual practice of hunters.  The phrase was later borrowed to provide a formal name for the logical fallacy and associated literary device.

Although Cobbett most famously mentioned it, he was not the first to consider red herring for scenting hounds; an earlier reference occurs in the pamphlet ‘Nashe’s Lenten Stuffe’, published in 1599 by the Elizabethan writer Thomas Nashe, in which he says ‘Next, to draw on hounds to a scent, to a red herring skin there is nothing comparable’.


Boy/girl hair solution

They both lied.

The child with the black hair is the girl, and the child with the white hair is the boy.

(If only one of them had lied, both children would be boys or both would be girls, contradicting the fact that a boy and a girl are chatting.)
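This can also be checked mechanically. The short sketch below is my own illustration (not part of the original puzzle): it tries both ways of assigning the boy and the girl to the two hair colours, and keeps only the assignment in which at least one statement is a lie.

```python
from itertools import permutations

# Try both ways of assigning one boy and one girl to the two children.
for black, white in permutations(["boy", "girl"]):
    statements_true = [
        black == "boy",    # the black-haired child said "I am a boy"
        white == "girl",   # the white-haired child said "I am a girl"
    ]
    if not all(statements_true):  # the puzzle says at least one of them lied
        print(f"black hair: {black}, white hair: {white}")

# Prints only: black hair: girl, white hair: boy
```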

Leave a comment

Filed under Puzzles

Fallacies of composition and division

The Fallacy of Composition arises when one infers that something is true of the whole from the fact that it is true of some part of the whole.  Conversely, the Fallacy of Division occurs when one infers that something true for the whole must also be true of all or some of its parts.  Both fallacies were described by Aristotle in Sophistical Refutations.

Fallacy of composition

The logical form of the Fallacy of Composition is:

     Premise 1: A is part of B

     Premise 2: A has property X

     Conclusion: Therefore, B has property X.

Two examples of this fallacy are:

  • If someone stands up out of his seat at a baseball game, he can see better.  Therefore, if everyone stands up they can all see better.

  • If a runner runs faster, she can win the race.  Therefore if all the runners run faster, they can all win the race.

Athletic competitions are examples of zero-sum games, wherein the winner wins by preventing all other competitors from winning.

Another example of this fallacy is:

Sodium (Na) and chlorine (Cl) are both dangerous to humans. Therefore any combination of sodium and chlorine, such as common table salt (NaCl), will be dangerous to humans.

This fallacy is often confused with the fallacy of faulty generalisation, in which an unwarranted inference is made from a statement about a sample to a statement about the population from which it is drawn.

In economics, the Paradox of Thrift is a notable fallacy of composition that is central to Keynesian economics.  Division of labour is another economic example, in which overall productivity can greatly increase when individual workers specialise in doing different jobs.

In a Tragedy of the Commons, an individual can profit by consuming a larger share of a common, shared resource such as fish from the sea; but if too many individuals seek to consume more, they can destroy the resource.

In the Free Rider Problem, an individual can benefit by failing to pay when consuming a share of a public good; but if there are too many such ‘free riders’, eventually there will be no ‘ride’ for anyone.

Fallacy of division

The Fallacy of Division is the converse of the Fallacy of Composition.  The logical form of the Fallacy of Division is:

      Premise 1: A is part of B

      Premise 2: B has property X

      Conclusion: Therefore, A has property X.

An example of the fallacy of division is:

A Boeing 747 can fly unaided across the ocean.

A Boeing 747 has jet engines.

Therefore, one of its jet engines can fly unaided across the ocean.


Boy/girl hair puzzle

A boy and a girl are chatting.

“I am a boy”, said the child with black hair.

“I am a girl”, said the child with white hair.

At least one of them lied. What colour hair does the boy have?


Post Office Burglary solution

Solution: Derek was the culprit.

Consider Brian’s statements. If Charles were the culprit, Brian’s first statement (‘It wasn’t Charles’) would be the lie, so his second statement (‘It was Alan’) would have to be true. That would mean both Charles and Alan did it, which is impossible, so it wasn’t Charles.

Since it wasn’t Charles, Derek’s first statement was the lie, so his second statement was true: it wasn’t Alan.

Since it wasn’t Alan, Eric’s second statement was false, so his first statement was true: it was Derek.
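The same reasoning can be verified by brute force. The sketch below is my own illustration (not part of the original post): it tests each suspect in turn and keeps only those for whom every witness made exactly one true and one false statement.

```python
suspects = ["Alan", "Brian", "Charles", "Derek", "Eric"]

# Each statement is (accused, claim): claim True means "it was X", False means "it wasn't X".
statements = {
    "Brian":   [("Charles", False), ("Alan", True)],
    "Derek":   [("Charles", True),  ("Alan", False)],
    "Charles": [("Brian", True),    ("Eric", False)],
    "Alan":    [("Eric", True),     ("Brian", False)],
    "Eric":    [("Derek", True),    ("Alan", True)],
}

def consistent(culprit):
    # Every speaker must have made exactly one true statement and one false one.
    return all(
        sum((culprit == accused) == claim for accused, claim in claims) == 1
        for claims in statements.values()
    )

print([s for s in suspects if consistent(s)])  # ['Derek']
```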


Post Office burglary puzzle

After a local Post Office burglary, five suspects were being interviewed.

Below is a summary of their statements.

Police know that each of them told the truth in one of the statements and lied in the other.

From this information can you tell who committed the crime?

Brian said:
It wasn’t Charles
It was Alan

Derek said:
It was Charles
It wasn’t Alan

Charles said:
It was Brian
It wasn’t Eric

Alan said:
It was Eric
It wasn’t Brian

Eric said:
It was Derek
It was Alan


False balance

by Tim Harding

False balance is a form of deliberate or unintended media bias, in which opposing viewpoints are presented as being more balanced than the evidence warrants.  The media gives weight to evidence and opinions out of proportion to the supporting evidence for each side, and/or withholds information that would establish one side’s claims as baseless.  The impression is given of a scientific or evidence-based debate where there is actually none. The fallacy is related to false equivalence, but is not quite the same.


Source: University of California Museum of Paleontology[1]
(used with permission)

This fallacy is also known as ‘Okrent’s law’, named after Daniel Okrent, the first public editor of The New York Times. He once said: “The pursuit of balance can create imbalance because sometimes something is true”, referring to the phenomenon of the press providing legitimacy to fringe or minority viewpoints in an effort to appear even-handed.

A notorious instance of false balance occurred on 16th August 2012, when WIN TV in Wollongong aired a news story about a measles outbreak in South-West Sydney.  The story appeared to give equal weight to the professional advice of a medical practitioner that everyone should be immunised against measles, and to an amateur opinion from Meryl Dorey of the so-called Australian Vaccination Network (AVN). Ms. Dorey claimed that ‘All vaccinations in the medical literature have been linked with the possibility of causing autism, not just the measles/mumps/rubella vaccine.’  The TV reporter concluded ‘Choice groups are calling for greater research into the measles vaccine’.

The ABC’s Media Watch program was scathing in its criticism of this news story:

‘Choice groups’. They actually only quoted one group, which claims that it’s in favour of the public having a choice. But Meryl Dorey’s deceptively-named Australian Vaccination Network is in fact an obsessively anti-vaccination pressure group that’s immunised itself against the effect of scientific evidence.  Dorey’s claim about the medical literature linking vaccination and autism is pure, unadulterated baloney.[2]

On the Media Watch web site, there is a link to a long statement by the NSW Director of Health Protection, Dr. Jeremy McAnulty. Amongst other things, he says that:

Any link between measles vaccine and autism has been conclusively discredited by numerous in-depth studies and reviews by credible experts, including the World Health Organisation, the American Academy of Paediatrics and the UK Research Council. Statements erroneously linking measles vaccine and autism were associated with a decline in measles vaccination, which led to a measles outbreak in the UK in the past.[3]

Jonathan Holmes of Media Watch went on to say:

So why on earth, we asked WIN TV, did it include the AVN’s misleading claims in a news story about a measles outbreak?… Medical practitioners – choice groups. One opinion as valid as the other. It’s a classic example of what many – especially despairing scientists – call ‘false balance’ in the media…To put it bluntly, there’s evidence, and there’s bulldust. It’s a journalist’s job to distinguish between them, not to sit on the fence and bleat ‘balance’. Especially when people’s health is at risk.[2]

As the British Medical Journal put it last year in an editorial about the ‘debate’ in the UK:

The media’s insistence on giving equal weight to both the views of the anti-vaccine camp and to the overwhelming body of scientific evidence …made people think that scientists themselves were divided over the safety of the vaccine, when they were not. [4]

Other common examples of false balance in media reporting on science issues include the topics of man-made vs. natural climate change and evolution vs. creationism, as well as medicine vs quackery. As the Understanding Science web site says:

Balanced reporting is generally considered good journalism, and balance does have its virtues. The public should be able to get information on all sides of an issue — but that doesn’t mean that all sides of the issue deserve equal weight. Science works by carefully examining the evidence supporting different hypotheses and building on those that have the most support. Journalism and policies that falsely grant all viewpoints the same scientific legitimacy effectively undo one of the main aims of science: to weigh the evidence.[1]

False balance can sometimes originate from similar motives as sensationalism, where media producers and editors may feel that a story portrayed as a contentious debate will be more commercially successful to pursue than a more accurate account of the issue. However, unlike most other media biases, false balance may ironically stem from a misguided attempt to avoid bias; producers and editors may confuse treating competing views fairly—i.e., in proportion to their actual merits and significance—with treating them equally, by giving them equal time to present their views even when those views may be known beforehand to be based on false or unreliable information. In other words, two sides of a debate are automatically and mistakenly assumed to have equal value regardless of their respective merits.

References

[1] ‘Beware of false balance: Are the views of the scientific community accurately portrayed?’ Understanding Science, University of California Museum of Paleontology, 25 February 2014. http://undsci.berkeley.edu/article/0_0_0/sciencetoolkit_04

[2] ‘False Balance Leads To Confusion’, Media Watch, Episode 35, ABC1, 1 October 2012. http://www.abc.net.au/mediawatch/transcripts/s3601416.htm

[3] ‘Media Statement – Immunisation’, Dr. Jeremy McAnulty, Director of Health Protection, NSW Health, 28 September 2012.

[4] ‘When balance is bias’, British Medical Journal, Christmas Edition, 2011.
