Tag Archives: logic

How to teach all students to think critically

The Conversation

By Peter Ellerton, The University of Queensland

All first year students at the University of Technology Sydney could soon be required to take a maths course in an attempt to give them some numerical thinking skills.

The new course would be an elective next year and mandatory in 2016. The university’s deputy vice-chancellor for education and students, Shirley Alexander, says the aim is to give students some maths “critical thinking” skills.

This is a worthwhile goal, but what about critical thinking in general?

Most tertiary institutions have listed among their graduate attributes the ability to think critically. This seems a desirable outcome, but what exactly does it mean to think critically and how do you get students to do it?

The problem is that critical thinking is the Cheshire Cat of educational curricula – it is hinted at in all disciplines but appears fully formed in none. As soon as you push to see it in focus, it slips away.

If you ask curriculum designers exactly how critical thinking skills are developed, the answers are often vague and unhelpful for those wanting to teach it.

This is partly because of a lack of clarity about the term itself and because there are some who believe that critical thinking cannot be taught in isolation, that it can only be developed in a discipline context – after all, you have to think critically about something.

So what should any mandatory first year course in critical thinking look like? There is no single answer to that, but let me suggest a structure with four key areas:

  1. argumentation
  2. logic
  3. psychology
  4. the nature of science.

I will then explain that these four areas are bound together by a common language of thinking and a set of critical thinking values.

1. Argumentation

The most powerful framework for learning to think well in a manner that is transferable across contexts is argumentation.

Arguing, as opposed to simply disagreeing, is the process of intellectual engagement with an issue and an opponent with the intention of developing a position justified by rational analysis and inference.

Arguing is not just contradiction.

Arguments have premises, those things that we take to be true for the purposes of the argument, and conclusions or end points that are arrived at by inferring from the premises.

Understanding this structure allows us to analyse the strength of an argument by assessing the likelihood that the premises are true or by examining how the conclusion follows from them.

Arguments in which the conclusion follows logically from the premises are said to be valid. Valid arguments with true premises are called sound. The definitions of invalid and unsound follow as the corresponding negations.

This gives us a language with which to frame our position and the basic structure of why it seems justified.
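To make the valid/sound distinction concrete, here is a minimal Python sketch (illustrative only; the function and examples are not from the article): an argument form is valid exactly when no assignment of truth values makes every premise true and the conclusion false.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument form is valid iff no truth assignment makes
    every premise true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# Modus ponens (P, P -> Q, therefore Q) is valid:
print(is_valid([lambda e: e["P"], lambda e: not e["P"] or e["Q"]],
               lambda e: e["Q"], ["P", "Q"]))   # True

# Affirming the consequent (Q, P -> Q, therefore P) is not:
print(is_valid([lambda e: e["Q"], lambda e: not e["P"] or e["Q"]],
               lambda e: e["P"], ["P", "Q"]))   # False
```

Soundness then adds a question no truth table can settle: whether the premises are actually true.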

2. Logic

Logic is fundamental to rationality. It is difficult to see how you could value critical thinking without also embracing logic.

People generally speak of formal logic (basically the logic of deduction) and informal logic (also called induction).

Deduction is most of what goes on in mathematics or Sudoku puzzles, while induction is usually about generalising or analogising, and is integral to the processes of science.

Using logic in a flawed way leads to fallacies of reasoning, which famously include such logical errors as circular reasoning, the false cause fallacy and the appeal to popular opinion. Learning about this cognitive landscape is central to the development of effective thinking.

3. Psychology

The messy business of our psychology – how our minds actually work – is another necessary component of a solid critical thinking course.

One of the great insights of psychology over the past few decades is the realisation that thinking is not so much something we do, as something that happens to us. We are not as in control of our decision-making as we think we are.

We are masses of cognitive biases as much as we are rational beings. This does not mean we are flawed; it just means we don’t think in the nice, linear way that educators often like to think we do.

It is a mistake to think of our minds as just running decision-making algorithms – we are much more complicated and idiosyncratic than this.

How we arrive at conclusions, form beliefs and process information is organic rather than clinical. We are not just truth-seeking reasoning machines.

Our thinking is also about our prior beliefs, our values, our biases and our desires.

4. The nature of science

It is useful to equip students with some understanding of the general tools for evaluating information that have become ubiquitous in our society. Two that come to mind are the nature of science and statistics.

Learning about the differences between hypotheses, theories and laws, for example, can help people understand why science has credibility without having to teach them what a molecule is, or about Newton’s laws of motion.

Understanding some basic statistics also goes a long way to making students feel more empowered to tackle difficult or complex issues. It’s not about mastering the content, but about understanding the process.

The language of thinking

Embedded within all of this is the language of our thinking. The cognitive skills – such as inferring, analysing, evaluating, justifying, categorising and decoding – are all the things that we do with knowledge.

If we can talk to students using these terms, with a full understanding of what they mean and how they are used, then teaching thinking becomes like teaching a physical process such as a sport, in which each element can be identified, polished, refined and optimised.

Critical thinking can be studied and taught in part like physical processes. (Flickr/Airman Magazine, CC BY-NC)

In much the same way that a javelin coach can freeze a video and talk to an athlete about their foot positioning or centre of balance, a teacher of critical thinking can use the language of cognition to interrogate a student’s thinking in high resolution.

All of these potential aspects of a critical thinking course can be taught outside any discipline context. General knowledge, topical issues and media provide a mountain of grist for the cognitive mill.

General concepts of argumentation and logic are readily transferable between contexts once students are taught to recognise the deeper structures inherent in these fields and to apply them across a variety of situations.

Values

It’s worth understanding too that a good critical thinking education is also an education in values.

Not all values are ethical in nature. In thinking well we value precision, coherence, simplicity of expression, logical structure, clarity, perseverance, honesty in representation and any number of like qualities. If schools are to teach values, why not teach the values of effective thinking?

So, let’s not assume that students will learn to think critically just by learning the methodology of their subjects. Sure, it will help, but it’s not an explicit treatment of thinking and is therefore less transferable.

A course that targets effective thinking need not detract from other subjects – in fact it should enhance performance across the board.

But ideally, such a course would not be needed if teachers of all subjects focused on the thinking of their students as well as the content they have to cover.

This article was originally published on The Conversation. (Reblogged with permission). Read the original article.

Filed under Reblogs

Are you a poor logician? Logically, you might never know

The Conversation

By Stephan Lewandowsky, University of Bristol and Richard Pancost, University of Bristol

This is the second article in a series, How we make decisions, which explores our decision-making processes. How well do we consider all factors involved in a decision, and what helps and what holds us back?


It is an unfortunate paradox: if you’re bad at something, you probably also lack the skills to assess your own performance. And if you don’t know much about a topic, you’re unlikely to be aware of the scope of your own ignorance.

Type any keyword into a scientific search engine and a staggering number of published articles appears. “Climate change” yields 238,000 hits; “tobacco lung cancer” returns 14,500; and even the largely unloved “Arion ater” has earned a respectable 245 publications.

Experts are keenly aware of the vastness of the knowledge landscape in their fields. Ask any scholar and they will likely acknowledge how little they know relative to what is knowable – a realisation that may date back to Confucius.

Here is the catch: to know how much more there is to know requires knowledge to begin with. If you start without knowledge, you also do not know what you are missing out on.

This paradox gives rise to a famous result in experimental psychology known as the Dunning-Kruger effect. Named after Justin Kruger and David Dunning, it refers to a study they published in 1999. They showed that the more poorly people actually performed, the more they over-estimated their own performance.

People whose logical ability was in the bottom 12% (so that 88 out of 100 people performed better than they did) judged their own performance to be among the top third of the distribution. Conversely, the outstanding logicians who outperformed 86% of their peers judged themselves to be merely in the top quarter (roughly) of the distribution, thereby underestimating their performance.

John Cleese has argued that this effect is responsible not only for Hollywood but also for the actions of some mainstream media.

Ignorance is associated with exaggerated confidence in one’s abilities, whereas experts are unduly tentative about their performance. This basic finding has been replicated numerous times in many different circumstances. There is very little doubt about its status as a fundamental aspect of human behaviour.

Confidence and credibility

Here is the next catch: in the eyes of others, what matters most to judge a person’s credibility is their confidence. Research into the credibility of expert witnesses has identified the expert’s projected confidence as the most important determinant in judged credibility. Nearly half of people’s judgements of credibility can be explained on the basis of how confident the expert appears — more than on the basis of any other variable.

Does this mean that the poorest-performing — and hence most over-confident — expert is believed more than the top performer whose displayed confidence may be a little more tentative? This rather discomforting possibility cannot be ruled out on the basis of existing data.

But even short of this extreme possibility, the data on confidence and expert credibility give rise to another concern. In contested arenas, such as climate change, the Dunning-Kruger effect and its flow-on consequences can distort public perceptions of the true scientific state of affairs.

To illustrate, there is an overwhelming scientific consensus that greenhouse gas emissions from our economic activities are altering the Earth’s climate. This consensus is expressed in more than 95% of the scientific literature and it is shared by a similar fraction — 97-98% – of publishing experts in the area. In the present context, it is relevant that research has found that the “relative climate expertise and scientific prominence” of the few dissenting researchers “are substantially below that of the convinced researchers”.

Guess who, then, would be expected to appear particularly confident when they are invited to expound their views on TV, owing to the media’s failure to recognise (false) balance as (actual) bias? Yes, it’s the contrarian blogger who is paired with a climate expert in “debating” climate science and who thinks that hot brick buildings contribute to global warming.

‘I’m not an expert, but…’

How should actual experts — those who publish in the peer-reviewed literature in their area of expertise — deal with the problems that arise from Dunning-Kruger, the media’s failure to recognise “balance” as bias, and the fact that the public uses projected confidence as a cue for credibility?

Speaker of the US House of Representatives John Boehner admitted earlier this year he wasn’t qualified to comment on climate change.

We suggest two steps based on research findings.

The first focuses on the fact of a pervasive scientific consensus on climate change. As one of us has shown, the public’s perception of that consensus is pivotal in determining their acceptance of the scientific facts.

When people recognise that scientists agree on the climate problem, they too accept the existence of the problem. It is for this reason that Ed Maibach and colleagues, from the Centre for Climate Change Communication at George Mason University, have recently called on climate scientists to set the record straight and inform the public that there is a scientific consensus that human-caused climate change is happening.

One might object that “setting the record straight” constitutes advocacy. We do not agree; sharing knowledge is not advocacy and, by extension, neither is sharing the strong consensus behind that knowledge. In the case of climate change, it simply informs the public of a fact that is widely misrepresented in the media.

The public has a right to know that there is a scientific consensus on climate change. How the public uses that knowledge is up to them. The line to advocacy would be crossed only if scientists articulated specific policy recommendations on the basis of that consensus.

The second step to introducing accurate scientific knowledge into public debates and decision-making pertains precisely to the boundary between scientific advice and advocacy. This is a nuanced issue, but some empirical evidence in a natural-resource management context suggests that the public wants scientists to do more than just analyse data and leave policy decisions to others.

Instead, the public wants scientists to work closely with managers and others to integrate scientific results into management decisions. This opinion appears to be equally shared by all stakeholders, from scientists to managers and interest groups.

Advocacy or understanding?

In a recent article, we wrote that “the only unequivocal tool for minimising climate change uncertainty is to decrease our greenhouse gas emissions”. Does this constitute advocacy, as portrayed by some commenters?

It is not. Our statement is analogous to arguing that “the only unequivocal tool for minimising your risk of lung cancer is to quit smoking”. Both statements are true. Both identify a link between a scientific consensus and a personal or political action.

Neither statement, however, advocates any specific response. After all, a smoker may gladly accept the risk of lung cancer if the enjoyment of tobacco outweighs the spectre of premature death — but the smoker must make an informed decision based on the scientific consensus on tobacco.

Likewise, the global public may decide to continue with business as usual, gladly accepting the risk to their children and grandchildren – but they should do so in full knowledge of the risks that arise from the existing scientific consensus on climate change.

Some scientists do advocate for specific policies, especially if their careers have evolved beyond simply conducting science and if they have taken new or additional roles in policy or leadership.

Most of us, however, carefully limit our statements to scientific evidence. In those cases, it is vital that we challenge spurious accusations of advocacy, because such claims serve to marginalise the voices of experts.

Portraying the simple sharing of scientific knowledge with the public as an act of advocacy has the pernicious effect of silencing scientists or removing their expert opinion from public debate. The consequence is that scientific evidence is lost to the public and is lost to the democratic process.

But in one specific way we are advocates. We advocate that our leaders recognise and understand the evidence.

We believe that sober policy decisions on climate change cannot be made when politicians claim that they are not scientists while also erroneously claiming that there is no scientific consensus.

We advocate that our leaders are morally obligated to make and justify their decisions in light of the best available scientific, social and economic understanding.


Stephan Lewandowsky receives funding from the Royal Society, from the World University Network (WUN), and from the ‘Great Western 4’ (GW4) consortium of English universities.

Richard Pancost receives funding from RCUK, the EU and the Leverhulme Trust.

This article was originally published on The Conversation. (Republished with permission). Read the original article.

Filed under Reblogs

Perfect solution fallacy

by Tim Harding

“The perfect is the enemy of the good.” — Voltaire

“Nobody made a greater mistake than he who did nothing
because he could do only a little.”
– Edmund Burke

The Perfect Solution Fallacy (also known as the ‘Nirvana Fallacy’) is a false dichotomy that occurs when an argument assumes that a perfect solution to a problem exists, and that a proposed solution should be rejected because some part of the problem would still exist were it implemented. In other words, a course of action is rejected because it is not perfect, even though it is the best option available.

This fallacy is an example of black and white thinking, in which a person fails to see the complex interplay between multiple component elements of a situation or problem, and as a result, reduces complex problems to a pair of binary extremes. It usually takes the following logical form:

Premise 1: X is what we have or is being proposed.

Premise 2: Y is the perfect situation, even though it may not be achievable.

Conclusion: Therefore, X should be rejected, even if it is the best available option.

Some practical examples of this fallacy are: 

Posit (fallacious): These anti-drunk driving ad campaigns are not going to work. People are still going to drink and drive no matter what.
Rebuttal: Complete eradication of drunk driving is not the expected outcome. The goal is reduction.

Posit (fallacious): Seat belts are a bad idea. People are still going to die in car crashes.
Rebuttal: While seat belts cannot make driving 100% safe, they do reduce one’s likelihood of dying in a car crash.

Another example:

This fallacy is often committed by anti-vaccinationists. Their argument is that a particular vaccine only protects 95% of the time, and there is a (very tiny) risk of adverse side effects. So they’d rather take their chances with a potentially fatal disease, which is an example of faulty risk assessment. Their fallacious reasoning also ignores the evidence that, where there is herd immunity, 95% protection is more than enough.

On the other hand, striving for perfection is not the same thing as the Perfect Solution Fallacy.  Having a goal of perfection or near perfection, and working towards that goal, is admirable.  However, giving up on the goal because perfection is not attained, despite major improvements being achieved, is fallacious.

Sources
‘Nirvana fallacy’, RationalWiki

Parinirvana Buddha (Source: Wikimedia Commons)


Filed under Logical fallacies

Introduction

Welcome to Tim Harding’s blog of writings and talks about logic, rationality, philosophy and skepticism. There are also some reblogs of some of Tim’s favourite posts by other writers, plus some of his favourite quotations and videos. This blog has a Facebook connection at The Logical Place.

There are over 2,300 posts here about all sorts of topics – please have a good look around before leaving.

If you are looking for an article about Skepticism, Science and Scientism published in The Skeptic magazine titled ‘A Step Too Far?’, it is available here.

If you are looking for an article about the Birth of Experimental Science published in The Skeptic magazine titled ‘Out of the Dark’, it is available here.

If you are looking for an article about the Dark Ages published in The Skeptic magazine titled ‘In the Dark’, it is available here.

If you are looking for an article about Traditional Chinese Medicine vs. Endangered Species published in The Skeptic magazine titled ‘Bad Medicine’, it is available here.

If you are looking for an article about the rejection of expertise published in The Skeptic magazine titled ‘Who needs to Know?’, it is available here.

If you are looking for an article about Charles Darwin published in The Skeptic magazine titled ‘Darwin’s Missing Link’, it is available here.

If you are looking for an article about the Astronomical Renaissance published in The Skeptic magazine titled ‘Rebirth of the Universe’, it is available here.

If you are looking for an article about DNA and GM foods published in The Skeptic magazine titled ‘The Good Oil’, it is available here.

If you are looking for an article about animal welfare published in The Skeptic magazine titled ‘Creature Features’, it is available here.

If you would like to submit a comment about anything written here, please read our comments policy.

Follow me on Academia.edu

Copyright notice: © All rights reserved. Except for personal use or as permitted under the Australian Copyright Act, no part of this website may be reproduced, stored in a retrieval system, communicated or transmitted in any form or by any means without prior written permission (except as an authorised reblog). All inquiries should be made to the copyright owner, Tim Harding at tim.harding@yandoo.com, or as attributed on individual blog posts.


Filed under Uncategorized

The Grandfather Paradox

The Grandfather Paradox is one of several metaphysical arguments that attempt to prove that time travel is logically impossible (whether it is physically possible is a question left by philosophers to the physicists).  These arguments all have the same basic form:

Premise 1: If time travel is possible, then X must be possible.

Premise 2: X is not possible.

Conclusion: Therefore, time travel is impossible.
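This general form is the classical modus tollens. As a sketch, it can be checked mechanically in the Lean proof assistant (with T standing for ‘time travel is possible’ and X for whatever time travel would require):

```lean
-- Modus tollens: from (T → X) and ¬X, infer ¬T.
example (T X : Prop) (h : T → X) (hx : ¬X) : ¬T :=
  fun ht => hx (h ht)   -- assuming T yields X, contradicting ¬X
```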

The Grandfather Paradox is described as follows: a time machine is invented enabling a time traveller to go back in time to before his grandfather had fathered offspring.  At that time, the time traveller kills his grandfather, and therefore, one of the time traveller’s parents would never exist and thus the time traveller himself would never exist either.  If he is never born, then he is unable to travel back through time and kill his grandfather, which means he would be born, and so on.

The paradox is also described in this video cartoon.


Filed under Paradoxes

What is logic?

The word ‘logic’ is not easy to define, because it has slightly different meanings in applications ranging from philosophy to mathematics to computer science. In philosophy, logic’s main concern is with the validity or cogency of arguments. The essential difference between informal logic and formal logic is that informal logic uses natural language, whereas formal logic (also known as symbolic logic) is more complex and uses mathematical symbols to overcome the frequent ambiguity or imprecision of natural language. Reason is the application of logic to actual premises, with a view to drawing valid or sound conclusions. Logic itself is the set of rules to be followed independently of particular premises; in other words, it works with abstract premises designated by letters such as P and Q.

So what is an argument? In everyday life, we use the word ‘argument’ to mean a verbal dispute or disagreement (which is actually a clash between two or more arguments put forward by different people). This is not the way this word is usually used in philosophical logic, where arguments are those statements a person makes in the attempt to convince someone of something, or to present reasons for accepting a given conclusion. In this sense, an argument consists of statements or propositions, called its premises, from which a conclusion is claimed to follow (in the case of a deductive argument) or be inferred (in the case of an inductive argument). Deductive conclusions usually begin with a word or phrase like ‘therefore’, ‘thus’, ‘so’ or ‘it follows that’.

A good argument is one that has two virtues: good form and all true premises. Arguments can be deductive, inductive or abductive. A deductive argument with valid form and true premises is said to be sound. An inductive argument based on strong evidence is said to be cogent. The term ‘good argument’ covers all three of these types of arguments.

Deductive arguments

A valid argument is a deductive argument where the conclusion necessarily follows from the premises, because of the logical structure of the argument. That is, if the premises are true, then the conclusion must also be true. Conversely, an invalid argument is one where the conclusion does not logically follow from the premises. However, the validity or invalidity of arguments must be clearly distinguished from the truth or falsity of its premises. It is possible for the conclusion of a valid argument to be true, even though one or more of its premises are false. For example, consider the following argument:

Premise 1: Napoleon was German
Premise 2: All Germans are Europeans
Conclusion: Therefore, Napoleon was European

The conclusion that Napoleon was European is true, even though Premise 1 is false. This argument is valid because of its logical structure, not because its premises and conclusion are all true (which they are not). Even if the premises and conclusion were all true, it wouldn’t necessarily mean that the argument was valid. If an argument has true premises and its form is valid, then its conclusion must be true.

Deductive logic is essentially about consistency. The rules of logic are not arbitrary, like the rules for a game of chess. They exist to avoid internal contradictions within an argument. For example, if we have an argument with the following premises:

Premise 1: Napoleon was either German or French
Premise 2: Napoleon was not German

The conclusion cannot logically be “Therefore, Napoleon was German” because that would directly contradict Premise 2. So the logical conclusion can only be: “Therefore, Napoleon was French”, not because we know that it happens to be true, but because it is the only possible conclusion if both the premises are true. This is admittedly a simple and self-evident example, but similar reasoning applies to more complex arguments where the rules of logic are not so self-evident. In summary, the rules of logic exist because breaking the rules would entail internal contradictions within the argument.
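To see this in miniature, here is a small Python sketch (illustrative, with the premises encoded as Boolean tests) that enumerates the possible ‘worlds’ and keeps only those consistent with both premises; exactly one survives, and it is the forced conclusion.

```python
from itertools import product

# Each world records whether Napoleon was German and whether he was French.
worlds = [{"german": g, "french": f}
          for g, f in product([True, False], repeat=2)]

# Premise 1: German or French.  Premise 2: not German.
consistent = [w for w in worlds
              if (w["german"] or w["french"]) and not w["german"]]

print(consistent)  # [{'german': False, 'french': True}]
# Only "Napoleon was French" remains: the premises force the conclusion.
```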

Inductive arguments

An inductive argument is one where the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a sound deductive argument is supposed to be certain, the conclusion of a cogent inductive argument is supposed to be probable, based upon the evidence given. An example of an inductive argument is: 

Premise 1: Almost all people are taller than 26 inches
Premise 2: George is a person
Conclusion: Therefore, George is almost certainly taller than 26 inches

Whilst an inductive argument based on strong evidence can be cogent, there is some dispute amongst philosophers as to the reliability of induction as a scientific method. For example, under the problem of induction, no number of confirming observations can verify a universal generalisation, such as ‘All swans are white’; yet it is logically possible to falsify it by observing a single black swan.

Abductive arguments

Abduction may be described as an “inference to the best explanation”, and whilst not as reliable as deduction or induction, it can still be a useful form of reasoning. For example, a typical abductive reasoning process used by doctors in diagnosis might be: “this set of symptoms could be caused by illnesses X, Y or Z. If I ask some more questions or conduct some tests I can rule out X and Y, so it must be Z.”

Incidentally, the doctor is the one who is doing the abduction here, not the patient. By accepting the doctor’s diagnosis, the patient is using inductive reasoning that the doctor has a sufficiently high probability of being right that it is rational to accept the diagnosis. This is actually an acceptable form of the Argument from Authority (only the deductive form is fallacious).
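The doctor’s elimination step can be sketched in a few lines of Python (the illness labels are placeholders, as in the quote above):

```python
# Candidate explanations for the symptoms (placeholder labels).
candidates = {"X", "Y", "Z"}

# Each question or test rules out incompatible explanations.
candidates -= {"X"}   # first test excludes X
candidates -= {"Y"}   # second test excludes Y

# What survives is the best available explanation, not a certainty.
if len(candidates) == 1:
    print("Diagnosis:", candidates.pop())   # Diagnosis: Z
```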

References:

Hodges, W. (1977) Logic – an introduction to elementary logic (2nd ed. 2001) Penguin, London.
Lemmon, E.J. (1987) Beginning Logic. Hackett Publishing Company, Indianapolis.


Filed under Essays and talks

Argument from Popularity

by Tim Harding

The informal fallacy known as argumentum ad populum means ‘argument from popularity’ or ‘appeal to the people’.  This fallacy is essentially the same as ad numerum, appeal to the gallery, appeal to the masses, common practice, past practice, traditional knowledge, peer pressure, conventional wisdom, the bandwagon fallacy; and lastly truth by consensus, of which I shall say more later.

The Argument from Popularity fallacy may be defined as when an advocate asserts that because the great majority of people in general agree with his or her position on an issue, he or she must be right.[1]  In other words, if you suggest too strongly that someone’s claim or argument is correct simply because it’s what most people believe, then you’ve committed the fallacy of appeal to the people.  Similarly, if you suggest too strongly that someone’s claim or argument is mistaken simply because it’s not what most people believe, then you’ve also committed the fallacy.

Agreement with popular opinion is not necessarily a reliable sign of truth, and deviation from popular opinion is not necessarily a reliable sign of error, but if you assume it is and do so with enthusiasm, then you’re guilty of committing this fallacy.  The ‘too strongly’ mentioned above is important in the description of the fallacy because what almost everyone believes is, for that reason, often likely to be true, all things considered.  However, the fallacy occurs when this degree of support is used as justification for the truth of the belief.[2]

It often happens that a true proposition is believed to be true by most people, but this is not the reason it is true.  In other words, correlation does not imply causation, and this confusion is the source of the fallacy, in my view.  For example, nearly every sane person believes that the proposition 1+1=2 is true, but that is not why it is true.  We can try doing empirical experiments by counting objects, and although this exercise is highly convincing, it is still only inductive reasoning rather than proof.  Put simply, the proposition 1+1=2 is true because it has been mathematically proven to be true.  But my purpose here is not to convince you that 1+1=2.  My real point is that the proportion of people who believe that 1+1=2 is true is irrelevant to the truth or falsity of this proposition.
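As an aside, the point that the proposition rests on proof rather than popularity can be made with a one-line sketch in the Lean proof assistant:

```lean
-- 1 + 1 = 2 is provable by computation from the definitions of the
-- numerals; no head-count of believers is involved.
example : 1 + 1 = 2 := rfl
```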

Let us now consider a belief where its truth is less obvious.  Before the work of Copernicus and Galileo in the 16th and 17th centuries, most people (including the Roman Catholic Church) believed that the Sun revolved around the Earth, rather than vice versa as we now know through science.  So the popular belief in that case was false.

This fallacy is also common in marketing e.g. “Brand X vacuum cleaners are the country’s most popular brand; so buy Brand X vacuum cleaners”.  How often have we heard a salesperson try to argue that because a certain product is very popular this year, we should buy it?  Not because it is a good quality product representing value for money, but simply because it is popular?  Weren’t those ‘power balance wrist bands’ also popular before they were exposed as a sham by the ACCC?[3]

For another example, a politician might say ‘Nine out of ten of my constituents oppose the bill, therefore it is bad legislation.’  Now, this might be a political reason for voting against the bill, but it is not a valid argument that the bill is bad legislation.  To validly argue that the bill is bad legislation, the politician should adduce rational arguments against the bill on its merits or lack thereof, rather than merely claim that the bill is politically unpopular.

In philosophy, truth by consensus is the process of taking statements to be true simply because people generally agree upon them.  Philosopher Nigel Warburton argues that the truth by consensus process is not a reliable way of discovering truth.  That there is general agreement upon something does not make it actually true.  There are several reasons for this.

One reason Warburton discusses is that people are prone to wishful thinking.  People can believe an assertion and espouse it as truth in the face of overwhelming evidence and facts to the contrary, simply because they wish that things were so.  Another is that people are gullible, and easily misled.

Another unreliable method of determining truth is to take the majority opinion in a popular vote.  This is unreliable because on many questions the majority of people are ill-informed.  Warburton gives astrology as an example of this.  He states that while it may be the case that the majority of the people of the world believe that people’s destinies are wholly determined by astrological mechanisms, given that most of that majority have only sketchy and superficial knowledge of the stars in the first place, their views cannot be held to be a significant factor in determining the truth of astrology.  The fact that something ‘is generally agreed’ or that ‘most people believe’ something should be viewed critically, asking the question why that factor is considered to matter at all in an argument over truth.  He states that the simple fact that a majority believes something to be true is unsatisfactory justification for believing it to be true.[4]

In contrast, rational arguments that the claims of astrology are false include firstly, because they are incompatible with science; secondly, because there is no credible causal mechanism by which they could possibly be true; thirdly, because there is no empirical evidence that they are true despite objective testing; and fourthly, because the star signs used by astrologers are all out of kilter with the times of the year and have been so for the last two or three thousand years.

Another example is the claims of so-called ‘alternative medicines’ where judging by their high sales figures relative to prescription medicines, it is quite possible that a majority of the population believe these claims to be true.  Without going into details here, we skeptics have good reasons for believing that many of these claims are false.

Warburton makes a distinction between the fallacy of truth by consensus and the process of democracy in decision making.  Descriptive statements of the way things are, are either true or false – and verifiable true statements are called facts.  Normative statements deal with the way things ought to be, and are neither true nor false.  In a political context, statements of the way things ought to be are known as policies.  Political policies may be described as good or bad, but not true or false.  Democracy is preferable to other political processes not because it results in truth, but because it provides for majority rule, equal participation by multiple special-interest groups, and the avoidance of tyranny.

In summary, the Argument from Popularity fallacy confuses correlation with causality; and thus popularity with truth.  Just because most people believe that a statement is true, it does not logically follow that the statement is in fact true.  With the exception of the demonstrably false claims of astrology and so-called ‘alternative medicines’, popular statements are often more likely to be true than false (‘great minds think alike’); but they are not necessarily true and can sometimes be false.  They are certainly not true merely because they are popular.  This fallacy is purely concerned with the logical validity of arguments and the justification for the truth of propositions.  The identification of this fallacy is not an argument against democracy or whether popular political policies should or should not be pursued.

References:

Clark, J. and Clark, T. (2005) Humbug! The skeptic’s field guide to spotting fallacies in thinking. Nifty Books, Capalaba.


[1] Clark and Clark, 2005.

[2] Feiser and Dowden et al, 2011.

[4] Warburton, 2000.


Filed under Logical fallacies

Two twin fallacies

by Tim Harding

Seasoned skeptics may be familiar with two well-known logical fallacies:

  1. The ‘Argument from Personal Abuse’ or the ad hominem argument (playing the man instead of the ball); and
  2. The deductive form of the ‘Argument from Authority‘ or ‘Appeal to Authority’. (The inductive form is not necessarily a fallacy).

When you think about it, these fallacies make the same error of logic – they both draw conclusions from the character or motives of the arguer rather than the premises and form of the argument.

In informal logic, these are known as fallacies of defective induction, where it is argued that a statement is true or false because the statement is made by a person or source that is commonly regarded as authoritative or not authoritative.  The most general structure of this argument is:

   Premise 1: Source A says that statement p is true.
   Premise 2: Source A is authoritative.
   Conclusion: Therefore, statement p is true.

Conversely:

  Premise 1: Source B says that statement p is true.
  Premise 2: Source B is a ‘bloody idiot’.
  Conclusion: Therefore, statement p is false.

We skeptics are often skeptical of conspiracy theories, such as the so-called Moon Landings Hoax.  Conspiracy theories like these are often a special case of the ad hominem argument, for example:

   Premise 1: NASA claims to have landed men on the Moon;
   Premise 2: Governments can’t be trusted;
   Premise 3: NASA is a government agency;
   Conclusion: Therefore, NASA’s claim is false.

These arguments are fallacious because the truth or falsity of the claim is not necessarily related to the attributes or motives of the claimant, and because the premises can be true, and the conclusion false (an authoritative claim can turn out to be false).  If the premises can be true, but the conclusion can be false, then the argument is logically invalid. (A logically valid argument is one where if the premises are true, then the conclusion must be true by virtue of the argument’s logical structure).
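One way to make the invalidity vivid is to exhibit a single situation in which both premises hold and the conclusion fails; a minimal Python sketch (illustrative only):

```python
# One counterexample is enough to show that an argument form is invalid.
source_says_p = True          # Premise 1: Source A says that p
source_authoritative = True   # Premise 2: Source A is authoritative
p = False                     # ...and yet p happens to be false

premises_true = source_says_p and source_authoritative
print(premises_true and not p)   # True: premises true, conclusion false
```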

An exception can be made for an ad hominem argument if the attack goes to the credibility of the arguer. For instance, the argument may depend on its presenter’s claim that he’s an expert. (That is, the ad hominem argument is undermining a legitimate Argument From Authority). Trial judges allow this category of refutation in appropriate cases.


Filed under Logical fallacies

Faulty generalisation

 by Tim Harding

The tabloid media often commits a common fallacy known as Faulty Generalisation.  Other terms for this fallacy include false generalisation, hasty generalisation, over-generalisation, unscientific conclusion and even superstition. The fallacy occurs when a general rule is derived from a particular case or anecdote.

For simplicity, this fallacy may be divided into two sub-fallacies – false generalisation and over-generalisation.

In a false generalisation, the premises of an argument are weakly related to its conclusion; but do not sufficiently justify the conclusion.  For example, a person might argue: “I don’t believe that smoking causes cancer, because my uncle Bert smoked like a chimney and yet he lived until aged 93”.  Conclusions are drawn about an entire population from too small a sample of the population – in this case a sample size of one. Contrast this with the enormous sample size of tens of thousands of smokers (plus control samples) that were used in the scientific epidemiological studies that conclusively established the causal link between smoking and cancer.  So in this case, the person committing this fallacy is giving more weight to a personal anecdote than the findings of science.

The extreme feminist slogan ‘All men are rapists’ is clearly a false generalisation.  Other claims such as ‘All men are responsible for the attitudes that lead to rape’ are a little more subtle, but are still false generalisations. Rape is a crime, like murder and bank robbing, and statistics show that only a small percentage of the population are criminals. Just because a person happens to be born of one gender, it does not make that person responsible for criminal attitudes, let alone crimes committed by other persons of the same gender.

In an over-generalisation, conclusions are drawn from an apparent trend to the entire population.  For example, if there are a couple of tragic road crashes on a weekend in which several people are killed, a senior police officer might say at a press conference something along the lines that drivers are becoming more careless.  A sample of one weekend’s road crashes is no evidence of any such trend – in fact, the long term trend is that the annual road toll is decreasing.

In technical logic terms, these are fallacies of defective induction, where the argument typically takes the following form:

   Premise: The proportion Q of the sample has attribute A.

   Conclusion: Therefore, the proportion Q of the population has attribute A.

Statistical methods are used to calculate the necessary sample size before conclusions can validly be drawn about a population.  For example, a random sample in excess of 1000 people is used in opinion polling; and even then there is a stated error margin in the order of plus or minus two to three per cent.
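As a back-of-envelope sketch (using the standard formula for a sampled proportion; the sample sizes are illustrative), the 95% margin of error is z·√(p(1−p)/n), which is about ±3% at n = 1,000 and only reaches ±2% at roughly n = 2,400:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * sqrt(p * (1 - p) / n)

print(f"n=1000: +/- {margin_of_error(1000):.1%}")   # about 3.1%
print(f"n=2400: +/- {margin_of_error(2400):.1%}")   # about 2.0%
```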


Filed under Logical fallacies

Begging the question

by Tim Harding

Setting aside for a moment whatever personal views we might have about the morality of abortion, consider the following argument:

   Premise 1: Murder is morally wrong;

   Conclusion: Therefore, abortion is morally wrong.

Is this argument logically valid?  Probably not, but let’s analyse the form of the argument to make sure:

    Premise 1: A is B;

    Conclusion: Therefore, C is B.

An argument of this form is logically invalid, i.e. the conclusion can be false even when the premises are true.  Or in other words, the conclusion does not necessarily follow from the premises.

OK, how about this argument:

   Premise 1: Abortion is murder;

   Premise 2: Murder is morally wrong;

   Conclusion: Therefore, abortion is morally wrong.

Is this argument valid?  The form of this argument is:

   Premise 1: A is B;

   Premise 2: B is C;

   Conclusion: Therefore, A is C.

In this second case, the conclusion necessarily follows from the premises. That is, if the premises are true (which they may or may not be), then the conclusion must be true by virtue of the logical structure of the argument. So this form of argument is logically valid. However, if one or more of the premises is false, then the conclusion may also be false (even though the argument is valid). This example also illustrates the important difference between validity and truth.
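One way to see why the valid form works is to read ‘is’ as set inclusion, since subset relations are transitive; a toy Python sketch (illustrative sets only):

```python
# Toy sets standing in for the categories A, B and C.
A = {1}
B = {1, 2}
C = {1, 2, 3}

if A <= B and B <= C:        # both premises true ("A is B", "B is C")
    assert A <= C            # the structure guarantees "A is C"
    print("Valid: true premises cannot yield a false conclusion here.")
```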

The first argument was missing a premise, which when included, turned the argument from an invalid one to a valid one.  This is an instance of the formal fallacy known as the Fallacy of the Unstated Major Premise or ‘Begging the Question’.

The Latin name for it is petitio principii, meaning ‘a request for the beginning or premise’.  In other words, this fallacy is committed when one makes an argument assuming a premise that is not explicitly stated.  It is not to be confused with the meaning of ‘raising the question’, which is sometimes mistakenly referred to as ‘begging the question’ in the popular media.


Filed under Logical fallacies