Skepticism, Science and Scientism
By Tim Harding B.Sc., B.A.
(An edited version of this essay was published in The Skeptic magazine,
September 2017, Vol 37 No 3)
In these challenging times of anti-science attitudes and ‘alternative facts’, it may sound strange to be warning against excessive scientific exuberance. Yet to help defend science from these attacks, I think we need to encourage scientists to maintain their credibility amongst non-scientists.
In my last article for The Skeptic (‘I Think I Am’, March 2017), I traced the long history of skepticism over the millennia. I talked about the philosophical skepticism of Classical Greece, the skepticism of Modern Philosophy dating from Descartes, through to the contemporary form of scientific skepticism that our international skeptical movement now largely endorses. I quoted Dr. Steven Novella’s definition of scientific skepticism as ‘the application of skeptical philosophy, critical thinking skills, and knowledge of science and its methods to empirical claims, while remaining agnostic or neutral to non-empirical claims (except those that directly impact the practice of science).’
Despite the recent growth of various anti-science movements, science is still widely regarded as the ‘gold standard’ for the discovery of empirical knowledge, that is, knowledge derived from observations and experiments. Even theoretical physics is supposed to be empirically verifiable in principle when the necessary technology becomes available, as in the case of the Higgs boson and Einstein’s gravitational waves. But empirical observations are not our only source of knowledge – we also use reasoning to make sense of our observations and to draw valid conclusions from them. We can even generate new knowledge through the application of reasoning to what we already know, as I shall discuss later.
Most skeptics (with a ‘k’) see science as a kind of rational antidote to the irrationality of pseudoscience, quackery and other varieties of woo. So we naturally tend to support and promote science for this purpose. But sometimes we can go too far in our enthusiasm for science. We can mistakenly attempt to extend the scope of science beyond its empirical capabilities, into other fields of inquiry such as philosophy and politics – even ethics. If even a small number of celebrity scientists lessen their credibility by making pronouncements beyond their individual fields of expertise, they render themselves vulnerable to attack by opponents who are looking for any weakness in their arguments. In doing so, they can unintentionally undermine public confidence in science and, by extension, in scientific skepticism.
The pitfalls of crude positivism
Logical positivism (sometimes called ‘logical empiricism’) was a Western philosophical movement of the first half of the 20th century whose central thesis was verificationism: the theory of knowledge that only propositions verifiable through empirical observation are meaningful.
One of the most prominent proponents of logical positivism was Professor Sir Alfred Ayer (1910–1989). Ayer is best known for popularising the verification principle, in particular through his presentation of it in his bestselling 1936 book Language, Truth and Logic. Ayer’s thesis was that a proposition can be meaningful only if it has verifiable empirical content; otherwise it is either a priori (known by deduction) or nonsensical. His philosophical ideas were deeply influenced by those of the Vienna Circle and the 18th century empiricist philosopher David Hume.
James Fodor, a young Melbourne science student, secularist and skeptic, has critiqued a relatively primitive form of logical positivism, which he calls ‘crude positivism’. He describes this as a family of related and overlapping viewpoints rather than a single well-defined doctrine, the three most commonly encountered components of which are the following:
(1) Strict evidentialism: the ultimate arbiter of knowledge is evidence, which should determine our beliefs in a fundamental and straightforward way; namely that we believe things if and only if there is sufficient evidence for them.
(2) Narrow scientism: the highest, or perhaps only, legitimate form of objective knowledge is that produced by the natural sciences. The social sciences, along with non-scientific pursuits, either do not produce real knowledge, or only knowledge of a distinctly inferior sort.
(3) Pragmatism: science owes its special status to its unique ability to deliver concrete, practical results: it ‘works’. Philosophy, theology, and other such fields of inquiry do not produce ‘results’ in this same way, and thus have no special status.
Somewhat controversially, Fodor classifies Richard Dawkins, Sam Harris, Peter Boghossian, Neil deGrasse Tyson, Lawrence Krauss and Stephen Hawking as exponents of crude positivism when they stray outside their respective fields of scientific expertise into other fields such as philosophy and social commentary. (Although, to be fair, Lawrence Krauss wrote an apology in a 2012 issue of Scientific American for seemingly dismissing the importance of philosophy in a previous interview he gave to The Atlantic.)
Fodor’s component (1) is a relatively uncontroversial viewpoint shared by most scientists and skeptics. Nevertheless, Fodor cautions that crude positivists often speak as if evidence is self-interpreting, such that a given piece of evidence automatically picks out one singular state of affairs over all other possibilities. In practice, however, this is almost never the case because the interpretation of evidence nearly always requires an elaborate network of background knowledge and pre-existing theory. For instance, the raw data from most scientific observations or experiments are unintelligible without the use of background scientific theories and methodologies.
It is Fodor’s components (2) and (3) that are likely to be more controversial, and so I will now discuss them in more detail.
The folly of scientism
What is ‘scientism’ – and how is it different from the natural enthusiasm for science that most skeptics share? Unlike logical positivism, scientism is not a serious intellectual movement. The term is almost never used by its exponents to describe themselves. Instead, the word scientism is mainly used pejoratively when criticising scientists for attempting to extend the boundaries of science beyond empiricism.
Warwick University philosopher Prof. Tom Sorell has defined scientism as: ‘a matter of putting too high a value on natural science in comparison with other branches of learning or culture.’ In summary, a commitment to one or more of the following statements lays one open to the charge of scientism:
- The natural sciences are more important than the humanities for an understanding of the world in which we live, or even all we need to understand it;
- Only a scientific methodology is intellectually acceptable. Therefore if the humanities are to be a genuine part of human knowledge they must adopt it; and
- Philosophical problems are scientific problems and should only be dealt with as such.
At the 2016 Australian Skeptics National Convention, former President of Australian Skeptics Inc., Peter Bowditch, criticised a recent video made by TV science communicator Bill Nye in which Nye responded to a student asking him: ‘Is philosophy meaningless?’ In his rambling answer, Nye confused questions of consciousness and reality, opined that philosophy was irrelevant to answering such questions, and suggested that our own senses are more reliable than philosophy. Bowditch observed that ‘the problem with his [Nye’s] comments was not that they were just wrong about philosophy; they were fractally wrong. Nye didn’t know what he was talking about. His concept of philosophy was extremely naïve.’ Bill Nye’s embarrassing blunder is perhaps ‘low-hanging fruit’; after trenchant criticism, Nye realised his error and began reading about philosophy for the first time.
Some distinguished scientists (not just philosophers) are becoming concerned about the pernicious influence of scientism. Biological sciences professor Austin Hughes (1949-2015) wrote ‘the temptation to overreach, however, seems increasingly indulged today in discussions about science. Both in the work of professional philosophers and in popular writings by natural scientists, it is frequently claimed that natural science does or soon will constitute the entire domain of truth. And this attitude is becoming more widespread among scientists themselves. All too many of my contemporaries in science have accepted without question the hype that suggests that an advanced degree in some area of natural science confers the ability to pontificate wisely on any and all subjects.’
Prof. Hughes notes that advocates of scientism today claim the sole mantle of rationality, frequently equating science with reason itself. Yet it seems the very antithesis of reason to insist that science can do what it cannot, or even that it has done what it demonstrably has not. He writes ‘as a scientist, I would never deny that scientific discoveries can have important implications for metaphysics, epistemology, and ethics, and that everyone interested in these topics needs to be scientifically literate. But the claim that science and science alone can answer longstanding questions in these fields gives rise to countless problems.’
Limitations of science
The editor of the philosophical journal Think and author of The Philosophy Gym, Prof. Stephen Law has identified two kinds of questions to which it is very widely supposed that science cannot supply answers:
Firstly, philosophical questions are for the most part conceptual, rather than scientific or empirical. They are usually answered by the use of reasoning rather than empirical observations. For example, Galileo conducted a famous thought experiment by reason alone. Imagine two objects, one light and one heavy, connected to each other by a string and dropped together from the top of a tower. If we assume that heavier objects fall faster than lighter ones (and conversely, that lighter objects fall slower), the string will soon pull taut as the lighter object retards the fall of the heavier one. But the two linked objects together are heavier than the heavy object alone, and therefore should fall faster than it. This logical contradiction forces us to conclude that the assumption that heavier objects fall faster is false. Galileo reached this conclusion in his head, without the assistance of any empirical experiment or observation. In doing so, he was employing philosophical rather than scientific methods.
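Galileo’s reductio can be set out in modern notation (a formalisation for illustration only; the symbols are mine, not Galileo’s). Suppose fall speed were a strictly increasing function $v$ of mass $m$, with a light body of mass $m_L$ and a heavy body of mass $m_H$:

```latex
\begin{align*}
\text{Assumption:}\quad & m_1 < m_2 \;\Rightarrow\; v(m_1) < v(m_2)\\[4pt]
\text{Linked bodies:}\quad & v(m_L) < v_{\mathrm{linked}} < v(m_H)
  \qquad \text{(the light body retards the heavy one)}\\[4pt]
\text{But also:}\quad & v_{\mathrm{linked}} = v(m_L + m_H) > v(m_H)
  \qquad \text{(since } m_L + m_H > m_H\text{)}
\end{align*}
```

The linked system cannot fall both slower and faster than the heavy body alone, so the assumption fails: fall speed cannot depend on mass.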
Secondly, moral questions are about what we ought or ought not to do. In contrast, the empirical sciences, on their own, appear capable of establishing only what is the case. This is known as the ‘is/ought gap’. Science can provide us with factual evidence that might influence our ethical judgements but it cannot provide us with the necessary ethical values or principles. For example, science can tell us how to build nuclear weapons, but it cannot tell us whether or not they should ever be used and under what circumstances. Clinical trials are conducted in medical science, often using treatment groups versus control groups of patients. It is bioethics rather than science that provides us with the moral principles for obtaining informed patient consent for participation in such clinical trials, especially when we consider that control groups of patients are being denied treatments that could be to their benefit.
I have given the above examples not to criticise science in any way, but simply to point out that science has limitations, and that there is a place for other fields of inquiry in addition to science.
Is pragmatism enough?
Coming back to Fodor’s component (3) of crude positivism, he makes a good point that a scientific explanation that ‘works’ is not necessarily true. For instance, Claudius Ptolemy of Alexandria (c. 90 CE – c. 168 CE) explained how to predict the behaviour of the planets by introducing the ad hoc notions of the deferent, the equant and epicycles to the geocentric model of what is now known as our solar system. This model was completely wrong, yet it produced accurate predictions of the motions of the planets – it ‘worked’. Another example is Gregor Mendel’s 19th century genetic experiments on wrinkled peas. These empirical experiments adequately explained the observed phenomena of genetic variation without Mendel even knowing what genes were or where they were located in living organisms.
[Image: Schematic diagram of Ptolemy’s incorrect geocentric model of the cosmos]
James Fodor argues that just because scientific theories can be used to make accurate predictions, this does not necessarily mean that science alone always provides us with accurate descriptions of reality. There is even a philosophical theory known as scientific instrumentalism, which holds that as long as a scientific theory makes accurate predictions, it does not really matter whether the theory corresponds to reality. The psychology of perception and the philosophies of mind and metaphysics could also be relevant. Fodor adds that many of the examples of science ‘delivering results’ are really applications of engineering and technology, rather than the discovery process of science itself.
Fodor concludes that if the key to the success of the natural sciences is adherence to rational methodologies and inferences, then it is those successful methods that we should focus on championing, whatever discipline they may be applied in, rather than the data sets collected in particular sciences.
Implications for science and skepticism
Physicist Ian Hutchinson writes ‘the health of science is in fact jeopardised by scientism, not promoted by it. At the very least, scientism provokes a defensive, immunological, aggressive response in other intellectual communities, in return for its own arrogance and intellectual bullyism. It taints science itself by association’. Hutchinson suggests that perhaps what the public is rejecting is not actually science itself, but a worldview that closely aligns itself with science – scientism. By disentangling these two concepts, we have a much better chance of enlisting public support for scientific research.
The late Prof. Austin Hughes left us with a prescient warning that continued insistence on the universal and exclusive competence of science will serve only to undermine the credibility of science as a whole. The ultimate outcome will be an increase in science denialism that questions the ability of science to address even the questions legitimately within its sphere of competence.
Ayer, Alfred J. (1936), Language, Truth and Logic, London: Penguin.
Bowditch, Peter ‘Is Philosophy Dead?’ Australasian Science July/August 2017.
Fodor, James ‘Not so simple’, Australian Rationalist, v. 103, December 2016, pp. 32–35.
Harding, Tim ‘I Think I Am’, The Skeptic, Vol. 37 No. 1. March 2017, pp. 40-44.
Hughes, Austin L ‘The Folly of Scientism’, The New Atlantis, Number 37, Fall 2012, pp. 32-50.
Hutchinson, Ian. (2011) Monopolizing Knowledge: A Scientist Refutes Religion-Denying, Reason-Destroying Scientism. Belmont, MA: Fias Publishing.
Krauss, Lawrence ‘The Consolation of Philosophy’ Scientific American Mind, April 27, 2012.
Law, Stephen, ‘Scientism, the limits of science, and religion’ Center for Inquiry (2016), Amherst, NY.
Novella, Steven (15 February 2013). ‘Scientific Skepticism, Rationalism, and Secularism’. Neurologica (blog). Retrieved 12 February 2017.
Sorell, Thomas (1994), Scientism: Philosophy and the Infatuation with Science, London: Routledge.
How do you know that what you know is true? That’s epistemology
Peter Ellerton, The University of Queensland
How do you know what the weather will be like tomorrow? How do you know how old the Universe is? How do you know if you are thinking rationally?
These and other questions of the “how do you know?” variety are the business of epistemology, the area of philosophy concerned with understanding the nature of knowledge and belief.
Epistemology is about understanding how we come to know that something is the case, whether it be a matter of fact such as “the Earth is warming” or a matter of value such as “people should not just be treated as means to particular ends”.
It’s even about interrogating the odd presidential tweet to determine its credibility.
Epistemology doesn’t just ask questions about what we should do to find things out; that is the task of all disciplines to some extent. For example, science, history and anthropology all have their own methods for finding things out.
Epistemology has the job of making those methods themselves the objects of study. It aims to understand how methods of inquiry can be seen as rational endeavours.
Epistemology, therefore, is concerned with the justification of knowledge claims.
The need for epistemology
Whatever the area in which we work, some people imagine that beliefs about the world are formed mechanically from straightforward reasoning, or that they pop into existence fully formed as a result of clear and distinct perceptions of the world.
But if the business of knowing things was so simple, we’d all agree on a bunch of things that we currently disagree about – such as how to treat each other, what value to place on the environment, and the optimal role of government in a society.
That we do not reach such an agreement means there is something wrong with that model of belief formation.
It is interesting that we individually tend to think of ourselves as clear thinkers and see those who disagree with us as misguided. We imagine that the impressions we have about the world come to us unsullied and unfiltered. We think we have the capacity to see things just as they really are, and that it is others who have confused perceptions.
As a result, we might think our job is simply to point out where other people have gone wrong in their thinking, rather than to engage in rational dialogue allowing for the possibility that we might actually be wrong.
But the lessons of philosophy, psychology and cognitive science teach us otherwise. The complex, organic processes that fashion and guide our reasoning are not so clinically pure.
Not only are we in the grip of a staggeringly complex array of cognitive biases and dispositions, but we are generally ignorant of their role in our thinking and decision-making.
Combine this ignorance with the conviction of our own epistemic superiority, and you can begin to see the magnitude of the problem. Appeals to “common sense” to overcome the friction of alternative views just won’t cut it.
We need, therefore, a systematic way of interrogating our own thinking, our models of rationality, and our own sense of what makes for a good reason. It can be used as a more objective standard for assessing the merit of claims made in the public arena.
This is precisely the job of epistemology.
Epistemology and critical thinking
One of the clearest ways to understand critical thinking is as applied epistemology. Issues such as the nature of logical inference, why we should accept one line of reasoning over another, and how we understand the nature of evidence and its contribution to decision making, are all decidedly epistemic concerns.
The American philosopher Harvey Siegel points out that these questions and others are essential in an education towards thinking critically.
By what criteria do we evaluate reasons? How are those criteria themselves evaluated? What is it for a belief or action to be justified? What is the relationship between justification and truth? […] these epistemological considerations are fundamental to an adequate understanding of critical thinking and should be explicitly treated in basic critical thinking courses.
To the extent that critical thinking is about analysing and evaluating methods of inquiry and assessing the credibility of resulting claims, it is an epistemic endeavour.
Engaging with deeper issues about the nature of rational persuasion can also help us to make judgements about claims even without specialist knowledge.
For example, epistemology can help clarify concepts such as “proof”, “theory”, “law” and “hypothesis” that are generally poorly understood by the general public and indeed some scientists.
In this way, epistemology serves not to adjudicate on the credibility of science, but to better understand its strengths and limitations and hence make scientific knowledge more accessible.
Epistemology and the public good
One of the enduring legacies of the Enlightenment, the intellectual movement that began in Europe during the 17th century, is a commitment to public reason. This was the idea that it’s not enough to state your position, you must also provide a rational case for why others should stand with you. In other words, to produce and prosecute an argument.
This commitment provides for, or at least makes possible, an objective method of assessing claims using epistemological criteria that we can all have a say in forging.
That we test each other’s thinking and collaboratively arrive at standards of epistemic credibility lifts the art of justification beyond the limitations of individual minds, and grounds it in the collective wisdom of reflective and effective communities of inquiry.
The sincerity of one’s belief, the volume or frequency with which it is stated, or assurances to “believe me” should not be rationally persuasive by themselves.
If a particular claim does not satisfy publicly agreed epistemological criteria, then it is the essence of scepticism to suspend belief. And it is the essence of gullibility to surrender to it.
A defence against bad thinking
There is a way to help guard against poor reasoning – ours and others’ – that draws from not only the Enlightenment but also from the long history of philosophical inquiry.
So the next time you hear a contentious claim from someone, consider how that claim can be supported if they or you were to present it to an impartial or disinterested person:
- identify reasons that can be given in support of the claim
- explain how your analysis, evaluation and justification of the claim and of the reasoning involved are of a standard worth someone’s intellectual investment
- write these things down as clearly and dispassionately as possible.
In other words, make the commitment to public reasoning. And demand of others that they do so as well, stripped of emotive terms and biased framing.
If you or they cannot provide a precise and coherent chain of reasoning, or if the reasons remain tainted with clear biases, or if you give up in frustration, it’s a pretty good sign that there are other factors in play.
It is the commitment to this epistemic process, rather than any specific outcome, that is the valid ticket onto the rational playing field.
At a time when political rhetoric is riven with irrationality, when knowledge is being seen less as a means of understanding the world and more as an encumbrance that can be pushed aside if it stands in the way of wishful thinking, and when authoritarian leaders are drawing ever larger crowds, epistemology needs to matter.
Peter Ellerton, Lecturer in Critical Thinking, Director of the UQ Critical Thinking Project, The University of Queensland
This article was originally published on The Conversation. (Reblogged by permission). Read the original article.
No, nanoparticles in baby formula will not harm your baby
Ian Musgrave, University of Adelaide
If you watched Channel 7 news this week, you would have learnt about a study commissioned by Friends of the Earth that found “potentially toxic” nanoparticles in Australian baby formula.
The study’s spokesperson said calcium phosphate nanoparticles (nano-hydroxyapatite, also known as nano-hydroxylapatite) caused kidney and liver damage. That claim was, how shall I put it kindly, just a little misleading.
I have before me the study the spokesperson mentioned. It was conducted in rats, not humans.
The researchers injected calcium phosphate nanoparticles directly into rats’ body cavities (instead of oral administration as happens with baby formula) at concentrations around a million times higher than found in the baby formula.
Let me quote from the study’s findings:
The normal levels of AST, ALT and A/G [liver enzymes indicating liver damage] in the n-HA [nano-hydroxyapatite] group suggested no inflammation and necrosis induced by accumulation of 100 mg of n-HA particles. In the liver function there was almost no damage. Moreover, no significant change on values of BUN and CR [urea and creatinine] than the control, which also suggested n-HA has no effect on renal function.
In other words, there were no ill effects on liver or kidney function, the direct opposite of what the media reports were claiming.
Even if you injected 100 milligrams of pure nano-hydroxyapatite directly into a newborn baby’s body (equivalent in baby terms to the dose given to the rats) there would be no significant effect on liver or kidney function.
The spokesperson’s misleading message caused unwarranted concern. On a now deleted Sunrise Facebook post discussing this report, the commenters’ concern and fear was palpable. Causing unreasonable fear is irresponsible.
Nanoparticles occur naturally
Nanoparticles have become the latest bogeyman, despite nanoparticles occurring naturally. The media report that fuelled the controversy failed to put nanoparticles in their natural biological context, to provide any significant support for the claim that the particles detected in milk are engineered nanomaterials, or to provide evidence of harm at the levels found.
Infant formula is based on milk, which naturally contains calcium and phosphorus (as calcium phosphates). Milk is an important source of calcium, which forms the basis of bones and teeth. The calcium and phosphates are in a complex balance between soluble and protein-bound forms.
One of the forms of calcium phosphate in milk is hydroxyapatite (also found in tooth enamel). So it is unsurprising that hydroxyapatite is found in dried infant formula, which is mainly dried milk powder.
Nanometre-sized particles of calcium phosphate also form naturally in drying milk.
Other studies have found no effect
Researchers have studied the safety of consuming hydroxyapatite nanoparticles before.
Animals who ate the nanoparticles (added to their food, as opposed to having them injected) showed no toxicity at levels well above those present in milk (up to 100 milligrams per kilogram of body weight a day for a year).
Even if you inject them (into veins or into body cavities), you need levels well above those found in infant formulas to cause damage (50 milligrams nano-hydroxyapatite per kilogram body weight in rats).
To give you an idea of how much higher this is with respect to infant formula, the highest level of hydroxyapatite nanoparticles found in any formula was 287 particles in 10 grams of formula.
Yes, that’s particles: not milligrams, not micrograms, but actual particles. We are talking nano- to femtograms here, amounts so small they are hard to visualise. These levels are more than a million times lower than the levels found to have produced no effects in animals (and lower still than the levels that do cause damage).
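The arithmetic behind this comparison can be sketched as a back-of-envelope calculation. The particle size, density, baby weight and daily intake below are illustrative assumptions of mine, not figures from the article:

```python
import math

# Hedged assumptions (not stated in the article):
# - particles are spheres of ~100 nm diameter
# - bulk density of hydroxyapatite is ~3.16 g/cm^3
diameter_cm = 100e-7          # 100 nm expressed in cm
density_g_cm3 = 3.16

radius_cm = diameter_cm / 2
particle_mass_g = density_g_cm3 * (4 / 3) * math.pi * radius_cm**3
print(f"mass per particle: {particle_mass_g:.2e} g")   # on the order of femtograms

# Highest reported count: 287 particles per 10 g of formula powder.
mass_per_10g = 287 * particle_mass_g

# Suppose a 5 kg baby consumes ~130 g of powder per day (illustrative figure).
daily_exposure_g = mass_per_10g * (130 / 10)

# No-effect oral dose from the animal studies: 100 mg/kg of body weight per day.
no_effect_g = 100e-3 * 5      # for a 5 kg baby

ratio = no_effect_g / daily_exposure_g
print(f"no-effect dose is roughly {ratio:.0e} times the daily exposure")
```

On these assumptions the no-effect dose comes out around ten orders of magnitude above the daily exposure, comfortably supporting the ‘million times or more’ claim.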
Hydroxyapatite nanoparticles have been widely developed to aid bone repair and to deliver drugs, and they have been extensively tested. All results suggest that even the levels required for drug delivery, well above those found in baby formula, have no significant adverse effects.
The body dissolves the nanoparticles anyway
These nanoparticles will also not stay nanoparticles: they dissolve in the stomach fluids, allowing their calcium to be absorbed.
The stomach fluids of newborns and very young babies are less acidic (around pH 5) than those of older babies and young children, but still acidic enough to dissolve hydroxyapatite.
And particles are more easily dissolved the smaller they are. So, nanoparticles are likely to be even more rapidly dissolved into their component calcium and phosphate ions than larger particles.
What’s the take-home message?
There are no significant public health implications for these small crystals of naturally occurring calcium phosphates in milk-based baby formula.
The way the Friends of the Earth study results have been presented, with misleading references to irrelevant studies, has caused unnecessary fear and concern, and may lead some to abandon formula unnecessarily, with negative impacts on baby health.
Ian Musgrave, Senior lecturer in Pharmacology, University of Adelaide
This article was originally published on The Conversation. (Reblogged by permission). Read the original article.
Skepticism – philosophical or scientific?
by Tim Harding B.Sc., B.A.
(This essay is based on a talk presented to the Victorian Skeptics in January 2017. An edited version was published in The Skeptic magazine Vol.37, No.1, March 2017, under the title ‘I Think I Am’).
Dictionaries often draw a distinction between the modern common meaning of skepticism and its traditional philosophical meaning, which dates from antiquity. The common dictionary definition is ‘a sceptical attitude; doubt as to the truth of something’; whereas the philosophical definition is ‘the theory that some or all types of knowledge are impossible’. These definitions are of course quite different, and reflect the fact that the meanings of philosophical terms have drifted over the millennia. The contemporary meaning of ‘scientific skepticism’ is different again, which I shall talk about later.
I should say at the outset that whilst I have a foot in both the scientific and philosophical camps, and although I will be writing here mainly about the less familiar philosophical skepticism, I personally support scientific skepticism over philosophical skepticism, for reasons I shall later explain.
But why are these definitions of skepticism important? And why do we spell it with a ‘k’ instead of a ‘c’? As an admin of a large online skeptics group (Skeptics in Australia), I am often asked such questions, so I have done a bit of investigating.
As to the first question, one of the main definitional issues I have faced is the difference between skepticism and what I call denialism. Some skeptical newbies typically do a limited amount of googling, and what they often come up with is the common dictionary definition of skepticism, rather than the lesser known scientific skepticism definition that we Australian skeptics use. They tend to think that ‘scepticism’ (with a ‘c’) entails doubting or being skeptical of everything, including science, medicine, vaccination, biotechnology, moon landings, 9/11 etc, etc. When we scientific skeptics express a contrary view, we are sometimes then accused of ‘not being real sceptics’. So I think that definitions are important.
In my view, denialism is a person’s choice to deny certain particular facts. It is an essentially irrational belief where the person substitutes his or her personal opinion for established knowledge. Science denialism is the rejection of basic facts and concepts that are undisputed, well-supported parts of the scientific consensus on a subject, in favour of radical and controversial opinions of an unscientific nature. Most real skeptics accept the findings of peer-reviewed science published in reputable scientific journals, at least for the time being, unless and until it is corrected by the scientific community.
Denialism can then give rise to conspiracy theories, as a way of trying to explain the discrepancy between scientific facts and personal opinions. Here is the typical form of what I call the Scientific Conspiracy Fallacy:
Premise 1: I hold a certain belief.
Premise 2: The scientific evidence is inconsistent with my belief.
Conclusion: Therefore, the scientists are conspiring with the Big Bad Government/CIA/NASA/Big Pharma (choose whichever is convenient) to fake the evidence and undermine my belief.
It is a tall order to argue that the whole of science is genuinely mistaken. That is a debate that even the conspiracy theorists know they probably can’t win. So the most convenient explanation for the inconsistency is that scientists are engaged in a conspiracy to fake the evidence in specific cases.
Ancient Greek Skepticism
The word ‘skeptic’ originates from the early Greek skeptikos, meaning ‘inquiring, reflective’.
The Hellenistic period spans Greek and Mediterranean history between the death of Alexander the Great in 323 BCE and the Roman victory over the Greeks at the Battle of Corinth in 146 BCE. The beginning of this period also roughly coincides with the death of the great philosopher, logician and scientist Aristotle of Stagira (384–322 BCE).
As he had no adult heir, Alexander’s empire was divided between the families of three of his generals. This resulted in political conflicts and civil wars, in which prominent philosophers and other intellectuals did not want to take sides, in the interests of self-preservation. So they retreated from public life into various cloistered schools of philosophy, the main ones being the Stoics, the Epicureans, the Cynics and the Skeptics.
As I mentioned earlier, the meanings of such philosophical terms have altered over 2000 years. These philosophical schools had different theories as to how to attain eudaimonia, which roughly translates as the highest human good, or the fulfilment of human life. They thought that the key to eudaimonia was to live in accordance with Nature, but they had different views as to how to achieve this.
In a nutshell, the Stoics advocated the development of self-control and fortitude as a means of overcoming destructive emotions. The Epicureans regarded absence of pain and suffering as the source of happiness (not just hedonistic pleasure). The Cynics (which means ‘dog like’) rejected conventional desires for wealth, power, health, or fame, and lived a simple life free from possessions. Lastly, there were the Skeptics, whom I will now discuss in more detail.
During this Hellenistic period, there were actually two philosophical varieties of skepticism – the Academic Skeptics and the Pyrrhonist Skeptics.
In 266 BCE, Arcesilaus became head of the Platonic Academy. The Academic Skeptics did not doubt the existence of truth in itself, only our capacities for obtaining it. They went as far as holding that knowledge is impossible – nothing can be known at all. Carneades, a later head of the Academy, modified this rather extreme position, holding that ideas or notions can never be known to be true, only probable. He thought there are degrees of probability, hence degrees of belief, leading to degrees of justification for action. Academic Skepticism never really caught on, and largely died out in the first century CE, with isolated attempts at revival from time to time.
The founder of Pyrrhonist Skepticism, Pyrrho of Elis (c. 365 – c. 275 BCE), was born in Elis on the west side of the Peloponnesian Peninsula (near Olympia). Pyrrho travelled with Alexander the Great on his exploration of the East. He encountered the Magi in Persia and even went as far as the Gymnosophists in India, who were naked ascetic gurus – not exactly a good image for modern skepticism.
Pyrrho differed from the Academic Skeptics in thinking that nothing can be known for certain. He thought that their position that ‘nothing can be known at all’ was dogmatic and self-contradictory, because it is itself a claim of certainty. Pyrrho held that the senses are easily fooled, and that reason too easily follows our desires. Therefore we should withhold assent from non-evident propositions and remain in a state of perpetual inquiry about them. This means that we are not necessarily skeptical of ‘evident propositions’, and that at least some knowledge is possible. This position is closer to modern skepticism than Academic Skepticism is. Indeed, ‘Pyrrhonism’ became a synonym for skepticism in the 17th century CE; but we are not quite there yet.
Sextus Empiricus (c. 160 – c. 210 CE) was a Greco-Roman philosopher who promoted Pyrrhonian skepticism. It is thought that the word ‘empirical’ comes from his name, although the Greek word empeiria also means ‘experience’. Sextus Empiricus questioned the validity of inductive reasoning, positing that a universal rule could not be established from an incomplete set of particular instances, thus presaging David Hume’s ‘problem of induction’ some 1,500 years later.
Skeptic with a ‘k’
The Romans were great inventors and engineers, but they are not renowned for science or skepticism. On the contrary, they are better known for being superstitious; for instance, the Roman Senate sat only on ‘auspicious days’ thought to be favoured by the gods. They had lots of pseudoscientific beliefs that we skeptics would now regard as quackery or woo. For example, they thought that cabbage was a cure for many illnesses; and in around 78CE, the Roman author Pliny the Elder wrote: ‘I find that a bad cold in the head clears up if the sufferer kisses a mule on the nose’.
So I cannot see any valid historical reason for us to switch from the early Greek spelling of ‘skeptic’ to the Romanised ‘sceptic’. Yes, I know that ‘skeptic’ is the American spelling and ‘sceptic’ is the British spelling, but I don’t think that alters anything. The most likely explanation is that the Americans adopted the spelling of the early Greeks and the British adopted that of the Romans.
Modern philosophical skepticism
Somewhat counter-intuitively, the term ‘modern philosophy’ is used to distinguish more recent philosophy from the ancient philosophy of the early Greeks and the medieval philosophy of the Christian scholastics. ‘Modern philosophy’ thus dates from the Renaissance of the 14th to the 17th centuries, although precisely when it began within the Renaissance period is a matter of some scholarly dispute.
The defining feature of modern philosophical skepticism is its questioning of the validity of some or all types of knowledge. So before going any further, we need to define knowledge.
The branch of philosophy dealing with the study of knowledge is called ‘epistemology’. The ancient philosopher Plato famously defined knowledge as ‘justified true belief’ – the overlap of what is believed, what is true and what is justified. According to this definition, a belief’s being true is not sufficient for it to qualify as knowledge – a belief based on faith, or even just a guess, could happen to be true by mere coincidence. So we need adequate justification of the truth of a belief for it to count as knowledge. Although there are a few exceptions, known as ‘Gettier problems’, this definition of knowledge is still largely accepted by modern philosophers, and will do for our purposes here. (Epistemology is mainly about the justification of true beliefs, rather than this basic definition of knowledge.)
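Plato’s definition is a simple conjunction of three conditions, which can be sketched in a few lines of code (a toy illustration only, not a serious formalisation – the function name and flags are mine):

```python
# Toy model of Plato's 'justified true belief' definition of knowledge.
# A proposition counts as knowledge only when all three conditions hold.

def is_knowledge(believed: bool, true: bool, justified: bool) -> bool:
    return believed and true and justified

# A lucky guess: believed and true, but unjustified -- so not knowledge.
print(is_knowledge(believed=True, true=True, justified=False))  # False

# A justified true belief qualifies (setting aside Gettier problems).
print(is_knowledge(believed=True, true=True, justified=True))   # True
```

The point of the sketch is simply that truth alone is not enough: remove any one of the three conditions and the claim falls outside the definition.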
There are also different types of knowledge that are relevant to this discussion.
A priori knowledge is knowledge that is known independently of experience. For instance, we know that ‘all crows are birds’ without having to conduct an empirical survey of crows to investigate how many are birds and whether there are any crows that are not birds. Crows are birds by definition – it is just impossible for there to be an animal that is a crow but is not a bird.
On the other hand, a posteriori knowledge is knowledge that is known by experience. For instance, we only know that ‘all crows are black’ from empirical observations of crows. It is not impossible that there is a crow that is not black, for example as a result of some genetic mutation.
The above distinction illustrates that not all knowledge needs to be empirical. Indeed, one of the earliest modern philosophers and skeptics, René Descartes (1596–1650), was a French mathematician, scientist and philosopher. (His name is the source of the mathematical term ‘Cartesian’.) These three interests were interrelated, in the sense that he took a mathematical and scientific approach to his philosophy. Mathematics ‘delighted him because of its certainty and clarity’. His fundamental aim was to attain philosophical truth by the use of reason and logical methods alone. For him, the only kind of knowledge was that of which he could be certain. His ideal of philosophy was to discover hitherto uncertain truths implied by more fundamental certain truths, in a manner similar to mathematical proofs.
Using this approach, Descartes engaged in a series of meditations to find a foundational truth of which he could be certain, and then to build on that foundation a body of implied knowledge of which he could also be certain. He did this methodically, first withholding assent from opinions that are not completely certain – that is, where there is at least some reason for doubt, such as those acquired from the senses. Descartes concluded that one proposition of which he could be certain is ‘Cogito, ergo sum’ (‘I think, therefore I exist’).
In contrast to Descartes, a different type of philosophical skeptic, David Hume (1711–1776), held that all human knowledge is ultimately founded solely in ‘experience’. In what has become known as ‘Hume’s fork’, he divided statements into two types: statements about ideas, which are necessary and knowable a priori; and statements about the world, which are contingent and knowable a posteriori.
In modern philosophical terminology, members of the first group are known as analytic propositions and members of the latter as synthetic propositions. Into the first class fall statements such as ‘2 + 2 = 4’, ‘all bachelors are unmarried’, and truths of mathematics and logic. Into the second class fall statements like ‘the sun rises in the morning’, and ‘the Earth has precisely one moon’.
Hume tried to show that certainty does not exist in science. First, he noted that statements of the second type can never be entirely certain, due to the fallibility of our senses, the possibility of deception (compare the modern ‘brain in a vat’ hypothesis) and other arguments made by philosophical skeptics. It is always logically possible that any given statement about the world is false – hence the need for doubt and skepticism.
Hume formulated the ‘problem of induction’, which is the skeptical question of whether inductive reasoning leads to knowledge understood in the classic philosophical sense. This problem focuses on the alleged lack of justification for generalising about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that ‘all swans we have seen are white, and therefore, all swans are white’, before the discovery of black swans in Western Australia).
Immanuel Kant (1724–1804) was (and remains) a major philosophical figure who tried to show a way beyond the impasse that modern philosophy had reached between rationalists such as Descartes and empiricists such as Hume. Kant is widely held to have synthesised these two early modern philosophical traditions. And yet he was also a skeptic, albeit of a different variety. Kant thought that only knowledge gained from empirical science is legitimate – a forerunner of modern scientific skepticism. He regarded metaphysics as illegitimate and largely speculative; and in that sense he was a philosophical skeptic.
In 1924, the Spanish philosopher Miguel de Unamuno disputed the common dictionary definition of skepticism. He argued that ‘skeptic does not mean him who doubts, but him who investigates or researches as opposed to him who asserts and thinks that he has found’. Sounds familiar, doesn’t it?
Modern scientific skepticism is different from philosophical skepticism, and yet to some extent was influenced by the ideas of Pyrrho of Elis, David Hume, Immanuel Kant and Miguel de Unamuno.
Most skeptics in the English-speaking world see the 1976 formation of the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) in the United States as the ‘birth of modern skepticism’. (CSICOP is now called the Committee for Skeptical Inquiry – CSI). However, CSICOP founder and philosophy professor Paul Kurtz has said that he actually modelled it after the Belgian Comité Para of 1949. The Comité Para was partly formed as a response to a predatory industry of bogus psychics who were exploiting the grieving relatives of people who had gone missing during the Second World War.
Kurtz recommended that CSICOP focus on testable paranormal and pseudoscientific claims and to leave religious aspects to others. CSICOP popularised the usage of the terms ‘skeptic’, ‘skeptical’ and ‘skepticism’ by its magazine, Skeptical Inquirer, and directly inspired the foundation of many other skeptical organizations throughout the world, including the Australian Skeptics in 1980.
Through the public activism of groups such as CSICOP and the Australian Skeptics, the term ‘scientific skepticism’ has come to symbolise an activist movement as well as a type of applied philosophy.
There are several definitions of scientific skepticism, but the two that I think are most apt are those by the Canadian skeptic Daniel Loxton and the American skeptic Steven Novella.
Daniel Loxton’s definition is ‘the practice or project of studying paranormal and pseudoscientific claims through the lens of science and critical scholarship, and then sharing the results with the public.’
Steven Novella’s definition is ‘scientific skepticism is the application of skeptical philosophy, critical thinking skills, and knowledge of science and its methods to empirical claims, while remaining agnostic or neutral to non-empirical claims (except those that directly impact the practice of science).’ By this exception, I think he means religious beliefs that conflict with science, such as creationism or opposition to stem cell research.
In other words, scientific skeptics maintain that empirical investigation of reality leads to the truth, and that the scientific method is best suited to this purpose. Scientific skeptics attempt to evaluate claims based on verifiability and falsifiability and discourage accepting claims on faith or anecdotal evidence. This is different to philosophical skepticism, although inspired by it.
Descartes, R. (1641) Meditations on First Philosophy: With Selections from the Objections and Replies. Trans. and ed. John Cottingham. Cambridge: Cambridge University Press.
Hume, D. (1748) An Enquiry Concerning Human Understanding. Gutenberg Press.
Kant, I. (1787) Critique of Pure Reason. 2nd edition. Cambridge: Cambridge University Press.
Loxton, D. (2013) Why Is There a Skeptical Movement? (PDF). Retrieved 12 January 2017.
Novella, S. (2013) ‘Scientific Skepticism, Rationalism, and Secularism’. Neurologica (blog), 15 February 2013. Retrieved 12 February 2017.
Russell, B. (1961) History of Western Philosophy. 2nd edition. London: George Allen & Unwin.
Unamuno, M. de (1924) Essays and Soliloquies. London: Harrap.
If you find the information on this blog useful, you might like to consider supporting us.
Filed under Essays and talks
Australian Skeptic’s Guide to Numerology – Update
You may have noticed that we’ve been updating and recycling our “Australian Skeptics Guide to….” features. Usually we wait five years or more between reposts, but so much has happened recently that an update of the January 2016 update of An Australian Skeptic’s Guide to Numerology seems like a good idea.
It takes into account the election of Donald Trump, the demotion of Sussan Ley, and the enthusiasm for numerology displayed by Bollywood celebrities. Scroll down to January 2016, or go HERE
The Fallacy of Faulty Risk Assessment
by Tim Harding
(An edited version of this essay was published in The Skeptic magazine, September 2016, Vol 36 No 3)
Australian Skeptics have tackled many false beliefs over the years, often in co-operation with other organisations. We have had some successes – for instance, belief in homeopathy finally seems to be on the wane. Nevertheless, false beliefs about vaccination and fluoridation just won’t lie down and die – despite concerted campaigns by medical practitioners, dentists, governments and more recently the media. Why are these beliefs so immune to evidence and arguments?
There are several possible explanations for the persistence of these false beliefs. One is denialism – the rejection of established facts in favour of personal opinions. Closely related are conspiracy theories, which typically allege that facts have been suppressed or fabricated by ‘the powers that be’, in an attempt by denialists to explain the discrepancies between their opinions and the findings of science. A third possibility is an error of reasoning or fallacy known as Faulty Risk Assessment, which is the topic of this article.
Before going on to discuss vaccination and fluoridation in terms of this fallacy, I would like to talk about risk and risk assessment in general.
What is risk assessment?
Hardly anything we do in life is risk-free. Whenever we travel in a car or even walk along a footpath, most people are aware that there is a small but finite risk of being injured or killed. Yet this risk does not keep us away from roads. We intuitively make an informal risk assessment that the level of this risk is acceptable in the circumstances.
In more formal terms, ‘risk’ may be defined as the probability or likelihood of something bad happening multiplied by the severity of the consequences if it does happen. Risk analysis is the process of discovering what risks are associated with a particular hazard, including the mechanisms that cause the hazard, then estimating the likelihood that the hazard will occur and the consequences if it does occur.
Risk assessment is the determination of the acceptability of a risk using two dimensions of measurement – the likelihood of an adverse event occurring, and the severity of the consequences if it does occur. (This two-dimensional risk assessment is a conceptually useful way of ranking risks, even if one or both of the dimensions cannot be measured quantitatively.)
By way of illustration, the likelihood of something bad happening could be very low, but the consequences could be unacceptably high – enough to justify preventative action. Conversely, the likelihood of an event could be higher, but the consequences could be low enough to justify ‘taking the risk’.
In assessing the consequences, consideration needs to be given to the size of the population likely to be affected, and the severity of the impact on those affected. This will provide an indication of the aggregate effect of an adverse event. For example, ‘high’ consequences might include significant harm to a small group of affected individuals, or moderate harm to a large number of individuals.
The fallacy is committed when a person focuses on the risks of an activity while ignoring its benefits, and/or takes account of only one dimension of risk assessment while overlooking the other.
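The difference between a proper two-dimensional assessment and the faulty one-dimensional kind can be sketched in a few lines. The scoring scales and thresholds below are illustrative assumptions of mine, not from any regulatory standard:

```python
# Sketch of a two-dimensional risk matrix versus the faulty one-dimensional
# assessment. Scores and thresholds are illustrative assumptions only.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def two_dimensional_rating(likelihood: str, consequence: str) -> str:
    # Combine both dimensions into a single rating.
    score = LEVELS[likelihood] * LEVELS[consequence]
    if score >= 6:
        return "extreme"
    if score >= 3:
        return "significant"
    return "low"

def one_dimensional_rating(likelihood: str) -> str:
    # The faulty assessment: likelihood alone, consequences ignored.
    return "significant" if LEVELS[likelihood] >= 2 else "low"

# An unlikely event with catastrophic consequences (such as a major city
# running out of water) looks ignorable in one dimension, but not in two:
print(one_dimensional_rating("low"))          # low
print(two_dimensional_rating("low", "high"))  # significant
```

The one-dimensional function dismisses any unlikely event, however severe; the two-dimensional one still flags a low-likelihood, high-consequence risk as worth acting on.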
To give a practical example of a one-dimensional risk assessment, the desalination plant to augment Melbourne’s water supply has been called a ‘white elephant’ by some people, because it has not been needed since the last drought broke in March 2010. But this criticism ignores the catastrophic consequences that could have occurred had the drought not broken. In June 2009, Melbourne’s water storages fell to 25.5% of capacity, the lowest level since the huge Thomson Dam began filling in 1984. This downward trend could have continued at that time, and could well be repeated during the inevitable next drought.
Melbourne’s desalination plant at Wonthaggi
No responsible government could afford to ‘take the risk’ of a major city of more than four million people running out of water. People in temperate climates can survive without electricity or gas, but are likely to die of thirst in less than a week without water, not to mention the hygiene crisis that would occur without washing or toilet flushing. The failure to safeguard the water supply of a major city is one of the most serious derelictions of government responsibility imaginable.
Turning now to the anti-vaccination and anti-fluoridation movements, they both commit the fallacy of Faulty Risk Assessment. They focus on the very tiny likelihood of adverse side effects without considering the major benefits to public health from vaccination and the fluoridation of public water supplies, and the potentially severe consequences of not vaccinating or fluoridating.
The benefits of vaccination far outweigh its risks for all of the diseases where vaccines are available. This includes influenza, pertussis (whooping cough), measles and tetanus – not to mention the terrible diseases that vaccination has eradicated from Australia such as smallpox, polio, diphtheria and tuberculosis.
As fellow skeptic Dr. Rachael Dunlop puts it: ‘In many ways, vaccines are a victim of their own success, leading us to forget just how debilitating preventable diseases can be – not seeing kids in calipers or hospital wards full of iron lungs means we forget just how serious these diseases can be.’
No adult or teenager has ever died or become seriously ill in Australia from the side effects of vaccination; yet large numbers of people have died from the lack of vaccination. The notorious Wakefield allegation in 1998 of a link between vaccination and autism has been discredited, retracted and found to be fraudulent. Further evidence comes from a recently published exhaustive review of 12,000 research articles covering eight different vaccines, which also concluded that there is no link between vaccines and autism.
According to Professor C Raina MacIntyre of UNSW, ‘Influenza virus is a serious infection, which causes 1,500 to 3,500 deaths in Australia each year. Death occurs from direct viral effects (such as viral pneumonia) or from complications such as bacterial pneumonia and other secondary bacterial infections. In people with underlying coronary artery disease, influenza may also precipitate heart attacks, which flu vaccine may prevent.’
In 2010, increased rates of high fever and febrile convulsions were reported in children under 5 years of age after they were vaccinated with the Fluvax vaccine. This vaccine has not been registered for use in this age group since late 2010 and therefore should not be given to children under 5 years of age. The available data indicate that there is a very low risk of fever, which is usually mild and transient, following vaccination with the other vaccine brands. Any of these other vaccines can be used in children aged 6 months and older.
Australia was declared measles-free in 2005 by the World Health Organization (WHO) – before we stopped being so vigilant about vaccinating and outbreaks began to reappear. The impact of vaccine complacency can be observed in the 2015 measles epidemic in Wales, where there were over 800 cases and one death, and many of those presenting were in the age group that missed out on MMR vaccination following the Wakefield scare.
After the link to autism was disproven, many anti-vaxers shifted the blame to thiomersal, a mercury-containing component of relatively low toxicity to humans. Small amounts of thiomersal were used as a preservative in some vaccines, but not the MMR vaccine. Thiomersal was removed from all scheduled childhood vaccines in 2000.
In terms of risk assessment, Dr. Dunlop has pointed out that no vaccine is 100% effective and vaccines are not an absolute guarantee against infection. So while it’s still possible to get the disease you’ve been vaccinated against, disease severity and duration will be reduced. Those who are vaccinated have fewer complications than people who aren’t. With pertussis (whooping cough), for example, severe complications such as pneumonia and encephalitis (brain inflammation) occur almost exclusively in the unvaccinated. So since the majority of the population is vaccinated, it follows that most people who get a particular disease will be vaccinated, but critically, they will suffer fewer complications and long-term effects than those who are completely unprotected.
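The arithmetic behind that last point is worth making explicit. With high coverage, even an effective vaccine leaves most cases among the vaccinated, simply because almost everyone is vaccinated. A back-of-envelope sketch (the coverage and effectiveness figures are hypothetical, chosen only to illustrate the calculation):

```python
# Why most cases can occur in vaccinated people even when the vaccine works.
# Assumes a uniform baseline attack rate among the unvaccinated; coverage
# and effectiveness figures below are hypothetical.

def share_of_cases_vaccinated(coverage: float, effectiveness: float) -> float:
    # Relative case counts per unit of baseline attack rate:
    vaccinated_cases = coverage * (1 - effectiveness)
    unvaccinated_cases = 1 - coverage
    return vaccinated_cases / (vaccinated_cases + unvaccinated_cases)

# 95% coverage, with the vaccine preventing 90% of infections:
share = share_of_cases_vaccinated(coverage=0.95, effectiveness=0.90)
print(f"{share:.0%} of cases are in vaccinated people")  # about 66%
```

So roughly two-thirds of cases occur in the vaccinated majority even though each vaccinated person is ten times less likely to be infected – and, as noted above, their cases are milder.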
Public water fluoridation is the adjustment of the natural levels of fluoride in drinking water to a level that helps protect teeth against decay. In many (but not all) parts of Australia, reticulated drinking water has been fluoridated since the early 1960s.
The benefits of fluoridation are well documented. In November 2007, the NHMRC completed a review of the latest scientific evidence in relation to fluoride and health. Based on this review, the NHMRC recommended community water fluoridation programs as the most effective and socially equitable community measure for protecting the population from tooth decay. The scientific and medical support for the benefits of fluoridation certainly outweighs the claims of the vocal minority against it.
Fluoridation opponents over the years have claimed that putting fluoride in water causes health problems, is too expensive and is a form of mass medication. Some conspiracy theorists go as far as to suggest that fluoridation is a communist plot to lower children’s IQ. Yet, there is no evidence of any adverse health effects from the fluoridation of water at the recommended levels. The only possible risk is from over-dosing water supplies as a result of automated equipment failure, but there is inline testing of fluoride levels with automated water shutoffs in the remote event of overdosing. Any overdose would need to be massive to have any adverse effect on health. The probability of such a massive overdose is extremely low.
Tooth decay remains a significant problem. In Victoria, for instance, more than 4,400 children under 10, including 197 two-year-olds and 828 four-year-olds, required a general anaesthetic in hospital for the treatment of dental decay during 2009–10. Indeed, 95% of all preventable dental admissions to hospital for children up to nine years old in Victoria are due to dental decay. Children under ten in non-optimally fluoridated areas are twice as likely to require a general anaesthetic for treatment of dental decay as children in optimally fluoridated areas.
As fellow skeptic and pain management specialist Dr. Michael Vagg has said, “The risks of general anaesthesia for multiple tooth extractions are not to be idly contemplated for children, and far outweigh the virtually non-existent risk from fluoridation.” So in terms of risk assessment, the risks from not fluoridating water supplies are far greater than the risks of fluoridating.
Implications for skeptical activism
Anti-vaxers and anti-fluoridationists who are motivated by denialism and conspiracy theories tend to believe whatever they want to believe, and dogmatically so. Thus evidence and arguments are unlikely to have much influence on them.
But not all anti-vaxers and anti-fluoridationists fall into this category. Some may have been misled by false information, and thus could possibly be open to persuasion if the correct information is provided.
Others might even be aware of the correct information, but are assessing the risks fallaciously in the ways I have described in this article. Their errors are not ones of fact, but errors of reasoning. They too might be open to persuasion if education about sound risk assessment is provided.
I hope that analysing the false beliefs about vaccination and fluoridation from the perspective of the Faulty Risk Assessment Fallacy has provided yet another weapon in the skeptical armoury against these false beliefs.
Rachael Dunlop (2015) Six myths about vaccination – and why they’re wrong. The Conversation, Parkville.
C Raina MacIntyre (2016) Thinking about getting the 2016 flu vaccine? Here’s what you need to know. The Conversation, Parkville.
Mike Morgan (2012) How fluoride in water helps prevent tooth decay. The Conversation, Parkville.
Michael Vagg (2013) Fluoride conspiracies + activism = harm to children. The Conversation, Parkville.
Government of Victoria (2014) Victorian Guide to Regulation. Department of Treasury and Finance, Melbourne.
A Skeptic’s Guide to Dowsing
This article first appeared as a Vic Skeptics discussion pamphlet and again here in 2010.
The full range of our discussion pamphlets can be downloaded from our USEFUL INFO page.
Dowsing, (also known as Divining) is widely practised in Australia. Dowsers claim the ability to detect useful substances in the ground using processes which are not able to be explained by current scientific principles.
The most frequently dowsed substance in drought-prone Australia is water. Many Australians can claim a friend or relative who is a water-diviner.
Australian Skeptics have long been interested in dowsing. It clearly lies within the range of paranormal activities which come under scrutiny. Australian Skeptics offer a sum of money, (currently $100,000) to anyone who can demonstrate paranormal ability of any kind. Our only stipulation is that candidates must pass a proper scientific test, the protocols of which have been agreed upon by all parties before…
A Skeptic’s Guide to Astrology
This is an edited repost of an article which first appeared here in August 2010. You can also download a similar classroom discussion pamphlet (and a lot more) from our USEFUL INFO page.
The basic proposition of Western Astrology is that your personality and fate are influenced by the apparent positions and motions of heavenly bodies.
A Skeptic’s Guide to Conspiracy Theories
This article first appeared as a Vic Skeptics discussion pamphlet.
The full range of our discussion pamphlets (and a lot more) can be downloaded from our USEFUL INFO page.
A Skeptic’s Guide to Conspiracy Theories
by Peter Barrett, Canberra Skeptics (2016 edit by Ken Greatorex)
Test 1: Is the argument factually correct?
It’s remarkable how many conspiracy theories are based on arguments which are simply factually incorrect. If you’re presented with a conspiracy theory argument, check the facts.
[Sites such as
are useful here.]
Many incorrect arguments are repeated in ignorance. But there are also some people who knowingly repeat conspiracy arguments that they know to be wrong.