Category Archives: Essays and talks

Essays and talks written by Tim Harding

Skepticism, Science and Scientism

By Tim Harding B.Sc.

(An edited version of this essay was published in The Skeptic magazine,
September 2017, Vol 37 No 3)

In these challenging times of ‘alternative facts’ and anti-science attitudes, it may sound strange to be warning against excessive scientific exuberance.  Yet to help defend science from these attacks, I think we need to encourage science to maintain its credibility amongst non-scientists.

In my last article for The Skeptic (‘I Think I Am’, March 2017), I traced the long history of skepticism over the millennia.  I talked about the philosophical skepticism of Classical Greece, the skepticism of Modern Philosophy dating from Descartes, through to the contemporary form of scientific skepticism that our international skeptical movement now largely endorses.  I quoted Dr. Steven Novella’s definition of scientific skepticism as ‘the application of skeptical philosophy, critical thinking skills, and knowledge of science and its methods to empirical claims, while remaining agnostic or neutral to non-empirical claims (except those that directly impact the practice of science).’

Despite the recent growth of various anti-science movements, science is still widely regarded as the ‘gold standard’ for the discovery of empirical knowledge, that is, knowledge derived from observations and experiments.  Even theoretical physics is supposed to be empirically verifiable in principle when the necessary technology becomes available, as in the case of the Higgs boson and Einstein’s gravitational waves.  But empirical observations are not our only source of knowledge – we also use reasoning to make sense of our observations and to draw valid conclusions from them.  We can even generate new knowledge through the application of reasoning to what we already know, as I shall discuss later.

Most skeptics (with a ‘k’) see science as a kind of rational antidote to the irrationality of pseudoscience, quackery and other varieties of woo.  So we naturally tend to support and promote science for this purpose.  But sometimes we can go too far in our enthusiasm for science.  We can mistakenly attempt to extend the scope of science beyond its empirical capabilities, into other fields of inquiry such as philosophy and politics – even ethics.  Even if only a small number of celebrity scientists lessen their credibility by making pronouncements beyond their individual fields of expertise, they render themselves vulnerable to attack by opponents who are looking for any weakness in their arguments.  In doing so, they can unintentionally undermine public confidence in science and, by extension, scientific skepticism.

The pitfalls of crude positivism

Logical positivism (sometimes called ‘logical empiricism’) was a Western philosophical movement of the first half of the 20th century.  Its central thesis was verificationism: a theory of knowledge asserting that only propositions verifiable through empirical observation are meaningful.

One of the most prominent proponents of logical positivism was Professor Sir Alfred Ayer (1910-1989).  Ayer is best known for popularising the verification principle, in particular through his presentation of it in his bestselling 1936 book Language, Truth and Logic.  Ayer’s thesis was that a proposition can be meaningful only if it has verifiable empirical content or is a priori (true by definition); otherwise it is nonsensical.  Ayer’s philosophical ideas were deeply influenced by those of the Vienna Circle and the 18th century empiricist philosopher David Hume.

James Fodor, a young Melbourne science student, secularist and skeptic, has critiqued a relatively primitive form of logical positivism, which he calls ‘crude positivism’.  He describes this as a family of related and overlapping viewpoints rather than a single well-defined doctrine, the three most commonly encountered components of which are the following:

(1) Strict evidentialism: the ultimate arbiter of knowledge is evidence, which should determine our beliefs in a fundamental and straightforward way; namely that we believe things if and only if there is sufficient evidence for them.

(2) Narrow scientism: the highest, or perhaps only, legitimate form of objective knowledge is that produced by the natural sciences. The social sciences, along with non-scientific pursuits, either do not produce real knowledge, or only knowledge of a distinctly inferior sort.

(3) Pragmatism: science owes its special status to its unique ability to deliver concrete, practical results: it ‘works’.  Philosophy, theology, and other such fields of inquiry do not produce ‘results’ in this same way, and thus have no special status.

Somewhat controversially, Fodor classifies Richard Dawkins, Sam Harris, Peter Boghossian, Neil deGrasse Tyson, Lawrence Krauss, and Stephen Hawking as exponents of crude positivism when they stray outside their respective fields of scientific expertise into other fields such as philosophy and social commentary.  (Although to be fair, Lawrence Krauss wrote an apology in Scientific American in 2012 for seemingly dismissing the importance of philosophy in a previous interview he gave to The Atlantic.)

Fodor’s component (1) is a relatively uncontroversial viewpoint shared by most scientists and skeptics.  Nevertheless, Fodor cautions that crude positivists often speak as if evidence is self-interpreting, such that a given piece of evidence automatically picks out one singular state of affairs over all other possibilities.  In practice, however, this is almost never the case because the interpretation of evidence nearly always requires an elaborate network of background knowledge and pre-existing theory.  For instance, the raw data from most scientific observations or experiments are unintelligible without the use of background scientific theories and methodologies.

It is Fodor’s components (2) and (3) that are likely to be more controversial, and so I will now discuss them in more detail.

The folly of scientism

What is ‘scientism’ – and how is it different from the natural enthusiasm for science that most skeptics share?  Unlike logical positivism, scientism is not a serious intellectual movement.  The term is almost never used by its exponents to describe themselves.  Instead, the word scientism is mainly used pejoratively when criticising scientists for attempting to extend the boundaries of science beyond empiricism.

Warwick University philosopher Prof. Tom Sorell has defined scientism as: ‘a matter of putting too high a value on natural science in comparison with other branches of learning or culture.’  In summary, a commitment to one or more of the following statements lays one open to the charge of scientism:

  • The natural sciences are more important than the humanities for an understanding of the world in which we live, or are even all that we need to understand it;
  • Only a scientific methodology is intellectually acceptable. Therefore if the humanities are to be a genuine part of human knowledge they must adopt it; and
  • Philosophical problems are scientific problems and should only be dealt with as such.

At the 2016 Australian Skeptics National Convention, former President of Australian Skeptics Inc., Peter Bowditch, criticised a recent video made by TV science communicator Bill Nye in which he responded to a student asking him: ‘Is philosophy meaningless?’  In his rambling answer, Nye confused questions of consciousness and reality, opined that philosophy was irrelevant to answering such questions, and suggested that our own senses are more reliable than philosophy.  Peter Bowditch observed that ‘the problem with his [Nye’s] comments was not that they were just wrong about philosophy; they were fractally wrong.  Nye didn’t know what he was talking about. His concept of philosophy was extremely naïve.’  Bill Nye’s embarrassing blunder is perhaps ‘low-hanging fruit’; after trenchant criticism, Nye realised his error and began reading about philosophy for the first time.

Some distinguished scientists (not just philosophers) are becoming concerned about the pernicious influence of scientism.  Biological sciences professor Austin Hughes (1949-2015) wrote ‘the temptation to overreach, however, seems increasingly indulged today in discussions about science. Both in the work of professional philosophers and in popular writings by natural scientists, it is frequently claimed that natural science does or soon will constitute the entire domain of truth. And this attitude is becoming more widespread among scientists themselves. All too many of my contemporaries in science have accepted without question the hype that suggests that an advanced degree in some area of natural science confers the ability to pontificate wisely on any and all subjects.’

Prof. Hughes notes that advocates of scientism today claim the sole mantle of rationality, frequently equating science with reason itself.  Yet it seems the very antithesis of reason to insist that science can do what it cannot, or even that it has done what it demonstrably has not.  He writes ‘as a scientist, I would never deny that scientific discoveries can have important implications for metaphysics, epistemology, and ethics, and that everyone interested in these topics needs to be scientifically literate. But the claim that science and science alone can answer longstanding questions in these fields gives rise to countless problems.’

Limitations of science

The editor of the philosophical journal Think and author of The Philosophy Gym, Prof. Stephen Law has identified two kinds of questions to which it is very widely supposed that science cannot supply answers:

Firstly, philosophical questions are for the most part conceptual, rather than scientific or empirical.  They are usually answered by the use of reasoning rather than empirical observations.  For example, Galileo conducted a famous thought experiment by reason alone.  Imagine two objects, one light and one heavy, connected to each other by a string.  Drop these linked objects from the top of a tower.  If we assume that heavier objects do indeed fall faster than lighter ones (and conversely, that lighter objects fall slower), the string will soon pull taut as the lighter object retards the fall of the heavier object.  But the linked objects together are heavier than the heavy object alone, and therefore should fall faster than it.  This logical contradiction leads one to conclude that the assumption that heavier objects fall faster is false.  Galileo figured this conclusion out in his head, without the assistance of any empirical experiment or observation.  In doing so, he was employing philosophical rather than scientific methods.
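To make the contradiction explicit, here is a minimal formalisation of the reasoning (the notation is mine, not Galileo’s): write $v(m)$ for the natural falling speed of a body of mass $m$, let $m_L$ and $m_H$ be the light and heavy bodies, and let $v_{\text{linked}}$ be the speed of the pair joined by the string.

\[
\begin{aligned}
&\text{Assumption: heavier bodies fall faster, i.e. } m_1 < m_2 \implies v(m_1) < v(m_2).\\
&\text{Taut string: the light body retards the heavy one, so } v_{\text{linked}} < v(m_H).\\
&\text{The pair outweighs the heavy body alone, so } v_{\text{linked}} = v(m_L + m_H) > v(m_H).\\
&\text{Contradiction: } v_{\text{linked}} < v(m_H) \text{ and } v_{\text{linked}} > v(m_H), \text{ so the assumption is false.}
\end{aligned}
\]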

Secondly, moral questions are about what we ought or ought not to do.  In contrast, the empirical sciences, on their own, appear capable of establishing only what is the case.  This is known as the ‘is/ought gap’. Science can provide us with factual evidence that might influence our ethical judgements but it cannot provide us with the necessary ethical values or principles.  For example, science can tell us how to build nuclear weapons, but it cannot tell us whether or not they should ever be used and under what circumstances.  Clinical trials are conducted in medical science, often using treatment groups versus control groups of patients.  It is bioethics rather than science that provides us with the moral principles for obtaining informed patient consent for participation in such clinical trials, especially when we consider that control groups of patients are being denied treatments that could be to their benefit.

I have given the above examples not to criticise science in any way, but simply to point out that science has limitations, and that there is a place for other fields of inquiry in addition to science.

Is pragmatism enough?

Coming back to Fodor’s component (3) of crude positivism, he makes a good point that a scientific explanation that ‘works’ is not necessarily true.  For instance, Claudius Ptolemy of Alexandria (c. 90 CE – c. 168 CE) explained how to predict the behaviour of the planets by introducing the ad hoc notions of the deferent, equant and epicycles to the geocentric model of what is now known as our solar system.  This model was completely wrong, yet it produced accurate predictions of the motions of the planets – it ‘worked’.  Another example is Gregor Mendel’s 19th century genetic experiments on wrinkled peas.  These empirical experiments adequately explained the observed patterns of inherited variation without Mendel even knowing what genes were or where they were located in living organisms.


Schematic diagram of Ptolemy’s incorrect geocentric model of the cosmos

James Fodor argues that just because scientific theories can be used to make accurate predictions, this does not necessarily mean that science alone always provides us with accurate descriptions of reality.  There is even a philosophical theory known as scientific instrumentalism, which holds that as long as a scientific theory makes accurate predictions, it does not really matter whether the theory corresponds to reality.  The psychology of perception and the philosophies of mind and metaphysics could also be relevant.  Fodor adds that many of the examples of science ‘delivering results’ are really applications of engineering and technology, rather than the discovery process of science itself.

Fodor concludes that if the key to the success of the natural sciences is adherence to rational methodologies and inferences, then it is those successful methods that we should focus on championing, whatever discipline they may be applied in, rather than the data sets collected in particular sciences.

Implications for science and skepticism

Physicist Ian Hutchinson writes: ‘the health of science is in fact jeopardised by scientism, not promoted by it.  At the very least, scientism provokes a defensive, immunological, aggressive response in other intellectual communities, in return for its own arrogance and intellectual bullyism.  It taints science itself by association’.  Hutchinson suggests that perhaps what the public is rejecting is not actually science itself, but a worldview that closely aligns itself with science – scientism.  By disentangling these two concepts, we have a much better chance of enlisting public support for scientific research.

The late Prof. Austin Hughes left us with a prescient warning that continued insistence on the universal and exclusive competence of science will serve only to undermine the credibility of science as a whole. The ultimate outcome will be an increase in science denialism that questions the ability of science to address even the questions legitimately within its sphere of competence.

References

Ayer, Alfred J. (1936) Language, Truth and Logic. London: Penguin.

Bowditch, Peter ‘Is Philosophy Dead?’ Australasian Science July/August 2017.

Fodor, James ‘Not so simple’, Australian Rationalist, v. 103, December 2016, pp. 32–35.

Harding, Tim ‘I Think I Am’, The Skeptic, Vol. 37 No. 1. March 2017, pp. 40-44.

Hughes, Austin L ‘The Folly of Scientism’, The New Atlantis, Number 37, Fall 2012, pp. 32-50.

Hutchinson, Ian. (2011) Monopolizing Knowledge: A Scientist Refutes Religion-Denying, Reason-Destroying Scientism. Belmont, MA: Fias Publishing.

Krauss, Lawrence ‘The Consolation of Philosophy’, Scientific American, April 27, 2012.

Law, Stephen ‘Scientism, the limits of science, and religion’, Center for Inquiry (2016), Amherst, NY.

Novella, Steven (15 February 2013). ‘Scientific Skepticism, Rationalism, and Secularism’. Neurologica (blog). Retrieved 12 February 2017.

Sorell, Thomas (1994), Scientism: Philosophy and the Infatuation with Science, London: Routledge.



Skepticism – philosophical or scientific?

by Tim Harding

(This essay is based on a talk presented to the Victorian Skeptics in January 2017. An edited version was published in The Skeptic magazine Vol.37, No.1, March 2017, under the title ‘I Think I Am’).

Dictionaries often draw a distinction between the modern common meaning of skepticism, and its traditional philosophical meaning, which dates from antiquity.  The usual common dictionary definition is ‘a sceptical attitude; doubt as to the truth of something’; whereas the philosophical definition is ‘the theory that some or all types of knowledge are impossible’.  These definitions are of course quite different, and reflect the fact that the meanings of philosophical terms have drifted over the millennia.  The contemporary meaning of ‘scientific skepticism’ is different again, which I shall talk about later.

I should say at the outset that whilst I have a foot in both the scientific and philosophical camps, and although I will be writing here mainly about the less familiar philosophical skepticism, I personally support scientific skepticism over philosophical skepticism, for reasons I shall later explain.


But why are these definitions of skepticism important?  And why do we spell it with a ‘k’ instead of a ‘c’?  As an admin of a large online skeptics group (Skeptics in Australia), I am often asked such questions, so I have done a bit of investigating.

As to the first question, one of the main definitional issues I have faced is the difference between skepticism and what I call denialism.  (The second question I shall answer later). Some skeptical newbies typically do a limited amount of googling, and what they often come up with is the common dictionary definition of skepticism, rather than the lesser known scientific skepticism definition that we Australian skeptics use.  They tend to think that ‘scepticism’ (with a ‘c’) entails doubting or being skeptical of everything, including science, medicine, vaccination, biotechnology, moon landings, 9/11 etc, etc.  When we scientific skeptics express a contrary view, we are sometimes then accused of ‘not being real sceptics’.  So I think that definitions are important.

In my view, denialism is a person’s choice to deny certain particular facts.  It is an essentially irrational belief where the person substitutes his or her personal opinion for established knowledge.  Science denialism is the rejection of basic facts and concepts that are undisputed, well-supported parts of the scientific consensus on a subject, in favour of radical and controversial opinions of an unscientific nature.  Most real skeptics accept the findings of peer-reviewed science published in reputable scientific journals, at least for the time being, unless and until it is corrected by the scientific community.

Denialism can then give rise to conspiracy theories, as a way of trying to explain the discrepancy between scientific facts and personal opinions.  Here is the typical form of what I call the Scientific Conspiracy Fallacy:

Premise 1: I hold a certain belief.

Premise 2: The scientific evidence is inconsistent with my belief.

Conclusion: Therefore, the scientists are conspiring with the Big Bad Government/CIA/NASA/Big Pharma (choose whichever is convenient) to fake the evidence and undermine my belief.

It is a tall order to argue that the whole of science is genuinely mistaken. That is a debate that even the conspiracy theorists know they probably can’t win. So the most convenient explanation for the inconsistency is that scientists are engaged in a conspiracy to fake the evidence in specific cases.

Ancient Greek Skepticism

The word ‘skeptic’ originates from the early Greek skeptikos, meaning ‘inquiring, reflective’.

The Hellenistic period covers Greek and Mediterranean history between the death of Alexander the Great in 323 BCE and the Roman victory over the Greeks at the Battle of Corinth in 146 BCE.  The beginning of this period also coincides with the death of the great philosopher, logician and scientist Aristotle of Stagira (384–322 BCE).

As he had no adult heir, Alexander’s empire was divided between the families of three of his generals.  This resulted in political conflicts and civil wars, in which prominent philosophers and other intellectuals did not want to take sides, in the interests of self-preservation.  So they retreated from public life into various cloistered schools of philosophy, the main ones being the Stoics, the Epicureans, the Cynics and the Skeptics.

As I mentioned earlier, the meanings of such philosophical terms have altered over 2000 years.  These philosophical schools had different theories as to how to attain eudaimonia, which roughly translates as the highest human good, or the fulfilment of human life.  They thought that the key to eudaimonia was to live in accordance with Nature, but they had different views as to how to achieve this.

In a nutshell, the Stoics advocated the development of self-control and fortitude as a means of overcoming destructive emotions.  The Epicureans regarded absence of pain and suffering as the source of happiness (not just hedonistic pleasure).  The Cynics (a name meaning ‘dog-like’) rejected conventional desires for wealth, power, health, or fame, and lived a simple life free from possessions.  Lastly, there were the Skeptics, whom I will now discuss in more detail.

During this Hellenistic period, there were actually two philosophical varieties of skepticism – the Academic Skeptics and the Pyrrhonist Skeptics.

In 266 BCE, Arcesilaus became head of the Platonic Academy.  The Academic Skeptics did not doubt the existence of truth in itself, only our capacities for obtaining it.  They went as far as thinking that knowledge is impossible – that nothing can be known at all.  A later head of the Academy, Carneades, modified this rather extreme position into thinking that ideas or notions are never true, but only probable.  He thought there are degrees of probability, hence degrees of belief, leading to degrees of justification for action.  Academic Skepticism did not really catch on, and largely died out in the first century CE, with isolated attempts at revival from time to time.


The founder of Pyrrhonist Skepticism, Pyrrho of Elis (c. 365 – c. 275 BCE), was born in Elis on the west side of the Peloponnesian Peninsula (near Olympia).  Pyrrho travelled with Alexander the Great on his exploration of the East.  He encountered the Magi in Persia and even went as far as the Gymnosophists in India, who were naked ascetic gurus – not exactly a good image for modern skepticism.


Pyrrho differed from the Academic Skeptics in thinking only that nothing can be known for certain.  He thought that their position that ‘nothing can be known at all’ was dogmatic and self-contradictory, because it is itself a claim of certainty.  Pyrrho thought that the senses are easily fooled, and that reason follows too easily our desires.  Therefore we should withhold assent from non-evident propositions and remain in a state of perpetual inquiry about them.  This means that we are not necessarily skeptical of ‘evident propositions’, and that at least some knowledge is possible.  This position is closer to modern skepticism than Academic Skepticism.  Indeed, Pyrrhonism became a synonym for skepticism in the 17th century CE; but we are not quite there yet.

Sextus Empiricus (c. 160 – c. 210 CE) was a Greco-Roman philosopher who promoted Pyrrhonian skepticism.  It is thought that the word ‘empirical’ comes from his name; although the Greek word empeiria also means ‘experience’.  Sextus Empiricus first questioned the validity of inductive reasoning, positing that a universal rule could not be established from an incomplete set of particular instances, thus presaging David Hume’s ‘problem of induction’ about 1500 years later.

Skeptic with a ‘k’

The Romans were great inventors and engineers, but they are not renowned for science or skepticism.  On the contrary, they are better known for being superstitious; for instance, the Roman Senate sat only on ‘auspicious days’ thought to be favoured by the gods.  They had lots of pseudoscientific beliefs that we skeptics would now regard as quackery or woo.  For example, they thought that cabbage was a cure for many illnesses; and in around 78CE, the Roman author Pliny the Elder wrote: ‘I find that a bad cold in the head clears up if the sufferer kisses a mule on the nose’.

So I cannot see any valid historical reason for us to switch from the early Greek spelling of ‘skeptic’ to the Romanised ‘sceptic’.  Yes, I know that ‘skeptic’ is the American spelling and ‘sceptic’ is the British spelling, but I don’t think that alters anything.  The most likely explanation is that the Americans adopted the spelling of the early Greeks and the British adopted that of the Romans.


Modern philosophical skepticism

Somewhat counterintuitively, the term ‘modern philosophy’ is used to distinguish more recent philosophy from the ancient philosophy of the early Greeks and the medieval philosophy of the Christian scholastics.  Thus ‘modern philosophy’ dates from the Renaissance of the 14th to the 17th centuries, although precisely when modern philosophy started within the Renaissance period is a matter of some scholarly dispute.

The defining feature of modern philosophical skepticism is the questioning of the validity of some or all types of knowledge.  So before going any further, we need to define knowledge.

The branch of philosophy dealing with the study of knowledge is called ‘epistemology’.  The ancient philosopher Plato famously defined knowledge as ‘justified true belief’, as illustrated by the Venn diagram below.  According to this definition, it is not sufficient that a belief is true to qualify as knowledge – a belief based on faith or even just a guess could happen to be true by mere coincidence.  So we need adequate justification of the truth of the belief for it to become knowledge.  Although there are a few exceptions, known as ‘Gettier problems’, this definition of knowledge is still largely accepted by modern philosophers, and will do for our purposes here.  (Epistemology is mainly about the justification of true beliefs rather than this basic definition of knowledge).

[Venn diagram: knowledge as the intersection of truth, belief and justification]

There are also different types of knowledge that are relevant to this discussion.

A priori knowledge is knowledge that is known independently of experience.  For instance, we know that ‘all crows are birds’ without having to conduct an empirical survey of crows to investigate how many are birds and whether there are any crows that are not birds.  Crows are birds by definition – it is just impossible for there to be an animal that is a crow but is not a bird.

On the other hand, a posteriori knowledge is knowledge that is known by experience.  For instance, we only know that ‘all crows are black’ from empirical observations of crows.  It is not impossible that there is a crow that is not black, for example as a result of some genetic mutation.

The above distinction illustrates how not all knowledge needs to be empirical.  Indeed, one of the earliest modern philosophers and skeptics, René Descartes (1596-1650), was a French mathematician, scientist and philosopher.  (His name is where the mathematical word ‘Cartesian’ comes from.)  These three interests of his were interrelated, in the sense that he had a mathematical and scientific approach to his philosophy.  Mathematics ‘delighted him because of its certainty and clarity’.  His fundamental aim was to attain philosophical truth by the use of reason and logical methods alone.  For him, the only kind of knowledge was that of which he could be certain.  His ideal of philosophy was to discover hitherto uncertain truths implied by more fundamental certain truths, in a similar manner to mathematical proofs.

Using this approach, Descartes engaged in a series of meditations to find a foundational truth of which he could be certain, and then to build on that foundation a body of implied knowledge of which he could also be certain.  He did this in a methodical way by first withholding assent from opinions which are not completely certain, that is, where there is at least some reason for doubt, such as those acquired from the senses.  Descartes concludes that one proposition of which he can be certain is ‘Cogito, ergo sum’ (which means ‘I think, therefore I exist’).

In contrast to Descartes, a different type of philosophical skeptic, David Hume (1711-1776), held that all human knowledge is ultimately founded solely in ‘experience’.  In what has become known as ‘Hume’s fork’, he held that statements are divided into two types: statements about ideas, which are necessary statements knowable a priori; and statements about the world, which are contingent and knowable a posteriori.

In modern philosophical terminology, members of the first group are known as analytic propositions and members of the latter as synthetic propositions.  Into the first class fall statements such as ‘2 + 2 = 4’, ‘all bachelors are unmarried’, and truths of mathematics and logic. Into the second class fall statements like ‘the sun rises in the morning’, and ‘the Earth has precisely one moon’.

Hume tried to prove that certainty does not exist in science. First, Hume notes that statements of the second type can never be entirely certain, due to the fallibility of our senses, the possibility of deception (for example, the modern ‘brain in a vat’ hypothesis) and other arguments made by philosophical skeptics.  It is always logically possible that any given statement about the world is false – hence the need for doubt and skepticism.

Hume formulated the ‘problem of induction’, which is the skeptical question of whether inductive reasoning leads to knowledge understood in the classic philosophical sense.  This problem focuses on the alleged lack of justification for generalising about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that ‘all swans we have seen are white, and therefore, all swans are white’, before the discovery of black swans in Western Australia).
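A toy illustration of the logical shape of this problem (a sketch in Python, with made-up observations purely for illustration):

    # Hypothetical record of observations: every swan seen so far happens to be white.
    observed_swans = ["white", "white", "white", "white", "white"]

    def all_observed_are(colour, observations):
        """Enumerative induction: generalise from observed instances to 'all swans are <colour>'."""
        return all(swan == colour for swan in observations)

    print(all_observed_are("white", observed_swans))  # True - the generalisation looks well supported

    # A single new observation (a black swan, as found in Western Australia) falsifies the
    # generalisation, even though nothing in the prior evidence warned us this could happen.
    observed_swans.append("black")
    print(all_observed_are("white", observed_swans))  # False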

Immanuel Kant (1724-1804) was (and still is) a major philosophical figure who tried to show the way beyond the impasse that modern philosophy had reached between rationalists such as Descartes and empiricists such as Hume.  Kant is widely held to have synthesised these two early modern philosophical traditions.  And yet he was also a skeptic, albeit of a different variety.  Kant thought that only knowledge gained from empirical science is legitimate, which is a forerunner of modern scientific skepticism.  He thought that metaphysics was illegitimate and largely speculative; and in that sense he was a philosophical skeptic.

Scientific skepticism

In 1924, the Spanish philosopher Miguel de Unamuno disputed the common dictionary definition of skepticism.  He argued that ‘skeptic does not mean him who doubts, but him who investigates or researches as opposed to him who asserts and thinks that he has found’.  Sounds familiar, doesn’t it?

Modern scientific skepticism is different from philosophical skepticism, and yet to some extent was influenced by the ideas of Pyrrho of Elis, David Hume, Immanuel Kant and Miguel de Unamuno.

Most skeptics in the English-speaking world see the 1976 formation of the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) in the United States as the ‘birth of modern skepticism’.  (CSICOP is now called the Committee for Skeptical Inquiry – CSI).  However, CSICOP founder and philosophy professor Paul Kurtz has said that he actually modelled it after the Belgian Comité Para of 1949.  The Comité Para was partly formed as a response to a predatory industry of bogus psychics who were exploiting the grieving relatives of people who had gone missing during the Second World War.


Kurtz recommended that CSICOP focus on testable paranormal and pseudoscientific claims and leave religious aspects to others.  CSICOP popularised the usage of the terms ‘skeptic’, ‘skeptical’ and ‘skepticism’ through its magazine, Skeptical Inquirer, and directly inspired the foundation of many other skeptical organisations throughout the world, including the Australian Skeptics in 1980.

Through the public activism of groups such as CSICOP and the Australian Skeptics, the term ‘scientific skepticism’ has come to symbolise an activist movement as well as a type of applied philosophy.

There are several definitions of scientific skepticism, but the two that I think are most apt are those by the Canadian skeptic Daniel Loxton and the American skeptic Steven Novella.

Daniel Loxton’s definition is ‘the practice or project of studying paranormal and pseudoscientific claims through the lens of science and critical scholarship, and then sharing the results with the public.’

Steven Novella’s definition is ‘scientific skepticism is the application of skeptical philosophy, critical thinking skills, and knowledge of science and its methods to empirical claims, while remaining agnostic or neutral to non-empirical claims (except those that directly impact the practice of science).’  By this exception, I think he means religious beliefs that conflict with science, such as creationism or opposition to stem cell research.

In other words, scientific skeptics maintain that empirical investigation of reality leads to the truth, and that the scientific method is best suited to this purpose.  Scientific skeptics attempt to evaluate claims based on verifiability and falsifiability and discourage accepting claims on faith or anecdotal evidence.  This is different to philosophical skepticism, although inspired by it.

References

Descartes, R. (1641) Meditations on First Philosophy: With Selections from the Objections and Replies, trans. and ed. John Cottingham, Cambridge: Cambridge University Press.

Hume, David (1748) An Enquiry Concerning Human Understanding. Gutenberg Press.

Kant, Immanuel (1787) Critique of Pure Reason 2nd edition.  Cambridge: Cambridge University Press.

Loxton, Daniel. (2013) Why Is There a Skeptical Movement? (PDF). Retrieved 12 January 2017.

Novella, Steven (15 February 2013). ‘Scientific Skepticism, Rationalism, and Secularism’. Neurologica (blog). Retrieved 12 February 2017.

Russell, Bertrand. (1961) History of Western Philosophy. 2nd edition London: George Allen & Unwin.

Unamuno, Miguel de (1924) Essays and Soliloquies. London: Harrap.


The Stoic theory of universals, as compared to Platonic and Aristotelian theories

By Tim Harding

The philosophical problem of universals has endured since ancient times, and can have metaphysical or epistemic connotations, depending upon the philosopher in question.  I intend to show in this essay that both Plato’s and the Stoics’ theories of universals were not only derived from, but were ‘in the grip’ of their epistemological and metaphysical philosophies respectively; and were thus vulnerable to methodological criticism.  I propose to first outline the three alternative theories of Plato, Aristotle and the Stoics; and then to suggest that Aristotle’s theory, whilst developed as a criticism of Plato’s theory, stands more robustly on its own merits.

According to the Oxford Companion to Philosophy, particulars are instances of universals, as a particular apple is an instance of the universal known as ‘apple’.  (An implication of a particular is that it can only be in one place at any one time, which presents a kind of paradox that will be discussed later in this essay).   Even the definition of the ‘problem of universals’ is somewhat disputed by philosophers, but the problem generally is about whether universals exist, and if so what is their nature and relationship to particulars (Honderich 1995: 646, 887).

Philosophers such as Plato and Aristotle who hold that universals exist are known as ‘realists’, although they have differences about the ontological relationships between universals and particulars, as discussed in this essay.  Those who deny the existence of universals are known as ‘nominalists’.  According to Long and Sedley (1987:181), the Stoics were a type of nominalist known as ‘conceptualists’, as I shall discuss later.

Plato’s theory of universals (although he does not actually use this term) stems from his theory of knowledge.  Indeed, it is difficult to separate Plato’s ontology from his epistemology (Copleston 1962: 142).  In his Socratic dialogue Timaeus, Plato draws a distinction between permanent knowledge gained by reason and temporary opinion gained from the senses.

That which is apprehended by intelligence and reason is always in the same state; but that which is conceived by opinion with the help of sensation and without reason, is always in a process of becoming and perishing and never really is (Plato Timaeus 28a).

According to Copleston (1962: 143-146), this argument is part of Plato’s challenge to Protagoras’ theory that knowledge is sense-perception.  Plato argues that sense-perception on its own is not knowledge.  Truth is derived from the mind’s reflection and judgement, rather than from bare sensations.  To give an example of what Plato means, we may have a bare sensation of two white surfaces, but in order to judge the similarity of the two sensations, the mind’s activity is required.

Plato argues that true knowledge must be infallible, unchanging and of what is real, rather than merely of what is perceived.  He thinks that the individual objects of sense-perception, or particulars, cannot meet the criteria for knowledge because they are always in a state of flux and indefinite in number (Copleston 1962: 149).  So what knowledge does meet Plato’s criteria?  The answer to this question leads us to the category of universals.  Copleston gives the example of the judgement ‘The Athenian Constitution is good’.  The Constitution itself is open to change, for better or worse, but what is stable in this judgement is the universal quality of goodness.  Hence, within Plato’s epistemological framework, true knowledge is knowledge of the universal rather than the particular (Copleston 1962: 150).

We now proceed from Plato’s epistemology to his ontology of universals and particulars.  In terms of his third criterion of true knowledge being what is real rather than perceived, the essence of Plato’s Forms is that each true universal concept corresponds to an objective reality (Copleston 1962: 151).  The universal is what is real, and particulars are copies or instances of the Form.  For example, particulars such as beautiful things are instances of the universal or Form of Beauty.

…nothing makes a thing beautiful but the presence and participation of beauty in whatever way or manner obtained; for as to the manner I am uncertain, but I stoutly contend that by beauty all beautiful things become beautiful (Plato Phaedo, 653).

Baltzly (2016: F5.2-6) puts the general structure of Plato’s argument this way:

What we understand when we understand what justice, beauty, or generally F-ness are, doesn’t ever change.

But the sensible F particulars that exhibit these features are always changing.

So there must be a non-sensible universal – the Form of F-ness – that we understand when we achieve episteme (true knowledge).

Plato’s explanation for where this knowledge of Forms comes from, if not from sense-perceptions, is our existence as unembodied souls prior to this life (Baltzly 2016: F5.2-6).  To me, this explanation sounds like a ‘retrofit’ to solve a consequential problem with Plato’s theory and is a methodological weakness of his account.

Turning now to Aristotle’s theory, whilst he shared Plato’s realism about the existence of universals, he had some fundamental differences about their ontological relationship to particulars.  In terms of Baltzly’s abovementioned description of Plato’s general argument, Plato thought that the universal, F-ness, could exist even if there were no F particulars.  In direct contrast, Aristotle held that there cannot be a universal, F-ness, unless there are some particulars that are F.  For example, Aristotle thought that the existence of the universal ‘humanity’ depends on there being actual instances of particular human beings (Baltzly 2016: F5.2-8).

As for the reality of universals, Aristotle agreed with Plato that the universal is the object of science.  For instance, the scientist is not concerned with discovering knowledge about particular pieces of gold, but with the essence or properties of gold as a universal.  It follows that if the universal is not real – if it has no objective reality – then there is no scientific knowledge.  But there is scientific knowledge; so by modus tollens, and given that scientific knowledge is knowledge of reality, the universal must also be real (Copleston 1962: 301-302).  (Whilst it is outside the scope of this essay to discuss whether scientific knowledge describes reality, to deny that there is any scientific knowledge would have major implications for epistemic coherence).
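The logical form of this argument, spelled out explicitly (my reconstruction of Copleston’s summary, with $R$ standing for ‘the universal is real’ and $S$ for ‘there is scientific knowledge’):

\[
\begin{aligned}
&1.\;\; \neg R \rightarrow \neg S &&\text{(if the universal is not real, there is no scientific knowledge)}\\
&2.\;\; S &&\text{(there is scientific knowledge)}\\
&\therefore\;\; R &&\text{(by modus tollens, the universal is real)}
\end{aligned}
\]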

This is not to say that universals have ‘substance’, meaning that they consist of matter and form.  Aristotle maintains that only particulars have substance, and that universals exist as properties of particulars (Russell 1961: 176).  Russell quotes Aristotle as saying:

It seems impossible that any universal term should be the name of a substance. For…the substance of each thing is that which is peculiar to it, which does not belong to anything else; but the universal is common, since that is called universal which is such as to belong to more than one thing.

In other words, Aristotle thinks that a universal cannot exist by itself, but only in particular things.  Russell attempts to illustrate Aristotle’s position using a football analogy.  The game of football (a universal) cannot exist without football players (particulars); but the football players would still exist even if they never actually played football (Russell 1961: 176).

In almost complete contrast to both Plato and Aristotle, the Stoics denied the existence of universals, regarding them as concepts or mere figments of the rational mind.  In this way, the Stoics anticipated the conceptualism of the British empirical philosophers, such as Locke (Long and Sedley 1987:181).

The Stoic position is complicated by their being on the one hand materialists, and on the other holding a belief that there are non-existent things which ‘subsist’, such as incorporeal things like time and fictional entities such as a Centaur.  Their ontological hierarchy starts with the notion of a ‘something’, which they thought of as a proper subject of thought and discourse, whether or not it exists.  ‘Somethings’ can be subdivided into material bodies or corporeals, which exist; and incorporeals and things that are neither corporeal nor incorporeal, such as fictional entities, which subsist (Long and Sedley 1987:163-164).  Long and Sedley (1987:164) provide colourful examples of the distinction between existing and subsisting by saying:

There’s such a thing as a rainbow, and such a character as Mickey Mouse, but they don’t actually exist.

A significant exclusion from the Stoic ontological hierarchy is universals.  Despite the subsistence of a fictional character like Mickey Mouse, the universal man neither exists nor subsists, which is a curious inconsistency.  Stoic universals are dubbed by the neo-Platonist philosopher Simplicius (Long and Sedley 1987:180) as ‘not somethings’:

(2) One must also take into account the usage of the Stoics about generically qualified things—how according to them cases are expressed, how in their school universals are called ‘not-somethings’ and how their ignorance of the fact that not every substance signifies a ‘this Something’ gives rise to the Not-someone sophism, which relies on the form of expression.

Long and Sedley (1987:164) surmise from this analysis that for the Stoics, to be a ‘something’ is to be a particular, whether existent or subsistent.  Stoic ontology is occupied exclusively by particulars without universals.  In this way, universals are relegated to a metaphysical limbo, as far as the Stoics are concerned.  Nevertheless, they recognise the concept of universals as being not just a linguistic convenience but as useful conceptions or ways of thinking.  For this reason, Long and Sedley (1987:181-182) classify the Stoic position on universals as ‘conceptualist’, rather than simply nominalist.  (Nominalists think of universals simply as names for things that particulars have in common).  In a separate paper, Sedley (1985: 89) makes the distinction between nominalism and conceptualism using the following example:

After all the universal man is not identical with my generic thought of man; he is what I am thinking about when I have that thought.

One of the implications of a particular is that it can only be in one place at any one time, which gives rise to what was referred to above by Simplicius as the ‘Not-someone sophism’.  Sedley (1985: 87-88) paraphrases this sophism in the following terms:

If you make the mistake of hypostatizing the universal man into a Platonic abstract individual – if, in other words, you regard him as ‘someone’ – you will be unable to resist the following evidently fallacious syllogism.  ‘If someone is in Athens, he is not in Megara.  But man is in Athens. Therefore man is not in Megara.’  The improper step here is clearly the substitution of ‘man’ in the minor premiss for ‘someone’ in the major premiss.  But it can be remedied only by the denial that the universal man is ‘someone’.  Therefore the universal man is not-someone.
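In modern notation the improper step stands out (this reconstruction is mine, not the Stoics’): the major premiss quantifies over particular persons, so the universal ‘man’ cannot legitimately be substituted for the bound variable.

\[
\begin{aligned}
&\text{Major premiss: } \forall x\,\big(\mathrm{InAthens}(x) \rightarrow \neg\,\mathrm{InMegara}(x)\big)\\
&\text{Minor premiss: } \mathrm{InAthens}(\mathrm{man})\\
&\text{‘Conclusion’: } \neg\,\mathrm{InMegara}(\mathrm{man})
\end{aligned}
\]

The inference goes through only if ‘man’ denotes a particular in the domain of the quantifier – a ‘someone’; the Stoic remedy is to deny exactly that, so the universal man is ‘not-someone’.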

Baltzly (2016: F5.2-15) makes the point that the same argument would serve to show that time is a not-something, yet the Stoics inconsistently accept that time subsists as an incorporeal something.

I have attempted to show above that Plato and the Stoics are locked into their theories about universals as a result of their prior philosophical positions.  Although arguing otherwise could have made them vulnerable to criticisms of inconsistency, their theories nevertheless have methodological weaknesses that place them on shakier ground than Aristotelian realism.  However, I am also of the view that, apart from these methodological issues, Aristotelian Realism is substantively a better theory than Platonic Realism or Stoic Conceptualism or Nominalism.  In coming to this view, I have relied mainly on the work of the late Australian philosophy professor David Armstrong.

Armstrong argues that there are universals which exist independently of the classifying mind.  No universal is found except as either a property of a particular or as a relation between particulars.  He thus rejects both Platonic Realism and all varieties of Nominalism (Armstrong 1978: xiii).

Armstrong describes Aristotelian Realism as allowing that particulars have properties and that two different particulars may have the very same property.  However, Aristotelian Realism rejects any transcendent account of properties, that is, an account claiming that universals exist separated from particulars (Armstrong 1975: 146).  Armstrong argues that we cannot give an account of universality in terms of particularity, as the various types of Nominalism attempt to do.  Nor can we give an account of particulars in terms of universals, as the Platonic Realists do.  He believes that ‘while universality and particularity cannot be reduced to each other, they are interdependent, so that properties are always properties of a particular, and whatever is a particular is a particular having certain properties’ (Armstrong 1975: 146).

According to Armstrong, what is a genuine property of particulars is to be decided by scientific investigation, rather than simply a linguistic or conceptual classification (Armstrong 1975: 149).  Baltzly (2016: F5.2-18) paraphrases Armstrong’s argument this way:

  1. There are causes and effects in nature.

  2. Whether one event c causes another event e is independent of the classifications we make.

  3. Whether c causes e or not depends on the properties had by the things that figure in the events.

  4. So properties are independent of the classifications that we make and if this is so, then predicate nominalism and conceptualism are false.

Baltzly (2016: F5.2-18, 19) provides an illustration of this argument based on one given by Armstrong (1978: 42-43).  The effect of throwing a brick against a window will result from the physical properties of the brick and the window, in terms of their relative weight and strength, independently of how we name or classify those properties.  So in this way, I would argue that the properties of particulars, that is universals, are ‘real’ rather than merely ‘figments of the mind’ as the Stoics would say.

As for Platonic Realism, Armstrong argues that if we reject it then we must reject the view that there are any uninstantiated properties (Armstrong 1975: 149); that is, the view that properties are transcendent beings that exist apart from their instances, such as in universals rather than particulars.  He provides an illustration of a hypothetical property of travelling faster than the speed of light.  It is a scientific fact that no such property exists, regardless of our concepts about it (Armstrong 1975: 149).  For this reason, Armstrong upholds ‘scientific realism’ over Platonic Realism, which he thinks is consistent with Aristotelian Realism – a position that I support.

In conclusion, I have attempted to show in this essay that the Aristotelian theory of universals is superior to the equivalent theories of both Plato and the Stoics.  I have argued this in terms of the relative methodologies as well as the substantive arguments.  I would choose the most compelling argument to be that of epistemic coherence regarding scientific knowledge, that is, that the universal is the object of science.  It follows that if the universal is not real, if it has no objective reality, then there is no scientific knowledge.  But there is scientific knowledge; and if scientific knowledge is knowledge of reality, then to be consistent, the universal must also be real.

Bibliography

Armstrong, D.M. ‘Towards a Theory of Properties: Work in Progress on the Problem of Universals’ Philosophy, (1975), Vol.50 (192), pp.145-155.

Armstrong, D.M. ‘Nominalism and Realism’ Universals and Scientific Realism Volume 1, (1978) Cambridge: Cambridge University Press.

Baltzly, D. ATS3885: Stoic and Epicurean Philosophy Unit Reader (2016). Clayton: Faculty of Arts, Monash University.

Copleston, F. A History of Philosophy Volume 1: Greece and Rome (1962) New York: Doubleday.

Honderich, T. Oxford Companion to Philosophy (1995) Oxford: Oxford University Press.

Long A. A. and Sedley, D. N. The Hellenistic Philosophers, Volume 1 (1987). Cambridge: Cambridge University Press.

Plato, Phaedo in The Essential Plato trans. Benjamin Jowett, Book-of-the-Month Club (1999).

Plato, Timaeus in The Internet Classics Archive. http://classics.mit.edu//Plato/timaeus.html
Viewed 2 October 2016.

Russell, B. History of Western Philosophy. 2nd edition (1961) London: George Allen & Unwin.

Sedley, D. ‘The Stoic Theory of Universals’ The Southern Journal of Philosophy (1985) Vol. XXIII. Supplement.


The Fallacy of Faulty Risk Assessment

by Tim Harding

(An edited version of this essay was published in The Skeptic magazine, September 2016, Vol 36 No 3)

Australian Skeptics have tackled many false beliefs over the years, often in co-operation with other organisations.  We have had some successes – for instance, belief in homeopathy finally seems to be on the wane.  Nevertheless, false beliefs about vaccination and fluoridation just won’t lie down and die – despite concerted campaigns by medical practitioners, dentists, governments and more recently the media.  Why are these beliefs so immune to evidence and arguments?

There are several possible explanations for the persistence of these false beliefs.  One is denialism – the rejection of established facts in favour of personal opinions.  Closely related are conspiracy theories, which typically allege that facts have been suppressed or fabricated by ‘the powers that be’, in an attempt by denialists to explain the discrepancies between their opinions and the findings of science.  A third possibility is an error of reasoning or fallacy known as Faulty Risk Assessment, which is the topic of this article.

Before going on to discuss vaccination and fluoridation in terms of this fallacy, I would like to talk about risk and risk assessment in general.

What is risk assessment?

Hardly anything we do in life is risk-free. Whenever we travel in a car or even walk along a footpath, most people are aware that there is a small but finite risk of being injured or killed.  Yet this risk does not keep us away from roads.  We intuitively make an informal risk assessment that the level of this risk is acceptable in the circumstances.

In more formal terms, ‘risk’ may be defined as the probability or likelihood of something bad happening multiplied by the severity of the resulting harm if it does happen.  Risk analysis is the process of discovering what risks are associated with a particular hazard, including the mechanisms that cause the hazard, then estimating the likelihood that the hazard will occur and the consequences if it does occur.
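Expressed as a simple formula (one common convention for ranking purposes; actual risk frameworks differ in how they combine the two factors):

\[
\text{Risk} \;=\; \text{Likelihood of the adverse event} \times \text{Severity of its consequences}
\]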

Risk assessment is the determination of the acceptability of risk using two dimensions of measurement – the likelihood of an adverse event occurring; and the severity of the consequences if it does occur, as illustrated in the diagram below.  (This two-dimensional risk assessment is a conceptually useful way of ranking risks, even if one or both of the dimensions cannot be measured quantitatively).

[Figure: risk assessment matrix – likelihood of an adverse event versus severity of its consequences]

By way of illustration, the likelihood of something bad happening could be very low, but the consequences could be unacceptably high – enough to justify preventative action.  Conversely, the likelihood of an event could be higher, but the consequences could be low enough to justify ‘taking the risk’.
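As a minimal sketch of how such a two-dimensional ranking might be implemented (illustrative Python, with made-up category labels and cut-off scores; real risk frameworks define their own scales):

    # Illustrative ordinal scales for the two dimensions of risk assessment.
    LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
    SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

    def risk_rating(likelihood, severity):
        """Rank a risk using both likelihood and severity of consequences."""
        score = LIKELIHOOD[likelihood] * SEVERITY[severity]
        if score >= 15:
            return "extreme"
        if score >= 10:
            return "high"
        if score >= 5:
            return "medium"
        return "low"

    print(risk_rating("rare", "catastrophic"))  # 'medium' - unlikely, but too costly to ignore
    print(risk_rating("likely", "negligible"))  # 'low' - acceptable to 'take the risk'
    print(risk_rating("likely", "major"))       # 'extreme' - demands preventative action

Note that a simple product can understate the rare-but-catastrophic corner of the matrix, which is why many frameworks use a lookup matrix that never rates catastrophic consequences below a fixed floor – a point illustrated by the desalination example below.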

In assessing the consequences, consideration needs to be given to the size of the population likely to be affected, and the severity of the impact on those affected.  This will provide an indication of the aggregate effect of an adverse event.  For example, ‘high’ consequences might include significant harm to a small group of affected individuals, or moderate harm to a large number of individuals.

A fallacy is committed when a person focuses on the risks of an activity and ignores its benefits, and/or takes account of only one dimension of risk assessment while overlooking the other.

To give a practical example of a one-dimensional risk assessment, the desalination plant to augment Melbourne’s water supply has been called a ‘white elephant’ by some people, because it has not been needed since the last drought broke in March 2010.  But this criticism ignores the catastrophic consequences that could have occurred had the drought not broken.  In June 2009, Melbourne’s water storages fell to 25.5% of capacity, the lowest level since the huge Thomson Dam began filling in 1984.  This downward trend could have continued at that time, and could well be repeated during the inevitable next drought.


Melbourne’s desalination plant at Wonthaggi

No responsible government could afford to ‘take the risk’ of a major city of more than four million people running out of water.  People in temperate climates can survive without electricity or gas, but are likely to die of thirst in less than a week without water, not to mention the hygiene crisis that would occur without washing or toilet flushing.  The failure to safeguard the water supply of a major city is one of the most serious derelictions of government responsibility imaginable.

Turning now to the anti-vaccination and anti-fluoridation movements, they both commit the fallacy of Faulty Risk Assessment.  They focus on the very tiny likelihood of adverse side effects without considering the major benefits to public health from vaccination and the fluoridation of public water supplies, and the potentially severe consequences of not vaccinating or fluoridating.

Vaccination risks

The benefits of vaccination far outweigh its risks for all of the diseases where vaccines are available.  This includes influenza, pertussis (whooping cough), measles and tetanus – not to mention the terrible diseases that vaccination has eliminated or brought under control in Australia, such as smallpox, polio, diphtheria and tuberculosis.

As fellow skeptic Dr. Rachael Dunlop puts it:  ‘In many ways, vaccines are a victim of their own success, leading us to forget just how debilitating preventable diseases can be – not seeing kids in calipers or hospital wards full of iron lungs means we forget just how serious these diseases can be.’

No adult or teenager has ever died or become seriously ill in Australia from the side effects of vaccination; yet large numbers of people have died from the lack of vaccination.  The notorious Wakefield allegation in 1998 of a link between vaccination and autism has been discredited, retracted and found to be fraudulent.  Further evidence comes from a recently published exhaustive review examining 12,000 research articles covering eight different vaccines which also concluded there is no link between vaccines and autism.

According to Professor C Raina MacIntyre of UNSW, ‘Influenza virus is a serious infection, which causes 1,500 to 3,500 deaths in Australia each year.  Death occurs from direct viral effects (such as viral pneumonia) or from complications such as bacterial pneumonia and other secondary bacterial infections. In people with underlying coronary artery disease, influenza may also precipitate heart attacks, which flu vaccine may prevent.’

In 2010, increased rates of high fever and febrile convulsions were reported in children under 5 years of age after they were vaccinated with the Fluvax vaccine.  This vaccine has not been registered for use in this age group since late 2010 and therefore should not be given to children under 5 years of age. The available data indicate that there is a very low risk of fever, which is usually mild and transient, following vaccination with the other vaccine brands.  Any of these other vaccines can be used in children aged 6 months and older.

Australia was declared measles-free in 2005 by the World Health Organization (WHO) – before we stopped being so vigilant about vaccinating and outbreaks began to reappear.  The impact of vaccine complacency can be observed in the 2013 measles epidemic in Wales, where there were over 800 cases and one death, and many of those presenting were in the age group that missed out on MMR vaccination following the Wakefield scare.

After the link to autism was disproven, many anti-vaxers shifted the blame to thiomersal, a mercury-containing component of relatively low toxicity to humans.  Small amounts of thiomersal were used as a preservative in some vaccines, but not the MMR vaccine.  Thiomersal was removed from all scheduled childhood vaccines in 2000.

In terms of risk assessment, Dr. Dunlop has pointed out that no vaccine is 100% effective and vaccines are not an absolute guarantee against infection. So while it’s still possible to get the disease you’ve been vaccinated against, disease severity and duration will be reduced.  Those who are vaccinated have fewer complications than people who aren’t.  With pertussis (whooping cough), for example, severe complications such as pneumonia and encephalitis (brain inflammation) occur almost exclusively in the unvaccinated.  So since the majority of the population is vaccinated, it follows that most people who get a particular disease will be vaccinated, but critically, they will suffer fewer complications and long-term effects than those who are completely unprotected.
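
That last point is easy to check with some simple arithmetic.  The figures below are purely illustrative (they are not taken from any particular study), but they show how most cases can occur in vaccinated people even when vaccination cuts an individual’s risk tenfold.

```python
# Illustrative figures only: 95% vaccine coverage, a vaccine that prevents
# 90% of cases, and a 10% attack rate among the unvaccinated when exposed.
population = 10_000
coverage = 0.95
attack_rate_unvaccinated = 0.10
effectiveness = 0.90

vaccinated = population * coverage
unvaccinated = population - vaccinated

cases_unvaccinated = unvaccinated * attack_rate_unvaccinated
cases_vaccinated = vaccinated * attack_rate_unvaccinated * (1 - effectiveness)
total_cases = cases_vaccinated + cases_unvaccinated

print(f"{cases_vaccinated:.0f} of {total_cases:.0f} cases "
      f"({cases_vaccinated / total_cases:.0%}) are in vaccinated people")
# Prints 95 of 145 cases (66%): most cases are in the vaccinated simply
# because almost everyone is vaccinated, even though each vaccinated
# person is ten times less likely to fall ill.
```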

Fluoridation risks

Public water fluoridation is the adjustment of the natural levels of fluoride in drinking water to a level that helps protect teeth against decay.  In many (but not all) parts of Australia, reticulated drinking water has been fluoridated since the early 1960s.

The benefits of fluoridation are well documented.  In November 2007, the NHMRC completed a review of the latest scientific evidence in relation to fluoride and health.  Based on this review, the NHMRC recommended community water fluoridation programs as the most effective and socially equitable community measure for protecting the population from tooth decay.  The scientific and medical support for the benefits of fluoridation certainly outweighs the claims of the vocal minority against it.

Fluoridation opponents over the years have claimed that putting fluoride in water causes health problems, is too expensive and is a form of mass medication.  Some conspiracy theorists go as far as to suggest that fluoridation is a communist plot to lower children’s IQ.  Yet, there is no evidence of any adverse health effects from the fluoridation of water at the recommended levels.  The only possible risk is from over-dosing water supplies as a result of automated equipment failure, but there is inline testing of fluoride levels with automated water shutoffs in the remote event of overdosing.  Any overdose would need to be massive to have any adverse effect on health.  The probability of such a massive overdose is extremely low.

Tooth decay remains a significant problem. In Victoria, for instance, more than 4,400 children under 10, including 197 two-year-olds and 828 four-year-olds, required general anaesthetic in hospital for the treatment of dental decay during 2009-10.  Indeed, 95% of all preventable dental admissions to hospital for children up to nine years old in Victoria are due to dental decay. Children under ten in non-optimally fluoridated areas are twice as likely to require a general anaesthetic for treatment of dental decay as children in optimally fluoridated areas.

As fellow skeptic and pain management specialist Dr. Michael Vagg has said, “The risks of general anaesthesia for multiple tooth extractions are not to be idly contemplated for children, and far outweigh the virtually non-existent risk from fluoridation.”  So in terms of risk assessment, the risks from not fluoridating water supplies are far greater than the risks of fluoridating.

Implications for skeptical activism

Anti-vaxers and anti-fluoridationists who are motivated by denialism and conspiracy theories tend to believe whatever they want to believe, and dogmatically so.  Thus evidence and arguments are unlikely to have much influence on them.

But not all anti-vaxers and anti-fluoridationists fall into this category.  Some may have been misled by false information, and thus could possibly be open to persuasion if the correct information is provided.

Others might even be aware of the correct information, but are assessing the risks fallaciously in the ways I have described in this article.  Their errors are not ones of fact, but errors of reasoning.  They too might be open to persuasion if education about sound risk assessment is provided.

I hope that analysing the false beliefs about vaccination and fluoridation from the perspective of the Faulty Risk Assessment Fallacy has provided yet another weapon in the skeptical armoury against these false beliefs.

References

Rachael Dunlop (2015) Six myths about vaccination – and why they’re wrong. The Conversation, Parkville.

C Raina MacIntyre (2016) Thinking about getting the 2016 flu vaccine? Here’s what you need to know. The Conversation, Parkville.

Mike Morgan (2012) How fluoride in water helps prevent tooth decay.  The Conversation, Parkville.

Michael Vagg (2013) Fluoride conspiracies + activism = harm to children. The Conversation, Parkville.

 Government of Victoria (2014) Victorian Guide to Regulation. Department of Treasury and Finance, Melbourne.

Filed under Essays and talks

Epicurean free will

by Tim Harding

Epicurus’ philosophy of mind is perhaps best explained in terms of Epicurean physics.  Epicurus was a materialist who held that the natural world is all that exists, so his physics is a general theory of what exists and its nature, including human bodies and minds (O’Keefe 2010: 11-12).

Epicureans thought that there are only two things that exist per se – atoms and void.  Atoms are the indivisible, most basic particles of matter, which move through void, which is empty space (O’Keefe 2010: 11-12).  Objects as we know them are compounds of atoms, and their various natures are explicable in terms of the different properties or attributes of their constituent atoms (Baltzly 2016: 02-1).

When Epicurus refers to the ‘soul’ he means what we today refer to as the mind, so ‘mind’ is the term I shall use here.  He identifies the mind with a compound of four types of atoms – air, heat, wind and a fourth nameless substance (Long and Sedley 1987: 14C).  Because the mind is composed of atoms, it must be corporeal – only the void is incorporeal (Long and Sedley 1987: 14A).  The mind is a part of the body (located in the chest), responsible for sensation, imagination, emotion and memory (Long and Sedley 1987: 14A, 14B, 15D).  Other functions belong to the ‘spirit’, which provides sensory input to, and carries out the instructions of, the mind throughout the body (Long and Sedley 1987: 14B).

According to O’Keefe (2010: 62-63), another Epicurean argument for believing that mind is corporeal is as follows:

Premise 1: The mind moves the body and is moved by the body.

Premise 2: Only bodies can move and be moved by other bodies.

Conclusion: Therefore, the mind is a body.

Long and Sedley (1987:107) identify Epicurus as arguably the first philosopher to recognise what we now know as the philosophical Problem of Free Will.  The problem is that if it has been causally necessitated that we act as we do, then our actions cannot be up to us, and therefore we cannot be morally responsible for them (Long and Sedley 1987: 20A).  On the other hand, Epicurus notes that ‘we rebuke, oppose and reform each other as if the responsibility lay also in ourselves’ [Long and Sedley 1987: 20C(2)].

According to Cicero, ‘Epicurus thinks that necessity of fate is avoided by the swerve of atoms’ [Long and Sedley 1987: 20E(2)].  Baltzly explains this ‘atomic swerve’ as atoms moving a minimal distance sideways, apparently for no reason at all, from time to time.  This swerve from their natural downward motion results in atomic collisions (Baltzly 2016: F2.2-14).  Although this swerve is not explicitly mentioned by Epicurus himself, Cicero writes that:

‘Epicurus’ reason for introducing this theory was his fear that, if the atom’s motion was always the result of natural and necessary weight, we would have no freedom, since the mind would be moved in whatever way it was compelled by the movement of atoms’ [Long and Sedley 1987: 20E(3)].

Lucretius presents an argument that the atomic swerve enables free will (Long and Sedley 1987: 20F).  O’Keefe (2010: 74-75) states this argument in the following form:

Premise 1: If the atoms did not swerve, there would not be ‘free will’.

Premise 2: There is free will.

Conclusion: Therefore, atoms swerve.
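
Written out schematically (my own formalisation, with $S$ for ‘the atoms swerve’ and $F$ for ‘there is free will’), the argument denies the consequent of Premise 1, a classically valid pattern known as modus tollens:

$$
\neg S \rightarrow \neg F, \qquad F, \qquad \therefore\ S
$$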

This argument is logically valid, so if the premises are true the conclusion must be true.  Lucretius spends most of this passage trying to show that Premise 2 is true.  However, even if Premise 2 is true, we do not know that Premise 1 is true.  The atomic swerve introduces a slight element of indeterminacy, but this swerve does not necessarily entail free will, since no mechanism is given to explain the connection between these two concepts.  Indeed, Annas (1991: 87) argues that there is a fundamental problem in thinking of human motivation in terms of only the motion of atoms.  She thinks that the occurrence of atomic swerves in ordinary macro-objects has no effect on them (Annas 1991: 96-97).  For this reason, I do not think that the introduction of random atomic swerves solves the Problem of Free Will.

Sedley (1987: 107) agrees that taken in isolation such a solution is ‘notoriously unsatisfactory’.  He offers an alternative explanation in terms of ‘development’, which contributes psychological autonomy and which is distinct from the atoms in a kind of differential or transcendent way (Long and Sedley 1987: 107-18).  In other words, these distinct developments are psychological rather than physical properties of the mind.  In particular, he points to the development of consciousness, which is an ‘emergent’ property of complex atomic systems like human beings (Baltzly 2016: F2.2 – 17).

In a later paper, Sedley provides some more detail on what he means by emergent properties:

‘I take Epicurus to be sketching some sort of theory of radically emergent properties.  Matter in certain complex states can, he holds, acquire entirely new, non-physical properties, not governed by the laws of physics’ (Sedley 1988: 323-324).

It is important to note that Sedley is attempting here to make a connection between free will and the atomic swerve.  As Baltzly (2016: F2.2 – 18) puts it, the swerve means that not every motion of the atoms which make up our bodies is determined by those atoms themselves.  Baltzly thinks that the swerve does not introduce an element of randomness or indeterminacy into our free choices:

‘Rather, the swerve leaves a gap where the psychological properties of my soul [mind] can cause something to happen where behaviour of the atoms that make up my soul [mind] leave it open what will happen’ (Baltzly 2016: F2.2 – 18).

My own view is that Sedley and Baltzly provide a plausible explanation of the connection between Epicurus’ atomic swerve and free will.  It is possible that consciousness is an emergent psychological property of the material mind.  Free will could be seen as a manifestation of consciousness.  Whilst we cannot yet fully explain what consciousness is and how it works, there is little doubt that consciousness exists.  If consciousness can exist, then so can free will.  However, where I part company with Sedley is that I find Epicurus’ theory of the atomic swerve unconvincing.  Neither Epicurus nor his followers provide any evidence for the existence of the atomic swerve.  It has been postulated as a kind of ‘retrofit’ in an attempt to solve the problem of free will by introducing an imaginary element of indeterminacy.  I think that Sedley’s idea of emergence could help to explain free will even in the absence of the Epicurean atomic swerve.

I would now like to draw towards a conclusion about Epicurus’ philosophy of mind, by comparing it with the theories of his competitors.  According to O’Keefe (2010: 80-83), these were mainly Carneades (214-129 BCE), the head of the skeptical Academy, and Chrysippus (c.280-206 BCE), the third head of the Stoic school.

The most relevant criticism from Carneades is that positing a motion without a cause, like the atomic swerve, would be beside the point in solving the problem of free will (O’Keefe 2010: 82).  Carneades’ solution is to say that all events, including human actions, have causes.  These actions are the result of ‘voluntary motions of the mind’ rather than external causes.  He thinks that there is no reason to posit, in addition, a fundamental indeterminism like the atomic swerve (O’Keefe 2010: 82).  In this way, Carneades was perhaps the forerunner of a compatibilist solution to the problem of free will, allowing both determinism and voluntary choices to co-exist.

Chrysippus criticises Epicurus from the opposite direction.  He shows that causal determinism does not make the future inevitable in a manner that renders action or deliberation futile.  In this way, determinism is compatible with human agency (O’Keefe 2010: 82).

In conclusion, I think that Sedley, Carneades and Chrysippus have pointed the way towards a compatibilist solution to the problem of free will, that does not depend on the dubious Epicurean postulation of the atomic swerve.  I therefore think that their approaches to this problem are more compelling than those of Epicurus.

Bibliography

Annas, J. ‘Epicurus’ Philosophy of Mind’ Companions to Ancient Thought: 2 Psychology, S. Everson, ed. (1991) Cambridge: Cambridge University Press.

Baltzly, D. ATS3885: Stoic and Epicurean Philosophy Unit Reader (2016). Clayton: Faculty of Arts, Monash University.

Long A. A. and Sedley, D. N. The Hellenistic Philosophers, Volume 1 (1987). Cambridge: Cambridge University Press.

O’Keefe, T. Epicureanism. (2010). Berkeley: University of California Press.

Sedley, D. ‘Epicurean Anti-Reductionism’ in Jonathan Barnes and Mario Mignucci (eds), Matter and Metaphysics (1988). Naples: Bibliopolis, pp. 295-327.

Filed under Essays and talks

The Medieval Agrarian Economy

by Tim Harding

This striking image depicts the three main classes of medieval society – the clergy, the knights and the peasantry.[1]  Tellingly, the cleric and the knight are shown talking to each other; but the peasant is excluded from the conversation.  Even though the peasants comprised over 90% of the population, they were in many ways marginalized socially and economically.  So who were these peasants and what was their daily life like?

[Image: the three main classes of medieval society]

Source of image: Wikimedia Commons

The term ‘peasant’ essentially means a traditional farmer of the Middle Ages, although in everyday language it has come to mean a lower class agricultural labourer.  In the Central Middle Ages, that is the period from 1000 to 1300CE, European peasants were divided into four classes according to their legal status and their relationship to the land they farmed.  These classes were slave, serf, free tenant or land owner.  The first two classes were usually much poorer than the latter two.

There were several factors that influenced the lives of peasants during this period.  The reciprocal benefits of agricultural labour and warrior protection gave rise to closely settled manorial and feudal communities.[2]  More land was brought under cultivation by the communal clearing of forests, draining of swamps and the building of levees or dykes.[3]

The invention of a heavier wheeled plow enabled deeper cultivation of soils, including the burying of green manure from fallow land and also stubble from previous crops.  The deeper furrows also protected seed from wind and birds.[4]

[Image: a heavy wheeled plough]

Source of image: Wikimedia Commons

There was also a period of warmer temperatures, milder winters and higher rainfall at this time, resulting in longer growing seasons.[5]  Another important factor was the replacement of the Roman two-field rotation system by a more efficient three-field system, enabling two-thirds of the land to be under cultivation at any one time, instead of only half the land.  This image shows the three cropping fields (West, South and East) of a typical rural community, with the remaining quarter devoted to pasture, the Manor house and Church.[6]

[Image: the field layout of a typical medieval rural community]

Source of image: Bennett, Judith M., Medieval Europe – A Short History
(New York: McGraw-Hill, 2011). p. 142.

Interestingly, the typical length of a plow-strip was 220 yards, called a furlong (a word still used in horse racing today).  The width of a plow-strip was a rod, and a rectangle of 4 rods by one furlong became an acre.[7] (Four rods later became a ‘chain’ of 22 yards, so an acre was an area one furlong by one chain).
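
Using the standard figure of five and a half yards to the rod, the arithmetic behind these units works out as follows:

$$
4 \text{ rods} = 4 \times 5\tfrac{1}{2} \text{ yards} = 22 \text{ yards} = 1 \text{ chain}, \qquad
1 \text{ acre} = 1 \text{ furlong} \times 1 \text{ chain} = 220 \times 22 = 4{,}840 \text{ square yards}.
$$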

The resulting increases in agricultural yields raised farm production above subsistence levels for the first time in centuries.  These surpluses enabled not only trade, but also the storage of produce such as oats for the feeding of horses.  This in turn enabled plow-pulling oxen to be replaced by horses, which required less pasture, freeing land that could be reallocated to cropping.  Horses also moved and turned faster than oxen, resulting in even more efficiencies.[8]

Crop yields for wheat improved to an estimated four times the quantity of grain sown.  Typically, one quarter of the yield was reserved for the next planting, one or two quarters went to the lord of the manor as rent, and the remainder was either consumed as bread or beer, stored for the winter or sold at local markets.[9]
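
In round numbers, the margin this left a peasant household was slim.  Using the figures just quoted, for every bushel of seed sown:

$$
4 \text{ harvested} - 1 \text{ (seed for next season)} - (1 \text{ to } 2) \text{ (rent)} = 1 \text{ to } 2 \text{ bushels for bread, beer, storage or sale}.
$$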

Few peasants could afford meat to eat – they mainly lived on bread, beer and vegetables grown by women and children in small cottage gardens, plus eggs from chickens and milk from cows and goats.  Those living in coastal areas also ate fish. [10]

 Bibliography

Backman, Clifford R., The Worlds of Medieval Europe (Oxford: Oxford University Press, 2015).

Bennett, Judith M., Medieval Europe – A Short History (New York: McGraw-Hill, 2011).

Endnotes

[1] Bennett, Judith M., Medieval Europe – A Short History (New York: McGraw-Hill, 2011) p.135.

[2] Backman, Clifford R., The Worlds of Medieval Europe (Oxford: Oxford University Press, 2015) p.215

[3] Bennett, p.140.

[4] Backman, p.218.

[5] Bennett, p.139.

[6] Bennett, p.140-142.

[7] Backman, p.217.

[8] Backman, p.218.

[9] Backman, p.219.

[10] Backman, p.220.

Filed under Essays and talks

Did Descartes think that animals have feelings?

by Tim Harding

It is a common misconception that Descartes held the view that because animals cannot think, they have no feelings and do not suffer pain.  In 1952, this view was described by the Scottish philosopher and psychologist Norman Kemp Smith as a ‘monstrous thesis’ (Cottingham 1978: 554-556).  In this essay, I intend to examine two questions – firstly, whether Descartes actually held this view and secondly, whether this view is entailed by his other views about animal minds.  My answer is essentially that whilst the text references are somewhat unclear on this specific point, it is unlikely that Descartes held this view or that it was entailed by his other related views.

Rene Descartes (1596 – 1650CE)

Part of the problem in discussing these questions is a lack of clarity amongst Descartes’ objectors (and even Descartes himself) in the meanings of key terms such as ‘consciousness’, ‘self-consciousness’, ‘thought’, ‘awareness’, ‘feelings’ and ‘sensations’.  In an attempt to clarify the issues, Cottingham (1978: 551) helpfully suggests that the views attributed to Descartes be broken down into a number of distinct propositions:

(1)  Animals are machines.

(2)  Animals are automata.

(3)  Animals do not think.

(4)  Animals have no language.

(5)  Animals have no self-consciousness.

(6)  Animals have no consciousness.

(7)  Animals are totally without feeling.

Cottingham (1978: 552) argues that whilst Descartes advocated propositions (1) to (6), there is no evidence that he supported Proposition (7).  Nor is Proposition (7) entailed by the earlier propositions (Cottingham 1978: 554-556).  I will return to Proposition (7) later, after I have discussed the definitions of some key terms and the earlier propositions.

Proposition (1) is not asserted by Descartes in this explicit form; but Cottingham (1978: 552) argues that this is what Descartes means in Part V of his Discourse on Method, where he says that the body may be regarded ‘like a machine..’.  It is important to note that for Descartes, the human body is a machine in the same sense as an animal body.  This view is part of Descartes’ general scientific ‘mechanism’ where all animal behaviour is explainable in terms of physiological laws (Cottingham 1978: 552).

The definition of ‘automaton’ in Proposition (2) is significant, as it has led to some confusion in the descriptions of Descartes’ views.  Cottingham (1978: 553) argues that the primary Webster dictionary definition of ‘automaton’ is ‘a machine that is relatively self-operating’ (which is the Ancient Greek meaning of the ‘auto’ prefix).  It does not entail the absence or incapability of feeling, as some of Descartes’ critics have alleged (Cottingham 1978: 553).  What Descartes is saying is that the complex sequence of movements of machines, such as the moving statues found at the time in some of the royal fountains, could all be explained in terms of internal mechanisms such as cogs, levers and the like.  Descartes’ point here is that the mere complexity of animal movements is no more a bar to explanation of their behaviour than is the case with the movements of these fountain statues (Cottingham 1978: 553).

Regarding Proposition (3), a crucial and central difference between animals and human beings for Descartes is that animals do not think.  In a letter to the English philosopher Henry More dated 5 February 1649, Descartes says that ‘there is no prejudice to which we are all more accustomed from our earliest years than the belief that the dumb animals think’.  He also says that they do not have a mind; they lack reason; and they do not have a rational soul (Cottingham 1978: 554).  Descartes defined ‘thought’ in his Second Replies to the Meditations as follows: ‘Thought is a word that covers everything that exists in us in such a way that we are immediately conscious of it. Thus all the operations of will, intellect, imagination, and of the senses are thoughts’ (Radner and Radner, 1989: 22).  Descartes’ inclusion of the senses in this definition is ambiguous, as I will discuss later.

For Descartes, Proposition (3) is entailed by Proposition (4) claiming the absence of language in animals.  In a letter to the Marquess of Newcastle dated 23 November 1646, Descartes makes the point that the utterances of animals are never what the modern linguist Chomsky calls ‘stimulus free’ – they are always geared to and elicited by external factors (Cottingham 1978: 555; Radner and Radner, 1989: 41).  Descartes explains in his letter that the words of parrots do not count as language because they are not ‘relevant’ to the particular situation.  By contrast, even the ravings of insane persons are ‘relevant to particular topics’ though they do not ‘follow reason’ (Radner and Radner, 1989: 45).  This brings us to what is known as Descartes’ ‘language test’ – the ability to put words together in different ways that are appropriate to a wide variety of situations (Radner and Radner, 1989: 41).

In an attempt to overcome certain objections and counter-examples, Descartes later modifies his language test to claim that animals never communicate anything pertaining to ‘pure thought’, by which he means thought unaccompanied by any corporeal process or functions of the body (Radner and Radner, 1989: 48).  This modification is what is known as Descartes’ ‘action test’, which has been stated by Radner and Radner (1989: 50) as:

‘In order to determine whether a creature of type A is acting through reason, you compare its performance with that of creatures that do act through reason.  If A’s performance falls short of B’s, where B is a creature that acts through reason, then A does not act through reason but only from the disposition of its organs.  The B always stands for human beings because they are the only beings known for sure to have reason.  Only in the human case do we have direct access to the reasoning process.’

As for Propositions (5) and (6), whilst Descartes provides an explicit definition of ‘thought’ he does not offer one of ‘consciousness’, let alone ‘self-consciousness’ (Radner and Radner, 1989: 22-25).  Yet he inextricably links thought to consciousness in the Fourth Replies when he says ‘we cannot have any thought of which we are not aware at the very moment when it is in us’.  This implies that for Descartes, consciousness is not the act of thinking, but our awareness of our acts of thinking (Radner and Radner, 1989: 22-25).  This raises some complex issues regarding an infinite regression of thoughts (Radner and Radner, 1989: 22-25); but I need not discuss those issues for my current purposes.   Radner and Radner (1989: 30) suggest that self-consciousness is not necessarily the same thing as consciousness. It is the awareness of self, that it is one’s self that is having conscious thoughts.

With respect to Proposition (7), Cottingham (1978: 556-557) argues that Descartes did not commit himself to the view that animals do not have feelings or sensations.  He quotes from Descartes’ 1649 letter to More, where he says that the sounds made by livestock and companion animals are not genuine language, but are ways of ‘communicating to us…their natural impulses of anger, fear, hunger and so on’.  In the same letter, Descartes writes: ‘I should like to stress that I am talking of thought, not of…sensation; for…I deny sensation to no animal, in so far as it depends on a bodily organ.’  Cottingham also quotes from Descartes’ 1646 letter to Newcastle, where he wrote: ‘If you teach a magpie to say good-day to its mistress when it sees her coming, all you can possibly have done is to make the emitting of this word the expression of one of its feelings.’  In other words, Descartes denies in these letters that animals think, but not that they feel (Cottingham 1978: 557).

Notwithstanding the apparent vindication of Descartes in the text of these letters, Cottingham (1978: 557) next argues that Proposition (7) is consistent with Descartes’ dualism.  Since an animal has no mind or soul, it follows that it must belong wholly in the extended divisible world of corporeal substances.  Cottingham (1978: 557) thinks that this must be the authentic Cartesian position, presumably because of the central importance of dualism to Cartesian metaphysics.  On the other hand, I would argue that a lack of Cartesian thought does not entail a lack of feeling or sensation, as I discuss under Proposition (3) below.

The next question to consider is whether any of Propositions (1) to (6) are true; and if so, whether Proposition (7) is entailed by any of these earlier propositions that are true.

With respect to Proposition (1) I would argue that if the human body is a machine and humans have feelings, then it does not follow from this proposition alone that because animals are machines, they do not have feelings.  Similarly, even if Proposition (2) is true, it does not follow from the definition of automaton that animals do not have feelings either (Cottingham 1978: 553).

Proposition (3) is probably the area of greatest contention.  Radner and Radner (1989: 13) cite empirical evidence as far back as Aristotle indicating at least the possibility of thought by animals.  Aristotle cites the nest-building behaviour of swallows, where they mix mud and chaff.  If they run short of mud, they douse themselves with water and roll in the dust.  He also reports that a mother nightingale has been observed to give singing lessons to her young (Radner and Radner, 1989: 13).  More recently, there is a video on YouTube of a mother Labrador teaching her puppy how to go down stairs.[1]  There is another video of a crow solving a complex puzzle that most human children would have difficulty with.[2] Whilst nest building and singing teaching are arguably instinctive bird behaviours, dogs teaching puppies about stairs and crows solving complex puzzles are less likely to be instinctive.  They indicate the possibility of animals planning things in their minds.

Cottingham argues that even if Proposition (3) is true, it does not follow that Descartes is committed to a position that animals do not have feelings.  This is because Descartes separates feelings and sensations from thinking – for example a level of feeling or sensation that fall short of reflective awareness (Cottingham 1978: 555-556).  Radner and Radner suggest that the word ‘sensation’ is ambiguous for Descartes.  On the one hand, it could refer to the corporeal process of the transmission of nerve impulses to the brain; yet on the other hand it can also refer to the mental awareness that is associated with the corporeal process (Radner and Radner 1989: 22).

Another area of contention is in relation to Proposition (4).  Gassendi objected that Descartes was being unfair to animals in judging ‘language’ in only human terms.  He suggested that animals could have languages of their own that we do not understand (Radner and Radner 1989: 45).  I would add that human sign language illustrates that language need not be exclusively vocal.  Radner and Radner suggest that the natural cries and gestures of animals can be appropriate to the situation and can communicate useful information to other animals.  For example, a Thomson’s gazelle, seeing a predator lurking in the distance, assumes an alert posture and gives a short snort.  The other gazelles within hearing distance immediately stop grazing and look in the same direction.  The message is not just ‘I’m scared’ but it conveys a warning to look up and over in this direction (Radner and Radner 1989: 45).

Thomson’s gazelles

Radner and Radner (1989: 102-103) argue that neither the language test nor the action test leads to the conclusion that animals lack consciousness.  Either animals pass the language test or it is not a test of thought in the Cartesian sense.  The Radners argue that even if we were to grant that the action test shows that animals fail to act through reason, it still does not establish that they lack all modes of Cartesian thought (Radner and Radner 1989: 103).  I would also argue that Descartes’ modification of the language test to an ‘action test’ results in a proposition similar to Proposition (3) about thinking, which I have already discussed.

In conclusion, I have tried to clarify the various propositions and key terms involved in the allegation that Descartes believed that animals do not have feelings or sensations.  I have supported Cottingham’s view that the relevant texts by Descartes do not substantiate this allegation.  I have also supported Cottingham’s view that Propositions (1) to (6) do not entail Proposition (7), including by the use of some recent empirical evidence.  However, I do not support Cottingham’s suggestion that Descartes’ dualism nevertheless commits him to Proposition (7).

 BIBLIOGRAPHY

Cottingham, J., ‘A Brute to the Brutes?  Descartes’ Treatment of Animals’, Philosophy 53 (1978), pp. 551-59.

Radner, D., and Radner, M., (1989) Animal Consciousness. Buffalo, Prometheus Books.

[1] https://www.youtube.com/watch?v=Ht5dFBMgOGs

[2] https://www.youtube.com/watch?v=uNHPh8TEAXM

Filed under Essays and talks

The Birth of Experimental Science

by Tim Harding

(An edited version of this essay was published in The Skeptic magazine,
June 2016, Vol 36, No. 2, under the title ‘Out of the Dark’).

To the ancient Greeks, science was simply the knowledge of nature.  The acquisition of such knowledge was theoretical rather than experimental.  Logic and reason were applied to observations of nature in attempts to discover the underlying principles influencing phenomena.

After the Dark Ages, the revival of classical logic and reason in Western Europe was highly significant to the development of universities and subsequent intellectual progress.  It was also a precursor to the development of empirical scientific methods in the thirteenth century, which I think were even more important because of the later practical benefits of science to humanity.  The two most influential thinkers in the development of scientific methods at this time were the English philosophers Robert Grosseteste (1175-1253) and Roger Bacon (c.1219/20-c.1292).  (Note: Roger Bacon is not to be confused with Francis Bacon).

Apart from the relatively brief Carolingian Renaissance of the late eighth century to the ninth century, intellectual progress in Western Europe generally lagged behind that of the Byzantine and Islamic parts of the former Roman Empire.[1]  But from around 1050, Arabic, Jewish and Greek intellectual manuscripts started to become more available in the West in Latin translations.[2] [3]  These translations of ancient works had a major impact on Medieval European thought.  For instance, according to Pasnau, when James of Venice translated Aristotle’s Posterior Analytics from Greek into Latin in the second quarter of the twelfth century, ‘European philosophy got one of the great shocks of its long history’.[4]  This book had a dramatic impact on ‘natural philosophy’, as science was then called.

Under Pope Gregory VII, a Roman synod had in 1079 decreed that all bishops institute the teaching of liberal arts in their cathedrals.[5]  In the early twelfth century, universities began to emerge from cathedral schools, in response to the Gregorian reform and demands for literate administrators, accountants, lawyers and clerics.  The curriculum was loosely based on the seven liberal arts, consisting of a trivium of grammar, dialectic and rhetoric, plus a quadrivium of music, arithmetic, geometry and astronomy.[6]  Besides the liberal arts, some (but not all) universities offered three professional courses of law, medicine and theology.[7]

Dialectic was a method of learning by the use of arguments in a question and answer format, heavily influenced by the translations of Aristotle’s works.  This was known as ‘Scholasticism’ and included the use of logical reasoning as an alternative to the traditional appeals to authority.[8] [9]  For the first time, philosophers and scientists studied in close proximity to theologians trained to ask questions.[10]

At this stage, the most influential scientist was Robert Grosseteste (1175-1253) who was a leading English scholastic philosopher, scientist and theologian.  After studying theology in Paris from 1209 to 1214, he made his academic career at Oxford, becoming its Chancellor in 1234.[11]  He later became the Bishop of Lincoln, where there is now a university named after him. According to Luscombe, Grosseteste ‘seems to be the single most influential figure in shaping an Oxford interest in the empirical sciences that was to endure for the rest of the Middle Ages’.[12]

Robert Grosseteste (1175-1253)

Grosseteste’s knowledge of Greek enabled him to participate in the translation of Aristotelian science and ethics.[13] [14]  In the first Latin commentary on Aristotle’s Posterior Analytics, from the 1220s, Robert Grosseteste distinguishes four ways in which we might speak of scientia, or scientific knowledge.

‘It does not escape us, however, that having scientia is spoken of broadly, strictly, more strictly, and most strictly. [1] Scientia commonly so-called is [merely] comprehension of truth. Unstable contingent things are objects of scientia in this way. [2] Scientia strictly so-called is comprehension of the truth of things that are always or most of the time in one way. Natural things – namely, natural contingencies – are objects of scientia in this way. Of these things there is demonstration broadly so-called. [3] Scientia more strictly so-called is comprehension of the truth of things that are always in one way. Both the principles and the conclusions in mathematics are objects of scientia in this way. [4] Scientia most strictly so-called is comprehension of what exists immutably by means of the comprehension of that from which it has immutable being. This is by means of the comprehension of a cause that is immutable in its being and its causing.’[15]

Grosseteste’s first and second ways of describing scientia refer to the truth of the way things are by demonstration, that is by empirical observation.

Grosseteste himself went beyond Aristotelian science by investigating natural phenomena mathematically as well as empirically in controlled laboratory experiments.  He studied the refraction of light through glass lenses and drew conclusions about rainbows as the refraction of light through rain drops.[16]

Although Grosseteste is credited with introducing the idea of controlled scientific experiments, there is doubt whether he made this idea part of a general account of a scientific method for arriving at the principles of demonstrative science.[17]  This role fell to his disciple Roger Bacon (c.1219/20-c.1292CE), who was also an English philosopher; but unlike Bishop Grosseteste, Bacon was a Franciscan friar.

Roger Bacon (c.1219/20-c.1292)

Bacon taught in the Oxford arts faculty until about 1247, when he moved to Paris, which he disliked and where he made himself somewhat unpopular.  The only Parisian academic he admired was Peter of Maricourt, who reinforced the importance of experiment in scientific research and of mathematics to certainty.[18]

As a scientist, Roger Bacon continued Grosseteste’s investigation of optics in a laboratory setting.  He supplemented these optical experiments with studies of the physiology of the human eye by dissecting the eyes of cattle and pigs.[19]  Bacon also investigated the geometry of light, thus further applying mathematics to empirical observations.  According to Colish, ‘the very idea of treating qualities quantitatively was a move away from Aristotle, who held that quality and quantity are essentially different’.[20]

The most important work of Roger Bacon was his Opus Majus (Latin for ‘Greater Work’) written c.1267CE.  Part Six of this work contains a study of Experimental Science, in which Bacon advocates the verification of scientific reasoning by experiment.

‘…I now wish to unfold the principles of experimental science, since without experience nothing can be sufficiently known. For there are two modes of acquiring knowledge, namely, by reasoning and experience. Reasoning draws a conclusion and makes us grant the conclusion, but does not make the conclusion certain, nor does it remove doubt so that the mind may rest on the intuition of truth, unless the mind discovers it by the path of experience;..’[21]

Bacon’s aim was to provide a rigorous method for empirical science, analogous to the use of logic to test the validity of deductive arguments.  This new practical method consisted of a combination of mathematics and detailed experiential descriptions of discrete phenomena in nature.[22]  Roger Bacon illustrated his method by an investigation into the nature and cause of the rainbow.  For instance, he measured a value of 42 degrees for the maximum elevation of the rainbow.  This was probably done with an astrolabe, and by this technique Bacon advocated the skillful mathematical use of instruments for an experimental science.[23]

Optics from Roger Bacon’s De multiplicatione specierum
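
Bacon’s figure of roughly 42 degrees is easy to reproduce with modern tools.  The sketch below finds the minimum deviation of sunlight passing through a spherical raindrop (with one internal reflection) using Snell’s law in its modern form, which of course post-dates Bacon; it is a present-day check on the number, not a reconstruction of his geometric method, and it assumes a refractive index for water of about 1.333.

```python
import math

def deviation(theta_i_deg: float, n: float = 1.333) -> float:
    """Total deviation (degrees) of a ray entering a spherical raindrop,
    reflecting once internally and exiting; n is water's refractive index."""
    ti = math.radians(theta_i_deg)
    tr = math.asin(math.sin(ti) / n)                # Snell's law at entry
    return math.degrees(math.pi + 2 * ti - 4 * tr)  # refraction, reflection, refraction

# The rainbow appears at 180 degrees minus the minimum deviation,
# scanned here over incidence angles in 0.1-degree steps.
min_dev = min(deviation(i / 10) for i in range(1, 900))
print(f"primary rainbow at about {180 - min_dev:.1f} degrees")  # roughly 42 degrees
```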

The optical experiments that both Grosseteste and Bacon conducted were of practical usefulness in correcting deficiencies in human eyesight and the later invention of the telescope.  But more importantly, Roger Bacon is credited with being the originator of empirical scientific methods that were later further developed by scientists such as Galileo Galilei, Francis Bacon and Robert Hooke.  This is notwithstanding the twentieth century criticism of inductive scientific methods by philosophers of science such as Karl Popper, in favour of empirical falsification.[24]

The benefits of science to humanity – especially medical science – are well known and one example should suffice here.  An essential component of medical science is the clinical trial, which is the empirical testing of a proposed treatment on a group of patients whilst using another group of untreated patients as a blind control group to isolate and statistically measure the effectiveness of the treatment, whilst keeping all other factors constant.  This empirical approach is vastly superior to the theoretical approach of ancient physicians such as Hippocrates and Galen, and owes much to the pioneering work of Grosseteste and Bacon.  This is why I think that the development of empirical scientific methods was even more important than the revival of classical logic and reason, in terms of practical benefits to humanity. However, it is somewhat ironic that the later clashes between religion and science had their origins in the pioneering experiments of a bishop and a friar.
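
To make the clinical-trial point concrete, the sketch below compares recovery rates in two equal groups using a simple two-proportion z-test.  The patient counts are invented purely for illustration, and the test itself is a modern statistical convenience rather than anything the medieval pioneers had available.

```python
import math

# Invented counts for illustration: recoveries out of 200 patients in each arm.
treated_recovered, treated_n = 150, 200
control_recovered, control_n = 120, 200

p1 = treated_recovered / treated_n              # 0.75
p2 = control_recovered / control_n              # 0.60
p_pool = (treated_recovered + control_recovered) / (treated_n + control_n)

# Two-proportion z-test: is the observed difference bigger than chance variation?
se = math.sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / control_n))
z = (p1 - p2) / se
print(f"difference = {p1 - p2:.2f}, z = {z:.2f}")  # z of about 3.2 here
```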

Whilst the twelfth century revival of classical logic and reason was very significant in terms of Western intellectual progress generally, the development of empirical scientific methods was in my view the most important intellectual endeavour of the European thirteenth century; and Bacon’s contribution to this was greater than that of Grosseteste, because he devised general methodological principles for later scientists to build upon.

BIBLIOGRAPHY

 Primary sources

Bacon, Roger, Opus Majus. a Translation by Robert Belle Burke. (New York, Russell & Russell, 1962).

Grosseteste, Robert, Commentarius in Posteriorum Analyticorum Libros. In Pasnau, Robert ‘Science and Certainty,’ R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010).

Secondary works

Colish, Marcia, L., Medieval foundations of the Western intellectual tradition (New Haven: Yale University Press, 1997).

Hackett, Jeremiah, ‘Roger Bacon’, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2015/entries/roger-bacon/>.

Kenny, Anthony Medieval Philosophy  (Oxford: Clarendon Press 2005).

Lewis, Neil, ‘Robert Grosseteste’, The Stanford Encyclopedia of Philosophy (Summer 2013 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2013/entries/grosseteste/>.

Luscombe, David, Medieval thought (Oxford: Oxford University Press, 1997).

Moran Cruz, Jo Ann and Richard Geberding, ‘The New Learning, 1050-1200’, in Medieval Worlds: An Introduction to European History, 300-1492 (Boston: Houghton Mifflin, 2004), pp.350-376.

Pasnau, Robert ‘Science and Certainty,’ in R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010).

Popper, Karl The Logic of Scientific Discovery. (London and New York 1959).

ENDNOTES

[1] Colish, Marcia, L., Medieval foundations of the Western intellectual tradition (New Haven: Yale University Press, 1997), pp. x-xi.

[2] Moran Cruz, Jo Ann and Richard Geberding, ‘The New Learning, 1050-1200’, in Medieval Worlds: An Introduction to European History, 300-1492 (Boston: Houghton Mifflin, 2004), p.351.

[3] Colish, p.274.

[4] Pasnau, Robert ‘Science and Certainty,’ in R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010) p.357.

[5] Moran Cruz and Geberding p.351.

[6] Ibid. p.353

[7] Ibid. p. 356.

[8] Ibid, p.354.

[9] Colish, p.169.

[10] Colish, p.266.

[11] Colish, p.320.

[12] Luscombe, David, Medieval thought (Oxford: Oxford University Press, 1997). p.87.

[13] Colish, p.320.

[14] Luscombe, p.86.

[15] Grosseteste, Robert, Commentarius in Posteriorum Analyticorum Libros. In Pasnau, Robert ‘Science and Certainty,’ R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010) p. 358.

[16] Colish, p.320.

[17] Lewis, Neil, ‘Robert Grosseteste’, The Stanford Encyclopedia of Philosophy (Summer 2013 Edition), Edward N. Zalta (ed.),

[18] Kenny, Anthony Medieval Philosophy  (Oxford: Clarendon Press 2005). p.80.

[19] Colish, p.321.

[20] Colish, pp.321-322.

[21] Bacon, Roger Opus Majus. a Translation by Robert Belle Burke. (New York, Russell & Russell, 1962) p.583

[22] Hackett, Jeremiah, ‘Roger Bacon’, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), Section 5.4.3.

[23] Hackett, Section 5.4.3.

[24] Popper, Karl The Logic of Scientific Discovery.(London and New York 1959). Ch. 1.’…the theory to be developed in the following pages stands directly opposed to all attempts to operate with the ideas of inductive logic.’

Filed under Essays and talks

Descartes’ cogito and certainty

By Tim Harding

Rene Descartes (1596-1650CE) was a French mathematician, scientist and philosopher.  According to Copleston (1994:63-89) these three interests of his were interrelated, in the sense that he had a mathematical and scientific approach to his philosophy.  Mathematics ‘delighted him because of its certainty and clarity’ (Copleston 1994: 64).  His fundamental aim was to attain philosophical truth by the use of reason and scientific methods.  For him, the only kind of knowledge was that of which he could be certain.  His ideal of philosophy was to discover hitherto uncertain truths implied by more fundamental certain truths, in a similar manner to mathematical proofs (Copleston 1994: 66-70).

Using this approach, Descartes (1996) engages in a series of meditations to find a foundational truth of which he could be certain, and then to build on that foundation a body of implied knowledge of which he could also be certain.  He does this in a methodical way in his First Meditation by first withholding assent from opinions which are not completely certain, that is, where there is at least some reason for doubt, such as those acquired from the senses (Descartes 1996: 12).

Next, in his Second Meditation, Descartes concludes that one proposition of which he can be certain is ‘I am, I exist’ (Descartes 1996: 12).  Interestingly, in this text Descartes does not actually use the famous words ‘Cogito, ergo sum’ (which mean ‘I think, therefore I exist’) which he used in a slightly earlier work Discourse on Method.  This difference in wording has implications for the discussion which follows in this essay; however, for simplicity, I shall refer to this proposition as ‘the cogito’.

The central question for this essay is – how did Descartes come to be certain that the cogito is true?  There are rival interpretations of the basis of this certainty.  Is it the result of an inference from the premise ‘I think’, or is it derived from a different type of reasoning in which ‘I think’ is not needed as a premise?  The former is in the traditional form of an argument in which a conclusion is logically deduced from one or more premises; whereas the latter is not in the form of an argument at all, being instead something like an intuition or, as I shall later suggest, a ‘performative utterance’.

One of the difficulties in making these interpretations is that Descartes himself is not entirely consistent in the various expositions of his views in different texts, nor in his responses to objections to those views.  Another difficulty is that certain philosophical or linguistic concepts such as ‘performative utterance’ had not been developed at that time.  To clarify, I intend to analyse the cogito in terms of modern day philosophy, rather than as a historical investigation into what Descartes meant at the time.

The first interpretation is that the cogito is a deductive argument with a missing but implied first premise in the following traditional syllogistic form:

Premise 1: Everything that thinks exists.

Premise 2: I think.

Conclusion: Therefore, I exist.

This is a valid deductive argument known from antiquity as modus ponens.  The general form is ‘P implies Q; P is asserted to be true, so therefore Q must be true’.  As is the case with all valid deductive arguments, if the premises are true, then the conclusion must be true by virtue of the argument’s logical form.  However, the problem in this particular case is that we do not know that Premise 1 is true – that has not yet been established.  So although the argument is valid we cannot say that the conclusion ‘I exist’ is true on this basis.  For this reason, I do not think that this interpretation of the cogito is the correct one.
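
Set out schematically (in my own notation, writing $T(x)$ for ‘$x$ thinks’, $E(x)$ for ‘$x$ exists’ and $i$ for the thinker), the syllogistic reading instantiates the universal first premise and then applies modus ponens:

$$
\frac{\forall x\,\bigl(T(x)\rightarrow E(x)\bigr)}{T(i)\rightarrow E(i)}
\qquad\qquad
\frac{T(i)\rightarrow E(i) \qquad T(i)}{E(i)}
$$

The left-hand step is universal instantiation and the right-hand step is modus ponens; the argument is valid, which is precisely why everything turns on whether Premise 1 can be known to be true.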

It is worth mentioning that Descartes himself denies that the cogito is a syllogism in his reply to the Second Objections:

When we observe that we are thinking beings, this is a sort of primary notion, which is not the conclusion of any syllogism; and, moreover, when somebody says ‘I am thinking, therefore I am or exist’, he is not using a syllogism to deduce his existence from his thought, but recognising this as something self-evident, in a simple mental intuition (Descartes 1996: 68).

Descartes’ words in this quotation are consistent with the alternative interpretation that the proposition ‘I exist’ is self-evident as a result of intuition.  In this interpretation, we do not need either of the premises ‘Everything that thinks exists’ or ‘I think’ so there is no inference or deductive argument, let alone a syllogism.

I find the notion of intuition too vague for philosophical purposes – it seems to belong more in the realm of psychology or neuroscience.

Williams (1978) has endeavored to explain this alternative interpretation in terms of incorrigibility and self-verification.  A proposition p is incorrigible when it satisfies this description: if I believe that p, then p, for example ‘If I feel pain, I am in pain’.  This explanation has some similarities to Austin’s concept of a performative utterance (or ‘performative’ for short) where the utterance of a statement (in the appropriate circumstances) serves not only to describe an act but to actually perform the act (Austin 1962: 6).  So to say ‘I exist’ performs the act of existing.  The statement could not be made unless the person making it exists.

According to Williams (1978) the proposition ‘I think’ is self-evident because it satisfies the description: if p, then I believe that p.  If I think, then I believe that I think.  The proposition ‘I think’ is thus evident to me, in a way that the proposition ‘I exist’ is not.  While ‘I exist’ is incorrigible, it is not evident to me in the same way that ‘I think’ is evident.  Under this interpretation, Descartes has a reason for choosing to begin his argument with the premise ‘I think’ (Townsend 2004: 26).
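
The two descriptions can be contrasted schematically (my notation, writing $B(p)$ for ‘I believe that $p$’):

$$
\text{incorrigibility:}\quad B(p) \rightarrow p \qquad\qquad \text{self-evidence:}\quad p \rightarrow B(p)
$$

On Williams’ reading, ‘I exist’ satisfies the first schema, while ‘I think’ satisfies the second; this is why ‘I think’ is evident to me in a way that ‘I exist’ is not.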

In summary, the interpretation of the cogito as an inference or deductive argument fails for the reasons I have given.  In my view, the combination of incorrigibility and self-verification provides a sufficient justification for the truth of the statement ‘I am, I exist’, especially when incorrigibility is explained in terms of a performative utterance.

REFERENCES

Austin, J.L. (1962) How To Do Things With Words. London, Oxford University Press.

Copleston, F. (1994) A History of Philosophy Volume IV: Modern Philosophy. New York, Bantam Doubleday Publishing Group.

Descartes, R. (1996) Meditations on First Philosophy: With Selections from the Objections and Replies, trans. and ed. John Cottingham, Cambridge, Cambridge University Press.

Williams, B. (1978) ‘Descartes: the Project of Pure Enquiry’ in Descartes and the Defence of Reason, Study Guide (2004), ed. Aubrey Townsend, Clayton, Monash University.

Townsend, A. ed. (2004) Descartes and the Defence of Reason, Study Guide, Clayton, Monash University.

Filed under Essays and talks