Essays and talks written by Tim Harding

Skepticism – philosophical or scientific?

by Tim Harding

(This essay is based on a talk presented to the Victorian Skeptics in January 2017. An edited version was published in The Skeptic magazine, Vol. 37, No. 1, March 2017.)

Dictionaries often draw a distinction between the modern common meaning of skepticism, and its traditional philosophical meaning, which dates from antiquity.  The usual common dictionary definition is ‘a sceptical attitude; doubt as to the truth of something’; whereas the philosophical definition is ‘the theory that some or all types of knowledge are impossible’.  These definitions are of course quite different, and reflect the fact that the meanings of philosophical terms have drifted over the millennia.  The contemporary meaning of ‘scientific skepticism’ is different again, which I shall talk about later.

I should say at the outset that whilst I have a foot in both the scientific and philosophical camps, and although I will be writing here mainly about the less familiar philosophical skepticism, I personally support scientific skepticism over philosophical skepticism, for reasons I shall later explain.


But why are these definitions of skepticism important? And why do we spell it with a ‘k’ instead of a ‘c’? As an admin of a large online skeptics group (Skeptics in Australia), I am often asked such questions, so I have done a bit of investigating.

As to the first question, one of the main definitional issues I have faced is the difference between skepticism and what I call denialism.  (The second question I shall answer later.)  Skeptical newbies typically do a limited amount of googling, and what they often come up with is the common dictionary definition of skepticism, rather than the lesser-known scientific skepticism definition that we Australian skeptics use.  They tend to think that ‘scepticism’ (with a ‘c’) entails doubting or being skeptical of everything, including science, medicine, vaccination, biotechnology, moon landings, 9/11 and so on.  When we scientific skeptics express a contrary view, we are sometimes then accused of ‘not being real sceptics’.  So I think that definitions are important.

In my view, denialism is a person’s choice to deny certain particular facts.  It is an essentially irrational belief where the person substitutes his or her personal opinion for established knowledge.  Science denialism is the rejection of basic facts and concepts that are undisputed, well-supported parts of the scientific consensus on a subject, in favour of radical and controversial opinions of an unscientific nature.  Most real skeptics accept the findings of peer-reviewed science published in reputable scientific journals, at least for the time being, unless and until it is corrected by the scientific community.

Denialism can then give rise to conspiracy theories, as a way of trying to explain the discrepancy between scientific facts and personal opinions.  Here is the typical form of what I call the Scientific Conspiracy Fallacy:

Premise 1: I hold a certain belief.

Premise 2: The scientific evidence is inconsistent with my belief.

Conclusion: Therefore, the scientists are conspiring with the Big Bad Government/CIA/NASA/Big Pharma (choose whichever is convenient) to fake the evidence and undermine my belief.

It is a tall order to argue that the whole of science is genuinely mistaken. That is a debate that even the conspiracy theorists know they probably can’t win. So the most convenient explanation for the inconsistency is that scientists are engaged in a conspiracy to fake the evidence in specific cases.

Ancient Greek Skepticism

The word ‘skeptic’ originates from the early Greek skeptikos, meaning ‘inquiring, reflective’.

The Hellenistic period spans the Greek and Mediterranean history between the death of Alexander the Great in 323 BCE and the Roman victory over the Greeks at the Battle of Corinth in 146 BCE.  The beginning of this period also coincides with the death of the great philosopher, logician and scientist Aristotle of Stagira (384–322 BCE).

As he had no adult heir, Alexander’s empire was divided between the families of three of his generals.  This resulted in political conflicts and civil wars, in which prominent philosophers and other intellectuals did not want to take sides, in the interests of self-preservation.  So they retreated from public life into various cloistered schools of philosophy, the main ones being the Stoics, the Epicureans, the Cynics and the Skeptics.

As I mentioned earlier, the meanings of such philosophical terms have altered over 2000 years.  These philosophical schools had different theories as to how to attain eudaimonia, which roughly translates as the highest human good, or the fulfilment of human life.  They thought that the key to eudaimonia was to live in accordance with Nature, but they had different views as to how to achieve this.

In a nutshell, the Stoics advocated the development of self-control and fortitude as a means of overcoming destructive emotions.  The Epicureans regarded absence of pain and suffering as the source of happiness (not just hedonistic pleasure).   The Cynics (which means ‘dog like’) rejected conventional desires for wealth, power, health, or fame, and lived a simple life free from possessions.  Lastly, there were the Skeptics, whom I will now discuss in more detail.

During this Hellenistic period, there were actually two philosophical varieties of skepticism – the Academic Skeptics and the Pyrrhonist Skeptics.

In 266 BCE, Arcesilaus became head of the Platonic Academy.  The Academic Skeptics did not doubt the existence of truth in itself, only our capacities for obtaining it.  They went as far as thinking that knowledge is impossible – nothing can be known at all.  A later head of the Academy, Carneades, modified this rather extreme position into thinking that ideas or notions are never true, but only probable.  He thought there are degrees of probability, hence degrees of belief, leading to degrees of justification for action.  Academic Skepticism did not really catch on, and largely died out in the first century CE, with isolated attempts at revival from time to time.


The founder of Pyrrhonist Skepticism, Pyrrho of Elis (c. 365 – c. 275 BCE), was born in Elis on the west side of the Peloponnesian Peninsula (near Olympia).  Pyrrho travelled with Alexander the Great on his exploration of the East.  He encountered the Magi in Persia and even went as far as the Gymnosophists in India, who were naked ascetic gurus – not exactly a good image for modern skepticism.


Pyrrho differed from the Academic Skeptics in thinking that nothing can be known for certain.  He thought that their position ‘nothing can be known at all’ was dogmatic and self-contradictory, because it is itself a claim of certainty.  Pyrrho thought that the senses are easily fooled, and that reason too easily follows our desires.  Therefore we should withhold assent from non-evident propositions and remain in a state of perpetual inquiry about them.  This means that we are not necessarily skeptical of ‘evident propositions’, and that at least some knowledge is possible.  This position is closer to modern skepticism than Academic Skepticism.  Indeed, Pyrrhonism became a synonym for skepticism in the 17th century CE; but we are not quite there yet.

Sextus Empiricus (c. 160 – c. 210 CE) was a Greco-Roman philosopher who promoted Pyrrhonian skepticism.  It is thought that the word ‘empirical’ comes from his name; although the Greek word empeiria also means ‘experience’.  Sextus Empiricus first questioned the validity of inductive reasoning, positing that a universal rule could not be established from an incomplete set of particular instances, thus presaging David Hume’s ‘problem of induction’ about 1500 years later.

Skeptic with a ‘k’

The Romans were great inventors and engineers, but they are not renowned for science or skepticism.  On the contrary, they are better known for being superstitious; for instance, the Roman Senate sat only on ‘auspicious days’ thought to be favoured by the gods.  They had lots of pseudoscientific beliefs that we skeptics would now regard as quackery or woo.  For example, they thought that cabbage was a cure for many illnesses; and in around 78CE, the Roman author Pliny the Elder wrote: ‘I find that a bad cold in the head clears up if the sufferer kisses a mule on the nose’.

So I cannot see any valid historical reason for us to switch from the early Greek spelling of ‘skeptic’ to the Romanised ‘sceptic’.  Yes, I know that ‘skeptic’ is the American spelling and ‘sceptic’ is the British spelling, but I don’t think that alters anything.  The most likely explanation is that the Americans adopted the spelling of the early Greeks and the British adopted that of the Romans.


Modern philosophical skepticism

Somewhat counter-intuitively, the term ‘modern philosophy’ is used to distinguish more recent philosophy from the ancient philosophy of the early Greeks and the medieval philosophy of the Christian scholastics.  Thus ‘modern philosophy’ dates from the Renaissance of the 14th to the 17th centuries, although precisely when modern philosophy started within the Renaissance period is a matter of some scholarly dispute.

The defining feature of modern philosophical skepticism is the questioning of the validity of some or all types of knowledge.  So before going any further, we need to define knowledge.

The branch of philosophy dealing with the study of knowledge is called ‘epistemology’.  The ancient philosopher Plato famously defined knowledge as ‘justified true belief’, as illustrated by the Venn diagram below.  According to this definition, it is not sufficient that a belief is true to qualify as knowledge – a belief based on faith or even just a guess could happen to be true by mere coincidence.  So we need adequate justification of the truth of the belief for it to become knowledge.  Although there are a few exceptions, known as ‘Gettier problems’, this definition of knowledge is still largely accepted by modern philosophers, and will do for our purposes here.  (Epistemology is mainly about the justification of true beliefs rather than this basic definition of knowledge).

[Venn diagram: knowledge as the intersection of truths, beliefs and justification]
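Plato’s definition can be stated compactly.  As a minimal formalisation (the standard modern rendering of ‘justified true belief’, not Plato’s own notation), a subject S knows a proposition p just when three conditions hold jointly:

```latex
% Knowledge as justified true belief (JTB):
% S knows that p  iff  p is true, S believes that p, and S is justified in believing that p
K(S, p) \iff T(p) \land B(S, p) \land J(S, p)
```

The Gettier problems mentioned above are cases where all three conditions are satisfied and yet, intuitively, S does not know that p.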

There are also different types of knowledge that are relevant to this discussion.

A priori knowledge is knowledge that is known independently of experience.  For instance, we know that ‘all crows are birds’ without having to conduct an empirical survey of crows to investigate how many are birds and whether there are any crows that are not birds.  Crows are birds by definition – it is just impossible for there to be an animal that is a crow but is not a bird.

On the other hand, a posteriori knowledge is knowledge that is known by experience.  For instance, we only know that ‘all crows are black’ from empirical observations of crows.  It is not impossible that there is a crow that is not black, for example as a result of some genetic mutation.

The above distinction illustrates how not all knowledge needs to be empirical.  Indeed, one of the earliest modern philosophers and skeptics, Rene Descartes (1596-1650) was a French mathematician, scientist and philosopher.  (His name is where the mathematical word ‘Cartesian’ comes from).  These three interests of his were interrelated, in the sense that he had a mathematical and scientific approach to his philosophy.  Mathematics ‘delighted him because of its certainty and clarity’.  His fundamental aim was to attain philosophical truth by the use of reason and logical methods alone.  For him, the only kind of knowledge was that of which he could be certain.  His ideal of philosophy was to discover hitherto uncertain truths implied by more fundamental certain truths, in a similar manner to mathematical proofs.

Using this approach, Descartes engaged in a series of meditations to find a foundational truth of which he could be certain, and then to build on that foundation a body of implied knowledge of which he could also be certain.  He did this in a methodical way by first withholding assent from opinions which are not completely certain, that is, where there is at least some reason for doubt, such as those acquired from the senses.  Descartes concludes that one proposition of which he can be certain is ‘Cogito, ergo sum’ (which means ‘I think, therefore I exist’).

In contrast to Descartes, a different type of philosophical skeptic, David Hume (1711-1776), held that all human knowledge is ultimately founded solely in ‘experience’.  In what has become known as ‘Hume’s fork’, he held that statements are divided into two types: statements about ideas, which are necessary statements knowable a priori; and statements about the world, which are contingent and knowable a posteriori.

In modern philosophical terminology, members of the first group are known as analytic propositions and members of the latter as synthetic propositions.  Into the first class fall statements such as ‘2 + 2 = 4’, ‘all bachelors are unmarried’, and truths of mathematics and logic. Into the second class fall statements like ‘the sun rises in the morning’, and ‘the Earth has precisely one moon’.

Hume tried to prove that certainty does not exist in science. First, Hume notes that statements of the second type can never be entirely certain, due to the fallibility of our senses, the possibility of deception (for example, the modern ‘brain in a vat’ hypothesis) and other arguments made by philosophical skeptics.  It is always logically possible that any given statement about the world is false – hence the need for doubt and skepticism.

Hume formulated the ‘problem of induction’, which is the skeptical question of whether inductive reasoning leads to knowledge understood in the classic philosophical sense.  This problem focuses on the alleged lack of justification for generalising about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that ‘all swans we have seen are white, and therefore, all swans are white’, before the discovery of black swans in Western Australia).
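The logical asymmetry behind the problem can be set out explicitly (a standard formal rendering, not Hume’s own notation).  Writing S(x) for ‘x is a swan’ and W(x) for ‘x is white’, no finite number of positive instances entails the universal generalisation, yet a single counterexample deductively refutes it:

```latex
% Finitely many white swans never entail the universal claim:
W(s_1) \land W(s_2) \land \cdots \land W(s_n) \;\not\vdash\; \forall x\,(S(x) \to W(x))

% But one black swan refutes it outright:
S(a) \land \lnot W(a) \;\vdash\; \lnot \forall x\,(S(x) \to W(x))
```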

Immanuel Kant (1724-1804) was (and still is) a major philosophical figure who tried to show the way beyond the impasse that modern philosophy had reached between rationalists such as Descartes and empiricists such as Hume.  Kant is widely held to have synthesised these two early modern philosophical traditions.  And yet he was also a skeptic, albeit of a different variety.  Kant thought that only knowledge gained from empirical science is legitimate, which is a forerunner of modern scientific skepticism.  He thought that metaphysics was illegitimate and largely speculative; and in that sense he was a philosophical skeptic.

Scientific skepticism

In 1924, the Spanish philosopher Miguel de Unamuno disputed the common dictionary definition of skepticism.  He argued that ‘skeptic does not mean him who doubts, but him who investigates or researches as opposed to him who asserts and thinks that he has found’.  Sounds familiar, doesn’t it?

Modern scientific skepticism is different from philosophical skepticism, and yet to some extent was influenced by the ideas of Pyrrho of Elis, David Hume, Immanuel Kant and Miguel de Unamuno.

Most skeptics in the English-speaking world see the 1976 formation of the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) in the United States as the ‘birth of modern skepticism’.  (CSICOP is now called the Committee for Skeptical Inquiry – CSI).  However, CSICOP founder and philosophy professor Paul Kurtz has said that he actually modelled it after the Belgian Comité Para of 1949.  The Comité Para was partly formed as a response to a predatory industry of bogus psychics who were exploiting the grieving relatives of people who had gone missing during the Second World War.

[Photo: Paul Kurtz]

Kurtz recommended that CSICOP focus on testable paranormal and pseudoscientific claims and to leave religious aspects to others.  CSICOP popularised the usage of the terms ‘skeptic’, ‘skeptical’ and ‘skepticism’ by its magazine, Skeptical Inquirer, and directly inspired the foundation of many other skeptical organizations throughout the world, including the Australian Skeptics in 1980.

Through the public activism of groups such as CSICOP and the Australian Skeptics, the term ‘scientific skepticism’ has come to symbolise an activist movement as well as a type of applied philosophy.

There are several definitions of scientific skepticism, but the two that I think are most apt are those by the Canadian skeptic Daniel Loxton and the American skeptic Steven Novella.

Daniel Loxton’s definition is ‘the practice or project of studying paranormal and pseudoscientific claims through the lens of science and critical scholarship, and then sharing the results with the public.’

Steven Novella’s definition is ‘scientific skepticism is the application of skeptical philosophy, critical thinking skills, and knowledge of science and its methods to empirical claims, while remaining agnostic or neutral to non-empirical claims (except those that directly impact the practice of science).’  By this exception, I think he means religious beliefs that conflict with science, such as creationism or opposition to stem cell research.

In other words, scientific skeptics maintain that empirical investigation of reality leads to the truth, and that the scientific method is best suited to this purpose.  Scientific skeptics attempt to evaluate claims based on verifiability and falsifiability and discourage accepting claims on faith or anecdotal evidence.  This is different to philosophical skepticism, although inspired by it.

References

Descartes, R. (1641) Meditations on First Philosophy: With Selections from the Objections and Replies. Trans. and ed. John Cottingham. Cambridge: Cambridge University Press.

Hume, David (1748) An Enquiry Concerning Human Understanding. Gutenberg Press.

Kant, Immanuel (1787) Critique of Pure Reason. 2nd edition. Cambridge: Cambridge University Press.

Loxton, Daniel (2013) Why Is There a Skeptical Movement? (PDF). Retrieved 12 January 2017.

Novella, Steven (15 February 2013) ‘Scientific Skepticism, Rationalism, and Secularism’. Neurologica (blog). Retrieved 12 February 2017.

Russell, Bertrand (1961) History of Western Philosophy. 2nd edition. London: George Allen & Unwin.

Unamuno, Miguel de (1924) Essays and Soliloquies. London: Harrap.


The Stoic theory of universals, as compared to Platonic and Aristotelian theories

By Tim Harding

The philosophical problem of universals has endured since ancient times, and can have metaphysical or epistemic connotations, depending upon the philosopher in question.  I intend to show in this essay that both Plato’s and the Stoics’ theories of universals were not only derived from, but were ‘in the grip’ of their epistemological and metaphysical philosophies respectively; and were thus vulnerable to methodological criticism.  I propose to first outline the three alternative theories of Plato, Aristotle and the Stoics; and then to suggest that Aristotle’s theory, whilst developed as a criticism of Plato’s theory, stands more robustly on its own merits.

According to the Oxford Companion to Philosophy, particulars are instances of universals, as a particular apple is an instance of the universal known as ‘apple’.  (An implication of a particular is that it can only be in one place at any one time, which presents a kind of paradox that will be discussed later in this essay).   Even the definition of the ‘problem of universals’ is somewhat disputed by philosophers, but the problem generally is about whether universals exist, and if so what is their nature and relationship to particulars (Honderich 1995: 646, 887).

Philosophers such as Plato and Aristotle who hold that universals exist are known as ‘realists’, although they have differences about the ontological relationships between universals and particulars, as discussed in this essay.  Those who deny the existence of universals are known as ‘nominalists’.  According to Long and Sedley (1987:181), the Stoics were a type of nominalist known as ‘conceptualists’, as I shall discuss later.

Plato’s theory of universals (although he does not actually use this term) stems from his theory of knowledge.  Indeed, it is difficult to separate Plato’s ontology from his epistemology (Copleston 1962: 142).  In his Socratic dialogue Timaeus, Plato draws a distinction between permanent knowledge gained by reason and temporary opinion gained from the senses.

That which is apprehended by intelligence and reason is always in the same state; but that which is conceived by opinion with the help of sensation and without reason, is always in a process of becoming and perishing and never really is (Plato Timaeus 28a).

According to Copleston (1962: 143-146), this argument is part of Plato’s challenge to Protagoras’ theory that knowledge is sense-perception.  Plato argues that sense-perception on its own is not knowledge.  Truth is derived from the mind’s reflection and judgement, rather than from bare sensations.  To give an example of what Plato means, we may have a bare sensation of two white surfaces, but in order to judge the similarity of the two sensations, the mind’s activity is required.

Plato argues that true knowledge must be infallible, unchanging and of what is real, rather than merely of what is perceived.  He thinks that the individual objects of sense-perception, or particulars, cannot meet the criteria for knowledge because they are always in a state of flux and indefinite in number (Copleston 1962: 149).  So what knowledge does meet Plato’s criteria?  The answer to this question leads us to the category of universals.  Copleston gives the example of the judgement ‘The Athenian Constitution is good’.  The Constitution itself is open to change, for better or worse, but what is stable in this judgement is the universal quality of goodness.  Hence, within Plato’s epistemological framework, true knowledge is knowledge of the universal rather than the particular (Copleston 1962: 150).

We now proceed from Plato’s epistemology to his ontology of universals and particulars.  In terms of his third criterion of true knowledge being what is real rather than perceived, the essence of Plato’s Forms is that each true universal concept corresponds to an objective reality (Copleston 1962: 151).  The universal is what is real, and particulars are copies or instances of the Form.  For example, particulars such as beautiful things are instances of the universal or Form of Beauty.

…nothing makes a thing beautiful but the presence and participation of beauty in whatever way or manner obtained; for as to the manner I am uncertain, but I stoutly contend that by beauty all beautiful things become beautiful (Plato Phaedo, 653).

Baltzly (2016: F5.2-6) puts the general structure of Plato’s argument this way:

What we understand when we understand what justice, beauty, or generally F-ness are, doesn’t ever change.

But the sensible F particulars that exhibit these features are always changing.

So there must be a non-sensible universal – the Form of F-ness – that we understand when we achieve episteme (true knowledge).

Plato’s explanation for where this knowledge of Forms comes from, if not from sense-perceptions, is our existence as unembodied souls prior to this life (Baltzly 2016: F5.2-6).  To me, this explanation sounds like a ‘retrofit’ to solve a consequential problem with Plato’s theory, and is a methodological weakness of his account.

Turning now to Aristotle’s theory, whilst he shared Plato’s realism about the existence of universals, he had some fundamental differences about their ontological relationship to particulars.  In terms of Baltzly’s abovementioned description of Plato’s general argument, Plato thought that the universal, F-ness, could exist even if there were no F particulars.  In direct contrast, Aristotle held that there cannot be a universal, F-ness, unless there are some particulars that are F.  For example, Aristotle thought that the existence of the universal ‘humanity’ depends on there being actual instances of particular human beings (Baltzly 2016: F5.2-8).

As for the reality of universals, Aristotle agreed with Plato that the universal is the object of science.  For instance, the scientist is not concerned with discovering knowledge about particular pieces of gold, but with the essence or properties of gold as a universal.  It follows that if the universal is not real – if it has no objective reality – then there is no scientific knowledge.  But there is scientific knowledge; so by modus tollens, if scientific knowledge is knowledge of reality, then to be consistent the universal must also be real (Copleston 1962: 301-302).  (Whilst it is outside the scope of this essay to discuss whether scientific knowledge describes reality, to deny that there is any scientific knowledge would have major implications for epistemic coherence.)
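The modus tollens step can be made explicit (my reconstruction of the argument as Copleston presents it).  Let R stand for ‘the universal is real’ and K for ‘there is scientific knowledge’:

```latex
% Premise 1: if the universal is not real, there is no scientific knowledge
\lnot R \to \lnot K
% Premise 2: there is scientific knowledge
K
% Conclusion, by modus tollens on Premise 1:
\therefore R
```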

This is not to say that universals have ‘substance’, meaning that they consist of matter and form.  Aristotle maintains that only particulars have substance, and that universals exist as properties of particulars (Russell 1961: 176).  Russell quotes Aristotle as saying:

It seems impossible that any universal term should be the name of a substance. For…the substance of each thing is that which is peculiar to it, which does not belong to anything else; but the universal is common, since that is called universal which is such as to belong to more than one thing.

In other words, Aristotle thinks that a universal cannot exist by itself, but only in particular things.  Russell attempts to illustrate Aristotle’s position using a football analogy.  The game of football (a universal) cannot exist without football players (particulars); but the football players would still exist even if they never actually played football (Russell 1961: 176).

In almost complete contrast to both Plato and Aristotle, the Stoics denied the existence of universals, regarding them as concepts or mere figments of the rational mind.  In this way, the Stoics anticipated the conceptualism of the British empirical philosophers, such as Locke (Long and Sedley 1987:181).

The Stoic position is complicated by their being on the one hand materialists, and on the other holding a belief that there are non-existent things which ‘subsist’, such as incorporeal things like time and fictional entities such as a Centaur.  Their ontological hierarchy starts with the notion of a ‘something’, which they thought of as a proper subject of thought and discourse, whether or not it exists.  ‘Somethings’ can be subdivided into material bodies or corporeals, which exist; and incorporeals and things that are neither corporeal nor incorporeal, such as fictional entities, which subsist (Long and Sedley 1987: 163-164).  Long and Sedley (1987: 164) provide colourful examples of the distinction between existing and subsisting by saying:

There’s such a thing as a rainbow, and such a character as Mickey Mouse, but they don’t actually exist.

A significant exclusion from the Stoic ontological hierarchy is universals.  Despite the subsistence of a fictional character like Mickey Mouse, the universal man neither exists nor subsists, which is a curious inconsistency.  Stoic universals are dubbed by the neo-Platonist philosopher Simplicius (Long and Sedley 1987:180) as ‘not somethings’:

(2) One must also take into account the usage of the Stoics about generically qualified things—how according to them cases are expressed, how in their school universals are called ‘not-somethings’ and how their ignorance of the fact that not every substance signifies a ‘this Something’ gives rise to the Not-someone sophism, which relies on the form of expression.

Long and Sedley (1987:164) surmise from this analysis that for the Stoics, to be a ‘something’ is to be a particular, whether existent or subsistent.  Stoic ontology is occupied exclusively by particulars without universals.  In this way, universals are relegated to a metaphysical limbo, as far as the Stoics are concerned.  Nevertheless, they recognise the concept of universals as being not just a linguistic convenience but as useful conceptions or ways of thinking.  For this reason, Long and Sedley (1987:181-182) classify the Stoic position on universals as ‘conceptualist’, rather than simply nominalist.  (Nominalists think of universals simply as names for things that particulars have in common).  In a separate paper, Sedley (1985: 89) makes the distinction between nominalism and conceptualism using the following example:

After all the universal man is not identical with my generic thought of man; he is what I am thinking about when I have that thought.

One of the implications of a particular is that it can only be in one place at any one time, which gives rise to what was referred to above by Simplicius as the ‘Not-someone sophism’.  Sedley (1985: 87-88) paraphrases this sophism in the following terms:

If you make the mistake of hypostatizing the universal man into a Platonic abstract individual – if, in other words, you regard him as ‘someone’ – you will be unable to resist the following evidently fallacious syllogism.  ‘If someone is in Athens, he is not in Megara.  But man is in Athens.  Therefore man is not in Megara.’  The improper step here is clearly the substitution of ‘man’ in the minor premiss for ‘someone’ in the major premiss.  But it can be remedied only by the denial that the universal man is ‘someone’.  Therefore the universal man is not-someone.

Baltzly (2016: F5.2-15) makes the point that the same argument would serve to show that time is a not-something, yet the Stoics inconsistently accept that time subsists as an incorporeal something.

I have attempted to show above that Plato and the Stoics are locked into their theories about universals as a result of their prior philosophical positions.  Although arguing otherwise could have made them vulnerable to criticisms of inconsistency, their theories have methodological weaknesses that place them on shakier ground than Aristotelian realism.  However, I am also of the view that, apart from these methodological issues, Aristotelian Realism is substantively a better theory than Platonic Realism or Stoic Conceptualism or Nominalism.  In coming to this view, I have relied mainly on the work of the late Australian philosophy professor David Armstrong.

Armstrong argues that there are universals which exist independently of the classifying mind.  No universal is found except as either a property of a particular or as a relation between particulars.  He thus rejects both Platonic Realism and all varieties of Nominalism (Armstrong 1978: xiii).

Armstrong describes Aristotelian Realism as allowing that particulars have properties and that two different particulars may have the very same property.  However, Aristotelian Realism rejects any transcendent account of properties, that is, an account claiming that universals exist separated from particulars (Armstrong 1975: 146).  Armstrong argues that we cannot give an account of universality in terms of particularity, as the various types of Nominalism attempt to do.  Nor can we give an account of particulars in terms of universals, as the Platonic Realists do.  He believes that ‘while universality and particularity cannot be reduced to each other, they are interdependent, so that properties are always properties of a particular, and whatever is a particular is a particular having certain properties’ (Armstrong 1975: 146).

According to Armstrong, what is a genuine property of particulars is to be decided by scientific investigation, rather than simply a linguistic or conceptual classification (Armstrong 1975: 149).  Baltzly (2016: F5.2-18) paraphrases Armstrong’s argument this way:

  1. There are causes and effects in nature.

  2. Whether one event c causes another event e is independent of the classifications we make.

  3. Whether c causes e or not depends on the properties had by the things that figure in the events.

  4. So properties are independent of the classifications that we make and if this is so, then predicate nominalism and conceptualism are false.

Baltzly (2016: F5.2-18, 19) provides an illustration of this argument based on one given by Armstrong (1978: 42-43).  The effect of throwing a brick against a window will result from the physical properties of the brick and window, in terms of their relative weight and strength, independently of how we name or classify those properties.  So in this way, I would argue that the properties of particulars, that is universals, are ‘real’ rather than merely ‘figments of the mind’, as the Stoics would say.

As for Platonic Realism, Armstrong argues that if we reject it then we must reject the view that there are any uninstantiated properties (Armstrong 1975: 149); that is, the view that properties are transcendent beings that exist apart from their instances, such as in universals rather than particulars.  He provides an illustration of a hypothetical property of travelling faster than the speed of light.  It is a scientific fact that no such property exists, regardless of our concepts about it (Armstrong 1975: 149).  For this reason, Armstrong upholds ‘scientific realism’ over Platonic Realism, which he thinks is consistent with Aristotelian Realism – a position that I support.

In conclusion, I have attempted to show in this essay that the Aristotelian theory of universals is superior to the equivalent theories of both Plato and the Stoics.  I have argued this in terms of the relative methodologies as well as the substantive arguments.  I take the most compelling argument to be that of epistemic coherence regarding scientific knowledge: that the universal is the object of science.  It follows that if the universal is not real, if it has no objective reality, then there is no scientific knowledge.  But there is scientific knowledge; and if scientific knowledge is knowledge of reality, then to be consistent, the universal must also be real.

Bibliography

Armstrong, D.M. ‘Towards a Theory of Properties: Work in Progress on the Problem of Universals’ Philosophy, (1975), Vol.50 (192), pp.145-155.

Armstrong, D.M. ‘Nominalism and Realism’ Universals and Scientific Realism Volume 1, (1978) Cambridge: Cambridge University Press.

Baltzly, D. ATS3885: Stoic and Epicurean Philosophy Unit Reader (2016). Clayton: Faculty of Arts, Monash University.

Copleston, F. A History of Philosophy Volume 1: Greece and Rome (1962) New York: Doubleday.

Honderich, T. Oxford Companion to Philosophy (1995) Oxford: Oxford University Press.

Long A. A. and Sedley, D. N. The Hellenistic Philosophers, Volume 1 (1987). Cambridge: Cambridge University Press.

Plato, Phaedo in The Essential Plato trans. Benjamin Jowett, Book-of-the-Month Club (1999).

Plato, Timaeus in The Internet Classics Archive. http://classics.mit.edu//Plato/timaeus.html Viewed 2 October 2016.

Russell, B. History of Western Philosophy. 2nd edition (1961) London: George Allen & Unwin.

Sedley, D. ‘The Stoic Theory of Universals’ The Southern Journal of Philosophy (1985) Vol. XXIII. Supplement.


The Fallacy of Faulty Risk Assessment

by Tim Harding

(An edited version of this essay was published in The Skeptic magazine, September 2016, Vol. 36, No. 3.)

Australian Skeptics have tackled many false beliefs over the years, often in co-operation with other organisations.  We have had some successes – for instance, belief in homeopathy finally seems to be on the wane.  Nevertheless, false beliefs about vaccination and fluoridation just won’t lie down and die – despite concerted campaigns by medical practitioners, dentists, governments and more recently the media.  Why are these beliefs so immune to evidence and arguments?

There are several possible explanations for the persistence of these false beliefs.  One is denialism – the rejection of established facts in favour of personal opinions.  Closely related are conspiracy theories, which typically allege that facts have been suppressed or fabricated by ‘the powers that be’, in an attempt by denialists to explain the discrepancies between their opinions and the findings of science.  A third possibility is an error of reasoning or fallacy known as Faulty Risk Assessment, which is the topic of this article.

Before going on to discuss vaccination and fluoridation in terms of this fallacy, I would like to talk about risk and risk assessment in general.

What is risk assessment?

Hardly anything we do in life is risk-free. Whenever we travel in a car or even walk along a footpath, most people are aware that there is a small but finite risk of being injured or killed.  Yet this risk does not keep us away from roads.  We intuitively make an informal risk assessment that the level of this risk is acceptable in the circumstances.

In more formal terms, ‘risk’ may be defined as the probability or likelihood of something bad happening multiplied by the resulting cost/benefit ratio if it does happen.  Risk analysis is the process of discovering what risks are associated with a particular hazard, including the mechanisms that cause the hazard, then estimating the likelihood that the hazard will occur and the consequences if it does occur.

Risk assessment is the determination of the acceptability of risk using two dimensions of measurement – the likelihood of an adverse event occurring; and the severity of the consequences if it does occur, as illustrated in the diagram below.  (This two-dimensional risk assessment is a conceptually useful way of ranking risks, even if one or both of the dimensions cannot be measured quantitatively).

[Diagram: risk assessment matrix plotting likelihood of an adverse event against severity of consequences]
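To make the two-dimensional idea concrete, here is a minimal sketch of how such an assessment might be tabulated.  The 1-5 ordinal scales, the multiplication of the two ratings, and the example hazards are illustrative assumptions of mine, not taken from any particular risk standard:

```python
# Illustrative two-dimensional risk ranking (assumed 1-5 ordinal scales).
# Risk score = likelihood rating x consequence rating.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_score(likelihood: str, consequence: str) -> int:
    """Combine both dimensions of risk assessment into a single ranking score."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

# Hypothetical hazards for demonstration only.
hazards = [
    ("major city runs out of water", "rare", "catastrophic"),
    ("mild transient fever after vaccination", "possible", "negligible"),
]

# Rank hazards from highest to lowest risk score.
for name, lik, con in sorted(hazards, key=lambda h: risk_score(h[1], h[2]), reverse=True):
    print(f"{name}: {lik} x {con} -> score {risk_score(lik, con)}")
```

Even on this crude scheme, a rare event with catastrophic consequences can outrank a more likely event with negligible ones, which is exactly the point of the desalination example below.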

By way of illustration, the likelihood of something bad happening could be very low, but the consequences could be unacceptably high – enough to justify preventative action.  Conversely, the likelihood of an event could be higher, but the consequences could be low enough to justify ‘taking the risk’.

In assessing the consequences, consideration needs to be given to the size of the population likely to be affected, and the severity of the impact on those affected.  This will provide an indication of the aggregate effect of an adverse event.  For example, ‘high’ consequences might include significant harm to a small group of affected individuals, or moderate harm to a large number of individuals.

The fallacy is committed when a person focuses on the risks of an activity and ignores its benefits, or takes account of only one dimension of risk assessment and overlooks the other, or does both.

To give a practical example of a one-dimensional risk assessment, the desalination plant to augment Melbourne’s water supply has been called a ‘white elephant’ by some people, because it has not been needed since the last drought broke in March 2010.  But this criticism ignores the catastrophic consequences that could have occurred had the drought not broken.  In June 2009, Melbourne’s water storages fell to 25.5% of capacity, the lowest level since the huge Thomson Dam began filling in 1984.  This downward trend could have continued at that time, and could well be repeated during the inevitable next drought.

[Photo: Melbourne’s desalination plant at Wonthaggi]

No responsible government could afford to ‘take the risk’ of a major city of more than four million people running out of water.  People in temperate climates can survive without electricity or gas, but are likely to die of thirst in less than a week without water, not to mention the hygiene crisis that would occur without washing or toilet flushing.  The failure to safeguard the water supply of a major city is one of the most serious derelictions of government responsibility imaginable.

Turning now to the anti-vaccination and anti-fluoridation movements, they both commit the fallacy of Faulty Risk Assessment.  They focus on the very tiny likelihood of adverse side effects without considering the major benefits to public health from vaccination and the fluoridation of public water supplies, and the potentially severe consequences of not vaccinating or fluoridating.

Vaccination risks

The benefits of vaccination far outweigh its risks for all of the diseases where vaccines are available.  This includes influenza, pertussis (whooping cough), measles and tetanus – not to mention the terrible diseases that vaccination has eradicated from Australia such as smallpox, polio, diphtheria and tuberculosis.

As fellow skeptic Dr. Rachael Dunlop puts it:  ‘In many ways, vaccines are a victim of their own success, leading us to forget just how debilitating preventable diseases can be – not seeing kids in calipers or hospital wards full of iron lungs means we forget just how serious these diseases can be.’

No adult or teenager has ever died or become seriously ill in Australia from the side effects of vaccination; yet large numbers of people have died from the lack of vaccination.  The notorious Wakefield allegation in 1998 of a link between vaccination and autism has been discredited, retracted and found to be fraudulent.  Further evidence comes from a recently published exhaustive review examining 12,000 research articles covering eight different vaccines which also concluded there is no link between vaccines and autism.

According to Professor C Raina MacIntyre of UNSW, ‘Influenza virus is a serious infection, which causes 1,500 to 3,500 deaths in Australia each year.  Death occurs from direct viral effects (such as viral pneumonia) or from complications such as bacterial pneumonia and other secondary bacterial infections. In people with underlying coronary artery disease, influenza may also precipitate heart attacks, which flu vaccine may prevent.’

In 2010, increased rates of high fever and febrile convulsions were reported in children under 5 years of age after they were vaccinated with the Fluvax vaccine.  This vaccine has not been registered for use in this age group since late 2010 and therefore should not be given to children under 5 years of age. The available data indicate that there is a very low risk of fever, which is usually mild and transient, following vaccination with the other vaccine brands.  Any of these other vaccines can be used in children aged 6 months and older.

Australia was declared measles-free in 2005 by the World Health Organization (WHO) – before we stopped being so vigilant about vaccinating and outbreaks began to reappear.  The impact of vaccine complacency can be observed in the 2013 measles epidemic in Wales, where there were over 800 cases and one death, and many of those presenting were in the age group that missed out on MMR vaccination following the Wakefield scare.

After the link to autism was disproven, many anti-vaxxers shifted the blame to thiomersal, a mercury-containing component of relatively low toxicity to humans.  Small amounts of thiomersal were used as a preservative in some vaccines, but not the MMR vaccine.  Thiomersal was removed from all scheduled childhood vaccines in 2000.

In terms of risk assessment, Dr. Dunlop has pointed out that no vaccine is 100% effective and vaccines are not an absolute guarantee against infection. So while it’s still possible to get the disease you’ve been vaccinated against, disease severity and duration will be reduced.  Those who are vaccinated have fewer complications than people who aren’t.  With pertussis (whooping cough), for example, severe complications such as pneumonia and encephalitis (brain inflammation) occur almost exclusively in the unvaccinated.  So since the majority of the population is vaccinated, it follows that most people who get a particular disease will be vaccinated, but critically, they will suffer fewer complications and long-term effects than those who are completely unprotected.

Fluoridation risks

Public water fluoridation is the adjustment of the natural levels of fluoride in drinking water to a level that helps protect teeth against decay.  In many (but not all) parts of Australia, reticulated drinking water has been fluoridated since the early 1960s.

The benefits of fluoridation are well documented.  In November 2007, the NHMRC completed a review of the latest scientific evidence in relation to fluoride and health.  Based on this review, the NHMRC recommended community water fluoridation programs as the most effective and socially equitable community measure for protecting the population from tooth decay.  The scientific and medical support for the benefits of fluoridation certainly outweighs the claims of the vocal minority against it.

Fluoridation opponents over the years have claimed that putting fluoride in water causes health problems, is too expensive and is a form of mass medication.  Some conspiracy theorists go as far as to suggest that fluoridation is a communist plot to lower children’s IQ.  Yet, there is no evidence of any adverse health effects from the fluoridation of water at the recommended levels.  The only possible risk is from over-dosing water supplies as a result of automated equipment failure, but there is inline testing of fluoride levels with automated water shutoffs in the remote event of overdosing.  Any overdose would need to be massive to have any adverse effect on health.  The probability of such a massive overdose is extremely low.

Tooth decay remains a significant problem. In Victoria, for instance, more than 4,400 children under 10, including 197 two-year-olds and 828 four-year-olds, required general anaesthetic in hospital for the treatment of dental decay during 2009-10.  Indeed, 95% of all preventable dental admissions to hospital for children up to nine years old in Victoria are due to dental decay. Children under ten in non-optimally fluoridated areas are twice as likely to require a general anaesthetic for treatment of dental decay as children in optimally fluoridated areas.

As fellow skeptic and pain management specialist Dr. Michael Vagg has said, “The risks of general anaesthesia for multiple tooth extractions are not to be idly contemplated for children, and far outweigh the virtually non-existent risk from fluoridation.”  So in terms of risk assessment, the risks from not fluoridating water supplies are far greater than the risks of fluoridating.

Implications for skeptical activism

Anti-vaxxers and anti-fluoridationists who are motivated by denialism and conspiracy theories tend to believe whatever they want to believe, and dogmatically so.  Thus evidence and arguments are unlikely to have much influence on them.

But not all anti-vaxxers and anti-fluoridationists fall into this category.  Some may have been misled by false information, and thus could possibly be open to persuasion if the correct information is provided.

Others might even be aware of the correct information, but are assessing the risks fallaciously in the ways I have described in this article.  Their errors are not ones of fact, but errors of reasoning.  They too might be open to persuasion if education about sound risk assessment is provided.

I hope that analysing the false beliefs about vaccination and fluoridation from the perspective of the Faulty Risk Assessment Fallacy has provided yet another weapon in the skeptical armoury against these false beliefs.

References

Rachael Dunlop (2015) Six myths about vaccination – and why they’re wrong. The Conversation, Parkville.

C Raina MacIntyre (2016) Thinking about getting the 2016 flu vaccine? Here’s what you need to know. The Conversation, Parkville.

Mike Morgan (2012) How fluoride in water helps prevent tooth decay.  The Conversation, Parkville.

Michael Vagg (2013) Fluoride conspiracies + activism = harm to children. The Conversation, Parkville.

Government of Victoria (2014) Victorian Guide to Regulation. Department of Treasury and Finance, Melbourne.


Epicurean free will

by Tim Harding

Epicurus’ philosophy of mind is perhaps best explained in terms of Epicurean physics.  Epicurus was a materialist who held that the natural world is all that exists, so his physics is a general theory of what exists and its nature, including human bodies and minds (O’Keefe 2010: 11-12).

The Epicureans thought that only two things exist per se – atoms and void.  Atoms are the indivisible, most basic particles of matter, and they move through the void, which is empty space (O’Keefe 2010: 11-12).  Objects as we know them are compounds of atoms, and their various natures are explicable in terms of the different properties or attributes of their constituent atoms (Baltzly 2016: 02-1).

When Epicurus refers to the ‘soul’ he means what we today refer to as the mind, so ‘mind’ is the term I shall use here.  He identifies the mind with a compound of four types of atoms – air, heat, wind and a fourth nameless substance (Long and Sedley 1987: 14C).  Because the mind is composed of atoms, it must be corporeal – only the void is incorporeal (Long and Sedley 1987: 14A).  The mind is a part of the body (located in the chest), responsible for sensation, imagination, emotion and memory (Long and Sedley 1987: 14A, 14B, 15D).  Other functions belong to the ‘spirit’ which provides sensory input to, and carries out the instructions of the mind throughout the body (Long and Sedley 1987: 14B).

According to O’Keefe (2010: 62-63), another Epicurean argument for believing that mind is corporeal is as follows:

Premise 1: The mind moves the body and is moved by the body.

Premise 2: Only bodies can move and be moved by other bodies.

Conclusion: Therefore, the mind is a body.

Long and Sedley (1987: 107) identify Epicurus as arguably the first philosopher to recognise what we now know as the philosophical Problem of Free Will.  The problem is this: if it has always been causally necessary that we act as we do, then our actions cannot be up to us, and therefore we cannot be morally responsible for them (Long and Sedley 1987: 20A).  On the other hand, Epicurus notes that ‘we rebuke, oppose and reform each other as if the responsibility lay also in ourselves’ [Long and Sedley 1987: 20C(2)].

According to Cicero, ‘Epicurus thinks that necessity of fate is avoided by the swerve of atoms’ [Long and Sedley 1987: 20E(2)].  Baltzly explains this ‘atomic swerve’ as atoms moving a minimal distance sideways, apparently for no reason at all, from time to time.  This swerve from their natural downward motion results in atomic collisions (Baltzly 2016: F2.2-14).  Although this swerve is not explicitly mentioned by Epicurus himself, Cicero writes that:

‘Epicurus’ reason for introducing this theory was his fear that, if the atom’s motion was always the result of natural and necessary weight, we would have no freedom, since the mind would be moved in whatever way it was compelled by the movement of atoms’ [Long and Sedley 1987: 20E(3)].

Lucretius presents an argument that the atomic swerve enables free will (Long and Sedley 1987: 20F).  O’Keefe (2010: 74-75) states this argument in the following form:

Premise 1: If the atoms did not swerve, there would not be ‘free will’.

Premise 2: There is free will.

Conclusion: Therefore, atoms swerve.

This argument is logically valid, so if the premises are true the conclusion must be true.  Lucretius spends most of this passage trying to show that Premise 2 is true.  However, even if Premise 2 is true, we do not know that Premise 1 is true.  The atomic swerve introduces a slight element of indeterminacy, but it does not necessarily entail free will, since no mechanism is given to explain the connection between the two concepts.  Indeed, Annas (1991: 87) argues that there is a fundamental problem in thinking of human motivation in terms of only the motion of atoms.  She thinks that the occurrence of atomic swerves in ordinary macro-objects has no effect on them (Annas 1991: 96-97).  For this reason, I do not think that the introduction of random atomic swerves solves the Problem of Free Will.

Sedley (in Long and Sedley 1987: 107) agrees that taken in isolation such a solution is ‘notoriously unsatisfactory’.  He offers an alternative explanation in terms of ‘development’, which contributes psychological autonomy and which is distinct from the atoms in a kind of differential or transcendent way (Long and Sedley 1987: 107-18).  In other words, these distinct developments are psychological rather than physical properties of the mind.  In particular, there is the development of consciousness, which is an ‘emergent’ property of complex atomic systems like human beings (Baltzly 2016: F2.2-17).

In a later paper, Sedley provides some more detail on what he means by emergent properties:

‘I take Epicurus to be sketching some sort of theory of radically emergent properties.  Matter in certain complex states can, he holds, acquire entirely new, non-physical properties, not governed by the laws of physics’ (Sedley 1988: 323-324).

It is important to note that Sedley is attempting here to make a connection between free will and the atomic swerve.  As Baltzly (2016: F2.2 – 18) puts it, the swerve means that not every motion of the atoms which make up our bodies is determined by those atoms themselves.  Baltzly thinks that the swerve does not introduce an element of randomness or indeterminacy into our free choices:

‘Rather, the swerve leaves a gap where the psychological properties of my soul [mind] can cause something to happen where behaviour of the atoms that make up my soul [mind] leave it open what will happen’ (Baltzly 2016: F2.2 – 18).

My own view is that Sedley and Baltzly provide a plausible explanation of the connection between Epicurus’ atomic swerve and free will.  It is possible that consciousness is an emergent psychological property of the material mind.  Free will could be seen as a manifestation of consciousness.  Whilst we cannot yet fully explain what consciousness is and how it works, there is little doubt that consciousness exists.  If consciousness can exist, then so can free will.  However, where I part company with Sedley is that I find Epicurus’ theory of the atomic swerve unconvincing.  Neither Epicurus nor his followers provide any evidence for the existence of the atomic swerve.  It has been postulated as a kind of ‘retrofit’ in an attempt to solve the problem of free will by introducing an imaginary element of indeterminacy.  I think that Sedley’s idea of emergence could help to explain free will even in the absence of the Epicurean atomic swerve.

I would now like to draw towards a conclusion about Epicurus’ philosophy of mind by comparing it with the theories of his competitors.  According to O’Keefe (2010: 80-83), these were mainly Carneades (214-129 BCE), the head of the skeptical Academy, and Chrysippus (c. 280-206 BCE), the third head of the Stoic school.

The most relevant criticism from Carneades is that positing a motion without a cause, like the atomic swerve, would be beside the point in solving the problem of free will (O’Keefe 2010: 82).  Carneades’ solution is to say that all events, including human actions, have causes.  These actions are the result of ‘voluntary motions of the mind’ rather than external causes.  He thinks that there is no reason to posit, in addition, a fundamental indeterminism like the atomic swerve (O’Keefe 2010: 82).  In this way, Carneades was perhaps the forerunner of a compatibilist solution to the problem of free will, allowing both determinism and voluntary choices to co-exist.

Chrysippus criticises Epicurus from the opposite direction.  He shows that causal determinism does not make the future inevitable in a manner that renders action or deliberation futile.  In this way, determinism is compatible with human agency (O’Keefe 2010: 82).

In conclusion, I think that Sedley, Carneades and Chrysippus have pointed the way towards a compatibilist solution to the problem of free will, that does not depend on the dubious Epicurean postulation of the atomic swerve.  I therefore think that their approaches to this problem are more compelling than those of Epicurus.

Bibliography

Annas, J. ‘Epicurus’ Philosophy of Mind’ Companions to Ancient Thought: 2 Psychology, S. Everson, ed. (1991) Cambridge: Cambridge University Press.

Baltzly, D. ATS3885: Stoic and Epicurean Philosophy Unit Reader (2016). Clayton: Faculty of Arts, Monash University.

Long A. A. and Sedley, D. N. The Hellenistic Philosophers, Volume 1 (1987). Cambridge: Cambridge University Press.

O’Keefe, T. Epicureanism. (2010). Berkeley: University of California Press.

Sedley, D. ‘Epicurean Anti-Reductionism’ in Jonathan Barnes and Mario Mignucci (eds.), Matter and Metaphysics (1988). Bibliopolis, 295–327.


The Medieval Agrarian Economy

by Tim Harding

This striking image depicts the three main classes of medieval society – the clergy, the knights and the peasantry.[1]  Tellingly, the cleric and the knight are shown talking to each other; but the peasant is excluded from the conversation.  Even though the peasants comprised over 90% of the population, they were in many ways marginalized socially and economically.  So who were these peasants and what was their daily life like?

Source of image: Wikimedia Commons

The term ‘peasant’ essentially means a traditional farmer of the Middle Ages, although in everyday language it has come to mean a lower-class agricultural labourer.  In the Central Middle Ages, that is the period from 1000 to 1300CE, European peasants were divided into four classes according to their legal status and their relationship to the land they farmed.  These classes were slave, serf, free tenant or land owner.  The first two classes were usually much poorer than the latter two.

There were several factors that influenced the lives of peasants during this period.  The reciprocal benefits of agricultural labour and warrior protection gave rise to closely settled manorial and feudal communities.[2]  More land was brought under cultivation by the communal clearing of forests, draining of swamps and the building of levees or dykes.[3]

The invention of a heavier wheeled plow enabled deeper cultivation of soils, including the burying of green manure from fallow land and also stubble from previous crops.  The deeper furrows also protected seed from wind and birds.[4]

Source of image: Wikimedia Commons

There was also a period of warmer temperatures, milder winters and higher rainfall at this time, resulting in longer growing seasons.[5]  Another important factor was the replacement of the Roman two-field rotation system by a more efficient three-field system, enabling two-thirds of the land to be under cultivation at any one time, instead of only half.  This image shows the three cropping fields (West, South and East) of a typical rural community, with the remaining quarter devoted to pasture, the manor house and church.[6]

Source of image: Bennett, Judith M., Medieval Europe – A Short History (New York: McGraw-Hill, 2011), p. 142.
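
The gain from the three-field system is easy to quantify; here is a throwaway Python check (the fractions come straight from the two rotation schemes just described):

```python
two_field = 1 / 2    # fraction of arable land cropped each year under two-field rotation
three_field = 2 / 3  # fraction cropped each year under three-field rotation

print(three_field / two_field)  # 1.333... -> one-third more land under crop each year
```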

Interestingly, the typical length of a plow-strip was 220 yards, called a furlong (a word still used in horse racing today).  The width of a plow-strip was a rod, and a rectangle of 4 rods by one furlong became an acre.[7] (Four rods later became a ‘chain’ of 22 yards, so an acre was an area one furlong by one chain).
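
Those old units compose neatly, as a short Python check confirms (the yard equivalents are the standard imperial values):

```python
FURLONG = 220    # yards
ROD = 5.5        # yards
CHAIN = 4 * ROD  # 22.0 yards

print(FURLONG * CHAIN)  # 4840.0 square yards -- the statutory acre
```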

The resulting increases in agricultural yields raised farm production above subsistence levels for the first time in centuries.  These surpluses enabled not only trade, but also the storage of produce such as oats for the feeding of horses.  This in turn enabled plow-pulling oxen to be replaced by horses, which required less pasture, freeing land that could be reallocated to cropping.  Horses also moved and turned faster than oxen, resulting in even more efficiencies.[8]

Crop yields for wheat improved to an estimated four times the quantity of grain sown.  Typically, one quarter of the yield was reserved for the next planting, one or two quarters went to the lord of the manor as rent, and the remainder was either consumed as bread or beer, stored for the winter or sold at local markets.[9]
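
To make that division concrete, here is a sketch with invented figures (assuming the fourfold yield estimated above and a rent of one quarter):

```python
sown = 10                   # bushels of seed planted (illustrative figure only)
harvest = 4 * sown          # the estimated fourfold yield

seed_reserve = harvest / 4  # one quarter kept for the next planting
rent = harvest / 4          # one quarter to the lord of the manor (sometimes two)
remainder = harvest - seed_reserve - rent

print(harvest, seed_reserve, rent, remainder)  # 40 10.0 10.0 20.0
```

Note that the quarter reserved for seed exactly replaces what was sown, so at a fourfold yield no more than half the harvest was left for bread, beer, winter storage or sale.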

Few peasants could afford meat to eat – they mainly lived on bread, beer and vegetables grown by women and children in small cottage gardens, plus eggs from chickens and milk from cows and goats.  Those living in coastal areas also ate fish.[10]

Bibliography

Backman, Clifford R., The Worlds of Medieval Europe (Oxford: Oxford University Press, 2015).

Bennett, Judith M., Medieval Europe – A Short History (New York: McGraw-Hill, 2011).

Endnotes

[1] Bennett, Judith M., Medieval Europe – A Short History (New York: McGraw-Hill, 2011) p.135.

[2] Backman, Clifford R., The Worlds of Medieval Europe (Oxford: Oxford University Press, 2015) p.215.

[3] Bennett, p.140.

[4] Backman, p.218.

[5] Bennett, p.139.

[6] Bennett, pp.140-142.

[7] Backman, p.217.

[8] Backman, p.218.

[9] Backman, p.219.

[10] Backman, p.220.


Did Descartes think that animals have feelings?

by Tim Harding

It is a common misconception that Descartes held the view that because animals cannot think, they have no feelings and do not suffer pain.  In 1952, this view was described by the Scottish philosopher and psychologist Norman Kemp Smith as a ‘monstrous thesis’ (Cottingham 1978: 554-556).  In this essay, I intend to examine two questions – firstly, whether Descartes actually held this view and secondly, whether this view is entailed by his other views about animal minds.  My answer is essentially that whilst the text references are somewhat unclear on this specific point, it is unlikely that Descartes held this view or that it was entailed by his other related views.

Rene Descartes (1596-1650CE)

Part of the problem in discussing these questions is a lack of clarity amongst Descartes’ objectors (and even Descartes himself) in the meanings of key terms such as ‘consciousness’, ‘self-consciousness’, ‘thought’, ‘awareness’, ‘feelings’ and ‘sensations’.  In an attempt to clarify the issues, Cottingham (1978: 551) helpfully suggests that the views attributed to Descartes be broken down into a number of distinct propositions:

(1)  Animals are machines.

(2)  Animals are automata.

(3)  Animals do not think.

(4)  Animals have no language.

(5)  Animals have no self-consciousness.

(6)  Animals have no consciousness.

(7)  Animals are totally without feeling.

Cottingham (1978: 552) argues that whilst Descartes advocated Propositions (1) to (5), there is no evidence that he supported Proposition (7).  Nor is Proposition (7) entailed by the earlier propositions (Cottingham 1978: 554-556).  I will return to Proposition (7) later, after I have discussed the definitions of some key terms and the earlier propositions.

Proposition (1) is not asserted by Descartes in this explicit form; but Cottingham (1978: 552) argues that this is what Descartes means in Part V of his Discourse on Method, where he says that the body may be regarded ‘like a machine…’.  It is important to note that for Descartes, the human body is a machine in the same sense as an animal body.  This view is part of Descartes’ general scientific ‘mechanism’ where all animal behaviour is explainable in terms of physiological laws (Cottingham 1978: 552).

The definition of ‘automaton’ in Proposition (2) is significant, as it has led to some confusion in the descriptions of Descartes’ views.  Cottingham (1978: 553) argues that the primary Webster dictionary definition of ‘automaton’ is ‘a machine that is relatively self-operating’ (which is the Ancient Greek meaning of the ‘auto’ prefix).  It does not entail the absence or incapability of feeling, as some of Descartes’ critics have alleged (Cottingham 1978: 553).  What Descartes is saying is that the complex sequence of movements of machines, such as the moving statues found at the time in some of the royal fountains, could all be explained in terms of internal mechanisms such as cogs, levers and the like.  Descartes’ point here is that the mere complexity of animal movements is no more a bar to explanation of their behaviour than is the case with the movements of these fountain statues (Cottingham 1978: 553).

Regarding Proposition (3), a crucial and central difference between animals and human beings for Descartes is that animals do not think.  In a letter to the English philosopher Henry More dated 5 February 1649, Descartes says that ‘there is no prejudice to which we are all more accustomed from our earliest years than the belief that the dumb animals think’.  He also says that they do not have a mind; they lack reason; and they do not have a rational soul (Cottingham 1978: 554).  Descartes defined ‘thought’ in his Second Replies to the Meditations as follows: ‘Thought is a word that covers everything that exists in us in such a way that we are immediately conscious of it. Thus all the operations of will, intellect, imagination, and of the senses are thoughts’ (Radner and Radner, 1989: 22).  Descartes’ inclusion of the senses in this definition is ambiguous, as I will discuss later.

For Descartes, Proposition (3) is entailed by Proposition (4) claiming the absence of language in animals.  In a letter to the Marquess of Newcastle dated 23 November 1646, Descartes makes the point that the utterances of animals are never what the modern linguist Chomsky calls ‘stimulus free’ – they are always geared to and elicited by external factors (Cottingham 1978: 555; Radner and Radner, 1989: 41).  Descartes explains in his letter that the words of parrots do not count as language because they are not ‘relevant’ to the particular situation.  By contrast, even the ravings of insane persons are ‘relevant to particular topics’ though they do not ‘follow reason’ (Radner and Radner, 1989: 45).  This brings us to what is known as Descartes’ ‘language test’ – the ability to put words together in different ways that are appropriate to a wide variety of situations (Radner and Radner, 1989: 41).

In an attempt to overcome certain objections and counter examples, Descartes later modifies his language test to claim that animals never communicate anything pertaining to ‘pure thought’, by which he means thought unaccompanied by any corporeal process or functions of the body (Radner and Radner, 1989: 48).  This modification is what is known as Descartes’ ‘action test’, which has been stated by Radner and Radner (1989: 50) as:

‘In order to determine whether a creature of type A is acting through reason, you compare its performance with that of creatures that do act through reason.  If A’s performance falls short of B’s, where B is a creature that acts through reason, then A does not act through reason but only from the disposition of its organs.  The B always stands for human beings because they are the only beings known for sure to have reason.  Only in the human case do we have direct access to the reasoning process.’

As for Propositions (5) and (6), whilst Descartes provides an explicit definition of ‘thought’, he does not offer one of ‘consciousness’, let alone ‘self-consciousness’ (Radner and Radner, 1989: 22-25).  Yet he inextricably links thought to consciousness in the Fourth Replies when he says ‘we cannot have any thought of which we are not aware at the very moment when it is in us’.  This implies that for Descartes, consciousness is not the act of thinking, but our awareness of our acts of thinking (Radner and Radner, 1989: 22-25).  This raises some complex issues regarding an infinite regression of thoughts (Radner and Radner, 1989: 22-25); but I need not discuss those issues for my current purposes.  Radner and Radner (1989: 30) suggest that self-consciousness is not necessarily the same thing as consciousness.  It is the awareness of self, that it is one’s self that is having conscious thoughts.

With respect to Proposition (7), Cottingham (1978: 556-557) argues that Descartes did not commit himself to the view that animals do not have feelings or sensations.  He quotes from Descartes’ 1649 letter to More, where he says that the sounds made by livestock and companion animals are not genuine language, but are ways of ‘communicating to us…their natural impulses of anger, fear, hunger and so on’.  In the same letter, Descartes writes: ‘I should like to stress that I am talking of thought, not of…sensation; for…I deny sensation to no animal, in so far as it depends on a bodily organ.’  Cottingham also quotes from Descartes’ 1646 letter to Newcastle, where he wrote: ‘If you teach a magpie to say good-day to its mistress when it sees her coming, all you can possibly have done is to make the emitting of this word the expression of one of its feelings.’  In other words, Descartes denies in these letters that animals think, but not that they feel (Cottingham 1978: 557).

Notwithstanding the apparent vindication of Descartes in the text of these letters, Cottingham (1978: 557) next argues that Proposition (7) is consistent with Descartes’ dualism.  Since an animal has no mind or soul, it follows that it must belong wholly in the extended divisible world of corporeal substances.  Cottingham (1978: 557) thinks that this must be the authentic Cartesian position, presumably because of the central importance of dualism to Cartesian metaphysics.  On the other hand, I would argue that a lack of Cartesian thought does not entail a lack of feeling or sensation, as I discuss under Proposition (3) below.

The next question to consider is whether any of Propositions (1) to (6) are true; and if so, whether Proposition (7) is entailed by any of these earlier propositions that are true.

With respect to Proposition (1), I would argue that if the human body is a machine and humans have feelings, then it does not follow from this proposition alone that because animals are machines, they do not have feelings.  Similarly, even if Proposition (2) is true, it does not follow from the definition of automaton that animals do not have feelings either (Cottingham 1978: 553).

Proposition (3) is probably the area of greatest contention.  Radner and Radner (1989: 13) cite empirical evidence as far back as Aristotle indicating at least the possibility of thought by animals.  Aristotle cites the nest-building behaviour of swallows, where they mix mud and chaff.  If they run short of mud, they douse themselves with water and roll in the dust.  He also reports that a mother nightingale has been observed to give singing lessons to her young (Radner and Radner, 1989: 13).  More recently, there is a video on YouTube of a mother Labrador teaching her puppy how to go down stairs.[1]  There is another video of a crow solving a complex puzzle that most human children would have difficulty with.[2]  Whilst nest building and singing lessons are arguably instinctive bird behaviours, dogs teaching puppies about stairs and crows solving complex puzzles are less likely to be instinctive.  They indicate the possibility of animals planning things in their minds.

Cottingham argues that even if Proposition (3) is true, it does not follow that Descartes is committed to a position that animals do not have feelings.  This is because Descartes separates feelings and sensations from thinking – for example, a level of feeling or sensation that falls short of reflective awareness (Cottingham 1978: 555-556).  Radner and Radner suggest that the word ‘sensation’ is ambiguous for Descartes.  On the one hand, it could refer to the corporeal process of the transmission of nerve impulses to the brain; yet on the other hand it can also refer to the mental awareness that is associated with the corporeal process (Radner and Radner 1989: 22).

Another area of contention is in relation to Proposition (4).  Gassendi objected that Descartes was being unfair to animals in judging ‘language’ in only human terms.  He suggested that animals could have languages of their own that we do not understand (Radner and Radner 1989: 45).  I would add that human sign language illustrates that language need not be exclusively vocal.  Radner and Radner suggest that the natural cries and gestures of animals can be appropriate to the situation and can communicate useful information to other animals.  For example, a Thomson’s gazelle, seeing a predator lurking in the distance, assumes an alert posture and gives a short snort.  The other gazelles within hearing distance immediately stop grazing and look in the same direction.  The message is not just ‘I’m scared’ but it conveys a warning to look up and over in this direction (Radner and Radner 1989: 45).

Thomson’s gazelles

Radner and Radner (1989: 102-103) argue that neither the language test nor the action test leads to the conclusion that animals lack consciousness.  Either animals pass the language test or it is not a test of thought in the Cartesian sense.  The Radners argue that even if we were to grant that the action test shows that animals fail to act through reason, it still does not establish that they lack all modes of Cartesian thought (Radner and Radner 1989: 103).  I would also argue that Descartes’ modification of the language test to an ‘action test’ results in a proposition similar to Proposition (3) about thinking, which I have already discussed.

In conclusion, I have tried to clarify the various propositions and key terms involved in the allegation that Descartes believed that animals do not have feelings or sensations.  I have supported Cottingham’s view that the relevant texts by Descartes do not substantiate this allegation.  I have also supported Cottingham’s view that Propositions (1) to (6) do not entail Proposition (7), including by the use of some recent empirical evidence.  However, I do not support Cottingham’s suggestion that Descartes’ dualism nevertheless commits him to Proposition (7).

BIBLIOGRAPHY

Cottingham, J., ‘A Brute to the Brutes?  Descartes’ Treatment of Animals’, Philosophy 53 (1978), pp. 551-59.

Radner, D., and Radner, M., (1989) Animal Consciousness. Buffalo, Prometheus Books.

[1] https://www.youtube.com/watch?v=Ht5dFBMgOGs

[2] https://www.youtube.com/watch?v=uNHPh8TEAXM


The Birth of Experimental Science

by Tim Harding

(An edited version of this essay was published in The Skeptic magazine,
June 2016, Vol 36, No. 2, under the title ‘Out of the Dark’).

To the ancient Greeks, science was simply the knowledge of nature.  The acquisition of such knowledge was theoretical rather than experimental.  Logic and reason were applied to observations of nature in attempts to discover the underlying principles influencing phenomena.

After the Dark Ages, the revival of classical logic and reason in Western Europe was highly significant to the development of universities and subsequent intellectual progress.  It was also a precursor to the development of empirical scientific methods in the thirteenth century, which I think were even more important because of the later practical benefits of science to humanity.  The two most influential thinkers in the development of scientific methods at this time were the English philosophers Robert Grosseteste (1175-1253) and Roger Bacon (c.1219/20-c.1292).  (Note: Roger Bacon is not to be confused with Francis Bacon).

Apart from the relatively brief Carolingian Renaissance of the late eighth century to the ninth century, intellectual progress in Western Europe generally lagged behind that of the Byzantine and Islamic parts of the former Roman Empire.[1]  But from around 1050, Arabic, Jewish and Greek intellectual manuscripts started to become more available in the West in Latin translations.[2] [3]  These translations of ancient works had a major impact on Medieval European thought.  For instance, according to Pasnau, when James of Venice translated Aristotle’s Posterior Analytics from Greek into Latin in the second quarter of the twelfth century, ‘European philosophy got one of the great shocks of its long history’.[4]  This book had a dramatic impact on ‘natural philosophy’, as science was then called.

Under Pope Gregory VII, a Roman synod had in 1079 decreed that all bishops institute the teaching of liberal arts in their cathedrals.[5]  In the early twelfth century, universities began to emerge from cathedral schools, in response to the Gregorian reform and demands for literate administrators, accountants, lawyers and clerics.  The curriculum was loosely based on the seven liberal arts, consisting of a trivium of grammar, dialectic and rhetoric; plus a quadrivium of music, arithmetic, geometry and astronomy.[6]  Besides the liberal arts, some (but not all) universities offered three professional courses of law, medicine and theology.[7]

Dialectic was a method of learning by the use of arguments in a question and answer format, heavily influenced by the translations of Aristotle’s works.  This was known as ‘Scholasticism’ and included the use of logical reasoning as an alternative to the traditional appeals to authority.[8] [9]  For the first time, philosophers and scientists studied in close proximity to theologians trained to ask questions.[10]

At this stage, the most influential scientist was Robert Grosseteste (1175-1253), a leading English scholastic philosopher, scientist and theologian.  After studying theology in Paris from 1209 to 1214, he made his academic career at Oxford, becoming its Chancellor in 1234.[11]  He later became the Bishop of Lincoln, where there is now a university named after him.  According to Luscombe, Grosseteste ‘seems to be the single most influential figure in shaping an Oxford interest in the empirical sciences that was to endure for the rest of the Middle Ages’.[12]

Robert Grosseteste (1175-1253)

Grosseteste’s knowledge of Greek enabled him to participate in the translation of Aristotelian science and ethics.[13] [14]  In the first Latin commentary on Aristotle’s Posterior Analytics, from the 1220s, he distinguishes four ways in which we might speak of scientia, or scientific knowledge.

‘It does not escape us, however, that having scientia is spoken of broadly, strictly, more strictly, and most strictly. [1] Scientia commonly so-called is [merely] comprehension of truth. Unstable contingent things are objects of scientia in this way. [2] Scientia strictly so-called is comprehension of the truth of things that are always or most of the time in one way. Natural things – namely, natural contingencies – are objects of scientia in this way. Of these things there is demonstration broadly so-called. [3] Scientia more strictly so-called is comprehension of the truth of things that are always in one way. Both the principles and the conclusions in mathematics are objects of scientia in this way. [4] Scientia most strictly so-called is comprehension of what exists immutably by means of the comprehension of that from which it has immutable being. This is by means of the comprehension of a cause that is immutable in its being and its causing.’[15]

Grosseteste’s first and second ways of describing scientia refer to the truth of the way things are by demonstration, that is by empirical observation.

Grosseteste himself went beyond Aristotelian science by investigating natural phenomena mathematically as well as empirically in controlled laboratory experiments.  He studied the refraction of light through glass lenses and drew conclusions about rainbows as the refraction of light through rain drops.[16]

Although Grosseteste is credited with introducing the idea of controlled scientific experiments, there is doubt whether he made this idea part of a general account of a scientific method for arriving at the principles of demonstrative science.[17]  This role fell to his disciple Roger Bacon (c.1219/20-c.1292CE), who was also an English philosopher; but unlike Bishop Grosseteste, Bacon was a Franciscan friar.

Roger Bacon (c.1219/20-c.1292)

Bacon taught in the Oxford arts faculty until about 1247, when he moved to Paris, which he disliked and where he made himself somewhat unpopular.  The only Parisian academic he admired was Peter of Maricourt, who reinforced the importance of experiment in scientific research and of mathematics to certainty.[18]

As a scientist, Roger Bacon continued Grosseteste’s investigation of optics in a laboratory setting.  He supplemented these optical experiments with studies of the physiology of the human eye by dissecting the eyes of cattle and pigs.[19]  Bacon also investigated the geometry of light, thus further applying mathematics to empirical observations.  According to Colish, ‘the very idea of treating qualities quantitatively was a move away from Aristotle, who held that quality and quantity are essentially different’.[20]

The most important work of Roger Bacon was his Opus Majus (Latin for ‘Greater Work’) written c.1267CE.  Part Six of this work contains a study of Experimental Science, in which Bacon advocates the verification of scientific reasoning by experiment.

‘…I now wish to unfold the principles of experimental science, since without experience nothing can be sufficiently known. For there are two modes of acquiring knowledge, namely, by reasoning and experience. Reasoning draws a conclusion and makes us grant the conclusion, but does not make the conclusion certain, nor does it remove doubt so that the mind may rest on the intuition of truth, unless the mind discovers it by the path of experience…’[21]

Bacon’s aim was to provide a rigorous method for empirical science, analogous to the use of logic to test the validity of deductive arguments.  This new practical method consisted of a combination of mathematics and detailed experiential descriptions of discrete phenomena in nature.[22]  Roger Bacon illustrated his method by an investigation into the nature and cause of the rainbow.  For instance, he measured a value of 42 degrees for the maximum elevation of the rainbow.  This was probably done with an astrolabe, and by this technique, Bacon advocated the skilful mathematical use of instruments for an experimental science.[23]
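
Bacon’s geometrical working is not reproduced here, but his figure of 42 degrees can be checked against the modern minimum-deviation account of the primary rainbow.  The following Python sketch is a present-day reconstruction, not Bacon’s method, and assumes only the standard refractive index of water:

```python
import math

N_WATER = 1.333  # refractive index of water (a modern value)

def deviation(incidence_deg):
    """Total deviation of a sunray in a primary rainbow:
    refraction into the drop, one internal reflection, refraction out."""
    i = math.radians(incidence_deg)
    r = math.asin(math.sin(i) / N_WATER)  # Snell's law
    return math.degrees(2 * i - 4 * r) + 180

# The rainbow appears where the deviation passes through its minimum.
d_min = min(deviation(a / 10) for a in range(1, 900))
print(round(180 - d_min))  # elevation of the rainbow arc: 42 degrees
```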

Optics from Roger Bacon’s De multiplicatione specierum

The optical experiments that both Grosseteste and Bacon conducted were of practical usefulness in correcting deficiencies in human eyesight and in the later invention of the telescope.  But more importantly, Roger Bacon is credited with being the originator of empirical scientific methods that were later further developed by scientists such as Galileo Galilei, Francis Bacon and Robert Hooke.  This is notwithstanding the twentieth century criticism of inductive scientific methods by philosophers of science such as Karl Popper, in favour of empirical falsification.[24]

The benefits of science to humanity – especially medical science – are well known, and one example should suffice here.  An essential component of medical science is the clinical trial: the empirical testing of a proposed treatment on one group of patients, with another group of untreated patients serving as a blinded control, so that the effectiveness of the treatment can be isolated and statistically measured while all other factors are kept constant.  This empirical approach is vastly superior to the theoretical approach of ancient physicians such as Hippocrates and Galen, and owes much to the pioneering work of Grosseteste and Bacon.  This is why I think that the development of empirical scientific methods was even more important than the revival of classical logic and reason, in terms of practical benefits to humanity.  However, it is somewhat ironic that the later clashes between religion and science had their origins in the pioneering experiments of a bishop and a friar.
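
The logic of such a trial can be sketched in a few lines of Python.  This is a toy simulation with invented numbers, not a real protocol; the ‘true’ treatment effect is simply assumed, so that the arithmetic of comparing randomised groups is visible:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def outcome(gets_treatment):
    baseline = random.gauss(50, 10)      # natural patient-to-patient variation
    effect = 8 if gets_treatment else 0  # the assumed true effect of the treatment
    return baseline + effect

# Random allocation into a treatment arm and a control arm of 50 patients each.
treated = [outcome(True) for _ in range(50)]
control = [outcome(False) for _ in range(50)]

diff = statistics.mean(treated) - statistics.mean(control)
print(round(diff, 1))  # an estimate of the assumed effect (8), up to sampling noise
```

Because allocation is random, the only systematic difference between the two arms is the treatment itself, which is what lets its effect be isolated from all other factors.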

Whilst the twelfth century revival of classical logic and reason was very significant in terms of Western intellectual progress generally, the development of empirical scientific methods was in my view the most important intellectual endeavour of the European thirteenth century; and Bacon’s contribution to this was greater than that of Grosseteste, because he devised general methodological principles for later scientists to build upon.

BIBLIOGRAPHY

Primary sources

Bacon, Roger, Opus Majus, a Translation by Robert Belle Burke (New York: Russell & Russell, 1962).

Grosseteste, Robert, Commentarius in Posteriorum Analyticorum Libros. In Pasnau, Robert ‘Science and Certainty,’ R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010).

Secondary works

Colish, Marcia, L., Medieval foundations of the Western intellectual tradition (New Haven: Yale University Press, 1997).

Hackett, Jeremiah, ‘Roger Bacon’, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2015/entries/roger-bacon/>.

Kenny, Anthony, Medieval Philosophy (Oxford: Clarendon Press, 2005).

Lewis, Neil, ‘Robert Grosseteste’, The Stanford Encyclopedia of Philosophy (Summer 2013 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2013/entries/grosseteste/>.

Luscombe, David, Medieval thought (Oxford: Oxford University Press, 1997).

Moran Cruz, Jo Ann and Richard Geberding, ‘The New Learning, 1050-1200’, in Medieval Worlds: An Introduction to European History, 300-1492 (Boston: Houghton Mifflin, 2004), pp.350-376.

Pasnau, Robert ‘Science and Certainty,’ in R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010).

Popper, Karl, The Logic of Scientific Discovery (London and New York, 1959).

ENDNOTES

[1] Colish, Marcia, L., Medieval foundations of the Western intellectual tradition (New Haven: Yale University Press, 1997), pp.x-xi.

[2] Moran Cruz, Jo Ann and Richard Geberding, ‘The New Learning, 1050-1200’, in Medieval Worlds: An Introduction to European History, 300-1492 (Boston: Houghton Mifflin, 2004), p.351.

[3] Colish, p.274.

[4] Pasnau, Robert ‘Science and Certainty,’ in R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010) p.357.

[5] Moran Cruz and Geberding p.351.

[6] Ibid. p.353

[7] Ibid. p. 356.

[8] Ibid, p.354.

[9] Colish, p.169.

[10] Colish, p.266.

[11] Colish, p.320.

[12] Luscombe, David, Medieval thought (Oxford: Oxford University Press, 1997). p.87.

[13] Colish, p.320.

[14] Luscombe, p.86.

[15] Grosseteste, Robert, Commentarius in Posteriorum Analyticorum Libros. In Pasnau, Robert ‘Science and Certainty,’ R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010) p. 358.

[16] Colish, p.320.

[17] Lewis, Neil, ‘Robert Grosseteste’, The Stanford Encyclopedia of Philosophy (Summer 2013 Edition), Edward N. Zalta (ed.),

[18] Kenny, Anthony Medieval Philosophy  (Oxford: Clarendon Press 2005). p.80.

[19] Colish, p.321.

[20] Colish, pp.321-322.

[21] Bacon, Roger, Opus Majus, a Translation by Robert Belle Burke (New York: Russell & Russell, 1962), p.583.

[22] Hackett, Jeremiah, ‘Roger Bacon’, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), Section 5.4.3.

[23] Hackett, Section 5.4.3.

[24] Popper, Karl, The Logic of Scientific Discovery (London and New York, 1959), Ch. 1: ‘…the theory to be developed in the following pages stands directly opposed to all attempts to operate with the ideas of inductive logic.’


Descartes’ cogito and certainty

By Tim Harding

Rene Descartes (1596-1650CE) was a French mathematician, scientist and philosopher.  According to Copleston (1994: 63-89), these three interests of his were interrelated, in the sense that he had a mathematical and scientific approach to his philosophy.  Mathematics ‘delighted him because of its certainty and clarity’ (Copleston 1994: 64).  His fundamental aim was to attain philosophical truth by the use of reason and scientific methods.  For him, the only kind of knowledge was that of which he could be certain.  His ideal of philosophy was to discover hitherto uncertain truths implied by more fundamental certain truths, in a similar manner to mathematical proofs (Copleston 1994: 66-70).

Using this approach, Descartes (1996) engages in a series of meditations to find a foundational truth of which he could be certain, and then to build on that foundation a body of implied knowledge of which he could also be certain.  He does this in a methodical way in his First Meditation by first withholding assent from opinions which are not completely certain, that is, where there is at least some reason for doubt, such as those acquired from the senses (Descartes 1996: 12).

Next, in his Second Meditation, Descartes concludes that one proposition of which he can be certain is ‘I am, I exist’ (Descartes 1996: 12).  Interestingly, in this text Descartes does not actually use the famous words ‘Cogito, ergo sum’ (which mean ‘I think, therefore I exist’) which he used in a slightly earlier work Discourse on Method.  This difference in wording has implications for the discussion which follows in this essay; however, for simplicity, I shall refer to this proposition as ‘the cogito’.

The central question for this essay is – how did Descartes come to be certain that the cogito is true?  There are rival interpretations of the basis of this certainty.  Is it as a result of an inference from the premise ‘I think’, or is it derived from a different type of reasoning in which ‘I think’ is not needed as a premise?  The former is in the traditional form of an argument in which a conclusion is logically deduced from one or more premises; whereas the latter is not in the form of an argument, but something such as intuition or, as I shall later suggest, a ‘performative utterance’.

One of the difficulties in making these interpretations is that Descartes himself is not entirely consistent in the various expositions of his views in different texts, nor in his responses to objections to those views.  Another difficulty is that certain philosophical or linguistic concepts such as ‘performative utterance’ had not been developed at that time.  To clarify, I intend to analyse the cogito in terms of modern day philosophy, rather than as a historical investigation into what Descartes meant at the time.

The first interpretation is that the cogito is a deductive argument with a missing but implied first premise in the following traditional syllogistic form:

Premise 1: Everything that thinks exists.

Premise 2: I think.

Conclusion: Therefore, I exist.

This is a valid deductive argument known from antiquity as modus ponens.  The general form is ‘P implies Q; P is asserted to be true, so therefore Q must be true’.  As is the case with all valid deductive arguments, if the premises are true, then the conclusion must be true by virtue of the argument’s logical form.  However, the problem in this particular case is that we do not know that Premise 1 is true – that has not yet been established.  So although the argument is valid we cannot say that the conclusion ‘I exist’ is true on this basis.  For this reason, I do not think that this interpretation of the cogito is the correct one.
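
The validity of this form can even be checked mechanically, by enumerating every truth assignment.  The following Python sketch (offered purely as an illustration of validity, not soundness) confirms that whenever both premises hold, the conclusion holds:

```python
from itertools import product

# For each assignment of True/False to P and Q, require the conclusion Q
# whenever both premises (P implies Q, and P) are true.
valid = all(
    q                                   # conclusion
    for p, q in product([True, False], repeat=2)
    if ((not p) or q) and p             # premises: 'P implies Q' and 'P'
)
print(valid)  # True: the form never leads from true premises to a false conclusion
```

The check confirms validity only; as noted above, the difficulty for this interpretation lies with the truth of Premise 1, not with the form of the argument.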

It is worth mentioning that Descartes himself denies that the cogito is a syllogism in his reply to the Second Objections:

When we observe that we are thinking beings, this is a sort of primary notion, which is not the conclusion of any syllogism; and, moreover, when somebody says ‘I am thinking, therefore I am or exist’, he is not using a syllogism to deduce his existence from his thought, but recognising this as something self-evident, in a simple mental intuition (Descartes 1996: 68).

Descartes’ words in this quotation are consistent with the alternative interpretation that the proposition ‘I exist’ is self-evident as a result of intuition.  In this interpretation, we do not need either of the premises ‘Everything that thinks exists’ or ‘I think’ so there is no inference or deductive argument, let alone a syllogism.

I find the notion of intuition too vague for philosophical purposes – it seems to belong more in the realm of psychology or neuroscience.

Williams (1978) has endeavoured to explain this alternative interpretation in terms of incorrigibility and self-verification.  A proposition p is incorrigible when it satisfies this description: if I believe that p, then p – for example, ‘If I believe that I am in pain, then I am in pain’.  This explanation has some similarities to Austin’s concept of a performative utterance (or ‘performative’ for short), where the utterance of a statement (in the appropriate circumstances) serves not only to describe an act but to actually perform the act (Austin 1962: 6).  So to say ‘I exist’ performs the act of existing.  The statement could not be made unless the person making it exists.

According to Williams (1978) the proposition ‘I think’ is self-evident because it satisfies the description: if p, then I believe that p.  If I think, then I believe that I think.  The proposition ‘I think’ is thus evident to me, in a way that the proposition ‘I exist’ is not.  While ‘I exist’ is incorrigible, it is not evident to me in the same way that ‘I think’ is evident.  Under this interpretation, Descartes has a reason for choosing to begin his argument with the premise ‘I think’ (Townsend 2004: 26).

In summary, the interpretation of the cogito as an inference or deductive argument fails for the reasons I have given.  In my view, the combination of incorrigibility and self-verification provides a sufficient justification for the truth of the statement ‘I am, I exist’, especially when incorrigibility is explained in terms of a performative utterance.

REFERENCES

Austin, J.L. (1962) How To Do Things With Words. London, Oxford University Press.

Copleston, F. (1994) A History of Philosophy Volume IV: Modern Philosophy. New York, Bantam Doubleday Publishing Group.

Descartes, R. (1996) Meditations on First Philosophy: With Selections from the Objections and Replies, trans. and ed. John Cottingham, Cambridge, Cambridge University Press.

Williams, B. (1978) ‘Descartes: the Project of Pure Enquiry’ in Descartes and the Defence of Reason, Study Guide (2004), ed. Aubrey Townsend, Clayton, Monash University.

Townsend, A. ed. (2004) Descartes and the Defence of Reason, Study Guide, Clayton, Monash University.


The Edict of Milan

The Edict of Milan was an agreement made in 313CE between the then Western Roman Emperor Constantine and Licinius, who at the time ruled the Balkans but soon became the ruler of the eastern half of the former Roman Empire.  It was called the Edict of Milan because that was where they met and made the agreement, although the actual document was promulgated by Licinius a few months later at Nicomedia, which is today known as Izmit in Turkey.[1]

The substance of the agreement was to tolerate Christianity and other religions within their respective regions of control.[2]  It was also agreed to restore the properties and possessions that the Christian Church had lost under the previous Roman Emperor Diocletian, who was intolerant of Christianity.  It followed a slightly earlier Edict of Toleration issued by the Roman Emperor Galerius, which officially ended the Diocletian persecution of Christians.[3]

In practice, the Edict of Milan was more like an edict or executive order than merely the record of an agreement.  It was issued to provincial governors who were expected to publicise its contents to the public and presumably to implement it.[4]  On the other hand, it was not included in the Theodosian Code of Roman laws and edicts compiled in 438CE.[5]

The original Edict of Milan was issued in multiple copies handwritten in Latin, but none of these copies has survived.  The primary source under examination here was quoted by the Christian apologist Lactantius in his On the Deaths of the Persecutors (De mortibus persecutorum), thought to have been written in 318 CE.  Although there was a different version written in Greek by Eusebius, scholars think that the one in Lactantius is the original.[6]

The actual wording of the Edict was most likely drafted by civil servants in an imperial office.  However, as it records an agreement reached by the two emperors, and was published with their authority and approval (since they did not retract it), I don’t think it really matters in practice who put the agreement into writing.[7]  The authoritative and enduring nature of the document overrides any doubts about its authorship, in my view.

There are various historical theories as to why the Edict was published.  According to a biography by Eusebius, Constantine had a miraculous vision of an illuminated Cross on the day before the Battle of Milvian Bridge against his rival Maxentius outside of Rome in 312CE.  The following morning Constantine ordered the Christian symbol to be drawn on his soldiers’ shields.  This event predated the Edict and may have been the real moment of Constantine’s conversion to Christianity.[8]

By this stage, the numbers of Christians within the Empire were still small but increasing.[9]  The most severe persecution of the Christians was conducted by the Roman Emperor Diocletian, and yet it failed.  Bennett suggests that the Empire thus had little choice but to accommodate itself to this new and growing religion.[10]  She thinks that Constantine pragmatically nurtured Christianity in the hope that it ‘might provide a sort of glue to hold his empire together’, and subsequent Christian emperors followed his lead.[11]

In my view, the main purpose of the Edict was to promote peace and stability within the areas controlled by the two rulers.  The Edict specifically refers to ‘the sake of peace in our time’, ‘public well-being’ and ‘the interests of public quiet’.[12]  Another purpose was to shore up the public support of Licinius against his rival Maximinus Daia, who had renewed the persecution of Christians.  Thus the Edict forged a political and strategic alliance between the two rulers, as treaties often do.  In view of Constantine’s claimed miraculous vision before the Battle of Milvian Bridge, Constantine may in a sense have been forming an alliance with the Christian God as well.

 

ENDNOTES

[1] Kathleen Neal ed., Medieval Europe ATS1316 Unit Reader (Clayton: Monash University, 2016), p.11.

[2] Constantine and Licinius ‘Edict of Milan’, in Ehler, Sidney Z. and Morrall, John B. trans. and eds. Church and State Through the Centuries: A Collection of Historic Documents with Commentaries, repr. in Medieval Europe ATS1316 Unit Reader ed. by Dr Kathleen Neal (Clayton: Monash University, 2016), pp.12-13.

[3] Galerius ‘Edict of Toleration’, in Internet History Sourcebooks Project http://legacy.fordham.edu/halsall/source/edict-milan.asp [Accessed 21 March 2016].

[4] Neal, p.10.

[5] Neal, p.13.

[6] Neal, p.12.

[7] This is why I have attributed Constantine and Licinius as the authors of the Edict, rather than ‘Anonymous’.

[8] Clifford Backman, The Worlds of Medieval Europe (Oxford: Oxford University Press, 2015), p.42.

[9] Ibid., p.43.

[10] Judith Bennett, Medieval Europe – A Short History (New York: McGraw-Hill, 2011), p.13.

[11] Ibid., p.13

[12] Constantine and Licinius, p.13.


The Dark Ages

by Tim Harding

(An edited version of this essay was published in The Skeptic magazine,
March 2016, Vol 35, No. 1, under the title ‘In the Dark’).

Like other skeptics, I often despair at the apparent decline in the public understanding of science.  Anti-science, pseudoscience, quackery, conspiracy theories and the general distrust of experts seem to be on the rise.  I sometimes even wonder whether we are in danger of regressing into a new dark age.

So what do we mean by a ‘dark age’? Was there really a dark age in post-Roman Europe? If so, what were its most likely causes? These questions are difficult to answer, and not just because of disagreements among historians. The difficulty is in some ways circular – we call the post-Roman period a ‘dark age’ because we don’t know enough about it (relative to the periods before and after); and we don’t know enough about it because not much was written down at the time.

My own observation is that western civilisation has already suffered two dark ages about 1300 years apart.  (There was an earlier dark age in Ancient Greece from around 1100 to 800 BCE).  If this ‘trend’ is repeated we should be due for another one in a couple of hundred years’ time.

The term ‘Dark Ages’ commonly refers to the Early Middle Ages, which was the period of European history lasting from the 5th century to approximately 1000 CE. The Early Middle Ages followed the decline of the Western Roman Empire and preceded the Central Middle Ages (c. 1001–1300 CE) and the Late Middle Ages (1300-1500CE). The period saw a continuation of downward trends begun during late classical antiquity, including population decline, especially in urban centres, a decline of trade, and increased translocation of peoples. There is a relative paucity of scientific, literary, artistic and cultural output from this time, especially in Western Europe.

Historians suggest that there were several causes of this decline, including the rise of Christianity. The other causes are often overlooked, especially by antitheists trying to score points such as ‘look what happened when your mob was in charge!’.  So like good skeptics, let’s examine the historical evidence.

The Greek Dark Age 

The first European dark age occurred in ancient Greece from around 1100 to 800 BCE.  The archaeological evidence shows a collapse of the Mycenaean Greek civilization at the outset of this period, as their great palaces and cities were destroyed or abandoned and vital trade links were lost.  Unfortunately, their Linear B script also disappeared, leaving us with no written accounts of what really happened or why.

Linear B script. Source: Wikimedia Commons

Legend has it that Greece was invaded by the mysterious Dorians from the north and/or the Sea Peoples, of uncertain origin but possibly from the Black Sea area.  The archaeological evidence is of little help to us, other than showing a simpler geometrical style of pottery art than that of the Mycenaeans, hinting at occupation by a different culture.

Geometric (9th-7th century BCE) pottery from Melos in Greece. Source: Wikimedia Commons

There is archaeological evidence of a revival of Greek trade at the beginning of the 8th century BCE, coupled with the appearance of a new Greek alphabet system adapted from the Phoenicians, which is still in use today.  This led to the creation of western civilisation’s oldest extant literary works, such as Homer’s The Odyssey and The Iliad.  From succeeding centuries we have been bequeathed major texts of ancient Greek drama, history, philosophy and science.

The decline of the Roman Empire

The Roman Empire reached its greatest territorial extent during the 2nd century CE, reaching from Babylonia in the East to Spain in the West, Britain and the Netherlands in the North, to Egypt and North Africa in the South.  The following two centuries witnessed the slow decline of Roman control over its outlying territories.  The Emperor Diocletian split the empire into separately administered eastern and western halves in 286CE.  In 330CE, after a period of civil war, Constantine the Great refounded the city of Byzantium as the newly renamed eastern capital, Constantinople.

During the period from 150 to 400CE, the population of the Roman Empire is estimated to have fallen from 65 million to 50 million, a decline of more than 20 percent. Some have connected this to the Dark Ages Cold Period (300–700CE), when there was a decrease in global temperatures which impaired agricultural yields.

In 400CE, the Visigoths invaded the Western Roman Empire and, although briefly forced back from Italy, in 410CE they sacked the city of Rome.  The Vandals again sacked Rome in 455CE.  The deposition of the last emperor of the west, Romulus Augustus, in 476CE has traditionally marked the end of the Western Roman Empire.  The Eastern Roman Empire, often referred to as the Byzantine Empire after the fall of its western counterpart, had little ability to assert control over the lost western territories.  Although the movements of peoples during this period are usually described as ‘invasions’, they were not just military expeditions but migrations of entire peoples into the empire.  These were mainly rural Germanic peoples who knew little of cities, writing or money.  Administrative, educational and military infrastructure quickly vanished, leading to the collapse of the schools and to a rise of illiteracy even among the leadership.

Invasions of the Roman Empire

For the formerly Roman area, there was another 20 percent decline in population between 400 and 600CE, or a one-third decline between 150 and 600CE, which had significant economic consequences.  To make matters worse, the Plague of Justinian (541–542CE), which has since been found to have been bubonic plague, recurred periodically for 150 years – killing as many as 50 million people in Europe.  The population of the city of Rome itself declined from about 450,000 in 100CE to only 20,000 during the Early Middle Ages.  The city of London was largely abandoned.

In the 8th century, the volume of trade reached its lowest level, indicated by the very small number of shipwrecks found in the western Mediterranean Sea.

One of the main consequences of the fall of Rome was the breakdown of strict Roman law and order, resulting, amongst other things, in the escape of the slaves who had performed most of the labour.  Less food and fibre was produced on farms, and people left the cities to grow their own less efficiently.  Lower agricultural activity resulted in reforestation; in other words, the forests naturally grew back.  Travel and trade by land became less safe, exacerbating the economic decline.

The role of the Christians 

The Catholic Church was the only centralized institution to survive the fall of the Western Roman Empire intact.  It was the sole unifying cultural influence in Western Europe, preserving Latin learning, maintaining the art of writing, and preserving a centralized administration through its network of bishops ordained in succession. The Early Middle Ages are characterized by the control of urban areas by bishops and wider territorial control exercised by dukes and counts.  The later rise of urban communes marked the beginning of the Central Middle Ages.

During the Early Middle Ages, the divide between eastern and western Christianity widened, paving the way for the East-West Schism in the 11th century. In the West, the power of the Bishop of Rome expanded.  In 607CE, Boniface III became the first Bishop of Rome to use the title Pope.  Pope Gregory the Great used his office as a temporal power, expanded Rome’s missionary efforts to the British Isles, and laid the foundations for the expansion of monastic orders.

The institutional structure of Christianity in the west during this period was different from what it would become later in the Central Middle Ages.  As opposed to the later church, the church of the Early Middle Ages consisted primarily of the monasteries.  In addition, the papacy was relatively weak, and its power was mostly confined to central Italy.  Religious orders would not proliferate until the Central Middle Ages.  For the typical Christian at this time, religious participation was largely confined to occasionally receiving mass from wandering monks.  Few would be lucky enough to receive this as often as once a month.  By the end of the Dark Ages, individual practice of religion was becoming more common, as monasteries started to transform into something approximating modern churches, where some monks might even give occasional sermons.  Thus the evidence for powerful centralised Christian control during the Dark Ages is lacking.

The Western European Dark Age

The concept of a Western European Dark Age originated with the Italian scholar Petrarch in the 1330s CE.  Petrarch regarded the post-Roman centuries as ‘dark’ compared to the light of classical antiquity.  The Protestant reformers of the 16th century had an interest in disparaging the ‘Dark Ages’ as an era of Catholic control, when they (the Protestants) thought that Christianity had ‘gone off the rails’.  Later historians expanded the term to refer to the transitional period between Roman times and the Central Middle Ages (c. 11th–13th century), although in the 20th century the Dark Ages were contracted back to the Early Middle Ages (500-1000CE) again.  I shall refer to this period in western Europe as ‘the Dark Ages’ from here on.

Evidence for the Dark Ages includes the lack of output of manuscripts (both originals and copies), a lack of contemporary written history, general population decline, a paucity of inventions, a lack of sea trade, restricted building activity and limited material cultural achievements in general.

The lack of manuscripts in the Dark Ages compared to the later Middle Ages is illustrated by the following graph.

Graph: manuscript production over time

The Romans were remarkable innovators.  Their inventions of materials included kiln-fired bricks, cement, concrete, wood veneer, cast iron, glassware and surgical instruments.  In construction, they invented paved roads, bridges, tunnels, aqueducts, arches, domes, dams, water supply, drainage, sewerage and even underfloor heating.  Their production technology included the wheeled plow, the two-field crop system, harvesting machines, paddlewheel mills, the screw press, the force pump, steam power, gearing, pulleys and cranes.  The Romans were also admired for their mining technology.

In contrast, hardly any new technology was invented during the Dark Ages.  Nor were there any scientific discoveries of note, although science and mathematics continued to flourish in the Islamic world, as discussed below.  Yet later in the Central Middle Ages, technological inventions included windmills, mechanical clocks, transparent glass, distillation, the heavy plow, horseshoes, harnesses, stirrups and more powerful crossbows.  Architectural innovations enabled the building of larger cathedrals and faster ships.

The invention of the three-field system towards the end of the Dark Ages, coupled with higher temperatures and the heavy plow, enabled higher agricultural yields, which kick-started economic recovery and the resumption of trade.  Amongst other things, the three-field system created a surplus of oats that could be used to feed more horses.  It also required a re-organisation of land tenure that led to manorialism and feudalism.

In the ancient world, Greek was the primary language of science. Advanced scientific research and teaching was mainly carried on in the Hellenistic side of the Roman empire, and in Greek. Late Roman attempts to translate Greek writings into Latin had limited success.  As the knowledge of Greek declined, the Latin West found itself cut off from some of its Greek philosophical and scientific roots.

In the late 8th century, there was renewed interest in Classical Antiquity as part of the short-lived Carolingian Renaissance of the early 9th century CE.  Charlemagne carried out a reform in education. From 787CE on, decrees began to circulate recommending the restoration of old schools and the founding of new ones across the empire.  Institutionally, these new schools were either under the responsibility of a monastery (monastic schools), a cathedral, or a noble court. The teaching of dialectic (a discipline that corresponds to today’s informal logic) was responsible for the increase in the interest in speculative inquiry; from this interest would follow the rise of the Scholastic tradition of Christian philosophy.  In the 12th and 13th centuries, many of those schools that were founded under the auspices of Charlemagne, especially cathedral schools, would become universities.

The expansion of Islam

After the death of the prophet Mohammed in 632CE, Islamic forces conquered much of the former Eastern Roman Empire and Persia, starting with the Middle East and Arabian Peninsula in the early 7th century, North Africa in the later 7th century, and much of the Iberian Peninsula (Spain and Portugal) in 711CE.  This Islamic Empire was known as the Umayyad Caliphate, and was ruled from Damascus.  After the dynasty’s overthrow, the surviving Umayyads ruled from the Spanish city of Cordoba, which by the 10th century CE had become the world’s largest city, with an estimated population of around 500,000.

The Umayyad Caliphate in 750 CE

The Islamic conquests reached their peak in the mid-8th century.  The defeat of Muslim forces at the Battle of Poitiers in 732 led to the re-conquest of southern France by the Franks, but the main reason for the halt of Islamic growth in Europe was the overthrow of the Umayyad dynasty and its replacement by the Abbasid dynasty based in Baghdad.

The works of Euclid and Archimedes, lost in the West, were translated from Arabic to Latin in Spain. The modern Hindu-Arabic numerals, including a notation for zero, were developed by Hindu mathematicians in the 5th and 6th centuries. Muslim mathematicians learned of it in the 7th century and added a notation for decimal fractions in the 9th and 10th centuries.  In the course of the 11th century, Islam’s scientific knowledge began to reach Western Europe, via Islamic Spain.

Conclusions

The former Roman Empire was replaced by three civilisations – Western Europe, the Byzantine Empire and the Islamic Caliphate.  The Dark Ages really only refer to one of these civilisations, Western Europe, where there is significant historical evidence of a marked decline in scientific, technological, agricultural, economic, educational and literary activities during this period.  There was also a considerable decline in the population of Western Europe, notwithstanding migrations of Germanic peoples from northern Europe.  Christianity is likely to have been only one of several causes of the Dark Ages.

References

Backman, Clifford R. (2015) The Worlds of Medieval Europe. Oxford University Press, Oxford.

Bennett, Judith M. (2011) Medieval Europe – A Short History. McGraw-Hill, New York.

Gibbon, Edward (1788). The History of the Decline and Fall of the Roman Empire. Vol. 6, Ch. XXXVII.

Tim Harding B.Sc. works as a regulatory consultant to various governments.  He is also studying medieval history at Monash University.

 
