
The Sceptical Chymist – the switch from alchemy to chemistry

by Tim Harding, B.Sc. (biochemistry), B.A. (philosophy)

(An edited version of this essay was published in The Skeptic magazine,
March 2019, Vol 39 No 1)

Unlike physics, astronomy and biology, chemistry is a relatively new science – less than 400 years old. Yet for thousands of years, people have been extracting chemicals from plants for medicines, dyes and perfumes; fermenting beer and wine; making pottery and glazes; rendering fat into soap; making glass; extracting metals from ores and making alloys like bronze. But this does not mean there was any knowledge of the underlying chemistry involved.

For instance, metal-working and smithing have existed since the Bronze Age, which began with the rise of the Mesopotamian civilisation in the mid-4th millennium BCE. Bronze was harder and more durable than other metals available at the time, and thus better suited to making weapons and armour. It was made by smelting copper and alloying it with tin, arsenic or other metals. This technology was largely developed by trial and error, without any chemical knowledge of the nature of metals or alloys. The science of chemistry did not exist at all in these ancient times.

It has been claimed by some writers that alchemy was a precursor to chemistry, or that chemistry ‘evolved’ from alchemy. I think this is wrong. Chemistry no more evolved from alchemy than astronomy evolved from astrology. Alchemy was a mystical pseudoscience like astrology, rather than being a protoscience of chemistry.  The eventual mainstream switch from alchemy to chemistry in the 17th century was quite rapid – more like a revolution than evolution. It has been suggested that this was due to the development of scientific methods. I think this is also wrong, for reasons I shall later explain.

Alchemy was practised throughout Europe, Africa, and Asia; but as this essay is about the transition from alchemy to chemistry, which happened in Europe, I shall focus on western alchemy.

WHAT IS ALCHEMY?

The ‘holy grail’ of Western alchemy was the production of the fabled ‘Philosopher’s Stone’, which really had nothing to do with philosophy but was supposed to bestow spiritual wealth and immortality. The Stone would also enable the alchemist to turn base metals such as lead into silver and gold. In theory, this transmutation was merely the test employed to check whether the Stone was genuine, but in practice it became the main driver of alchemical experimentation.

Other goals of alchemy included the creation of panaceas able to cure any disease; and the development of ‘alkahest’, a hypothetical universal solvent able to dissolve every other substance, including gold. (A potential problem with alkahest is that, if it dissolves everything, then it cannot be placed into a container because it would also dissolve the container).

Western alchemists continued antiquity’s belief in the classical ‘four elements’ of earth, water, air and fire. They held that metals grew slowly and naturally in the earth, the product of a ménage à trois between the otherwise opposing forces of mercury, sulphur and salt. Alchemists tried to speed up these supposedly natural processes in the laboratory.

They guarded their work in secrecy, using ciphers and cryptic symbolism somewhat akin to astrological arcana. Their work was guided by Hermetic principles relating to magic and mysticism. (These principles are named after Hermes Trismegistus, the purported ancient author of the Hermetic Corpus, a series of esoteric early Greek-Egyptian texts).

There were some connections between the two mystical pseudosciences of alchemy and astrology. The alchemists’ belief that all natural events are connected by a hidden thread, that everything has an influence on other things, and that ‘what is above is as what is below’, led them to place stress on the supposed connection between the planets and the metals, and to further their metallic transformations by performing them at times when certain ‘planets’ were in conjunction. The seven principal ‘planets’ and the seven principal metals were called by the same names: Sol (gold), Luna (silver), Saturn (lead), Jupiter (tin), Mars (iron), Venus (copper), and Mercury (mercury).

HISTORY OF WESTERN ALCHEMY

The beginnings of Western alchemy may generally be traced to ancient and Hellenistic Egypt, where the city of Alexandria was a centre of alchemical activity, and retained its pre-eminence through most of the early Greek and Roman periods. The oldest known alchemical texts are preserved on what is known as the Leiden Papyrus, which dates from around 300 CE. It is written in Greek, and contains 101 recipes for the production of fake gold, silver and dyes.

Maria Prophetessa (or Mary the Jewess) was possibly the first western alchemist. She is known only from the works of Zosimos of Panopolis, as none of her own writings have survived. Maria is thought to have lived between the first and third centuries CE, and is credited with the invention of several kinds of laboratory apparatus, such as the eponymous ‘bain-marie’.

Zosimos of Panopolis was a Greek-Egyptian alchemist and gnostic mystic who lived at the end of the 3rd and beginning of the 4th century CE. He was born in Panopolis (present-day Akhmim) in the south of Roman Egypt. He wrote the oldest known books on alchemy, which he called ‘Cheirokmeta’, using the Greek word for ‘things made by hand’. He is one of about 40 authors represented in a compendium of alchemical writings that was probably put together in Constantinople in the 7th or 8th century CE, copies of which survive in manuscripts in Venice and Paris. It was around this time that the term ‘alchemy’ first began to be used.

As early as the 14th century CE, cracks began to appear in the facade of alchemy, and people started to become sceptical. In 1317, the Avignon Pope John XXII ordered all alchemists to leave France, accusing them of making counterfeit money. A law passed in England in 1403 made the ‘multiplication of metals’ punishable by death. Despite these and other apparently extreme measures, alchemy did not die. The lure of making gold from lead was too much of a monetary magnet.

Several practical problems with alchemy emerged. There was no systematic naming scheme for new compounds, and the language was so esoteric and vague that its terminology meant different things to different people. Indeed, many alchemists included in their methods irrelevant information such as the timing of the tides or the phases of the moon. As with astrology, the esoteric nature and codified vocabulary of alchemy seem to have been most useful in concealing the fact that alchemists could not be sure of very much at all.

In fact, according to Brock (1992): ‘The language of alchemy soon developed an arcane and secretive technical vocabulary designed to conceal information from the uninitiated. To a large degree, this language is incomprehensible to us today, though it is apparent that readers of Geoffrey Chaucer’s ‘The Canon’s Yeoman’s Tale’ or audiences of Ben Jonson’s ‘The Alchemist’ were able to construe it sufficiently to laugh at it’.

The 16th-century Swiss alchemist Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim, from whose name the word ‘bombastic’ is popularly said to derive) believed in the existence of alkahest. He thought alkahest was an undiscovered element from which all the other elements (earth, fire, water and air) were simply derivative forms. Paracelsus believed that this element was, in fact, the Philosopher’s Stone. He also advocated the tria prima (three primes) of salt, sulphur and mercury. These were not, however, simply the substances which bear those names today. Salt was the prime of fixity and incombustibility, mercury of fusibility and volatility, and sulphur of flammability. So anything that burned was sulphur, and different substances afforded different sulphurs, mercuries and salts. The Three Primes were thought to be related to the Law of the Triangle, in which two components come together to produce the third. These views may seem strange, even unintelligible, to us; but even in the 17th century they were still believed by some of the best minds of the time.

In 1608 the alchemist Sendivogius proposed that one metal could be propagated from another only in the order of superiority of the planets. He placed the seven planets in the following descending order: Saturn, Jupiter, Mars, Sol, Venus, Mercury, Luna. ‘The virtues of the planets descend,’ he said, ‘but do not ascend; it is easy to change Mars (iron) into Venus (copper), for instance, but Venus cannot be transformed into Mars’.

Even the great Isaac Newton dabbled in alchemy, for which we can forgive him in the absence of a mature science of chemistry in his time. According to Ackroyd (2007), the young Newton set up an alchemy laboratory in his chambers at Cambridge, and he had 175 alchemical books in his library – one tenth of the total. However, for Newton alchemy was a private interest – more like a hobby than a profession. He did not publish on the subject, and his writings consisted of personal notes and annotations on alchemical texts.

THE RISE OF CHEMISTRY

Chemistry is the science of matter at the atomic to molecular scale, dealing primarily with collections and interactions of atoms, such as molecules, crystals, and metals. Chemistry studies matter in solid, liquid and gaseous states. It really has nothing to do with alchemy – the only similarity being the use of laboratory experiments.

The Classical Greek philosophers Leucippus and his pupil Democritus (c. 460 – c. 370 BCE), and later Epicurus, held that everything is composed of ‘atoms’, which are physically indivisible; that between atoms there lies empty space (called the void); that atoms are indestructible, and have always been and always will be in motion; and that there is an infinite number of atoms, and of kinds of atoms, which differ in shape and size. Although this early atomic theory appears more closely aligned with that of modern science than any other theory of antiquity, it was a philosophical theory rather than a scientific one. The Classical Greek atomists could not possibly have had an empirical basis for modern concepts of atoms and molecules, so this was not the beginning of chemistry.

Nevertheless, in the 17th century, a renewed interest arose in Classical Greek atomism. The major figures in this rebirth were Francis Bacon, René Descartes, Pierre Gassendi, and Robert Boyle, the last of whom was the first real chemist and is perhaps best known for Boyle’s law. This law describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system.
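
In modern notation (which is not Boyle’s own – he presented his results verbally and in tables), the law can be written for a fixed quantity of gas at constant temperature as:

$$ P_1 V_1 = P_2 V_2 $$

so that, for example, halving the volume of a trapped gas doubles its absolute pressure.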

Robert Boyle

Like Newton, and before the advent of chemistry, Robert Boyle (1627–1692) was an alchemist. He believed the transmutation of metals to be a possibility, and he carried out experiments in the hope of achieving it. In his ground-breaking book ‘The Sceptical Chymist’ (1661), Boyle demonstrated the problems that arise from alchemy, and he proposed atomism as a possible explanation, which soon became widely accepted amongst the physical sciences. Boyle called the alchemists who were disciples of Paracelsus ‘vulgar spagyrists’. He showed that Paracelsus’s theory of the tria prima – salt, sulphur, and mercury – was totally inadequate to explain chemistry, and he was the first to give a satisfactory definition of an element. Boyle endorsed the early atomistic view of elements as the undecomposable constituents of material bodies, and held that atoms were of various sorts and sizes. He also made the distinction between mixtures and compounds, and made considerable progress in the technique of detecting their ingredients, a process which became designated by the term ‘chemical analysis’.

For Boyle, chemistry was the science of the composition of substances, not merely an adjunct to the arts of the alchemist or the physician. Chemistry soon became recognised as a legitimate science, alongside physics, geology and biology. As a result, Boyle has been whimsically called ‘The father of chemistry and the brother of the Earl of Cork’.

Wootton (2015) notes that although alchemy had once been respectable in the eyes of Newton and Boyle, it had become entirely disreputable by the 1720s. He states that this was the result of a series of ‘rhetorical’ moves by chemists in the Académie des Sciences.

Later pioneering chemists such as Brandt, Cronstedt, Black, Cavendish, Geoffroy, Priestley and Lavoisier built on the work of Boyle; but as this essay is about the transition from alchemy to chemistry, I do not propose to discuss their work in detail.

One exception is Antoine-Laurent de Lavoisier (1743–1794), a French chemist who is celebrated as the ‘father of modern chemistry’. Lavoisier demonstrated with careful measurements that transmutation of water to earth was not possible, and that the sediment observed from boiling water came from the container. He burnt phosphorus and sulphur in air, and proved that the products weighed more than the original substances; the weight gained was lost from the air. Thus, in 1789, Lavoisier established the Law of Conservation of Mass, which is also called ‘Lavoisier’s Law’. By this investigation Lavoisier destroyed part of the experimental basis of alchemy, and established specific laboratory techniques by which chemical changes can be investigated, such as the use of the mass balance.
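
As a simple modern illustration (my own, not Lavoisier’s figures): when 32 grams of sulphur burn completely in 32 grams of oxygen, the product is 64 grams of sulphur dioxide:

$$ \mathrm{S} + \mathrm{O_2} \rightarrow \mathrm{SO_2}, \qquad 32\,\mathrm{g} + 32\,\mathrm{g} = 64\,\mathrm{g} $$

No mass is created or destroyed; it is merely rearranged.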

Lavoisier worked with Claude Louis Berthollet and others to devise a system of chemical nomenclature which serves as the basis of the modern system of naming chemical compounds. Lavoisier’s Traité Élémentaire de Chimie (Elementary Treatise of Chemistry, 1789) was the first modern chemical textbook, and presented a unified view of new theories of chemistry. In addition, it contained a list of elements – substances that could not be broken down further – which included oxygen, nitrogen, hydrogen, phosphorus, mercury, zinc, and sulphur. Lavoisier also established that elements could not be converted into one another, which was the final nail in the coffin of alchemy.

Later, in 1803, the English chemist John Dalton proposed a modern atomic theory, which stated that all matter is composed of small indivisible particles termed atoms; that atoms of a given element possess unique characteristics and weight; and that three types of atoms exist: simple (elements), compound (simple molecules), and complex (complex molecules). In 1808, Dalton published ‘A New System of Chemical Philosophy’, in which he outlined the first modern scientific description of the atomic theory.

Pattison Muir (1902) credits the Classical Greek atomists, rather than the alchemists, with inspiring the work of Boyle, Lavoisier, Dalton and other early chemists. He says: ‘Instead of blaming the Greek philosophers for lack of quantitatively accurate experimental inquiry, we should rather be full of admiring wonder at the extraordinary acuteness of their mental vision, and the soundness of their scientific spirit’.

The demise of alchemy cannot be attributed to the development of scientific methods. This is because experimental scientific methods had already been developed around four hundred years earlier by the English philosophers Robert Grosseteste and Roger Bacon, as explained in an essay of mine in the June 2016 issue of The Skeptic.

According to Wootton (2015), alchemy was never a science, and there was no room for it to survive among those who accepted a scientific approach. For they had something the alchemists did not: a critical community of scientific peers prepared to take nothing on trust. Wootton argues that alchemy and chemistry were both experimental disciplines, but they belonged to different types of community.

The demise of alchemy provides further evidence that what marks out modern science is not the conduct of experiments (alchemists conducted plenty of laboratory experiments), but the formation of a critical scientific community capable of peer reviewing discoveries and replicating results. Alchemy, as a clandestine enterprise, could never develop such a community. Wootton says that Popper was right to think that science can flourish only in an open society.

MODERN ALCHEMY

Today, new interpretations of alchemy are still perpetuated, sometimes merged with concepts from hippie, New Age or other radical countercultural movements. Even conservative Christian groups like the Rosicrucians and Freemasons have a continuing interest in alchemy and its occult symbolism. According to Principe (2011), occultists reinterpreted alchemy as a spiritual practice, involving the self-transformation of the practitioner and only incidentally (or not at all) the transformation of laboratory substances, which has contributed to a merger of magic and alchemy in popular thought.

Some forms of quackery rely on the concept of the transmutation of natural substances, using alchemical techniques or a combination of alchemical and spiritual techniques. In the practice known as Ayurveda, the ‘samskaras’ are claimed to transform heavy metals and toxic herbs in a way that removes their toxicity. These mystical beliefs persist to the present day.

Two spagyrists of the 20th century, Albert Richard Riedel and Jean Dubuis, merged Paracelsian alchemy with occultism, teaching laboratory pharmaceutical methods. The schools they founded, Les Philosophes de la Nature and The Paracelsus Research Society, popularised modern spagyrics, including the manufacture of herbal tinctures and products. The courses, books, organisations and conferences generated by their students continue to influence popular applications of alchemy as a New Age quackery practice.

REFERENCES

Ackroyd, Peter (2007) Isaac Newton. Vintage Books, London.

Boyle, Robert (1661) The Sceptical Chymist.  J. Crooke, London.
The Project Gutenberg eBook: http://www.gutenberg.org/files/22914/22914-h/22914-h.htm

Brock, William H. (1992). The Fontana History of Chemistry. Fontana Press, London.

Davidson, John S. (2001) Annotations to Boyle’s ‘The Sceptical Chymist’:
http://www.chem.gla.ac.uk/staff/alanc/annotations.pdf

Martin, Sean (2015) Alchemy and Alchemists. Pocket Essentials, Harpenden.

Pattison Muir, M. M., (1902) The Story of Alchemy and The Beginnings of Chemistry. The Project Gutenberg eBook http://www.gutenberg.org/files/14218/14218-h/14218-h.htm

Principe, Lawrence M. (2011) ‘Alchemy Restored’. Isis 102.2: 305-12.

Russell, Bertrand. (1961) History of Western Philosophy. 2nd edition. George Allen & Unwin, London.

Sendivogius (1608) The New Chemical Light. The Alchemy Web Site: http://www.levity.com/alchemy/newchem1.html

Wootton, David (2015). The Invention of Science – A New History of the Scientific Revolution. Harper Perennial, New York.

About the author Tim Harding originally majored in biochemistry. He has also studied the history and philosophy of science twice – once as part of a science degree and more recently as part of an Arts degree majoring in philosophy.



What is a fact?

by Tim Harding

This might seem like a simple question, but the answer is not so straightforward. The Macquarie Dictionary defines a fact as ‘what has really happened or is the case; truth; reality’. This implies that facts are objective, as opposed to opinions which are subjective. The distinction is important to scientific skepticism, which looks for objective evidence in support of dubious claims that are made.

The usual test for a statement of fact is verifiability; that is, whether it can be demonstrated to correspond to either empirical observation or deductive proof (such as 2+2=4). For instance, the proposition ‘It is raining’ describes the fact that it actually is raining.  The rain that falls can be objectively measured in a rain gauge – it is not just a matter of opinion.

On the other hand, an opinion is a judgement, viewpoint, or statement about matters commonly considered to be subjective, such as ‘It is raining too much’.  As Plato said: ‘opinion is the medium between knowledge and ignorance’.

Philosophers generally agree that facts are objective and verifiable. However, there are two main philosophical accounts of the epistemic status of facts. The first account is equivalent in meaning to the dictionary definition – a fact is that which makes a true proposition true. In other words, facts are features of reality that obtain independently of propositions. This means that there can be unknown facts yet to be discovered by science or other investigations. But we cannot verify a fact unless we know about it. So the other account is that a fact is an item of knowledge – a proposition that is true and that we are justified in believing is true. This means that we either have to accept that there can be unknown and unverifiable facts, or we must adopt the position that facts are things that we know.

References

Mulligan, Kevin and Correia, Fabrice, ‘Facts’, The Stanford Encyclopedia of Philosophy (Winter 2017 Edition), Edward N. Zalta (ed.)

Russell, Bertrand (1918) The Philosophy of Logical Atomism, Open Court, La Salle.

Russell, Bertrand (1950) An Inquiry Into Meaning and Truth, Allen & Unwin, London.


Was William really a Conqueror?

by Tim Harding BSc BA

The ‘Norman Conquest’ is generally regarded as an epic event in English history – it was more than just a change of royal dynasty. There is no doubt that William I[1] built castles and drastically changed the composition of the nobility and clergy, dispossessing many of them of their estates. However, the extent of the legal and administrative changes he made to England is contested by different historians. In this essay, I argue that William initially minimised these changes to help legitimise his claim to the English throne. He emphasised continuity in English law and customs, to avoid the appearance of a Norman French takeover of England. But he later abandoned this strategy, and made some major and lasting changes to English law and administration.

William I of England

As the legitimacy of William’s claim to the English throne is of central relevance to the initial strategy of his reign, I shall discuss this claim first, before analysing the legal and administrative changes that he made. The law regarding royal succession was far less clear in the eleventh century than it is today. It was a dangerous mix of inheritance, bequest and election by the Witan[2], without fixed rules,[3] resulting in the catastrophic succession conflict of 1066.[4]

The competing claims to the English throne

Belloc lists four justifications for William’s claim to the English throne. Firstly, his predecessor Edward the Confessor was a Norman in all that counted – speech, manners, tradition and descent from his mother Emma of Normandy.[5] (Although born at Oxford, Edward spent much of his early years with his mother in Normandy).[6] Secondly, Edward had no direct heir, and William was not only his cousin but his most prominent living relative.[7] Thirdly and most importantly, Edward had promised William that he would succeed to the English Crown.[8] Fourthly, Harold Godwinson (later King Harold II) had sworn fealty to William and promised to support William’s claim to the English throne.[9] (Harold later claimed that he had done this under duress at William’s court in Normandy).[10]

Primary evidence in support of Belloc’s third and fourth justifications is provided in the following extract from the seventh book of the Gesta Normannorum Ducum by William of Jumieges:

Edward, king of the English, being, according to the dispensation of God, without an heir, sent Robert, archbishop of Canterbury, to the Duke[11], with a message appointing the Duke as heir to the kingdom which God had entrusted to him. He also at a later time sent to the Duke, Harold[12] the greatest of all the counts in his kingdom alike in riches and honour and power. This he did in order that Harold might guarantee the Crown to the Duke by his fealty and confirm the same with an oath according to Christian usage.[13]

Similarly, William of Poitiers writes that Edward established William as his heir and ‘dispatched Harold to William in order that he might confirm his promise by an oath’.[14]

In historiographical terms, William of Jumieges was a chronicler of high standing, but representing Norman sentiment and opinion regarding these events. William of Poitiers was chaplain to Duke William and writes in a rhetorical style not concealing his admiration of the Duke.[15] Both of these primary sources are likely to be biased in William’s favour.

Tombs has an alternative theory that Harold went to Normandy to secure the freedom of his nephew Hakon, who was held hostage by William.[16] Harold was induced to swear fealty to William,[17] which may have been part of the price for release of his nephew.[18]

Unlike William’s claim to the throne, Harold Godwinson’s claim was not hereditary. Harold himself clearly based it on a deathbed grant by Edward the Confessor.[19] It is likely that while Edward was dying, he entrusted his kingdom to Harold who was in attendance at this time.[20] Edward was dangerously threatened by Harold’s family, whose lands in 1065 were £2,000 per year more valuable than the king’s.[21] Several English sources mention such a grant, and it is almost certainly true.[22] A primary source is the Anglo-Saxon Chronicle ‘E’, where the entry for the year 1066 states:

…and King Edward died, on the eve of the Epiphany[23]; and he was buried on the Feast of the Epiphany[24], in the newly consecrated Church at Westminster. And Earl Harold succeeded to the realm of England, just as the king had granted it to him, and as he had been chosen to the position. And he was consecrated king on the Feast of the Epiphany.[25]

Another primary source is the annals ascribed to ‘Florence of Worcester’, a monk who wrote in the first third of the twelfth century but who had access to earlier materials, possibly including a version of the Anglo-Saxon Chronicle that has since been lost. For the year 1066, Florence wrote:

After his [King Edward’s] burial, the under-king, Harold, son of Earl Godwine, whom the king had nominated as his successor, was chosen by the chief magnates of all England[26]; and the same day Harold was crowned with great ceremony by Aldred, archbishop of York.[27]

Again, in historiographical terms, the authors of the Anglo-Saxon Chronicles and ‘Florence of Worcester’, being Anglo-Saxons, are likely to have been biased against William I, in contrast to the Norman authors William of Poitiers and William of Jumieges.

According to Douglas, the indecent haste of these events indicates that Harold’s seizure of the throne was premeditated, and that he feared opposition. The events bore the appearance of a coup d’état executed with extreme speed and great resolution.[28]

So, it is not hard to see why William thought he had a better claim to the throne than Harold, and why he accused Harold of breaking his oath of fealty. William was plausibly entitled to claim to be England’s rightful king.[29] It had become clear to William that if he was ever to become King of England, it could only be through war. In 1066, he received support and a papal banner from Pope Alexander II, which would have strengthened his resolve.[30]

Harold is killed at the Battle of Hastings

William’s initial reign

After his decisive victory over Harold at the Battle of Hastings on 14 October 1066, William marched to Dover, Canterbury, Winchester and ultimately London, with each city offering submission without battle.[31] Significantly, William was crowned at Edward’s Westminster Abbey according to ancient English rites, dating at least from the time of King Edgar (r. 959-975).[32] William swore Edward’s coronation oath, assuming all the rights and responsibilities of an Old English king,[33] and undertaking to maintain peace and justice.[34] Every effort was made to stress the continuity of English royal rule, rather than a change to Norman French rule.[35] In other words, William argued that he was the legitimate successor of Edward the Confessor, not only de facto by conquest but also de jure by entitlement.[36] According to Carpenter, this strategy succeeded in securing English acceptance of William as their king.[37]

In keeping with this theme, William began his early reign with a series of charters addressed to the City of London and various abbots. The thrust of these charters was that the laws and customs that prevailed under Edward the Confessor were to be preserved.[38]

William chose not to ‘unite his kingdoms’ of England and Normandy, instead maintaining separate legal and administrative systems for each.[39] In particular, William did not import Norman law as a body to England.[40] Yet, he later made some significant changes to England in terms of the courts, taxation, inheritances and forest law as will be discussed in the next section below.

Apart from William’s not wanting his accession to look like a Norman French conquest of England, several reasons have been suggested as to why he kept his kingdoms separate. Firstly, Anglo-Saxon government was amongst the most advanced of its time, and superior to that of the Norman French.[41] William gratefully took over the English counties and hundreds with their local courts; the sheriffs; the pervasive ‘king’s peace’ with its specially reserved writs and royal pleas; the geld and coinage; and Edward’s chancellor Regenbald, and with him the power of the sealed writ.[42]

Clanchy has an interesting theory that the Anglo-Saxons traditionally laid claim to the whole island of Britain, even though they could not enforce it in practice. So by establishing himself as the lawful successor to Edward the Confessor, William took on his purported role as king of the whole of Britain, once again in law rather than in practice.[43]

According to Douglas and Greenaway, the ten ‘Laws of William the Conqueror’ may be regarded as among the cardinal documents of English constitutional history for the period 1042-1189.[46] They are probably a compilation of legal enactments made at various times by William I, including his confirmation of earlier laws and customs,[47] as follows:

This also I command and will, that all shall have and hold the law of King Edward in respect of their lands and possession, with the addition of those decrees I have ordained for the welfare of the English people.[48]

William added a relatively small number of laws to those of King Edward, rather than subtracting from them.[49] One of these additions was the concept of ‘murdrum’, a murder fine imposed as a form of collective punishment. The fine had to be paid by the hundred or the village in which a murder took place, if it could not apprehend the murderer or prove that the victim was English.[50]

The Normans adapted at least one type of English legal document, the writ, to their own purposes.[51] One such writ to the abbot of Bury St Edmund’s ordered the transfer to the king of all lands formerly held of the abbey by those who had died at Hastings fighting against William, on the grounds that Harold was a perjured usurper.[52] Those who had fought and survived were confirmed in their lands alongside all those tenants who had not been at Hastings.[53] These dispossessions sent shockwaves through English society.[54]

William took advantage of the wide taxation base of the Danegeld, which was initially set at two shillings per hide[55] but later increased from time to time.[56] As recorded in the Anglo-Saxon Chronicle D: ‘And the king [William] imposed a heavy tax on the wretched people, and nevertheless caused all that he overran to be ravaged’.[57]

From the end of 1067 to 1072, William was primarily engaged in suppressing various English rebellions and consolidating his power.[44] To assist in these endeavours, he employed the Norman French strategy of building castles, not only as fortified centres of regional administration but also as bases for conducting military campaigns.[45] This strategy presaged the later, more transparent Norman influence over England.

William’s later reign

In the early 1070s, William’s attitude seems to have changed. He gave up trying to learn English and stopped using English in official documents. He spent little time in the country from 1072 until his death in Normandy in 1087.[58]

One of William’s first major changes to the law was his ordinance of 1072, which in effect separated common law from the ecclesiastical courts. According to Reppy, this ‘was bound to have a tremendous effect on the future development of English law’. For instance, it enabled the common law courts to resist ecclesiastical invasions of their jurisdiction by the use of ‘writs of prohibition’.[59]

Another significant Norman change was to introduce forest laws, setting aside huge areas as royal forests for both hunting and revenue collection from tenants. Successive kings added more lands, so that by the thirteenth century they covered one quarter of England including the entire county of Essex.[60]

England’s New Forest was proclaimed by William I in 1079

According to Tombs, the ‘Norman Conquest’ annihilated England’s nobility, both physically and financially. Some 4000-5000 thegns were eliminated by battle, exile or dispossession in the largest transfer of property in English history. A generation after the ‘Conquest’, all significant power and wealth was in Norman hands.[61] The Anglo-Saxon Witan as ‘the Council of the English people’ soon disappeared.[62]

William also purged the senior levels of the Church, exemplified by his appointment of the Norman Lanfranc as the Archbishop of Canterbury.[63] The highest ranks of the Church, commanding immense political and economic power (through land tenure) were closed to Englishmen, but the lower levels (including both clergy and monks) remained predominantly English.[64]

According to Douglas, the establishment of this new aristocracy and system of land tenure was the greatest social impact on England by King William.[65] However, in the nineteenth century, most historians held that this change evolved by adaptation from the Old English past; whereas later historians considered it to be a more revolutionary result of the ‘Norman Conquest’. At the time of his book (1964), Douglas thought that the pendulum was swinging back to the first view.[66]

In particular, scholars have long argued that the Normans introduced feudalism to England, but this depends on how ‘feudalism’ is defined. Some more recent historians have urged abolishing the term altogether as a useless construct; but Thomas argues that the Norman changes to land tenure were indeed real and significant. In essence, they created a pyramid of tenancies and subtenancies, with the king as the ultimate landlord. These tenancies were granted on condition of supplying military resources to the king, or scutage payments if this was not possible.[67]

The Anglo-Norman sheriffs took over all the duties of their Anglo-Saxon predecessors, including collecting the geld, executing royal justice, controlling the local courts and keeping the new castles.[68] If there was no earl in the county, as often there was not, the sheriff was directly answerable to the king,[69] and this direct two-way communication made both the sheriff and the king more powerful than in Anglo-Saxon times. According to Douglas, the success with which the ancient English legal system, including the sheriff and the local courts, was brought to the service of the first Norman king was amongst William’s greatest achievements.[70]

Another of William’s achievements was a substantial reduction in the number of slaves in England from an estimated one in every eleven persons in 1066 to much smaller numbers, as shown in the Domesday Book.[71] William strove to suppress the export of slaves from Bristol; and one of the laws attributed to William specifically forbids the sale of one man by another outside the country.[72]

The Domesday Book

After consulting with his council at Gloucester in 1085, King William commissioned what is now known as the Domesday survey, as published in the Domesday Book.[73] The ‘terms of reference’ of this survey are set out in the Anglo-Saxon Chronicle for 1085. William sent his inquisitors all over England, into every county (except the northernmost four), where they compiled a detailed inventory of all landholdings, livestock and revenue potential.[74]

Of all the land in England surveyed by the Domesday Book, about a fifth was held directly by the King; about a quarter by the Church; and nearly half by the Norman followers of William who were relatively small in number[75] but very powerful.[76] By the end of William’s reign, it was rare to find an English name amongst the landholders recorded in the Domesday Book.[77] Only four Anglo-Saxon nobles remained as major landholders.[78] It has been calculated that only about eight percent of the land in England remained in the possession of this class.[79]

For instance, here is the entry in the Great Domesday Book for land held by my paternal ancestor, Harding of Cranmore.


Extract from Somerset chapter of Great Domesday Book

Translation:

‘Harding holds of the abbot CRANMORE. He held it likewise TRE*, and it paid geld for 12 hides. There is land for 10 ploughs. Of this, 6 hides are in demesne, and there is 1 plough and 6 slaves; and 8 villeins and 2 bordars and 7 cottars with 3 ploughs. There is a mill rendering 30d, and 50 acres of meadow, and 60 acres of pasture and 100 acres of woodland. It is worth 4l. This land cannot be alienated from the church.’

*Tempore Regis Eduardi (in the time of Edward the Confessor), abbreviation used in the Domesday Book, meaning the period immediately before the Norman conquest of England.

The Domesday survey provided William with a wealth of information with which to better manage his feudal rights and revenues. It gave him vital details about both his own properties and those of his tenants. It enabled him to reassess the level of the geld and who should pay it. When William wanted to seize estates after a tenant’s forfeiture or death, he now knew what to take.[80]

Concluding remarks

The extent and durability of the legal and administrative changes that William I made is contested by different historians. For instance, Douglas surmises that William’s overall strategy was to effect major change with the least possible disturbance to English customs.[81] Thomas argues that while the Normans brought varying levels of change to English society, they maintained basic legal continuity and hardly changed the structure of government at all.[82] On the other hand, Carpenter argues that the changes brought by William’s exploitation of his new feudal rights, especially to land tenure, were momentous. [83]

As I have endeavoured to show in this essay, my own view is that the reign of William I can be divided into two phases. The first phase until the early 1070s was where he wanted to emphasise the legitimacy of his claim to the throne by maintaining English laws and customs. The second phase was where he seemingly abandoned this strategy in favour of making some major changes, at least some of which were due to Norman French influence.

Bibliography  

Primary sources

Anon. ‘Anglo-Saxon Chronicle D’. In Douglas, David C. and Greenaway, George W. (eds.) English Historical Documents Vol. II (1042-1189). Eyre & Spottiswoode: London, 1964.

Anon. ‘Anglo-Saxon Chronicle E’. In Douglas, David C. and Greenaway, George W. (eds.) English Historical Documents Vol. II (1042-1189). Eyre & Spottiswoode: London, 1964.

Anon., Great Domesday Book, Somerset Folio: 90v, National Archives: Kew, 1086.

Anon. ‘The Laws of William the Conqueror’. In Douglas, David C. and Greenaway, George W. (eds.) English Historical Documents Vol. II (1042-1189). Eyre & Spottiswoode: London, 1964. pp. 399-400.

Florence of Worcester, ‘Select passages from the annals ascribed to Florence of Worcester’. In Douglas, David C. and Greenaway, George W. (eds.) English Historical Documents Vol. II (1042-1189). Eyre & Spottiswoode: London, 1964. pp. 204-214.

William I (1072) ‘The Ordinance of William the Conqueror’ in Reppy, Alison. Ordinance of William the Conqueror (1072) – Its Implications in the Modern Law of Succession. Ocean: New York, 1954.  

William of Jumieges (c.1070) ‘Description of the Invasion of England by William the Conqueror’. In Douglas, David C. and Greenaway, George W. (eds.) English Historical Documents Vol. II (1042-1189). Eyre & Spottiswoode: London, 1964.

William of Poitiers (c. 1071) ‘The Deeds of William, Duke of the Normans and King of the English’. In Douglas, David C. and Greenaway, George W. (eds.) English Historical Documents Vol. II (1042-1189). Eyre & Spottiswoode: London, 1964.

Secondary sources

Barlow, Frank (2004). ‘Edward (St Edward; known as Edward the Confessor)’. Oxford Dictionary of National Biography https://doi.org/10.1093/ref:odnb/8516 dated 25 May 2006, accessed 19 October 2018.

Bates, David. William the Conqueror. Yale University Press: New Haven, 2016.

Belloc, Hilaire. William the Conqueror. Peter Davies: Edinburgh, 1933.

Carpenter, David. The Struggle for Mastery: Britain 1066-1284. Penguin: London, 2004.

Clanchy, M.T. England And Its Rulers (Fourth Edition). Wiley Blackwell: Chichester, 2014.

Douglas, David C. William the Conqueror – The Norman Impact Upon England. Eyre & Spottiswoode: London, 1964.

Douglas, David C. and Greenaway, George W. (eds.) English Historical Documents Vol. II (1042-1189). Eyre & Spottiswoode: London, 1964.

Reppy, Alison. Ordinance of William the Conqueror (1072) – Its Implications in the Modern Law of Succession. Ocean: New York, 1954.  

Thomas, Hugh M. The Norman Conquest: England After William the Conqueror, Rowman & Littlefield: Lanham, 2008.

Tombs, Robert. The English & Their History. Penguin: London, 2014.

Endnotes: 

[1] The appropriateness of William’s epithet ‘conqueror’ is also contested by some historians. For instance, Hilaire Belloc argues that because William was Edward the Confessor’s rightful heir, there was no ‘conquest’ and William was not a ‘conqueror’ (Hilaire Belloc, William the Conqueror, Edinburgh, 1933. pp.  47-50, 127-128).  For this reason, I use the neutral title of William I in this essay.

[2] The Witan or ‘Council of the English people’ was a gathering of thegns (the majority of the aristocracy below the ranks of ealdormen and high-reeves) and prelates, which was summoned by the king at various places to give advice, settle disputes, try cases of treason or to endorse royal acts. It was crucial in times of danger or disputed succession (Tombs 2014: 24).

[3] Hugh M. Thomas, The Norman Conquest, Lanham, 2008. p. 17.

[4] Tombs, Robert, The English & Their History. London, 2014. p.36.

[5] Hilaire Belloc, William the Conqueror, Edinburgh, 1933. p.46.

[6] Frank Barlow, Oxford Dictionary of National Biography, 2006. pp. 1-3; Thomas, The Norman Conquest, p.12; David C. Douglas, William the Conqueror – The Norman Impact Upon England. London, 1964. pp.162, 165.

[7] Belloc, William the Conqueror, p.47.

[8] Belloc, William the Conqueror, pp.48-50; Thomas, The Norman Conquest, pp.18, 23; Tombs, The English & Their History, p. 40; Douglas, William the Conqueror – The Norman Impact Upon England. p.169.

[9] Belloc, William the Conqueror, pp. 70, 74; Thomas, The Norman Conquest, p.23; Douglas, William the Conqueror – The Norman Impact Upon England. p.176.

[10] Belloc, William the Conqueror, pp. 80-81; Thomas, The Norman Conquest, p.24; Douglas, William the Conqueror – The Norman Impact Upon England. p.177.

[11] William, Duke of Normandy.

[12] Harold Godwinson, later King Harold II.

[13] William of Jumieges, ‘Description of the Invasion of England by William the Conqueror’, c.1070. p.215.

[14] William of Poitiers, ‘The Deeds of William, Duke of the Normans and King of the English’ c. 1071. p217.

[15] David C. Douglas, and George W. Greenaway, (eds.) English Historical Documents Vol. II (1042-1189). London, 1964. pp. 215, 217.

[16] Tombs, The English & Their History, p. 40; David Bates, William the Conqueror. New Haven, 2016. p.199.

[17] Tombs, The English & Their History, p. 40.

[18] David Carpenter, The Struggle for Mastery: Britain 1066-1284. London, 2004. p.68.

[19] Hugh M. Thomas, The Norman Conquest, Lanham, 2008. p. 17.

[20] Barlow, Oxford Dictionary of National Biography, 2006. p. 12

[21] David Carpenter, The Struggle for Mastery: Britain 1066-1284. London, 2004. p.67.

[22] Douglas, William the Conqueror – The Norman Impact Upon England. p.252; Thomas, The Norman Conquest, pp.17-18; David Bates, William the Conqueror. New Haven, 2016. p.213.

[23] 5 January 1066.

[24] 6 January 1066.

[25] Anon. Anglo-Saxon Chronicle D. 1066. in Douglas and Greenaway, p.142.

[26] Likely to have been the Witan (see footnote 2).

[27] Florence of Worcester, Annals, p.212

[28] Douglas, William the Conqueror – The Norman Impact Upon England. p.182.

[29] Tombs, The English & Their History, p. 42.

[30] Douglas, William the Conqueror – The Norman Impact Upon England. pp.169, 188; David Bates, William the Conqueror. New Haven, 2016. pp.165, 223.

[31] Douglas, William the Conqueror – The Norman Impact Upon England. p.206.

[32] Ibid., p.248.

[33] Ibid., pp.206-207.

[34] David Carpenter, The Struggle for Mastery: Britain 1066-1284. London, 2004. p.62.

[35] Douglas, William the Conqueror – The Norman Impact Upon England. p.248.

[36] Ibid., p.250.

[37] David Carpenter, The Struggle for Mastery, p.75.

[38] Douglas, William the Conqueror – The Norman Impact Upon England. p.258.

[39] Thomas, The Norman Conquest, p. 60.

[40] Alison Reppy, Ordinance of William the Conqueror (1072) – Its Implications in the Modern Law of Succession, New York, 1954. p.4; Thomas, The Norman Conquest, p. 59.

[41] Thomas, The Norman Conquest, pp.10, 59.

[42] David Carpenter, The Struggle for Mastery, pp.90-92.

[43] Clanchy, M.T. England And Its Rulers, Chichester, 2014. p.20.

[44] Douglas, William the Conqueror – The Norman Impact Upon England. p.211.

[45] Ibid., p.216.

[46] David Douglas and George Greenaway (eds.) English Historical Documents Vol. II (1042-1189). London, 1964. p.399.

[47] Ibid., p. 204.

[48] Anon. ‘The Laws of William the Conqueror’ in Douglas and Greenaway, p.400.

[49] Thomas, The Norman Conquest, p.84.

[50] David Carpenter, The Struggle for Mastery, p.102; Thomas, The Norman Conquest, p.85.

[51] Thomas, The Norman Conquest, p.10.

[52] David Carpenter, The Struggle for Mastery, p.76.

[53] Belloc, William the Conqueror, p. 114; David Bates, William the Conqueror. New Haven, 2016. pp.286-287.

[54] David Carpenter, The Struggle for Mastery, p.76.

[55] A hide or carucate varied in size but was often around 120 acres (David Carpenter, The Struggle for Mastery, p.63).

[56] Douglas, William the Conqueror – The Norman Impact Upon England. p.300.

[57] Anon. Anglo-Saxon Chronicle D.  1067. p.147.

[58] Tombs, The English & Their History, p. 44; Douglas, William the Conqueror – The Norman Impact Upon England. p.211.

[59] Reppy, Ordinance of William the Conqueror (1072) – Its Implications in the Modern Law of Succession, New York, p.5.

[60] Thomas, The Norman Conquest, p.60.

[61] Tombs, The English & Their History, pp. 44-45; Thomas, The Norman Conquest, p.47.

[62] Ibid., p. 47.

[63] David Bates, William the Conqueror. New Haven, 2016. p.329.

[64] Tombs, The English & Their History, p. 50.

[65] Douglas, William the Conqueror – The Norman Impact Upon England. pp.275, 280.

[66] Ibid., p.276.

[67] Thomas, The Norman Conquest, pp.71-73, 82-83.

[68] Douglas, William the Conqueror – The Norman Impact Upon England. p.298.

[69] David Carpenter, The Struggle for Mastery, p.64.

[70] Douglas, William the Conqueror – The Norman Impact Upon England. pp.306, 308.

[71] Thomas, The Norman Conquest, p.98.

[72] Anon. ‘The Laws of William the Conqueror’ in Douglas and Greenaway, p.400.

[73] David Bates, William the Conqueror. New Haven, 2016. pp.462-463.

[74] Anon. ‘Anglo-Saxon Chronicle E’ in Douglas and Greenaway, p.161.

[75] Less than 180 tenants-in-chief are recorded in the Domesday Book as possessing estates rated an annual value of more than ₤100 (Douglas, William the Conqueror – The Norman Impact Upon England. p.269).

[76] Douglas, William the Conqueror – The Norman Impact Upon England. p.269.

[77] Ibid., p.266.

[78] David Carpenter, The Struggle for Mastery, p.79.

[79] Douglas, William the Conqueror – The Norman Impact Upon England. p.266.

[80] David Carpenter, The Struggle for Mastery, p.104.

[81] Douglas, William the Conqueror – The Norman Impact Upon England. p.290.

[82] Thomas, The Norman Conquest, pp.87, 143, 144.

[83] David Carpenter, The Struggle for Mastery, p.87.


The Harm of Racial Slurs

by Tim Harding, B.Sc. B.A. (philosophy)

In this paper I would like to analyse the derogatory nature of racial slurs.  In particular, I shall try to answer the following questions: ‘What harm do racial slurs do?’ and ‘Do racial slurs deserve protection under the principle of free speech?’.  In answering these questions, I enlist the methodology of speech act theory.  I argue that racial slurs are perlocutionary speech acts that result in harmful consequences to target groups and individuals. They are not just offensive insults or taboo words.  There are widely-accepted exceptions to the right of free speech for words that cause harm, and I argue that racial slurs should be included amongst these exceptions.

By the term ‘racial slur’, I mean derogatory words targeting the race of a person or group.  Examples include ‘n**g*r’, ‘k*k*’, ‘ch*nk’ and ‘w*g’.  I do not include derogatory words targeting a person’s religious, political or other beliefs.  This distinction is important, because the ability to criticise beliefs and ideas is an essential component of free speech. Also, people can change their beliefs but not their race. So, on this basis, I would not count a derogatory word like ‘jihadist’ as a racial slur.

Another point of clarification is that in this paper, I intend to discuss the philosophical principles of racial slurs and free speech, rather than the policy or legal aspects of these concepts.  The question of whether racial slurs should be permitted, regulated or prohibited by law is one that lies more within the scope of political philosophy or jurisprudence than the philosophy of language.

I would like to illustrate the conflict between racial slurs and free speech by quoting some examples of actual slurs gathered by Stanford law professor Charles Lawrence III (1990: 431). Each example is followed by a typical response, appearing in italics, that Lawrence has personally heard many times, arguing against taking any action over such racial slurs.

  • Dartmouth College:

Black professor called ‘a cross between a welfare queen and a bathroom attendant’.

Yes, speech is sometimes painful. Sometimes it is abusive. That is one of the prices of a free society.

  • Purdue University:

Counselor finds ‘Death N**g*r’ scratched on her door.

More speech, not less, is the proper cure for offensive speech.

  • Smith College:

African student finds message slipped under her door that reads, ‘African N**g*r do you want some bananas? Go back to the Jungle.’

Speech cannot be banned simply because it is offensive.

The common point being made in these responses is that such racial slurs are merely offensive insults, and that the principle of free speech outweighs taking any action against them.  That is a point that I would like to challenge in this paper.

To assist my analysis, let me contrast the impacts of these racial slurs with some non-racial slurs.  Whilst all slurs are by definition insulting to their targets, there are at least some non-racial slurs that probably cause little or no harm.  For instance, slurs like ‘ignorant’, ‘stupid’ and ‘lazy’ are often heard in heated debates; and whilst they may cause offence, the targets of such slurs rarely if ever claim to be harmed by them.  So, what are the harms caused by racial slurs?

I shall commence my answer to this question by referring to speech act theory.  In linguistics and the philosophy of language, a ‘speech act’ is an utterance that has a performative function.  In colloquial terms, a speech act is ‘doing something’ as well as ‘saying something’.  Since the middle of the twentieth century, recognition of the significance of speech acts has demonstrated the ability of language to do things other than to describe reality or states of affairs (Green 2017:1).

In his seminal 1962 work ‘How To Do Things with Words’, John Austin developed his theory of performative utterances. In a distinct break from the logical positivist view of statements, Austin (1962: 4-5) argues that there are meaningful sentences without truth values. He claims that utterances can be found such that:

  1. they do not ‘describe’ or ‘report’ or constate anything at all, are not ‘true or false’; and

  2. the uttering of the sentence is, or is a part of, the doing of an action, which again would not normally be described as saying something.

Examples that Austin gives of such utterances include saying ‘I do’ or ‘I will’ during a marriage ceremony, the naming of a ship at its launch and the offering of a wager. He calls such statements ‘performative sentences’ or ‘performative utterances’ (Austin 1962: 6). The action that is performed when a performative utterance is said belongs to what Austin calls a ‘speech act’ (Austin 1962: 40). Austin distinguished between three types of speech acts as follows:

  • a locutionary act is the uttering of a sentence with a certain sense and reference, that is, meaning;

  • an illocutionary act has a certain performative force, such as informing, ordering, warning or promising;

  • a perlocutionary act results in consequences, such as persuading, deterring, threatening, and as I discuss below, subordinating (Austin 1962: 109).

Verrochi (2015: 1) enlists Austin’s theory to provide us with a methodology for actively addressing the harm that is done by racial slurs. She argues that attempts to describe racial slurs as merely insulting or offensive, or to locate the harm in the intention of the speaker (that is, in racist attitudes), are inherently problematic (Verrochi 2015: 20). In terms of speech act theory, these could be categorised as locutionary or illocutionary speech acts. Instead, Verrochi advocates a perlocutionary speech act categorisation of racial slurs:

What is needed and warranted is a system that locates the harm of [racial] hate speech not in the feelings, emotions, or thoughts of the audience, nor in the heart, intention, or thoughts of the speaker, but in the force of certain speech acts — when uttered in the ‘right’ context by people with the ‘right’ authority — to do as much as they say (Verrochi 2015:20).

Verrochi (2015: 15-20) argues that racial slurs that target historically subordinated groups (such as African-American slaves and their descendants) have the effect of intimidating these groups and re-establishing a harmful social hierarchy based on race.  In her opinion, these perlocutionary speech acts result in racism (by which I assume she means racial stigmatisation and discrimination) and promote racial subordination.

Similarly, Tirrell (1999: 43) states that the perlocutionary effect of racial slurs is clear: ‘they are angry put-downs that attempt to reduce the person to one real or imaginary feature of who they are’. She argues that it is not simply because a particular speaker has a particular (racist) attitude that a racial slur is harmful (Tirrell 1999: 44). In other words, racist attitudes are not in themselves harmful – it is the perlocutionary speech acts that do the harm. She says that:

…the heart of the expression is its designating of the person as subordinate…To call someone a ‘n**g*r’ today is at minimum to attribute a second-class status to him or her, usually on the basis of race and, arguably, to take that lower status to be deserved (Tirrell 1999: 45).

So rather than racism being an attitude or mental state of a speaker, Tirrell (1999: 45-46) regards racism as ‘a structure of social practices that supports and enforces the subordination of the well-being of some races to the well-being of members of other races’.  (For instance, slavery in the United States not only had a negative effect on the well-being of the slaves, but also a positive effect on the well-being of the slave-owners and their descendants, for all the free labour from which they profited).  In terms of speech act theory, Tirrell argues that ‘the social, psychological and economic practices of treating dark-skinned African-Americans as less valuable than light-skinned European-Americans give content and force to the term n**g*r’ (Tirrell 1999:49).

Tirrell (1999: 58) examines a theory that the derogation of the term n**g*r is a pragmatic effect, not a semantic aspect of this term.  If the derogation were a semantic aspect of the term, there could be no non-derogatory use of it; and yet there is.  Some African-Americans use this term amongst themselves as a strategy for the reclamation and blunting of the racial slur, or even as a term of endearment.  According to this theory, such pragmatic factors are the means by which the derogatory force is detached (Tirrell 1999:58).

Tirrell adds a corollary that when people who are not African-Americans use the term, it is impossible for the term not to carry derogatory force (Tirrell 1999:58), as argued by Whiting and further discussed below.  In speech act terms, Tirrell says that ‘individual speakers cannot escape the socially established meaning of their utterances, except occasionally by the grace of the communities in which they live and speak’ (Tirrell 1999: 61).  In this way, Tirrell is implying a perlocutionary force of racial slurs.

While Verrochi and Tirrell have focused on the harm caused by racial slurs to target groups, Delgado (1982: 135-149) has identified various sources of harm to individuals.  He argues that ‘such language injures the dignity and self-regard of the person to whom it is addressed, communicating the message that distinctions of race are distinctions of merit, dignity, status, and personhood’ (Delgado 1982: 135-136).

The psychological harms caused by racial stigmatisation are often much more severe than those of other insults, because membership of a racial minority is neither self-induced nor alterable (Delgado 1982: 135-136).  They not only impair the victim’s capacity to form close interracial relationships, but affect even their relationships with their own group.  Such psychological harms can result in mental and psychosomatic illnesses (Delgado 1982: 137).

Racial stigmatisation can also damage a victim’s pecuniary interests, by limiting his or her career prospects and social mobility.  In this way, it can be seen as a force used by the majority to preserve an economically advantageous position for themselves (Delgado 1982: 139-142).

Finally, Delgado (1982: 142-143) argues that racial slurs have an even greater impact on children than adults.  Empirical studies have shown that the effects of racial slurs are discernible early in life, with minority children exhibiting distress or even self-hatred because of their colour, and majority children associating dark skin with undesirability and ugliness (Delgado 1982: 142).

In contrast to Verrochi and Tirrell, Hom (2008: 416-440) argues that the semantic strategy fares better than the pragmatic strategy for explaining how racial slurs or epithets work.  The difference is that according to the semantic strategy, their derogatory content is fundamentally part of their literal meaning; whereas according to the pragmatic strategy, their derogatory content is derived from how they are used (Hom 2008: 416).

A problem for the semantic strategy is that it fails to explain the non-derogatory uses of racial slurs referred to above.  On Hom’s view, the derogatory content of a racial slur (he calls them epithets) is causally determined in part by factors external to, and sometimes unknown by, the speaker (Hom 2008: 430).  Hom calls this view combinatorial externalism (CE), where the meanings of racial slurs are supported and semantically determined by their corresponding racist institutions (Hom 2008: 431).  In this way, racial slurs ‘express derogatory semantic content in every context, but they do not actually derogate their targets in every context’.  In other words, racial slurs are words with derogatory content; speakers derogate by using words with such contents (Hom 2008: 432).

To provide an example of what Hom probably means here, let us consider two sentences provided by Elisabeth Camp (2013: 330):

(1) Isaiah is a k*k*.

(2) Isaiah is not a k*k*.

On Hom’s view, sentence (1) is derogatory towards Isaiah, but sentence (2) is not, because Isaiah is not being described by a racial slur.  On the other hand, if these sentences are viewed pragmatically rather than semantically, they could be interpreted as meaning the following:

(3) Isaiah is Jewish.  And by the way: boo to Jews!

(4) Isaiah is not Jewish.  And by the way: boo to Jews!

So, by this interpretation, the use of the word ‘k*k*’ in sentence (2) is still derogatory towards Jews in general, even though it is not derogatory towards Isaiah in particular.  Camp calls this view subjectivist expressionism (Camp 2013: 331-332), although it is not a view that she necessarily subscribes to herself.

Whiting (2013: 366–368) supports the second interpretation above, and argues that it is typically no less derogatory to make negative claims using racial slurs than it is to make positive claims using them, whereas Hom’s combinatorial externalism suggests otherwise.  Whiting presses his point by providing a further example:

  • A. The US President is a n**g*r.

  • B. A said that the US President is a n**g*r.

Whiting (2013: 368) argues that, intuitively, B’s utterance is racially derogatory, but combinatorial externalism does not seem able to explain why this is so.  He says that on Hom’s account, B’s report is factually true and non-derogatory.  Whiting (2013: 368) reports Hom as later acknowledging this problem, and as trying to explain a sentence like (B) as not derogating but merely causing offence.  Whiting (2013: 368) responds by arguing that if a sentence like (B) were uttered to an audience of white racists, none of them might take offence, yet the utterance would still be a derogatory racial slur.

Anderson and Lepore (2013: 350–363) reject the assumption that we need to understand the use of a racial slur as expressive of derogatory content.  Instead, they propose what they call ‘prohibitionism’, which is the view that racial slurs are prohibited or taboo words, and so a violation of this taboo might provoke offence.  They argue that this taboo is ubiquitous, that is, embedding a racial slur inside a sentence does not immunise its users from transgression (Anderson and Lepore 2013: 353).  Presumably, this taboo includes the reclamatory or affectionate use of the word ‘n**g*r’ among African-Americans.

In contrast, Whiting (2013: 368-369) argues that prohibitionism seems false, because it is possible for there to be racial slurs in the absence of a taboo or prohibition.  He provides the following example:

Imagine a deeply racist society in which the use of ‘n**g*r’ is not prohibited but nonetheless is expressive of racist thoughts or attitudes concerning those to whom its neutral counterpart applies. Members of the society derogate in using ‘n**g*r’, even though they do not violate any prohibition. The word, as used in that society, is surely a slur.

In my view, the above scenarios provided by Camp and Whiting neatly separate the derogatory force of racial slurs from merely causing offence or uttering taboo words.  The significance of this distinction is that, as we saw in the examples provided by Charles Lawrence earlier in this paper, the argument that racial slurs are merely offensive insults is used as a defence against taking any action regarding the use of such slurs.  Put simply, this argument is that insult and offence are common components of public debate, and to restrict their use would constitute an unjustified restriction of free speech.

On the other hand, it is commonly accepted (including in law) that free speech is not absolute.  There are well-known exceptions to the right of free speech, for instance, in cases of public safety (shouting ‘Fire!’ in a crowded theatre), causing riots, incitement to crime and defamation.  The rationale behind these exceptions is that such statements cause harm.  But as I have argued in this paper, racial slurs also cause harm, possibly to a lesser extent than threatening public safety, causing riots or inciting crime, but harm nonetheless.  The harm caused by racial slurs undermines the argument that racial slurs should be protected as free speech, on the grounds that they merely cause insult and offence in a similar manner to non-racial slurs.  (To be clear, I am not generalising that all harmful speech is impermissible. I have given reasons above why I think that racial slurs cause sufficient harm to be impermissible).

In conclusion, speech act theory can provide us with a methodology for showing how racial slurs cause harm to target groups and individuals, rather than just insult or offence.  I have argued that racial slurs are perlocutionary speech acts, meaning that they result in consequences (in this case harmful consequences) rather than just express racist attitudes or emotions.  The harm lies in these adverse consequences to their intended targets, rather than in the racist attitudes or emotions of the speaker.  Because there are widely-accepted exceptions to the right of free speech where sufficient harm is caused, I have argued that racial slurs should be included in such exceptions.

Bibliography

Anderson, L. and Lepore, E. 2013 ‘What Did You Call Me? Slurs as Prohibited Words’. Analytic Philosophy 54: 350–363.

Austin, John L. 1962. How To Do Things with Words. Oxford: Oxford University Press.

Camp, Elisabeth. 2013. ‘Slurring Perspectives’. Analytic Philosophy Vol. 54 No. 3, September 2013, pp. 330–349.

Delgado, Richard. 1982 ‘Words That Wound: A Tort Action For Racial Insults, Epithets, and Name-Calling’. Harvard Civil Rights-Civil Liberties Law Review, Vol. 17 (1982).

Green, Mitchell. 2017. ‘Speech Acts’. The Stanford Encyclopedia of Philosophy (Winter 2017 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2017/entries/speech-acts/>.

Hom, Christopher. 2008. ‘The Semantics of Racial Epithets’. The Journal of Philosophy, Vol. 105, No. 8 (Aug., 2008), pp. 416-440.

Lawrence, Charles R. 1990 ‘If He Hollers Let Him Go: Regulating Racist Speech on Campus’. Duke Law Journal, June 1990 pp. 431-483.

Tirrell, Lynne 1999. ‘Derogatory Terms: Racism, Sexism and the Inferential Role Theory of Meaning’ in Oliver, Kelly and Hendricks, Christina (eds.), Language and Liberation: Feminism, Philosophy and Language, SUNY Press, 1999. pp 41-79.

Verrochi, Meredith. 2015.  ‘Uncooperative Engagement: An Active Response To Hate Speech’. Ph.D Dissertation, submitted to Michigan State University. <https://d.lib.msu.edu/etd/3808>  Viewed 25 May 2018.

Whiting, Daniel. 2013. ‘It’s Not What You Said, It’s The Way You Said It: Slurs And Conventional Implicatures’. Analytic Philosophy Vol. 54 No. 3, September 2013, pp. 364–377.


A joint response to Gary Bakker on scientism and philosophy

By Tim Harding and James Fodor

Introduction

In the last issue of The Skeptic (December 2017, pages 56-59), Gary Bakker criticises an essay from the previous September issue of The Skeptic by Tim Harding. This essay is headed ‘A Step Too Far’ (pages 32-35), and argues against the relatively recent advent of the ideology known as scientism, which in a nutshell claims that science is the only legitimate domain of objective knowledge. At several points, Tim’s essay cites and quotes an earlier essay by James Fodor in the Australian Rationalist magazine (December 2016, pages 32-35) titled ‘Not So Simple’, which was also criticised by Bakker. That is why we have prepared this joint response to Bakker’s article.

We think that it is incumbent on a critic to understand and come to grips with what one is criticising. A failure to do so is a recipe for misrepresentation of the arguments one is attempting to refute. In this case, Bakker has not only misrepresented many of our positions and arguments, but more fundamentally he has misrepresented the nature of the topics we are arguing about, including science, scientism, rationality and philosophy.

One of Bakker’s major misunderstandings seems to be about philosophy. To characterise philosophy as what happens at amateur ‘Philosophy Cafes’ is disingenuous, highly misleading and frankly absurd. It is like defining psychology as what is discussed in amateur pop psychology or self-help groups. Philosophy is a serious academic discipline which is taught at almost all of the world’s leading universities. The main sub-fields of academic philosophy include logic, metaphysics, epistemology, ethics, aesthetics, political philosophy, the philosophy of language and the philosophy of science. While logic is of course used by many disciplines including science and mathematics, the study and development of logic itself is actually a branch of philosophy. Until only a few hundred years ago, science was also a branch of philosophy, known as ‘natural philosophy’. Experimental scientific methods were initially developed by the English philosophers Robert Grosseteste and Roger Bacon in the 13th century, as explained in an essay by Tim in the June 2016 issue of The Skeptic. Since the branching off of science from philosophy beginning around the 17th century, philosophers have been quite happy to leave empirical observations and experiments to the scientific domain. As such, any competition between philosophy and science exists only in the minds of scientism advocates like Bakker. This imagined competition stems from a lack of understanding of the nature of philosophy. In particular, philosophy of science does not attempt to undermine or replace science, but rather seeks to understand the nature of science and how and why it works as well as it does.

In the present piece, we will critically analyse the arguments made by Bakker in his article. We will begin with an examination of how Bakker has misrepresented our arguments, and failed to understand what we were actually arguing. We will then discuss three key issues raised by Bakker: how moral and ethical questions should be resolved, the justification of science as ‘what works’, and the notion that philosophy has never made any contributions to human knowledge. In each case we argue that not only does Bakker fail to provide convincing reasons for his contention, but also that he faces powerful objections that he fails to address. In discussing each of these specific topics, we also hope to illustrate that the only way Bakker could hope to respond to our objections is by engaging in philosophical argumentation, which would thereby critically undermine his main thesis that such discourse has no value.

Misrepresenting our arguments

Throughout his article, Bakker consistently misstates and misrepresents our arguments. He begins by characterising our writings as exemplifying what he terms ‘small r rationalism’, which according to Bakker entails ‘agreement with Immanuel Kant who argued that knowledge can be innate, can be acquired through pure reasoning, and that philosophical enquiry and argument alone can answer the Big Questions’. This entire concept is a red herring since neither of us is a Kantian, nor are we defending a rationalist as distinct from empiricist approach in our writings. Furthermore, it is logically invalid to infer our wider philosophical positions from two specific essays we have written about narrow topics. In particular, any attempt to characterise us as anti-empiricists is a bit rich given our backgrounds in science and skepticism. This ‘small r rationalism’ is contrasted with ‘capital r Rationalism’, which Bakker says is defined by the Rationalist Society of Australia as holding that ‘knowledge is best acquired by use of the scientific method, which is an inseparable combination of reason plus observation or experiment’. However, Bakker does not provide any reference for his definition of ‘Rationalism’, and we cannot find his quoted definition on the Rationalist Society of Australia website. It remains unclear, therefore, where Bakker’s concepts of rationalism (small or capital ‘r’) have come from.

Later in his piece, Bakker castigates James for his critique of ‘crude positivism’, which Bakker says ‘sounds like a straw man’, and asks ‘why not critique “refined positivism”?’. In his original article, however, James explained that the reason he discusses ‘crude positivism’ is that he wanted to address the ‘patchwork of overlapping ideas and perspectives’ that in his experience seemed quite prominent in rationalist/skeptic/freethought communities. Neither of us criticised ‘positivism’ as such in our essays – another red herring on Bakker’s part. A response to more sophisticated philosophical accounts of positivism would require much more space than was available for James’ short article, and furthermore such accounts have already been written elsewhere. All this should have been clear enough after a careful reading of our essays, where we both outline clearly what James means by the term ‘crude positivism’. Consulting ‘Mr Google’ is no substitute for carefully reading the argument one intends to respond to.

James also does not say that scientism claims that ‘the humanities should adopt the scientific method’, and even though this appears in quotes in Bakker’s piece, this phrase is not present in either Tim’s essay or James’ original essay.  So this is an actual misquotation by Bakker – even worse than a misrepresentation. The closest statement to it was one by Prof. Tom Sorrell, who was cited on page 33 of Tim’s essay using different words, albeit with a similar meaning. Rather, what James in fact argued is that ‘if the superior status of the natural sciences is based on their superior adherence to a particular set of epistemological principles, then it is those principles themselves that are the true bearer of the superior status… applying these same principles to any discipline should yield knowledge justified to similarly rigorous standards.’ James’ point here was simply that the principles of sound inquiry are broadly applicable across all disciplines. Thomas Huxley expressed this idea well: ‘the man of science simply uses with scrupulous exactness the methods which we all, habitually and at every minute, use carelessly’.

Finally, Tim does not equate ‘science’ with ‘the natural sciences’ in his essay. This comment by Bakker appears to be a misunderstanding of a statement by Prof. Tom Sorrell that Tim cites on page 33.

Bakker on ethics

Bakker attempts to give an account as to how ‘moral and ethical questions’ can be answered without recourse to philosophical argumentation. He argues that we should resolve these questions by the following procedure:

  1. Realise that moral questions are not ‘answerable by reference to some absolute, transcendent set of rules’.
  2. Instead, focus on what principles and laws ‘best achieve society’s goals’.
  3. Engage in systematic observation of ‘what human beings are actually found to value’ (as individuals and as groups).
  4. Determine (empirically) which codes of law, ethics, and mores will work best to achieve these goals, and implement those.

The first point appears to constitute an endorsement of moral anti-realism, the position that there are no objectively existing moral states of affairs. This is a philosophical position that stands in contrast to many forms of moral realism, which affirm the existence of objective moral facts while differing on the form that such moral facts take. Bakker not only fails to notice that he is making a philosophical claim, but also offers no reason at all to accept his assertion. His second point appears to be an endorsement of some form of cultural relativism, the view that what is good or moral is dependent upon the goals and standards of a particular culture. Later, though, Bakker also mentions ‘the goals… of humanity’, so he may not be a cultural relativist; but he does not even attempt to provide an account of what it could mean for ‘humanity’ to have goals, let alone what such goals might be. Either way, these are philosophical positions that require defence, and cannot simply be asserted without argument.

Aside from the lack of substantive arguments for his position, several critical objections can be raised against his views. For example, in cases of genocide or slavery, societies have determined that their goals are best met by engaging in actions we would regard as immoral. On what basis, in Bakker’s account, can we say that they are morally wrong in doing so? Bakker’s account also renders apparently very important questions about what goals we ought to have unintelligible, since on his view this would trivially amount to asking whether having a certain goal would help us to achieve that goal. Perhaps Bakker’s account can be rescued by developing sufficiently rich concepts about what is meant by a ‘goal’, how competing goals within a group are integrated, what kinds of goals are most pertinent, etc. All of this extra conceptual work – articulating distinctions and giving reasons for one’s positions – however, is precisely what one does in doing philosophy. The poverty of Bakker’s ‘solution’ to the problems posed by morality and ethics points clearly and directly to exactly why we need philosophy.

In response to Bakker’s third point, even the notion of determining empirically what people actually value is not the straightforward scientific exercise Bakker implies it to be.  It sounds like he is advocating some sort of populist opinion poll or focus group approach to ethical questions. Whilst these might provide opinions about particular ethical issues, they are unlikely to result in more generalised frameworks or principles that can be applied to other ethical issues. Anybody who has seriously studied ethics will be aware that some ethical problems can be very complex, and not amenable to resolution by public opinion surveys.

Science and pragmatism

Bakker defines science with prime reference to ‘what works’, arguing that ‘[science] is a method of inquiry, and it is the only one we have found so far that gives us reliable, reproducible, consensual, evidence-backed, applicable knowledge, in any “field of inquiry”. In fact, this is so almost by definition. If a process – a particular method – works, we include it in the scientific method’. In our earlier essays we raised the objection that this is an insufficient basis for defending the superiority of science, since some scientific theories that ‘worked’ and were ‘useful’ nevertheless have been shown to be incorrect. Bakker responds that this is not a reason for doubting pragmatic justifications of the superiority of science, since no discipline outside of science can do any better. As he says: ‘no other method has ever shown a scientifically-derived explanation that works to be wrong’.

The problem with this response is that it ignores most of James’ argument. In his argument he explained that there are two main ways of understanding the goal of science. One view, realism, holds that science attempts to arrive at accurate (albeit usually approximate) descriptions of the way reality actually is. If this is a key goal of science, then obviously there is more to good science than just being ‘useful’, as demonstrated by the fact that many useful scientific theories have nevertheless turned out not to accurately describe reality. Bakker, however, doesn’t seem to be persuaded by this, so perhaps he is an instrumentalist. Instrumentalism holds that science does not attempt to tell us about the way the world really is, but merely to deliver useful models and descriptions that make predictions and/or serve practical ends. Like other advocates of scientism, however, Bakker has also claimed that ‘all meaningful philosophical problems are actually scientific problems’. This seems to pose a problem since a great many philosophical problems relate to claims about the way the world is, while under instrumentalism science has nothing to say about the way the world actually is beyond providing useful models. Thus, if scientific instrumentalism is correct it seems that philosophical problems cannot be scientific questions. The only way to reconcile these views would be to assert that all philosophical questions relating to how the world actually is are in fact ‘meaningless’. Yet this would entail that even questions like ‘is slavery morally wrong?’ or ‘does God exist?’ or ‘what is knowledge?’ are actually meaningless. Even if one is dubious about whether philosophy has provided useful answers to such questions, it is quite something else to assert that the questions themselves are meaningless. To us this is clearly absurd – such questions may be subtle and multifaceted, but are not ‘meaningless’. As such it seems that Bakker is caught in a bind – either he must embrace scientific realism and thereby abandon his purely pragmatic conception of science as ‘what works’, or else he must instead embrace scientific instrumentalism and thereby (given his other views) hold that all philosophical questions are meaningless.

The other aspect of James’ argument about the status of science that Bakker ignores is the fact that appealing to ‘what works’ is a far too amorphous and generous criterion to grant science the superior status Bakker wants for it. This is because many other fields of inquiry and human endeavour also ‘work’. For example, one goal shared by many people and societies throughout history is to understand their purpose in living and find meaning in life. For the large majority of such people, belief in a supernatural being or spiritual agencies beyond this material world has ‘worked’ to provide them with answers that they find compelling and meaningful. We could even point to a variety of psychological studies indicating that such spiritual beliefs and practices actually do lead to better outcomes along a range of metrics of interest, such as life satisfaction, physical and mental health. Yet we would not wish to thereby grant supernatural belief the status of being a science, no matter how well it has ‘worked’ for many people over human history. Perhaps, however, we are not to understand what ‘works’ in this case as referring to achieving social or personal goals (though Bakker does use the term this way in his discussion of morality), but rather as to being uniquely able to generate ‘reliable, reproducible, consensual, evidence-backed, applicable knowledge’. In this case, however, the criterion still clearly fails, since (as Bakker himself seems to acknowledge), history, social science, detective work, jurisprudence, and other fields can also deliver this sort of knowledge. So it remains unclear what exactly is supposed to place science in the uniquely privileged position that Bakker attempts to carve out for it.

Philosophy and knowledge

One of Bakker’s primary concerns in his article seems to be arguing that ‘philosophy… as a truth-seeker… has been a dismal failure’. The only reason he gives for believing this, however, is that ‘in 3000 years it has confirmed for us not one answer to any of the Big Questions’. We interpret this to mean that philosophers have not been able to agree upon an answer to any of the Big Questions. This, however, seems to be a completely misplaced criterion. To say that philosophers have not yet agreed upon a final answer to any of the ‘Big Questions’ is simply to say that philosophy is not yet complete. This is hardly unusual in academia – theoretical physicists also admit that their work is incomplete. It does not follow that philosophers have not produced any useful knowledge or insights pertinent to the ‘Big Questions’. Bakker seems to think that philosophical knowledge is all or none – either a question has an established, agreed upon answer, or it does not. Philosophy, however, attempts (among other things) to explore and articulate key concepts that underpin human thought, such as ‘causation’, ‘time’, ‘space’, ‘mind’, ‘rationality’, ‘knowledge’, ‘good’, and ‘meaning’. This process of conceptual exploration and refinement is not all or none, but a gradual accumulation of new arguments, models, comparisons, and analytical frameworks, of the sort that can be found in any introductory philosophical textbook or handbook.

Another aspect that Bakker overlooks is that once a widely agreed upon answer or framework for thinking about one particular question is arrived at, the field ceases to be regarded as philosophy and becomes an established science.  As we mentioned earlier, modern physics was originally called ‘natural philosophy’, and most of the other fields of natural and social science likewise branched off from philosophy at various times. This was not simply because researchers decided to use ‘the scientific method’, but was in part the result of conceptual refinements and theoretical developments (as well as technological advances) that allowed the discipline to reach maturity as a science. We note that much of the subject matter of philosophy of mind is currently being transformed into the purview of the emerging field of cognitive science. Thus, the only way Bakker can argue that philosophy has been ‘a dismal failure’ as a truth seeker is, first, by ignoring all of the important historical contributions that philosophers and philosophical reasoning have made in providing the foundation for modern scientific disciplines, and secondly by imposing an implausibly rigid and simplistic criterion for what philosophical knowledge should look like.

Finally, Bakker ignores the many demonstrable contributions that philosophy has made to increasing human knowledge and wellbeing, of which we will now give a few examples. Our first example is that of Galileo, who drew his conclusions about falling objects using logic and reason rather than experience or observation. On page 58 Bakker draws a distinction between reason and logic, yet he seems unaware that reason is the application of logic, which is a sub-field of philosophy rather than science.  How on Earth could Galileo have experienced objects falling in a vacuum? Our second example is that of the democratic principles and safeguards embodied in the United States Constitution, which were significantly influenced by political philosophers such as William Blackstone, John Locke, and Montesquieu. Science had nothing to do with it. Our third example is the work of a number of philosophers, logicians, and mathematicians such as Gottlob Frege, Bertrand Russell, Kurt Gödel, and Alan Turing, who developed the foundations of logic and computer science that underpinned the development of modern digital computers. Our final example is the development of the ethical principles of informed patient consent, which were developed by judges and bioethicists. Prior to this, there were some notorious cases in the first half of the twentieth century where informed patient consent had not been obtained for certain clinical trials. We argue that informed patient consent is primarily obtained for legal or ethical reasons, and not for purely scientific purposes. We could supply further examples of the practical usefulness of philosophy, but space in this magazine is understandably limited.

Concluding remarks

Bakker’s article exemplifies the pitfalls of crude positivism and the folly of scientism. There seems to be an inverse correlation in such writings between the disdainful dismissal of non-scientific disciplines like philosophy and the level of understanding of what philosophy actually is. In particular, the fundamental flaw of Bakker’s argument is that, in arguing for the unique superiority of science and the uselessness of philosophy as a field of inquiry, Bakker is himself doing philosophy. Because of his rejection of the value of philosophy and refusal to engage with relevant philosophical literature, however, he also does it very badly. Philosophy addresses many of the most fundamental questions that underpin all aspects of human endeavour, including law, politics, ethics – and even science. It is therefore not something we can simply avoid doing or pretend doesn’t exist. It can be difficult and even frustrating when agreement and final resolution are so hard to achieve. Nevertheless, we believe that as intellectually responsible skeptics it is vital to take philosophical issues seriously, and reject the easy but misguided notion of ‘crude positivism’ that science is the only form of human inquiry worth taking seriously.


How to start up and run a local skeptics group

by Tim Harding

(An edited version of this article was published in The Skeptic magazine,
December 2017, Vol 37 No 4)

This article about local skeptics groups is intended to complement those elsewhere in this issue of the magazine by Eran Segev and Tim Mendham.  After nearly seven years as a co-organiser of the successful Mordi Skeptics in the Pub, I would now like to pass on 10 tips for people thinking of starting up a local skeptics group in their area.

As Tim Mendham writes, the Skeptics in the Pub (SitP) movement has been quite successful with over 100 groups worldwide, including 11 (now 17) in Australia.  The Mordi SitP now has over 700 members, although probably only around 10% of these come to the meetups (the rest are social media members).  I think the key to this success is the idea of meeting in a social setting over a few drinks and the possibility of dinner as well. This can help overcome the usual objections to boring meetings that we get enough of in our day jobs.

Tim Harding introducing visiting US speaker Susan Gerbic to the Mordi Skeptics, 2015

  1. Choose your venue

The obvious first requirement is the availability of reasonably priced drinks and meals.  Next is adequate parking and close proximity to public transport.  Although most groups start off meeting in a public lounge area, it’s best to choose a venue that has private rooms, if and when you want to have guest speakers later on.

  2. Choose a local name for your group

There are three main advantages to having an identifiable local geographical name for your group, such as your local town or suburb. First, it helps potential members know where you are.  Second, the venue you choose is likely to be impressed by your local name. Third, local MPs are more likely to take you seriously if you want to lobby them about some skeptical issue.  (Don’t name your group after the venue, because you might need to change venues and keep your name).

  3. Promote your group via social media

Once you have your venue and group name, the next step is to announce your existence via social media. At Mordi Skeptics, we found the Meetup web site (www.meetup.com) very useful, both in attracting members and in operating the group. The Meetup web site enables you to effectively operate the group online, without any need for those tedious organisers’ meetings that put busy people off.  Meetup.com does cost money to use, but collecting a couple of dollars from each meetup attendee should be adequate to cover this.  Establishing a Facebook page and a Twitter handle is also a good idea, and doesn’t cost anything.

  4. Select a small number of organisers

SitP groups are best run informally, with a small number of organisers – at least 3 and no more than 5.  Organisers should be selected by invitation rather than elected at a meetup. Elections require constitutions and other time-consuming formalities you will want to avoid.  There should be no need for an informal local group to incorporate, which would require an AGM and lots of tedious paperwork. Also, you are bound to find a few anti-skeptics in the audience, who would relish an opportunity to put their hand up and sabotage your group.

Work out what tasks are needed to run the group effectively, divide these tasks up between the organisers and let them get on with it.  Such tasks include speaker wrangling, social media webmastering and liaising with the venue management.

  5. Develop a good relationship with the venue management

One of the most important tasks is to develop a good working relationship with the venue staff member responsible for table and room bookings.  This task should be allocated to one of your organisers, who should initially meet this staff member personally (rather than just talk over the phone) and get onto first-name terms.  Explain what the group is on about, and how you would like to co-operate with the venue to your mutual benefit. Ask them how many members on average would need to dine and attend meetups for the venue to allocate you a private room for free on a regular basis.  (The cost of hiring a room will probably be prohibitive). The minimum number will probably be in the order of 10 for dining and 20 for attending the meeting (at which members are expected to buy a drink – even just tea or coffee).

If the venue is not prepared to give you a private room for free, you might need to look elsewhere if you want to have guest speakers. Let the venue know how many members you are expecting at each meetup and the timings (including a mid-presentation drinks break) to assist them in room allocation and staffing plans.

  6. Consider your audience

We found that there were three categories of people who attended our SitP meetups.  First, there are the committed skeptics who want to advance the cause. These are likely to be in a minority, at least initially.  Second, there are people who would like to have interesting discussions in a social setting. Third, there are people who would like to meet like-minded people with a view to possible friendship or even ‘romance’ (these people are usually more interested in the dinners than the presentations).  You need to try to cater for all three categories of people, although the skeptical cause must remain paramount.  After all, you don’t want your SitP group to be just a lonely hearts club.

  7. Have clear aims and stick to them

The organisers should develop a simple and clear set of aims or purpose, and publicise these via social media.  Allow comments on these aims via social media, but don’t allow them to be debated at the actual meetups.  Otherwise you run the risk of your group being derailed by anti-skeptics.

One thing I would recommend is keeping scientific skepticism as your central focus.  For example, you will find that some people confuse skepticism with atheism, with denialism or even conspiracy theories.  In particular, just as not all skeptics are atheists, even fewer atheists are skeptics. Both groups are more likely to flourish by being kept separate. It’s not as if people can’t join more than one type of group.

On the other hand, don’t let your scope get too narrow. The traditional skeptical topics of paranormality, quackery and pseudoscience can become a bit boring after a while. Any topic that promotes rationality and demotes irrationality is possibly suitable.  We usually found that talks about real science were the most popular.

  8. Select speakers carefully

Obviously you need to select speakers consistent with your aims or purposes.  Ask for recommendations from other skeptics groups.  Always have one or two backup speakers in case your scheduled speaker becomes ill on the day, or has some other unforeseen and unavoidable reason for not being able to speak.  Your own members would be the most reliable source of such backup speakers.

  9. Network with other skeptics groups

There are obvious mutual advantages in networking with the big skeptics groups such as Australian Skeptics Inc. based in NSW and the Australian Skeptics (Victorian Branch).  These state-based groups have significant resources, experience and expertise.  Amongst other things, it is worthwhile applying to them for not only guest speakers but possible grants or loans for such vital resources as a video projector.  You should also network with other local skeptics groups in your state, by attending their meetups and inviting their members as guest speakers.

  10. Have fun

Above all, SitP meetups should be enjoyable – they should aim to provide leisure-time pleasure rather than be some sort of obligatory burden.  If the latter, people will eventually become tired or bored and stop coming.  In my view, the growth of local skeptics groups is the future key to expanding the worldwide skeptical movement.

Tim Harding is a former co-organiser of the Mordi Skeptics in the Pub group, in a southern suburb of Melbourne.

References

Mendham, Tim, ‘Pint-Sized Fun’, The Skeptic, December 2017, Vol 37 No 4. pp.22-23.

Segev, Eran, ‘Group Thinking’, The Skeptic, December 2017, Vol 37 No 4. pp.26-29.


Why did slavery decline earlier in the North than in the South of the United States?

by Tim Harding

There were enormous differences in the timing of slavery abolition in the North of the United States compared to the South.  The gradual state by state emancipation of slaves began in the North soon after the Declaration of Independence in 1776.  Yet there was no legislated emancipation of any slaves in the South until 1865, after a bloody and destructive Civil War against the North.  Why was this so?  Was it simply due to geographic differences in the levels of racial prejudice against black Africans?  Or were there, as I intend to examine in this essay, more complex and relevant cultural, political, economic or religious differences between the North and the South?

In terms of the abolition of slavery, the dividing line between the North and the South was the Mason–Dixon Line, which separated free Pennsylvania from slave Maryland, Delaware, and what is now West Virginia.  This essay focusses on the internal North/South differences during the emancipation of slaves, rather than the abolition of the importation of slaves to America via the Atlantic slave trade, which affected both the North and the South.

During the British colonial period, African slaves were imported and distributed to all colonies to replace the dwindling supply of white indentured labour,[1] who were not arriving in sufficient numbers to replace those who had served their limited term.[2]  On the plantations, escape was easy for the white indentured labourer, who could blend into the free population, but less easy for black Africans.[3]  In the North, slaves typically worked as house servants and labourers, including on farms and on maritime docks in loading and unloading ships.  Some slaves worked in various skilled trades, such as bakers, carpenters, blacksmiths and so on.[4]

Williams argues that the decisive factor was that African slaves were cheaper than indentured white servants.  The money that procured a white servant for ten years could buy an African slave for life.  He concludes that the primary reason for the use of African slaves was economic rather than racial; and that racial prejudice was a later rationalisation to justify economic facts.  Sugar, tobacco and cotton required large plantations and hordes of cheap labour.[5]  In America, these commodity crops were all grown in the South, as a result of climatic and topological differences from the North, where there were no such large plantations.

The first American plantation commodity crop was tobacco in the Chesapeake region, dating from the early 17th century.  Production of food crops – primarily maize – was usually limited to the requirements of self-sufficiency.  The switch from indentured labour to slave labour did raise productivity on the majority of plantations, just as the slave buyers had hoped.  Most of these efficient workers were Africans rather than Creoles, with slave women performing the same work as slave men.[6]

The American Revolution brought severe economic depression and social disruption to the Chesapeake region.  There were shortages of salt, medicine, shoes and cloth, and slaves naturally suffered more than slave-owners.  Some slaves travelled with their owners, learning what the Revolution was about, which stimulated slave demands for consequential freedoms.[7]

Slaves picking cotton

Later on, after the invention of the cotton gin in 1793, the rapid expansion of the cotton industry in the South and Deep South reshaped American slave life.  The slave population in Alabama and Mississippi grew sixfold, mainly as a result of a substantial relocation of 700,000 slaves from the Southern border states.[8]   Strong world demand for cotton kept prices generally high, enabling the purchase and relocation of slaves from higher latitudes.[9]  Slaves were even bought from Northern slave-owners in anticipation of the abolition of slavery in those states;[10] although there was a market preference for experienced Southern slaves who were more efficient in the ‘sleight of picking cotton’.[11]  This evidence helps to show that slavery was clearly the cheapest and most productive source of labour in the South.[12]

African slaves were also of economic importance to the North.  Whilst there were no large agricultural plantations like those in the South, slaves performed menial tasks that whites would otherwise have had to do.  For this very reason, the Senate Foreign Relations Committee in 1828 objected to the colonisation of American slaves in Africa (which had been proposed as a solution to the perceived social problems that would arise from abolition).  The Senate Committee argued that colonisation would create a labor vacuum in the Eastern seaboard cities, increase the price of labor, and attract rural Africans and fugitive slaves to the urban centres.[13]

After the American War of Independence, there were several petitions by slaves to state legislatures begging for the abolition of slavery.[14][15]  These petitions largely fell on deaf ears at the time, although Vermont and Pennsylvania had already passed Acts for the gradual abolition of slavery in 1777 and 1780 respectively.[16]  All the other Northern states followed over the next decade or so, with New Jersey the last to do so in 1804.[17]  This gradualist approach is illustrated by the words of the key ‘Founding Father’ George Washington, who in 1786 wrote:

‘I never mean (unless some particular circumstances should compel me to it) to possess another slave by purchase; it being among my first wishes to see some plan adopted by which slavery in this Country may be abolished by slow, sure, & imperceptible degrees.’[18]

George Washington, as a farmer with his slaves

The gradual abolition of slavery in the Northern states at first freed children born to slave mothers, but required them to serve lengthy indentures to their mother’s masters.  As a result of this gradualist approach, New York did not fully free its last ex-slaves until 1827, Rhode Island in 1840, Pennsylvania in 1847, Connecticut in 1848, and New Hampshire and New Jersey in 1865.[19]

In stark contrast, none of the Southern states abolished slavery until after the American Civil War, when in 1865 the Thirteenth Amendment to the US Constitution abolished slavery in all states, except as punishment for a crime.

The original Constitution of the United States included several provisions regarding slavery.  Section 9 of Article I forbade the Federal government from completely banning the importation of slaves before January 1, 1808; although some states individually passed laws against importing slaves.  Section 2 of Article IV prohibited states from freeing slaves who fled to them from another state, and required the return of chattel property to owners.[20]

In 1789, the Fifth Amendment to the Constitution, amongst other things, stated that no person (that is, a free citizen) shall be ‘deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation’.  Because slaves were property at this time, the Fifth Amendment was interpreted to mean that slavery could not be abolished without just compensation.[21]  The need to avoid expensive compensation payments may well have been the main reason why Northern governments opted for gradual abolition on an intergenerational basis.

Whilst racism was an obvious factor in the Atlantic slave trade,[22] there appears to be little evidence that racist attitudes towards Africans were more prevalent in the South than the North prior to the Civil War.  For instance, the minstrel shows in which white performers in blackface disparaged and ridiculed Africans originated in New York.[23]  Most of these white minstrel performers were either born in New York or had lived in the city for long periods of their lives.[24]  Minstrel skits exaggerated racially distinctive features and behaviours to grotesque proportions.[25]  Through such shows, together with newspaper cartoons and posters, Northerners were constantly reminding Africans of their alleged inferiority.  These racial stereotypes ‘hardly induced Northerners to accord this clownish race equal political and social rights’.[26]

Northern white workers protested often and bitterly against unfair competition from the far cheaper African slaves.  Founding Father John Adams observed that had slavery not been abolished in the North, the white labourers would have removed the African slaves by force.  In any case, white worker hostility towards the slaves had already rendered slavery unprofitable by lowering slave motivation and productivity.[27]

An alternative view is that the views of white workers held back emancipation in the North, through fears of labor competition from freed slaves.  White trade unions reinforced antipathy towards African labor competition.  They rejected racial unity as a way of achieving higher wages and vigorously opposed abolitionism.  To the trade unions, emancipation posed a serious threat of thousands of former slaves pouring into the North to undermine wages and working conditions.[28]  Although there were obviously conflicting views of white workers towards African slaves, they underscore the central importance of economics to the debate.

On the other hand, Litwack suggests the need to balance economic arguments for the North/South differences with consideration of cultural and ideological influences.  He suggests that political leadership was a factor in drawing attention to inconsistencies between the Enlightenment principles used to justify the American Revolution and the continuation of slavery.[29]  For instance, the Northern Founding Father John Jay (later Chief Justice of the Supreme Court) wrote:

‘To contend for liberty and to deny that blessing to others involves an inconsistency not to be excused.  Until America ridded herself of human bondage her prayers to Heaven for liberty will be impious’.[30]

John Jay (1745 – 1829)

Another cultural difference was that the leading antislavery religious movement, the Quakers, were much more active in the North than in the South.  Abolitionist sentiment in Pennsylvania, for example, resulted largely from early and persistent Quaker opposition to slavery as inconsistent with ‘the true spirit of Christianity’.  Following the lead of Pennsylvania, annual Quaker meetings in other Northern colonies adopted similar condemnations of slaveholding.[31]

The New York Manumission Society, founded in 1785 with John Jay as its president, was also politically influential.  In 1799, John Jay as Governor of New York signed a bill providing for the gradual emancipation of New York State’s 21,000 slaves.[32]

As a counterbalance to these explanations, Davis argues that the Southern states also had antislavery political leadership from the likes of Washington, Jefferson, Madison, St. George Tucker, Patrick Henry, Arthur Lee and John Laurens.[33]  Yet there were no legislated moves towards the emancipation of Southern slaves, gradual or otherwise.  For example, the Virginian judge St. George Tucker despairingly wrote in 1795:

‘If, in Massachusetts, where the numbers are comparatively very small, this prejudice be discernable, how much stronger may it be imagined in this country, where every white man felt himself born to tyrannize, where the blacks were regarded as of no more importance than the brute cattle, where the laws rendered even venial offences criminal in them, where every species of degradation towards them was exercised on all occasions, and where even their lives were exposed to the ferocity of their masters;’[34]

This is not to say that there were no slavery abolition movements in the South – there were many, but they had little influence compared to those in the North.  According to Davis, in 1827 the number of antislavery organisations in the South outnumbered those in the North by at least four to one.[35]  Since 1794, Southern and Northern antislavery societies had met periodically as the ‘American Convention of Delegates from Abolition Societies’.  The number of states represented varied from year to year, but the consistent presence of the Pennsylvania and New York societies gave them a dominant voice.[36]

Slaves celebrating emancipation

The abolitionist sentiments in the South were vastly outweighed by the huge economic incentives to grow commodity crops such as tobacco and cotton at minimal labor costs.  Whilst there was some difference in the antislavery religious leadership of the Quakers in the North, there were insufficient differences in political leadership to account for the eighty-year delay in Southern slave emancipation.  Nor is there sufficient evidence of differences in racial prejudice between the North and the South, at least until after the Civil War.

So we are left with the conclusion that the dominant differences between the North and South in terms of slavery abolition were economic ones.  On large Southern plantations where labor costs were crucial, African slaves were much cheaper than indentured white servants, not to mention free white workers.  There were no such plantations in the North, where white workers resented and agitated against unfair competition from African slave labour.  The emancipation of Northern slaves was most likely done gradually on an intergenerational basis to avoid governments having to pay compensation for the loss of slaves as property.

Bibliography

Primary sources

Constitution of the United States, 1787. (http://www.archives.gov/exhibits/charters/constitution_transcript.html) Viewed 14 September 2016.

‘Pennsylvania – An Act for the Gradual Abolition of Slavery, 1780,’ at Lillian Goldman Law Library (http://avalon.law.yale.edu/18th_century/pennst01.asp) Viewed 14 September 2016.

Petition of 1778 by slaves of New Haven for the abolition of slavery in Connecticut (http://www.hartford-hwp.com/archives/45a/023.html)

St. George Tucker to Jeremy Belknap, Letter dated June 29, 1795.  Virginia Foundation for the Humanities (http://www.encyclopediavirginia.org/Letter_from_St_George_Tucker_to_Jeremy_Belknap_June_29_1795) Viewed 15 September 2016.

George Washington to John Francis Mercer. Letter dated 9 September 1786.  The Gilder Lehrman Institute of American History. (http://www.gilderlehrman.org/collections/af0e9ed4-60d0-474e-8c7d-860434909242) Viewed 14 September 2016.

Secondary sources

Ira Berlin and Leslie M. Harris (eds), Slavery in New York (New York, 2005).

David Brion Davis, The Problem of Slavery in the Age of Revolution 1770-1823, (Cornell University Press, London, 1975).

Stanley M. Elkins, Slavery – A problem in American Institutional and Intellectual Life 2nd edition, (University of Chicago Press, Chicago, 1968).

Winthrop D. Jordan, ‘The Simultaneous Invention of Slavery and Racism’ in David Garrioch, ATS2110 Slavery: A History, Unit Reader (Monash University, Clayton, 2016), pp. 61-63.

Leon F. Litwack, North of Slavery – the Negro in the Free States, 1790-1860 (University of Chicago Press, Chicago, 1961).

Randall M. Miller and John David Smith, ‘Gradual abolition’, Dictionary of Afro-American Slavery (Greenwood Publishing Group, 1997), p. 471.

Steven F. Miller, ‘Plantation Labor Organisation and Slave Life on the Cotton Frontier: The Alabama – Mississippi Black Belt, 1815-1840’ in Cultivation and Culture – Labor and the Shaping of Slave Life in the Americas, ed. Ira Berlin and Philip D. Morgan (University Press of Virginia, Charlottesville, 1993).

Junius P. Rodriguez (ed.), Encyclopedia of Emancipation and Abolition in the Transatlantic World (Routledge, Armonk, 2015), p. xxxiv.

William L. Van Deburg, Slavery & Race in American Popular Culture, (University of Wisconsin Press, Madison, 1984).

Lorena S. Walsh, ‘Slave Life, Slave Society and Tobacco Production in the Tidewater Chesapeake, 1620-1820’ in Cultivation and Culture – Labor and the Shaping of Slave Life in the Americas, ed. Ira Berlin and Philip D. Morgan (University Press of Virginia, Charlottesville, 1993).

Shane White, ‘The Death of James Johnson.’ American Quarterly 51, no. 4 (1999): 753-95. (http://www.jstor.org.ezproxy.lib.monash.edu.au/stable/30041672).  Viewed 14 September 2016.

Eric Williams, Capitalism and Slavery (University of North Carolina Press, Chapel Hill, 1944) in David Garrioch, ATS2110 Slavery: A History, Unit Reader (Monash University, Clayton, 2016), pp. 56-60.

Endnotes:

[1] Lorena S. Walsh, ‘Slave Life, Slave Society and Tobacco Production in the Tidewater Chesapeake, 1620-1820’ in Cultivation and Culture – Labor and the Shaping of Slave Life in the Americas, ed. Ira Berlin and Philip D. Morgan (University Press of Virginia, Charlottesville, 1993), p.170.

[2] Eric Williams, Capitalism and Slavery (University of North Carolina Press, Chapel Hill, 1944), p.57.

[3] Eric Williams, p.57.

[4] Leon F. Litwack, North of Slavery – the Negro in the Free States, 1790-1860 (University of Chicago Press, Chicago, 1961), p.4.

[5] Eric Williams, p.57.

[6] Lorena S. Walsh, pp.170-177.

[7] Lorena S. Walsh, pp.187-189.

[8] Stanley M. Elkins, Slavery – A Problem in American Institutional and Intellectual Life, 2nd edition (University of Chicago Press, Chicago, 1968), p.236.

[9] Steven F. Miller, ‘Plantation Labor Organisation and Slave Life on the Cotton Frontier: The Alabama – Mississippi Black Belt, 1815-1840’ in Cultivation and Culture – Labor and the Shaping of Slave Life in the Americas, ed. Ira Berlin and Philip D. Morgan (University Press of Virginia, Charlottesville, 1993), pp.155-156.

[10] Ira Berlin and Leslie M. Harris (eds), Slavery in New York (New York, 2005), p.16.

[11] Steven F. Miller, p.165.

[12] Leon F. Litwack, p.14.

[13] Leon F. Litwack, p.156.

[14] David Brion Davis, The Problem of Slavery in the Age of Revolution 1770-1823, (Cornell University Press, London, 1975), p.76.

[15] Petition of 1778 by slaves of New Haven.

[16] Pennsylvania – An Act for the Gradual Abolition of Slavery, 1780.

[17] Junius P. Rodriguez (ed.), Encyclopedia of Emancipation and Abolition in the Transatlantic World (Routledge, Armonk, 2015), p. xxxiv.

[18] George Washington to John Francis Mercer, 1786.

[19] Randall M. Miller and John David Smith, ‘Gradual abolition’, Dictionary of Afro-American Slavery (Greenwood Publishing Group, 1997), p. 471.

[20] Constitution of the United States, 1787.

[21] Ira Berlin and Leslie M. Harris (eds), Slavery in New York (New York, 2005), p.117.

[22] Winthrop D. Jordan, ‘The Simultaneous Invention of Slavery and Racism’.

[23] William L. Van Deburg, Slavery & Race in American Popular Culture, (University of Wisconsin Press, Madison, 1984), pp.39-49.

[24] Shane White, ‘The Death of James Johnson.’ American Quarterly 51, no. 4 (1999): 753-95.

[25] Van Deburg, p.42.

[26] Leon F. Litwack, p.99.

[27] Leon F. Litwack, p.6.

[28] Leon F. Litwack, pp.159-160.

[29] Leon F. Litwack, p.6.

[30] Leon F. Litwack, p.7.

[31] Leon F. Litwack, p.14.

[32] Leon F. Litwack, p.14.

[33] David Brion Davis, The Problem of Slavery in the Age of Revolution 1770-1823, (Cornell University Press, London, 1975).

[34] St. George Tucker to Jeremy Belknap, Letter dated June 29, 1795.

[35] David Brion Davis, The Problem of Slavery in the Age of Revolution 1770-1823, (Cornell University Press, London, 1975), p.165.

[36] Leon F. Litwack, p.18.


Consequentialism versus Justice

by Tim Harding BSc BA

There are several objections to consequentialism as a basis for morality.  Some of these objections are of considerable scholarly interest to philosophers; but I think the most powerful is that adherence to consequentialism can in some cases result in unacceptable injustice.  My thesis is that justice is an important factor that needs to be taken into account in ethical theories.  I also intend to argue that the best response to this objection – treating justice as an intrinsically valuable consequence of actions – is currently unworkable.

Consequentialism is traditionally a set of ethical theories in which the morality of an act is judged solely by its consequences.  An act is required just because it produces the best overall results (Shafer-Landau 2012: 119).  Two of the key words here, in my view, are ‘solely’ and ‘overall’.  Consequentialism takes into account only the overall effects of an act on the population as a whole.  Specific effects on justice, or on the rights of individuals or minorities, are not taken into account.

The most prominent version of consequentialism is act utilitarianism, where well-being is the only thing that is intrinsically valuable (Shafer-Landau 2012: 120).  The principle of utility states that ‘an action is morally required just because it does more to improve overall well-being than any other action you could have done in the circumstances’ (Shafer-Landau 2012: 120).  Whilst utilitarianism is not the only form of consequentialism, for my current purposes I will regard an objection to utilitarianism as an objection to consequentialism.

In its broadest sense, justice may be defined as fairness: a proper balance between competing claims or interests (Rawls 1971: 10-11).  Russ Shafer-Landau (2012: 145) says that to do justice is to respect rights, which is arguably similar in meaning to properly balancing competing claims or interests.

In stark contrast to act utilitarianism, John Rawls has described justice as ‘the first virtue of social institutions’ (Rawls 1971: 3).  He argues that:

Each person possesses an inviolability founded on justice that even the welfare of society as a whole cannot override.  For this reason justice denies that the loss of freedom for some is made right by a greater good shared by others.  It does not allow that the sacrifices imposed on a few are outweighed by the larger sum of advantages enjoyed by many (Rawls 1971: 3-4).

Indeed, according to Rawls (1971: 4) justice is uncompromising: an injustice is tolerable only when it is necessary to avoid an even greater injustice.

The conflict between consequentialism and justice can be illustrated by some thought experiments, starting with the well-known trolley problem, the modern version of which was first described by Philippa Foot (1967: 8) as follows:

Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man’s life for the lives of five.

Foot (1967: 8) reasonably asks why we should say that the driver may steer for the less occupied track, while most of us would be appalled at the idea that an innocent man should be framed and executed.  Yet in both cases the act is done for utilitarian reasons, to maximise overall well-being.  The interests or rights of the individual who is to be killed are rated no higher than anybody else’s in this scenario.  The dead individual counts merely as a pro-rata contribution to the overall aggregated well-being.  Five lives are worth more than one life, regardless of the circumstances.  The end justifies the means.

This problem has been adapted and analysed in some detail by Judith Jarvis Thomson (1985: 1395-1415).  Thomson argues that whilst most people would say that it is morally permissible to steer the trolley away from the five men on one track towards the one man on the other track, they would not regard the killing of one person to save five as permissible in other cases with similar consequences.  For instance, Thomson (1985: 1396) asks us to consider another case that she calls the ‘transplant case’:

This time you are to imagine yourself to be a surgeon, a truly great surgeon. Among other things you do, you transplant organs, and you are such a great surgeon that the organs you transplant always take. At the moment you have five patients who need organs. Two need one lung each, two need a kidney each, and the fifth needs a heart. If they do not get those organs today, they will all die; if you find organs for them today, you can transplant the organs and they will all live. But where to find the lungs, the kidneys, and the heart? The time is almost up when a report is brought to you that a young man who has just come into your clinic for his yearly check-up has exactly the right blood-type, and is in excellent health. Lo, you have a possible donor. All you need do is cut him up and distribute his parts among the five who need them. You ask, but he says, “Sorry. I deeply sympathize, but no.” Would it be morally permissible for you to operate anyway?

Thomson (1985: 1396) asks why it is that the trolley driver may turn his trolley and kill the man on the other track, while the surgeon may not kill the young man and remove his organs.  In both cases, one person will die if the agent acts, but five will live who would otherwise die – a net saving of four lives.  As the consequences are the same in each case, utilitarianism would allow both acts to take place.  The difference in moral permissibility between these two cases with similar outcomes indicates a serious problem with utilitarianism as an ethical theory.

Russ Shafer-Landau (2012: 144-146) identifies this problem as injustice, meaning the violation of rights, such as the right of the healthy young man in the transplant case not to be murdered for his organs.  On the other hand, I would argue that turning the trolley is not an injustice because, unlike the healthy young man in the transplant case, the track workers on both tracks have implicitly consented to the normally small risk of being hit by a runaway trolley. (There is also a difference between the two cases in respect for autonomy – the implied consent to risk in the case of the track workers versus the explicit refusal of consent in the transplant case. However, that is an issue for another essay).

Shafer-Landau (2012: 144) in fact argues that injustice is perhaps the greatest problem for utilitarianism.  He says that ‘moral theories should not permit, much less require, that we act unjustly.  Therefore, there is something deeply wrong about utilitarianism’ (Shafer-Landau 2012: 145).

Shafer-Landau (2012: 145) strengthens the case against utilitarianism with some real historical examples rather than just thought experiments.  He cites wartime cases of vicarious punishment where innocent people are deliberately targeted as a way to deter the guilty; and exemplary punishment where random prisoners are shot to deter resistance or escapes.  Such punishments are now treated as violations of human rights and war crimes; but in earlier wars such punishments could have been justified according to utilitarianism.

Shafer-Landau (2012: 146-148) goes on to identify some potential solutions to the problem of injustice.  To assist in analysing and evaluating these potential solutions, Shafer-Landau formally states his Argument from Injustice as follows:

  1. The correct moral theory will never require us to commit serious injustices.
  2. Utilitarianism sometimes requires us to commit serious injustices.
  3. Therefore utilitarianism is not the correct moral theory.
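
It is worth noting that this argument is deductively valid – an instance of modus tollens – so any response must deny one of the premises rather than fault the inference. In symbols (a minimal sketch; the formalisation, with U standing for utilitarianism, is mine, not Shafer-Landau’s):

\[
P_1:\ \forall T\,\big(\mathrm{Correct}(T) \rightarrow \neg\,\mathrm{RequiresInjustice}(T)\big) \qquad
P_2:\ \mathrm{RequiresInjustice}(U) \qquad
\therefore\ \neg\,\mathrm{Correct}(U)
\]

This is why the potential solutions discussed below each target a premise: either justice may sometimes be sacrificed (denying Premise 1), or utilitarianism never really requires injustice (denying Premise 2).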

One of his potential solutions is to say that justice must sometimes be sacrificed for the sake of overall well-being.  I do not think that this would solve the problem at all.  Unacceptable injustices would still occur, and I support Rawls’s abovementioned view that justice is uncompromising – an injustice is tolerable only when it is necessary to avoid an even greater injustice.  Another potential solution is to deny Premise 2 above, that is, to deny that utilitarianism requires us to commit injustice.  On this view, when all of the consequences are properly taken into account, maximising overall well-being will always produce a just outcome.  Shafer-Landau (2012: 148) regards this solution as overly optimistic, and I agree.

I think the best of these potential solutions is to attempt to build justice into the calculation of intrinsic value, alongside overall well-being.  In this way, an action should aim to maximise justice in addition to well-being.  Shafer-Landau (2012: 147) argues that sometimes a very minor injustice can justifiably be traded off in favour of an overwhelming increase in well-being.  However, giving roughly equal weight to both well-being and justice is problematic due to the lack of any principle for deciding between these two values where they conflict.  Also, as Rawls (1971: 4) has argued, justice is uncompromising – there can be no such thing as ‘half-justice’. For these reasons, I support Shafer-Landau’s view that this solution is currently unworkable.

In this essay, I have endeavoured to show how consequentialism can sometimes result in injustice, by reference to some notable philosophical thought experiments, as well as to some historical wartime cases.  I have cited the work of John Rawls to argue that justice is an important factor that needs to be taken into account in ethical theories.  I have considered some potential solutions to the problem of injustice, and argued against what I think is the best solution to this problem.  For these reasons, I conclude that the injustice objection to consequentialism should be upheld.

References

Foot, Philippa. 1967. ‘The Problem of Abortion and the Doctrine of the Double Effect’. Oxford Review, no. 5: 5-15.

Rawls, John. 1971. A Theory of Justice. Cambridge: Harvard University Press.

Shafer-Landau, Russ. 2012. The Fundamentals of Ethics, 2nd edition. Oxford: Oxford University Press.

Thomson, Judith Jarvis. 1985. ‘The Trolley Problem’. The Yale Law Journal, Vol. 94, No. 6 (May 1985): 1395-1415.


12 basic things Australians should have learned at school


Skepticism, Science and Scientism

By Tim Harding B.Sc., B.A.

(An edited version of this essay was published in The Skeptic magazine,
September 2017, Vol 37 No 3)

In these challenging times of anti-science attitudes and ‘alternative facts’, it may sound strange to be warning against excessive scientific exuberance.  Yet to help defend science from these attacks, I think we need to encourage scientists to maintain their credibility amongst non-scientists.

In my last article for The Skeptic (‘I Think I Am’, March 2017), I traced the long history of skepticism over the millennia.  I talked about the philosophical skepticism of Classical Greece, the skepticism of Modern Philosophy dating from Descartes, through to the contemporary form of scientific skepticism that our international skeptical movement now largely endorses.  I quoted Dr. Steven Novella’s definition of scientific skepticism as ‘the application of skeptical philosophy, critical thinking skills, and knowledge of science and its methods to empirical claims, while remaining agnostic or neutral to non-empirical claims (except those that directly impact the practice of science).’

Despite the recent growth of various anti-science movements, science is still widely regarded as the ‘gold standard’ for the discovery of empirical knowledge, that is, knowledge derived from observations and experiments.  Even theoretical physics is supposed to be empirically verifiable in principle when the necessary technology becomes available, as in the case of the Higgs boson and Einstein’s gravitational waves.  But empirical observations are not our only source of knowledge – we also use reasoning to make sense of our observations and to draw valid conclusions from them.  We can even generate new knowledge through the application of reasoning to what we already know, as I shall discuss later.

Most skeptics (with a ‘k’) see science as a kind of rational antidote to the irrationality of pseudoscience, quackery and other varieties of woo.  So we naturally tend to support and promote science for this purpose.  But sometimes we can go too far in our enthusiasm for science.  We can mistakenly attempt to extend the scope of science beyond its empirical capabilities, into other fields of inquiry such as philosophy and politics – even ethics.  Even if only a small number of celebrity scientists damage their credibility by making pronouncements beyond their individual fields of expertise, they render themselves vulnerable to attack by opponents looking for any weakness in their arguments.  In doing so, they can unintentionally undermine public confidence in science and, by extension, scientific skepticism.

The pitfalls of crude positivism

Logical positivism (sometimes called ‘logical empiricism’) was a Western philosophical movement in the first half of the 20th century.  Its central thesis was verificationism: a theory of knowledge which asserts that only propositions verifiable through empirical observation are meaningful.

One of the most prominent proponents of logical positivism was Professor Sir Alfred Ayer (1910-1989).  Ayer is best known for popularising the verification principle, in particular through his presentation of it in his bestselling 1936 book Language, Truth and Logic.  Ayer’s thesis was that a proposition can be meaningful only if it has verifiable empirical content; otherwise it is either a priori (known by deduction) or nonsensical.  Ayer’s philosophical ideas were deeply influenced by those of the Vienna Circle and the 18th-century empiricist philosopher David Hume.

James Fodor, a young Melbourne science student, secularist and skeptic, has critiqued a relatively primitive form of logical positivism, which he calls ‘crude positivism’.  He describes this as a family of related and overlapping viewpoints rather than a single well-defined doctrine, the three most commonly encountered components of which are the following:

(1) Strict evidentialism: the ultimate arbiter of knowledge is evidence, which should determine our beliefs in a fundamental and straightforward way; namely that we believe things if and only if there is sufficient evidence for them.

(2) Narrow scientism: the highest, or perhaps only, legitimate form of objective knowledge is that produced by the natural sciences. The social sciences, along with non-scientific pursuits, either do not produce real knowledge, or only knowledge of a distinctly inferior sort.

(3) Pragmatism: science owes its special status to its unique ability to deliver concrete, practical results: it ‘works’.  Philosophy, theology, and other such fields of inquiry do not produce ‘results’ in this same way, and thus have no special status.

Somewhat controversially, Fodor classifies Richard Dawkins, Sam Harris, Peter Boghossian, Neil deGrasse Tyson, Lawrence Krauss, and Stephen Hawking as exponents of crude positivism when they stray outside their respective fields of scientific expertise into other fields such as philosophy and social commentary.  (Although to be fair, Lawrence Krauss wrote an apology in Scientific American in 2012 for seemingly dismissing the importance of philosophy in a previous interview he gave to The Atlantic).

Fodor’s component (1) is a relatively uncontroversial viewpoint shared by most scientists and skeptics.  Nevertheless, Fodor cautions that crude positivists often speak as if evidence is self-interpreting, such that a given piece of evidence automatically picks out one singular state of affairs over all other possibilities.  In practice, however, this is almost never the case because the interpretation of evidence nearly always requires an elaborate network of background knowledge and pre-existing theory.  For instance, the raw data from most scientific observations or experiments are unintelligible without the use of background scientific theories and methodologies.

It is Fodor’s components (2) and (3) that are likely to be more controversial, and so I will now discuss them in more detail.

The folly of scientism

What is ‘scientism’ – and how is it different from the natural enthusiasm for science that most skeptics share?  Unlike logical positivism, scientism is not a serious intellectual movement.  The term is almost never used by its exponents to describe themselves.  Instead, the word scientism is mainly used pejoratively when criticising scientists for attempting to extend the boundaries of science beyond empiricism.

Warwick University philosopher Prof. Tom Sorell has defined scientism as: ‘a matter of putting too high a value on natural science in comparison with other branches of learning or culture.’  In summary, a commitment to one or more of the following statements lays one open to the charge of scientism:

  • The natural sciences are more important than the humanities for an understanding of the world in which we live, or are even all we need to understand it;
  • Only a scientific methodology is intellectually acceptable. Therefore if the humanities are to be a genuine part of human knowledge they must adopt it; and
  • Philosophical problems are scientific problems and should only be dealt with as such.

At the 2016 Australian Skeptics National Convention, former President of Australian Skeptics Inc., Peter Bowditch, criticised a recent video by TV science communicator Bill Nye, in which Nye responded to a student asking him: ‘Is philosophy meaningless?’  In his rambling answer, Nye confused questions of consciousness and reality, opined that philosophy was irrelevant to answering such questions, and suggested that our own senses are more reliable than philosophy.  Peter Bowditch observed that ‘the problem with his [Nye’s] comments was not that they were just wrong about philosophy; they were fractally wrong.  Nye didn’t know what he was talking about. His concept of philosophy was extremely naïve.’  Bill Nye’s embarrassing blunder is perhaps ‘low-hanging fruit’; after trenchant criticism, Nye realised his error and began reading about philosophy for the first time.

Some distinguished scientists (not just philosophers) are becoming concerned about the pernicious influence of scientism.  Biological sciences professor Austin Hughes (1949-2015) wrote ‘the temptation to overreach, however, seems increasingly indulged today in discussions about science. Both in the work of professional philosophers and in popular writings by natural scientists, it is frequently claimed that natural science does or soon will constitute the entire domain of truth. And this attitude is becoming more widespread among scientists themselves. All too many of my contemporaries in science have accepted without question the hype that suggests that an advanced degree in some area of natural science confers the ability to pontificate wisely on any and all subjects.’

Prof. Hughes notes that advocates of scientism today claim the sole mantle of rationality, frequently equating science with reason itself.  Yet it seems the very antithesis of reason to insist that science can do what it cannot, or even that it has done what it demonstrably has not.  He writes ‘as a scientist, I would never deny that scientific discoveries can have important implications for metaphysics, epistemology, and ethics, and that everyone interested in these topics needs to be scientifically literate. But the claim that science and science alone can answer longstanding questions in these fields gives rise to countless problems.’

Limitations of science

The editor of the philosophical journal Think and author of The Philosophy Gym, Prof. Stephen Law has identified two kinds of questions to which it is very widely supposed that science cannot supply answers:

Firstly, philosophical questions are for the most part conceptual rather than scientific or empirical.  They are usually answered by the use of reasoning rather than empirical observations.  For example, Galileo conducted a famous thought experiment by reason alone.  Imagine two objects, one light and one heavy, connected to each other by a string and dropped from the top of a tower.  If we assume that heavier objects do indeed fall faster than lighter ones (and conversely, that lighter objects fall slower), the string will soon pull taut as the lighter object retards the fall of the heavier one, so the linked pair should fall more slowly than the heavy object alone.  But the linked pair together is heavier than the heavy object alone, and therefore should fall faster.  This logical contradiction leads one to conclude that the assumption that heavier objects fall faster is false.  Galileo figured this out in his head, without the assistance of any empirical experiment or observation.  In doing so, he was employing philosophical rather than scientific methods.
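
For readers who like to see the logic laid out, Galileo’s reductio can be put in symbols (a minimal formalisation of the reasoning above; the notation – w for weight, v for natural fall speed, H and L for the heavy and light bodies – is mine, not Galileo’s):

\[
\textbf{Assumption (A):}\quad w(x) > w(y) \;\implies\; v(x) > v(y)
\]
\[
\text{By (A), } v(L) < v(H); \text{ the taut string means } L \text{ retards } H, \text{ so}\quad v(H{+}L) < v(H).
\]
\[
\text{But } w(H{+}L) = w(H) + w(L) > w(H), \text{ so by (A)}\quad v(H{+}L) > v(H).
\]
\[
\text{The two conclusions contradict each other, so Assumption (A) must be false.}
\]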

Secondly, moral questions are about what we ought or ought not to do.  In contrast, the empirical sciences, on their own, appear capable of establishing only what is the case.  This is known as the ‘is/ought gap’.  Science can provide us with factual evidence that might influence our ethical judgements, but it cannot provide us with the necessary ethical values or principles.  For example, science can tell us how to build nuclear weapons, but it cannot tell us whether or under what circumstances they should ever be used.  Similarly, clinical trials in medical science often compare treatment groups with control groups of patients.  It is bioethics rather than science that provides the moral principles for obtaining informed patient consent for participation in such trials, especially when we consider that patients in the control group are being denied treatments that could be to their benefit.

I have given the above examples not to criticise science in any way, but simply to point out that science has limitations, and that there is a place for other fields of inquiry in addition to science.

Is pragmatism enough?

Coming back to Fodor’s component (3) of crude positivism, he makes a good point that a scientific explanation that ‘works’ is not necessarily true.  For instance, Claudius Ptolemy of Alexandria (c. 90 CE – c. 168 CE) explained how to predict the behaviour of the planets by introducing the ad hoc notions of the deferent, the equant and epicycles into the geocentric model of what is now known as our solar system.  This model was completely wrong, yet it produced accurate predictions of the motions of the planets – it ‘worked’.  Another example is Gregor Mendel’s 19th-century experiments on wrinkled peas, which adequately explained the observed patterns of genetic variation even though Mendel knew neither what genes were nor where they were located in living organisms.

Schematic diagram of Ptolemy’s incorrect geocentric model of the cosmos

James Fodor argues that the fact that scientific theories can be used to make accurate predictions does not necessarily mean that science alone always provides us with accurate descriptions of reality.  There is even a philosophical theory known as scientific instrumentalism, which holds that as long as a scientific theory makes accurate predictions, it does not really matter whether the theory corresponds to reality.  The psychology of perception and the philosophies of mind and metaphysics are also relevant here.  Fodor adds that many of the examples of science ‘delivering results’ are really applications of engineering and technology, rather than of the discovery process of science itself.

Fodor concludes that if the key to the success of the natural sciences is adherence to rational methodologies and inferences, then it is those successful methods that we should focus on championing, whatever discipline they may be applied in, rather than the data sets collected in particular sciences.

Implications for science and skepticism

Physicist Ian Hutchinson writes ‘the health of science is in fact jeopardised by scientism, not promoted by it.  At the very least, scientism provokes a defensive, immunological, aggressive response in other intellectual communities, in return for its own arrogance and intellectual bullyism.  It taints science itself by association’.  Hutchinson suggests that perhaps what the public is rejecting is not actually science itself, but a worldview that closely aligns itself with science – scientism.  By disentangling these two concepts, we have a much better chance of enlisting public support for scientific research.

The late Prof. Austin Hughes left us with a prescient warning that continued insistence on the universal and exclusive competence of science will serve only to undermine the credibility of science as a whole. The ultimate outcome will be an increase in science denialism that questions the ability of science to address even the questions legitimately within its sphere of competence.

References

Ayer, Alfred J. (1936), Language, Truth and Logic, London: Penguin.

Bowditch, Peter, ‘Is Philosophy Dead?’, Australasian Science, July/August 2017.

Fodor, James, ‘Not so simple’, Australian Rationalist, v. 103, December 2016, pp. 32–35.

Harding, Tim, ‘I Think I Am’, The Skeptic, Vol. 37, No. 1, March 2017, pp. 40-44.

Hughes, Austin L., ‘The Folly of Scientism’, The New Atlantis, Number 37, Fall 2012, pp. 32-50.

Hutchinson, Ian. (2011) Monopolizing Knowledge: A Scientist Refutes Religion-Denying, Reason-Destroying Scientism. Belmont, MA: Fias Publishing.

Krauss, Lawrence, ‘The Consolation of Philosophy’, Scientific American Mind, April 27, 2012.

Law, Stephen, ‘Scientism, the limits of science, and religion’, Center for Inquiry (2016), Amherst, NY.

Novella, Steven (15 February 2013). ‘Scientific Skepticism, Rationalism, and Secularism’. Neurologica (blog). Retrieved 12 February 2017.

Sorell, Thomas (1994), Scientism: Philosophy and the Infatuation with Science, London: Routledge.
