
The Birth of Experimental Science

by Tim Harding

(An edited version of this essay was published in The Skeptic magazine,
June 2016, Vol 36, No. 2, under the title ‘Out of the Dark’).

To the ancient Greeks, science was simply the knowledge of nature.  The acquisition of such knowledge was theoretical rather than experimental.  Logic and reason were applied to observations of nature in attempts to discover the underlying principles influencing phenomena.

After the Dark Ages, the revival of classical logic and reason in Western Europe was highly significant to the development of universities and subsequent intellectual progress.  It was also a precursor to the development of empirical scientific methods in the thirteenth century, which I think were even more important because of the later practical benefits of science to humanity.  The two most influential thinkers in the development of scientific methods at this time were the English philosophers Robert Grosseteste (1175-1253) and Roger Bacon (c.1219/20-c.1292).  (Note: Roger Bacon is not to be confused with Francis Bacon.)

Apart from the relatively brief Carolingian Renaissance of the late eighth to ninth centuries, intellectual progress in Western Europe generally lagged behind that of the Byzantine and Islamic parts of the former Roman Empire.[1]  But from around 1050, Arabic, Jewish and Greek intellectual manuscripts started to become more available in the West in Latin translations.[2] [3]  These translations of ancient works had a major impact on Medieval European thought.  For instance, according to Pasnau, when James of Venice translated Aristotle’s Posterior Analytics from Greek into Latin in the second quarter of the twelfth century, ‘European philosophy got one of the great shocks of its long history’.[4]  This book had a dramatic impact on ‘natural philosophy’, as science was then called.

Under Pope Gregory VII, a Roman synod had in 1079 decreed that all bishops institute the teaching of liberal arts in their cathedrals.[5]  In the early twelfth century, universities began to emerge from cathedral schools, in response to the Gregorian reform and demands for literate administrators, accountants, lawyers and clerics.  The curriculum was loosely based on the seven liberal arts, consisting of a trivium of grammar, dialectic and rhetoric; plus a quadrivium of music, arithmetic, geometry and astronomy.[6]  Besides the liberal arts, some (but not all) universities offered three professional courses of law, medicine and theology.[7]

Dialectic was a method of learning by the use of arguments in a question and answer format, heavily influenced by the translations of Aristotle’s works.  This was known as ‘Scholasticism’ and included the use of logical reasoning as an alternative to the traditional appeals to authority.[8] [9]  For the first time, philosophers and scientists studied in close proximity to theologians trained to ask questions.[10]

At this stage, the most influential scientist was Robert Grosseteste (1175-1253), a leading English scholastic philosopher, scientist and theologian.  After studying theology in Paris from 1209 to 1214, he made his academic career at Oxford, becoming its Chancellor in 1234.[11]  He later became the Bishop of Lincoln, where there is now a university named after him.  According to Luscombe, Grosseteste ‘seems to be the single most influential figure in shaping an Oxford interest in the empirical sciences that was to endure for the rest of the Middle Ages’.[12]


Robert Grosseteste (1175-1253)

Grosseteste’s knowledge of Greek enabled him to participate in the translation of Aristotelian science and ethics.[13] [14]  In the first Latin commentary on Aristotle’s Posterior Analytics, from the 1220s, Robert Grosseteste distinguishes four ways in which we might speak of scientia, or scientific knowledge.

‘It does not escape us, however, that having scientia is spoken of broadly, strictly, more strictly, and most strictly. [1] Scientia commonly so-called is [merely] comprehension of truth. Unstable contingent things are objects of scientia in this way. [2] Scientia strictly so-called is comprehension of the truth of things that are always or most of the time in one way. Natural things – namely, natural contingencies – are objects of scientia in this way. Of these things there is demonstration broadly so-called. [3] Scientia more strictly so-called is comprehension of the truth of things that are always in one way. Both the principles and the conclusions in mathematics are objects of scientia in this way. [4] Scientia most strictly so-called is comprehension of what exists immutably by means of the comprehension of that from which it has immutable being. This is by means of the comprehension of a cause that is immutable in its being and its causing.’[15]

Grosseteste’s first and second ways of describing scientia refer to the truth of the way things are by demonstration, that is by empirical observation.

Grosseteste himself went beyond Aristotelian science by investigating natural phenomena mathematically as well as empirically in controlled laboratory experiments.  He studied the refraction of light through glass lenses and drew conclusions about rainbows as the refraction of light through rain drops.[16]

Although Grosseteste is credited with introducing the idea of controlled scientific experiments, there is doubt whether he made this idea part of a general account of a scientific method for arriving at the principles of demonstrative science.[17]  This role fell to his disciple Roger Bacon (c.1219/20-c.1292), who was also an English philosopher, but unlike Bishop Grosseteste, Bacon was a Franciscan friar.

Roger Bacon (c.1219/20-c.1292)

Bacon taught in the Oxford arts faculty until about 1247, when he moved to Paris, which he disliked and where he made himself somewhat unpopular.  The only Parisian academic he admired was Peter of Maricourt, who reinforced the importance of experiment in scientific research and of mathematics to certainty.[18]

As a scientist, Roger Bacon continued Grosseteste’s investigation of optics in a laboratory setting.  He supplemented these optical experiments with studies of the physiology of the human eye by dissecting the eyes of cattle and pigs.[19]  Bacon also investigated the geometry of light, thus further applying mathematics to empirical observations.  According to Colish, ‘the very idea of treating qualities quantitatively was a move away from Aristotle, who held that quality and quantity are essentially different’.[20]

The most important work of Roger Bacon was his Opus Majus (Latin for ‘Greater Work’) written c.1267CE.  Part Six of this work contains a study of Experimental Science, in which Bacon advocates the verification of scientific reasoning by experiment.

‘…I now wish to unfold the principles of experimental science, since without experience nothing can be sufficiently known. For there are two modes of acquiring knowledge, namely, by reasoning and experience. Reasoning draws a conclusion and makes us grant the conclusion, but does not make the conclusion certain, nor does it remove doubt so that the mind may rest on the intuition of truth, unless the mind discovers it by the path of experience …’[21]

Bacon’s aim was to provide a rigorous method for empirical science, analogous to the use of logic to test the validity of deductive arguments.  This new practical method consisted of a combination of mathematics and detailed experiential descriptions of discrete phenomena in nature.[22]  Roger Bacon illustrated his method by an investigation into the nature and cause of the rainbow.  For instance, he measured the maximum elevation of the rainbow at 42 degrees, probably using an astrolabe, and by this technique Bacon advocated the skilful mathematical use of instruments for an experimental science.[23]
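Bacon’s 42-degree figure can be checked against the geometrical optics worked out later (by Theodoric of Freiberg and, more fully, Descartes).  The following Python sketch is my illustration, not Bacon’s method: assuming a refractive index of about 1.333 for water, the angle of minimum deviation of sunlight reflected once inside a spherical drop reproduces the rainbow’s maximum elevation.

```python
import math

N_WATER = 1.333  # assumed refractive index of water

def deviation(i):
    """Deviation of a ray entering a spherical raindrop at angle of
    incidence i (radians): refraction in, one internal reflection,
    refraction out."""
    r = math.asin(math.sin(i) / N_WATER)  # Snell's law
    return math.pi + 2 * i - 4 * r

# The rainbow appears at the angle of minimum deviation.
d_min = min(deviation(math.radians(d / 10)) for d in range(1, 900))
rainbow_elevation = 180.0 - math.degrees(d_min)
print(round(rainbow_elevation, 1))  # close to Bacon's measured 42 degrees
```

The deviation curve is flat near its minimum, which is why so many rays emerge near 42 degrees and make the bow bright.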

Optics from Roger Bacon’s De multiplicatione specierum

The optical experiments that both Grosseteste and Bacon conducted proved practically useful in correcting deficiencies in human eyesight and, later, in the invention of the telescope.  But more importantly, Roger Bacon is credited with being the originator of empirical scientific methods that were later further developed by scientists such as Galileo Galilei, Francis Bacon and Robert Hooke.  This is notwithstanding the twentieth century criticism of inductive scientific methods by philosophers of science such as Karl Popper, in favour of empirical falsification.[24]

The benefits of science to humanity – especially medical science – are well known, and one example should suffice here.  An essential component of medical science is the clinical trial: the empirical testing of a proposed treatment on one group of patients, using a second, untreated group as a blind control so that the effectiveness of the treatment can be isolated and statistically measured while all other factors are kept constant.  This empirical approach is vastly superior to the theoretical approach of ancient physicians such as Hippocrates and Galen, and owes much to the pioneering work of Grosseteste and Bacon.  This is why I think that the development of empirical scientific methods was even more important than the revival of classical logic and reason, in terms of practical benefits to humanity.  However, it is somewhat ironic that the later clashes between religion and science had their origins in the pioneering experiments of a bishop and a friar.
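The logic of the control group can be sketched in a few lines of Python.  Everything here is invented for illustration (the effect size, group sizes and scoring scale are made up); it is not a real statistical test, which would also report uncertainty.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Hypothetical recovery scores: treated patients improve by 2 points on
# average; the untreated control group holds all other factors constant.
treated = [random.gauss(7.0, 2.0) for _ in range(100)]
control = [random.gauss(5.0, 2.0) for _ in range(100)]

# Comparing the two group means isolates the effect of the treatment.
effect = statistics.mean(treated) - statistics.mean(control)
print(round(effect, 1))  # close to the assumed true effect of 2 points
```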

Whilst the twelfth century revival of classical logic and reason was very significant in terms of Western intellectual progress generally, the development of empirical scientific methods was in my view the most important intellectual endeavour of the European thirteenth century; and Bacon’s contribution to this was greater than that of Grosseteste, because he devised general methodological principles for later scientists to build upon.


 Primary sources

Bacon, Roger, Opus Majus, trans. Robert Belle Burke (New York: Russell & Russell, 1962).

Grosseteste, Robert, Commentarius in Posteriorum Analyticorum Libros. In Pasnau, Robert ‘Science and Certainty,’ R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010).

Secondary works

Colish, Marcia, L., Medieval foundations of the Western intellectual tradition (New Haven: Yale University Press, 1997).

Hackett, Jeremiah, ‘Roger Bacon’, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2015/entries/roger-bacon/&gt;.

Kenny, Anthony Medieval Philosophy  (Oxford: Clarendon Press 2005).

Lewis, Neil, ‘Robert Grosseteste’, The Stanford Encyclopedia of Philosophy (Summer 2013 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2013/entries/grosseteste/&gt;.

Luscombe, David, Medieval thought (Oxford: Oxford University Press, 1997).

Moran Cruz, Jo Ann and Richard Geberding, ‘The New Learning, 1050-1200’, in Medieval Worlds: An Introduction to European History, 300-1492 (Boston: Houghton Mifflin, 2004), pp.350-376.

Pasnau, Robert ‘Science and Certainty,’ in R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010).

Popper, Karl The Logic of Scientific Discovery. (London and New York 1959).


[1] Colish, Marcia, L., Medieval foundations of the Western intellectual tradition (New Haven: Yale University Press, 1997), pp. x-xi.

[2] Moran Cruz, Jo Ann and Richard Geberding, ‘The New Learning, 1050-1200’, in Medieval Worlds: An Introduction to European History, 300-1492 (Boston: Houghton Mifflin, 2004), p.351.

[3] Colish, p.274.

[4] Pasnau, Robert ‘Science and Certainty,’ in R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010) p.357.

[5] Moran Cruz and Geberding p.351.

[6] Ibid. p.353

[7] Ibid. p. 356.

[8] Ibid, p.354.

[9] Colish, p.169.

[10] Colish, p.266.

[11] Colish, p.320.

[12] Luscombe, David, Medieval thought (Oxford: Oxford University Press, 1997). p.87.

[13] Colish, p.320.

[14] Luscombe, p.86.

[15] Grosseteste, Robert, Commentarius in Posteriorum Analyticorum Libros. In Pasnau, Robert ‘Science and Certainty,’ R. Pasnau (ed.) Cambridge History of Medieval Philosophy (Cambridge: Cambridge University Press, 2010), p. 358.

[16] Colish, p.320.

[17] Lewis, Neil, ‘Robert Grosseteste’, The Stanford Encyclopedia of Philosophy (Summer 2013 Edition), Edward N. Zalta (ed.).

[18] Kenny, Anthony Medieval Philosophy  (Oxford: Clarendon Press 2005). p.80.

[19] Colish, p.321.

[20] Colish, pp.321-322.

[21] Bacon, Roger Opus Majus. a Translation by Robert Belle Burke. (New York, Russell & Russell, 1962) p.583

[22] Hackett, Jeremiah, ‘Roger Bacon’, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), Section 5.4.3.

[23] Hackett, Section 5.4.3.

[24] Popper, Karl, The Logic of Scientific Discovery (London and New York, 1959), Ch. 1: ‘…the theory to be developed in the following pages stands directly opposed to all attempts to operate with the ideas of inductive logic.’

Copyright notice: © All rights reserved. Except for personal use or as permitted under the Australian Copyright Act, no part of this website may be reproduced, stored in a retrieval system, communicated or transmitted in any form or by any means without prior written permission. All inquiries should be made to the copyright owner, Tim Harding at tim.harding@yandoo.com, or as attributed on individual blog posts.



The truth, the whole truth and … wait, how many truths are there?


Peter Ellerton, The University of Queensland

Calling something a “scientific truth” is a double-edged sword. On the one hand it carries a kind of epistemic (how we know) credibility, a quality assurance that a truth has been arrived at in an understandable and verifiable way.

On the other, it seems to suggest science provides one of many possible categories of truth, all of which must be equal or, at least, non-comparable. Simply put, if there’s a “scientific truth” there must be other truths out there. Right?

Let me answer this by reference to the fingernail-on-the-chalkboard phrase I’ve heard a little too often:

“But whose truth?”

If somebody uses this phrase in the context of scientific knowledge, it shows me they’ve conflated several incompatible uses of “truth” with little understanding of any of them.

As is almost always the case, clarity must come before anything else. So here is the way I see truth, shot from the hip.


While philosophers talk about the coherence or correspondence theories of truth, the rest of us have to deal with another, more immediate, division: subjective, deductive (logical) and inductive (in this case, scientific) truth.

This has to do with how we use the word and is a very practical consideration. Just about every problem a scientist or science communicator comes across in the public understanding of “truth” is a function of mixing up these three things.

Subjective truth

Subjective truth is what is true about your experience of the world. How you feel when you see the colour red, what ice-cream tastes like to you, what it’s like being with your family, all these are your experiences and yours alone.

In 1974 the philosopher Thomas Nagel published a now-famous paper about what it might be like to be a bat. He points out that even the best chiropterologist in the world, knowledgeable about the mating, eating, breeding, feeding and physiology of bats, has no more idea of what it is like to be a bat than you or me.

Similarly, I have no idea what a banana tastes like to you, because I am not you and cannot ever be in your head to feel what you feel (there are arguments regarding common physiology and hence psychology that could suggest similarities in subjective experiences, but these are presently beyond verification).

What’s more, if you tell me your favourite colour is orange, there are absolutely no grounds on which I can argue against this – even if I felt inclined. Why would I want to argue, and what would I hope to gain? What you experience is true for you, end of story.

Deductive truth

Deductive truth, on the other hand, is that contained within and defined by deductive logic. Here’s an example:

Premise 1: All Gronks are green.
Premise 2: Fred is a Gronk.
Conclusion: Fred is green.

Even if we have no idea what a Gronk is, the conclusion of this argument is true if the premises are true. If you think this isn’t the case, you’re wrong. It’s not a matter of opinion or personal taste.


If you want to argue the case, you have to step out of the logical framework in which deductive logic operates, and this invalidates rational discussion. We might be better placed using the language of deduction and just call it “valid”, but “true” will do for now.

In my classes on deductive logic we talk about truth tables, truth trees, and use “true” and “false” in every second sentence and no one bats (cough) an eyelid, because we know what we mean when we use the word.
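That classroom use of truth tables can be sketched in Python: enumerate every assignment of true and false to the atomic propositions and confirm that the Gronk argument’s form never lets true premises yield a false conclusion.  The encoding is mine, not from the article.

```python
from itertools import product

def implies(a, b):
    """Material implication: 'if a then b' is false only when a is true
    and b is false."""
    return (not a) or b

# Premises: "if Fred is a Gronk, Fred is green" and "Fred is a Gronk";
# conclusion: "Fred is green".  The form is valid if no row of the
# truth table has true premises and a false conclusion.
valid = all(
    implies(implies(gronk, green) and gronk, green)
    for gronk, green in product([True, False], repeat=2)
)
print(valid)  # True: the form holds whatever a Gronk turns out to be
```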

Using “true” in science, however, is problematic for much the same reason that using “prove” is problematic (and I have written about that on The Conversation before). This is a function of the nature of inductive reasoning.

Inductive truth

Induction works mostly through analogy and generalisation. Unlike deduction, it allows us to draw justified conclusions that go beyond the information contained in the premise. It is induction’s reliance on empirical observation that separates science from mathematics.

In observing one phenomenon occurring in conjunction with another – an electric current and an induced magnetic field, for instance – I generalise that this will always be so. I might even create a model, an analogy of the workings of the real world, to explain it – in this case that of particles and fields.

This then allows me to predict what future events might occur or to draw implications and create technologies, such as developing an electric motor.

And so I inductively scaffold my knowledge, using information I rely upon as a resource for further enquiry. At no stage do I arrive at deductive certainty, but I do enjoy greater degrees of confidence.
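That idea of growing confidence without certainty has a standard probabilistic reading.  A toy sketch (the numbers are invented): treat each confirming observation as twice as likely if the generalisation is true, and update by Bayes’ rule in odds form.

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

confidence = 0.5                 # start agnostic about the generalisation
for _ in range(10):              # ten confirming observations
    confidence = update(confidence, 2.0)  # each is 2x likelier if true

print(confidence)  # high, but never reaches 1.0
```

Confidence climbs towards 1 but never reaches it, matching the point that induction never delivers deductive certainty.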

I might even speak about things being “true”, but, apart from simple observational statements about the world, I use the term as a manner of speech only to indicate my high level of confidence.

Now, there are some philosophical hairs to split here, but my point is not to define exactly what truth is, but rather to say there are differences in how the word can be used, and that ignoring or conflating these uses leads to a misunderstanding of what science is and how it works.

For instance, the lady who said to me that it was true for her that ghosts exist was conflating a subjective truth with a truth about the external world.

I asked her if what she really meant was “it is true that I believe ghosts exist”. At first she was resistant, but when I asked her if it could be true for her that gravity is repulsive, she was obliging enough to accept my suggestion.


Such is the nature of many “it’s true for me” statements, in which the epistemic validity of a subjective experience is misleadingly extended to facts about the world.

Put simply, it smears the meaning of truth so much that the distinctions I have outlined above disappear, as if “truth” only means one thing.

This is generally done with the intent of presenting the unassailable validity of said subjective experiences as a shield for dubious claims about the external world – claiming that homeopathy works “for me”, for instance. Attacking the truth claim is then, if you accept this deceit, equivalent to questioning the genuine subjective experience.

Checkmate … unless you see how the rules have been changed.

It has been a long and painful struggle for science to rise from this cognitive quagmire, separating out subjective experience from inductive methodology. Any attempt to reunite them in the public understanding of science needs immediate attention.

Operating as it should, science doesn’t spend its time just making truth claims about the world, nor does it question the validity of subjective experience – it simply says it’s not enough to make objective claims that anyone else should believe.

Subjective truths and scientific truths are different creatures, and while they sometimes play nicely together, their offspring are not always fertile.

So next time you are talking about truth in a deductive or scientifically inductive way and someone says “but whose truths”, tell them a hard one: it’s not all about them.

Peter Ellerton, Lecturer in Critical Thinking, The University of Queensland

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.



Why should we place our faith in science?


Jonathan Keith, Monash University

Most of us would like to think scientific debate does not operate like the comments section of online news articles. These are frequently characterised by inflexibility, truculence and expostulation. Scientists are generally a little more civil, but sometimes not much so!

There is a more fundamental issue here than politeness, though. Science has a reputation as an arbiter of fact above and beyond just personal opinion or bias. The term “scientific method” suggests there exists an agreed upon procedure for processing evidence which, while not infallible, is at least impartial.

So when even the most respected scientists can arrive at different deeply held convictions when presented with the same evidence, it undermines the perceived impartiality of the scientific method. It demonstrates that science involves an element of subjective or personal judgement.

Yet personal judgements are not mere occasional intruders on science, they are a necessary part of almost every step of reasoning about evidence.

Among the judgements scientists make on a daily basis are: what evidence is relevant to a particular question; what answers are admissible a priori; which answer does the evidence support; what standard of evidence is required (since “extraordinary claims require extraordinary evidence”); and is the evidence sufficient to justify belief?

Another judgement scientists make is whether the predictions of a model are sufficiently reliable to justify committing resources to a course of action.

We do not have universally agreed procedures for making any of these judgements. This should come as no surprise. Evidence is something experienced by persons, and a person is thus essential to relating evidence to the abstractions of a scientific theory.

This is true regardless of how directly the objects of a theory are experienced, whether we observe a bird in flight or its shadow on the ground. Ultimately it is the unique neuronal configurations of an individual brain that determine how what we perceive influences what we believe.

Induction, falsification and probability

Nevertheless, we can ask: are there forms of reasoning about evidence that do not depend on personal judgement?

Induction is the act of generalising from particulars. It interprets a pattern observed in specific data in terms of a law governing a wider scope.

But induction, like any form of reasoning about evidence, demands personal judgement. Patterns observed in data invariably admit multiple alternative generalisations. And which generalisation is appropriate, if any, may come down to taste.

Many of the points of contention between Richard Dawkins and the late Stephen Jay Gould can be seen in this light. For example, Gould thought Dawkins too eager to attribute evolved traits to the action of natural selection in cases where contingent survival provides an alternative, and (to Gould) preferable, explanation.

One important statement of the problem of induction was made by 18th-century philosopher David Hume. He noted the only available justification for inductive reasoning is that it works well in practice. But this itself is an inductive argument, and thus “taking that for granted, which is the very point in question”.

Karl Popper wanted science to be based on the deductive reasoning of falsificationism rather than the inductive reasoning of verificationism. Lucinda Douglas-Menzies/Wikimedia

Hume thought we had to accept this circularity, but philosopher of science Karl Popper rejected induction entirely. Popper argued that evidence can only falsify a theory, never verify it. Scientific theories are thus only ever working hypotheses that have withstood attempts at falsification.

This characterisation of science has not prevailed, mainly because science has not historically proceeded in this manner, nor does it today. Thomas Kuhn observed that:

No process yet disclosed by the historical study of scientific development at all resembles the methodological stereotype of falsification by direct comparison with nature.

Scientists cherish their theories, having invested so much of their personal resources in them. So when a seemingly contradictory datum emerges, they are inclined to make minor adjustments rather than reject core tenets. As physicist Max Planck observed (before Popper or Kuhn):

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.

Falsification also ignores the relationship between science and engineering. Technology stakes human lives and personal resources on the reliability of scientific theories. We could not do this without strong belief in their adequacy. Engineers thus demand more from science than a working hypothesis.

Some philosophers of science look to probabilistic reasoning to place science above personal judgement. Prominent proponents of such approaches include Elliot Sober and Edwin Thompson Jaynes. By these accounts one can compare competing scientific theories in terms of the likelihood of observed evidence under each.

However, probabilistic reasoning does not remove personal judgement from science. Rather, it channels it into the design of models. A model, in this sense, is a mathematical representation of the probabilistic relationships between theory and evidence.

As someone who designs such models for a living, I can tell you the process relies heavily on personal judgement. There are no universally applicable procedures for model construction. Consequently, the point at issue in scientific controversies may be precisely how to model the relationship between theory and evidence.
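Comparing theories by “the likelihood of observed evidence under each” can be made concrete with a toy coin example (the models and data are invented for illustration).

```python
from math import prod

# Evidence: 8 heads in 10 tosses (1 = heads, 0 = tails).
data = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]

def likelihood(p_heads):
    """Probability of the data under a model asserting P(heads) = p_heads."""
    return prod(p_heads if x else 1 - p_heads for x in data)

fair, biased = likelihood(0.5), likelihood(0.8)
print(biased > fair)  # the evidence favours the biased-coin model
```

Note that the personal judgement lay in choosing which two models to compare in the first place.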

What is (and isn’t) special about science

Does acknowledging the role played by personal judgement erode our confidence in science as a special means of acquiring knowledge? It does, if what we thought was special about science is that it removes the personal element from the search for truth.

As scientists – or as defenders of science – we must guard against the desire to dominate our interlocutors by ascribing to science a higher authority than it plausibly possesses. Many of us have experienced the frustration of seeing science ignored or distorted in arguments about climate change or vaccinations, to name just two.

But we do science no favours by misrepresenting its claim to authority; instead we create a monster. A misplaced faith in science can and has been used as a political weapon to manipulate populations and impose ideologies.

Instead we need to explain science in terms that non-scientists can understand, so that factors that have influenced our judgements can influence theirs.

It is appropriate that non-scientists subordinate their judgements to those of experts, but this deference must be earned. The reputation of an individual scientist for integrity and quality of research is thus crucial in public discussions of science.

I believe science is special, and deserves the role of arbiter that society accords it. But its specialness does not derive from a unique mode of reasoning.

Rather it is the minutiae of science that make it special: the collection of lab protocols, recording practices, publication and peer review standards and many others. These have evolved over centuries under constant pressure to produce useful and reliable knowledge.

Thus, by a kind of natural selection, science has acquired a remarkable capacity to reveal truth. Science continues to evolve, so that what is special about science today might not be what will be special about it tomorrow.

So how much faith should you put in the conclusions of scientists? Judge for yourself!

Jonathan Keith, Associate Professor, School of Mathematical Sciences, Monash University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


What is logic?

The word ‘logic’ is not easy to define, because it has slightly different meanings in applications ranging from philosophy to mathematics to computer science. In philosophy, logic’s main concern is with the validity or cogency of arguments. The essential difference between informal logic and formal logic is that informal logic uses natural language, whereas formal logic (also known as symbolic logic) is more complex and uses mathematical symbols to overcome the frequent ambiguity or imprecision of natural language.

So what is an argument? In everyday life, we use the word ‘argument’ to mean a verbal dispute or disagreement (which is actually a clash between two or more arguments put forward by different people). This is not the way this word is usually used in philosophical logic, where arguments are those statements a person makes in the attempt to convince someone of something, or present reasons for accepting a given conclusion. In this sense, an argument consists of statements or propositions, called its premises, from which a conclusion is claimed to follow (in the case of a deductive argument) or be inferred (in the case of an inductive argument). Deductive conclusions usually begin with a word like ‘therefore’, ‘thus’, ‘so’ or ‘it follows that’.

A good argument is one that has two virtues: good form and all true premises. Arguments can be deductive, inductive or abductive. A deductive argument with valid form and true premises is said to be sound. An inductive argument based on strong evidence is said to be cogent. The term ‘good argument’ covers all three of these types of arguments.

Deductive arguments

A valid argument is a deductive argument where the conclusion necessarily follows from the premises, because of the logical structure of the argument. That is, if the premises are true, then the conclusion must also be true. Conversely, an invalid argument is one where the conclusion does not logically follow from the premises. However, the validity or invalidity of arguments must be clearly distinguished from the truth or falsity of its premises. It is possible for the conclusion of a valid argument to be true, even though one or more of its premises are false. For example, consider the following argument:

Premise 1: Napoleon was German
Premise 2: All Germans are Europeans
Conclusion: Therefore, Napoleon was European

The conclusion that Napoleon was European is true, even though Premise 1 is false. This argument is valid because of its logical structure, not because its premises and conclusion are all true (which they are not). Even if the premises and conclusion were all true, it wouldn’t necessarily mean that the argument was valid. If an argument has true premises and its form is valid, then its conclusion must be true.

Deductive logic is essentially about consistency. The rules of logic are not arbitrary, like the rules for a game of chess. They exist to avoid internal contradictions within an argument. For example, if we have an argument with the following premises:

Premise 1: Napoleon was either German or French
Premise 2: Napoleon was not German

The conclusion cannot logically be “Therefore, Napoleon was German” because that would directly contradict Premise 2. So the logical conclusion can only be: “Therefore, Napoleon was French”, not because we know that it happens to be true, but because it is the only possible conclusion if both the premises are true. This is admittedly a simple and self-evident example, but similar reasoning applies to more complex arguments where the rules of logic are not so self-evident. In summary, the rules of logic exist because breaking the rules would entail internal contradictions within the argument.
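The point about contradiction can be put computationally: a conclusion follows from premises exactly when it is true in every situation in which all the premises are true. A minimal Python sketch of the Napoleon example (the encoding is mine):

```python
from itertools import product

def entails(premises, conclusion):
    """True if the conclusion holds in every model of the premises."""
    return all(
        conclusion(g, f)
        for g, f in product([True, False], repeat=2)
        if all(p(g, f) for p in premises)
    )

premises = [
    lambda german, french: german or french,  # Napoleon was German or French
    lambda german, french: not german,        # Napoleon was not German
]

print(entails(premises, lambda g, f: f))  # True: "French" follows
print(entails(premises, lambda g, f: g))  # False: "German" would contradict
```

The only situation consistent with both premises is the one in which Napoleon is French, which is why that is the only possible conclusion.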

Inductive arguments

An inductive argument is one where the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a sound deductive argument is supposed to be certain, the conclusion of a cogent inductive argument is supposed to be probable, based upon the evidence given. An example of an inductive argument is: 

Premise 1: Almost all people are taller than 26 inches
Premise 2: George is a person
Conclusion: Therefore, George is almost certainly taller than 26 inches

Whilst an inductive argument based on strong evidence can be cogent, there is some dispute amongst philosophers as to the reliability of induction as a scientific method. For example, by the problem of induction, no number of confirming observations can verify a universal generalization, such as ‘All swans are white’, yet it is logically possible to falsify it by observing a single black swan.
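That asymmetry between verification and falsification is easy to make concrete (the data are hypothetical):

```python
observed = ["white"] * 10_000  # ten thousand confirming observations

def all_swans_white(swans):
    """The universal generalisation 'All swans are white'."""
    return all(s == "white" for s in swans)

print(all_swans_white(observed))              # True so far, but not proven
print(all_swans_white(observed + ["black"]))  # one black swan falsifies it
```

No run of confirmations settles the generalisation, yet a single counterexample refutes it.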

Abductive arguments

Abduction may be described as an “inference to the best explanation”, and whilst not as reliable as deduction or induction, it can still be a useful form of reasoning. For example, a typical abductive reasoning process used by doctors in diagnosis might be: “this set of symptoms could be caused by illnesses X, Y or Z. If I ask some more questions or conduct some tests I can rule out X and Y, so it must be Z.”

Incidentally, the doctor is the one doing the abduction here, not the patient. By accepting the doctor’s diagnosis, the patient is reasoning inductively that the doctor has a sufficiently high probability of being right for it to be rational to accept the diagnosis. This is actually an acceptable form of the Argument from Authority (only the deductive form is fallacious).


References

Hodges, W. (1977) Logic – an Introduction to Elementary Logic (2nd ed. 2001) Penguin, London.
Lemmon, E.J. (1987) Beginning Logic. Hackett Publishing Company, Indianapolis.


