Tag Archives: falsity

Harry Frankfurt on the incoherence of truth denial

‘In any case, even those who profess to deny the validity or the objective reality of the true-false distinction continue to maintain, without apparent embarrassment, that this denial is a position that they do truly endorse. The statement that they reject the distinction between true and false is, they insist, an unqualified true statement about their beliefs, not a false one.

This prima facie incoherence in the articulation of their doctrine makes it uncertain precisely how to construe what it is that they propose to deny. It is also enough to make us wonder just how seriously we need to take their claim that there is no objectively meaningful or worthwhile distinction to be made between what is true and what is false.’


Frankfurt, Harry G. (2006) On Truth. Alfred A. Knopf, New York.


Filed under Quotations

The truth, the whole truth and … wait, how many truths are there?

The Conversation

Peter Ellerton, The University of Queensland

Calling something a “scientific truth” is a double-edged sword. On the one hand it carries a kind of epistemic (how we know) credibility, a quality assurance that a truth has been arrived at in an understandable and verifiable way.

On the other, it seems to suggest science provides one of many possible categories of truth, all of which must be equal or, at least, non-comparable. Simply put, if there’s a “scientific truth” there must be other truths out there. Right?

Let me answer this by reference to the fingernail-on-the-chalkboard phrase I’ve heard a little too often:

“But whose truth?”

If somebody uses this phrase in the context of scientific knowledge, it shows me they’ve conflated several incompatible uses of “truth” with little understanding of any of them.

As is almost always the case, clarity must come before anything else. So here is the way I see truth, shot from the hip.


While philosophers talk about the coherence or correspondence theories of truth, the rest of us have to deal with another, more immediate, division: subjective, deductive (logical) and inductive (in this case, scientific) truth.

This has to do with how we use the word and is a very practical consideration. Just about every problem a scientist or science communicator comes across in the public understanding of “truth” is a function of mixing up these three things.

Subjective truth

Subjective truth is what is true about your experience of the world. How you feel when you see the colour red, what ice-cream tastes like to you, what it’s like being with your family, all these are your experiences and yours alone.

In 1974 the philosopher Thomas Nagel published a now-famous paper about what it might be like to be a bat. He points out that even the best chiropterologist in the world, knowledgeable about the mating, eating, breeding, feeding and physiology of bats, has no more idea of what it is like to be a bat than you or me.

Similarly, I have no idea what a banana tastes like to you, because I am not you and cannot ever be in your head to feel what you feel (there are arguments regarding common physiology and hence psychology that could suggest similarities in subjective experiences, but these are presently beyond verification).

What’s more, if you tell me your favourite colour is orange, there are absolutely no grounds on which I can argue against this – even if I felt inclined. Why would I want to argue, and what would I hope to gain? What you experience is true for you, end of story.

Deductive truth

Deductive truth, on the other hand, is that contained within and defined by deductive logic. Here’s an example:

Premise 1: All Gronks are green.
Premise 2: Fred is a Gronk.
Conclusion: Fred is green.

Even if we have no idea what a Gronk is, the conclusion of this argument is true if the premises are true. If you think this isn’t the case, you’re wrong. It’s not a matter of opinion or personal taste.


If you want to argue the case, you have to step out of the logical framework in which deductive logic operates, and this invalidates rational discussion. We might be better placed using the language of deduction and just call it “valid”, but “true” will do for now.

In my classes on deductive logic we talk about truth tables, truth trees, and use “true” and “false” in every second sentence and no one bats (cough) an eyelid, because we know what we mean when we use the word.
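The idea behind a truth table can be sketched in a few lines of code (a toy illustration, not part of the original article): to check whether an argument form such as modus ponens (“if P then Q; P; therefore Q”) is valid, we enumerate every combination of truth values and look for a row where the premises are all true but the conclusion is false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Modus ponens: premises "P implies Q" and "P"; conclusion "Q".
# The form is valid iff no row of the truth table makes every premise
# true while the conclusion is false.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and p and not q
]

print(counterexamples)  # [] -> no counterexample row, so the form is valid
```

An empty list of counterexamples is exactly what “valid” means in the deductive sense: there is no possible way for the premises to be true and the conclusion false.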

Using “true” in science, however, is problematic for much the same reason that using “prove” is problematic (and I have written about that on The Conversation before). This is a function of the nature of inductive reasoning.

Inductive truth

Induction works mostly through analogy and generalisation. Unlike deduction, it allows us to draw justified conclusions that go beyond the information contained in the premises. It is induction’s reliance on empirical observation that separates science from mathematics.

In observing one phenomenon occurring in conjunction with another – an electric current and an induced magnetic field, for instance – I generalise that this will always be so. I might even create a model, an analogy of the workings of the real world, to explain it – in this case that of particles and fields.

This then allows me to predict what future events might occur or to draw implications and create technologies, such as developing an electric motor.

And so I inductively scaffold my knowledge, using information I rely upon as a resource for further enquiry. At no stage do I arrive at deductive certainty, but I do enjoy greater degrees of confidence.

I might even speak about things being “true”, but, apart from simple observational statements about the world, I use the term as a manner of speech only to indicate my high level of confidence.

Now, there are some philosophical hairs to split here, but my point is not to define exactly what truth is, but rather to say there are differences in how the word can be used, and that ignoring or conflating these uses leads to a misunderstanding of what science is and how it works.

For instance, the lady who said to me it was true for her that ghosts exist was conflating a subjective truth with a truth about the external world.

I asked her if what she really meant was “it is true that I believe ghosts exist”. At first she was resistant, but when I asked her if it could be true for her that gravity is repulsive, she was obliging enough to accept my suggestion.


Such is the nature of many “it’s true for me” statements, in which the epistemic validity of a subjective experience is misleadingly extended to facts about the world.

Put simply, it smears the meaning of truth so much that the distinctions I have outlined above disappear, as if “truth” only means one thing.

This is generally done with the intent of presenting the unassailable validity of said subjective experiences as a shield for dubious claims about the external world – claiming that homeopathy works “for me”, for instance. Attacking the truth claim is then, if you accept this deceit, equivalent to questioning the genuine subjective experience.

Checkmate … unless you see how the rules have been changed.

It has been a long and painful struggle for science to rise from this cognitive quagmire, separating out subjective experience from inductive methodology. Any attempt to reunite them in the public understanding of science needs immediate attention.

Operating as it should, science doesn’t spend its time just making truth claims about the world, nor does it question the validity of subjective experience – it simply says it’s not enough to make objective claims that anyone else should believe.

Subjective truths and scientific truths are different creatures, and while they sometimes play nicely together, their offspring are not always fertile.

So next time you are talking about truth in a deductive or scientifically inductive way and someone says “but whose truths”, tell them a hard one: it’s not all about them.

Peter Ellerton, Lecturer in Critical Thinking, The University of Queensland

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Filed under Reblogs

Harry Frankfurt ‘On Bullshit’


Filed under Videos

Argument from consequences

The Argument from Consequences, also known as ‘Appeal to Consequences’ or argumentum ad consequentiam [1], is a fallacious argument that concludes that a belief is either true or false based on whether the belief leads to desirable or undesirable consequences.  Such arguments are closely related to the fallacies of appeal to emotion and wishful thinking.  They generally have one of two forms:

Positive form

Premise 1: If P, then Q will occur.

Premise 2: Q is desirable.

Conclusion: Therefore, P is true.


  • Humans must be able to travel faster than light, because that will be necessary for interstellar space travel.
  • I believe in an afterlife, because I want to exist forever.

Negative form

Premise 1: If P, then Q will occur.

Premise 2: Q is undesirable.

Conclusion: Therefore, P is false.


  • Free will must exist: if it didn’t, we would all be machines. (This is also a false dilemma.)
  • Evolution must be false: if it were true then human beings would be no better than animals.
  • God must exist; if He did not, then people would have no reason to be good and life would have no meaning.

Such arguments are invalid because the conclusion does not logically follow from the premises.  The desirability of a consequence does not make a conclusion true; nor does the undesirability of a consequence make a conclusion false.  Moreover, in categorizing consequences as either desirable or undesirable, such arguments inherently contain subjective points of view.
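Desirability is not a truth-functional notion, so the fallacy cannot be captured in a truth table directly. But even if we charitably strengthen Premise 2 to “Q is true”, the resulting form (“if P then Q; Q; therefore P”) is still invalid: it is the fallacy of affirming the consequent. A brute-force truth-table check makes this concrete (a toy sketch, not from the original text):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Strengthened form: premises "P implies Q" and "Q"; conclusion "P".
# A counterexample is any row where both premises are true but P is false.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]

print(counterexamples)  # [(False, True)]: premises true, conclusion false
```

Because at least one counterexample row exists, the form is invalid even in its strongest reading; adding the subjective judgement of desirability only makes matters worse.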

There are two types of cogent argument with which this fallacy is easily confused:

  1. When an argument is about a proposition, it is reasonable to assess the truth-value (whether it is true or false) of any logical consequences of the proposition.  Logical consequences should not be confused with causal consequences; and truth or falsity should not be confused with goodness or badness.
  2. When an argument concerns a policy or plan of action—instead of a proposition—then it is reasonable to consider the consequences of acting on it, because policies and plans are good or bad rather than true or false.


[1] Latin for ‘argument to the consequences’

If you find the information on this blog useful, you might like to consider making a donation.



Filed under Logical fallacies

What is logic?

The word ‘logic’ is not easy to define, because it has slightly different meanings in applications ranging from philosophy to mathematics to computer science. In philosophy, logic’s main concern is with the validity or cogency of arguments. The essential difference between informal logic and formal logic is that informal logic uses natural language, whereas formal logic (also known as symbolic logic) is more complex and uses mathematical symbols to overcome the frequent ambiguity or imprecision of natural language. Reason is the application of logic to actual premises, with a view to drawing valid or sound conclusions. Logic is the set of rules to be followed independently of particular premises; in other words, it works with abstract premises designated by letters such as P and Q.

So what is an argument? In everyday life, we use the word ‘argument’ to mean a verbal dispute or disagreement (which is actually a clash between two or more arguments put forward by different people). This is not the way the word is usually used in philosophical logic, where an argument is a set of statements a person makes in the attempt to convince someone of something, or to present reasons for accepting a given conclusion. In this sense, an argument consists of statements or propositions, called its premises, from which a conclusion is claimed to follow (in the case of a deductive argument) or to be inferred (in the case of an inductive argument). Deductive conclusions usually begin with a word like ‘therefore’, ‘thus’, ‘so’ or ‘it follows that’.

A good argument is one that has two virtues: good form and all true premises. Arguments can be deductive, inductive or abductive. A deductive argument with valid form and true premises is said to be sound. An inductive argument based on strong evidence is said to be cogent. The term ‘good argument’ covers all three of these types of arguments.

Deductive arguments

A valid argument is a deductive argument where the conclusion necessarily follows from the premises, because of the logical structure of the argument. That is, if the premises are true, then the conclusion must also be true. Conversely, an invalid argument is one where the conclusion does not logically follow from the premises. However, the validity or invalidity of an argument must be clearly distinguished from the truth or falsity of its premises. It is possible for the conclusion of a valid argument to be true, even though one or more of its premises are false. For example, consider the following argument:

Premise 1: Napoleon was German
Premise 2: All Germans are Europeans
Conclusion: Therefore, Napoleon was European

The conclusion that Napoleon was European is true, even though Premise 1 is false. This argument is valid because of its logical structure, not because its premises and conclusion are all true (which they are not). Even if the premises and conclusion were all true, it wouldn’t necessarily mean that the argument was valid. If an argument has true premises and its form is valid, then its conclusion must be true.

Deductive logic is essentially about consistency. The rules of logic are not arbitrary, like the rules for a game of chess. They exist to avoid internal contradictions within an argument. For example, if we have an argument with the following premises:

Premise 1: Napoleon was either German or French
Premise 2: Napoleon was not German

The conclusion cannot logically be “Therefore, Napoleon was German” because that would directly contradict Premise 2. So the logical conclusion can only be: “Therefore, Napoleon was French”, not because we know that it happens to be true, but because it is the only possible conclusion if both the premises are true. This is admittedly a simple and self-evident example, but similar reasoning applies to more complex arguments where the rules of logic are not so self-evident. In summary, the rules of logic exist because breaking the rules would entail internal contradictions within the argument.
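The consistency point can be made mechanical (a toy sketch for illustration): enumerate every assignment of truth values to ‘Napoleon was German’ and ‘Napoleon was French’, and keep only those assignments consistent with both premises.

```python
from itertools import product

# Disjunctive syllogism with the Napoleon example:
#   g = "Napoleon was German", f = "Napoleon was French"
# Premises: (g or f) and (not g). Which assignments satisfy both?
consistent = [
    (g, f) for g, f in product([True, False], repeat=2)
    if (g or f) and not g
]

print(consistent)  # [(False, True)]: the only option left is "Napoleon was French"
```

Only one assignment survives, which is why ‘Napoleon was French’ is the sole conclusion that avoids an internal contradiction with the premises.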

Inductive arguments

An inductive argument is one where the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a sound deductive argument is supposed to be certain, the conclusion of a cogent inductive argument is supposed to be probable, based upon the evidence given. An example of an inductive argument is: 

Premise 1: Almost all people are taller than 26 inches
Premise 2: George is a person
Conclusion: Therefore, George is almost certainly taller than 26 inches

Whilst an inductive argument based on strong evidence can be cogent, there is some dispute amongst philosophers as to the reliability of induction as a scientific method. For example, by the problem of induction, no number of confirming observations can verify a universal generalization, such as ‘All swans are white’, yet it is logically possible to falsify it by observing a single black swan.
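The asymmetry between verification and falsification is easy to illustrate (a toy sketch, with invented data): no run of white swans verifies ‘all swans are white’, but a single black swan falsifies it.

```python
# The problem of induction in miniature: 10,000 confirming observations,
# then one counterexample.
observations = ["white"] * 10_000 + ["black"]

# The universal generalisation 'all swans are white':
all_swans_white = all(colour == "white" for colour in observations)

print(all_swans_white)  # False: one black swan falsifies the generalisation
```

Before the black swan is appended, the generalisation survives every check; afterwards it is simply false, no matter how many confirmations preceded it.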

Abductive arguments

Abduction may be described as an “inference to the best explanation”, and whilst not as reliable as deduction or induction, it can still be a useful form of reasoning. For example, a typical abductive reasoning process used by doctors in diagnosis might be: “this set of symptoms could be caused by illnesses X, Y or Z. If I ask some more questions or conduct some tests I can rule out X and Y, so it must be Z.”

Incidentally, the doctor is the one who is doing the abduction here, not the patient. By accepting the doctor’s diagnosis, the patient is using inductive reasoning: the doctor has a sufficiently high probability of being right that it is rational to accept the diagnosis. This is actually an acceptable form of the Argument from Authority (only the deductive form is fallacious).


Hodges, W. (1977) Logic – an introduction to elementary logic (2nd ed. 2001) Penguin, London.
Lemmon, E.J. (1987) Beginning Logic. Hackett Publishing Company, Indianapolis.

If you find the information on this blog useful, you might like to consider supporting us.



Filed under Essays and talks

Argument from Popularity

by Tim Harding

The informal fallacy known as argumentum ad populum means ‘argument from popularity’ or ‘appeal to the people’.  This fallacy is essentially the same as ad numerum, appeal to the gallery, appeal to the masses, common practice, past practice, traditional knowledge, peer pressure, conventional wisdom and the bandwagon fallacy; and lastly truth by consensus, of which I shall say more later.

The Argument from Popularity fallacy occurs when an advocate asserts that, because the great majority of people agree with his or her position on an issue, he or she must be right.[1]  In other words, if you suggest too strongly that someone’s claim or argument is correct simply because it’s what most people believe, then you’ve committed the fallacy of appeal to the people.  Similarly, if you suggest too strongly that someone’s claim or argument is mistaken simply because it’s not what most people believe, then you’ve also committed the fallacy.

Agreement with popular opinion is not necessarily a reliable sign of truth, and deviation from popular opinion is not necessarily a reliable sign of error, but if you assume it is and do so with enthusiasm, then you’re guilty of committing this fallacy.  The ‘too strongly’ mentioned above is important in the description of the fallacy because what almost everyone believes is, for that reason, often likely to be true, all things considered.  However, the fallacy occurs when this degree of support is used as justification for the truth of the belief.[2]

It often happens that a true proposition is believed to be true by most people, but this is not the reason it is true.  In other words, correlation does not imply causation, and this confusion is the source of the fallacy, in my view.  For example, nearly every sane person believes that the proposition 1+1=2 is true, but that is not why it is true.  We can try doing empirical experiments by counting objects, and although this exercise is highly convincing, it is still only inductive reasoning rather than proof.  Put simply, the proposition 1+1=2 is true because it has been mathematically proven to be true.  But my purpose here is not to convince you that 1+1=2.  My real point is that the proportion of people who believe that 1+1=2 is true is irrelevant to the truth or falsity of this proposition.

Let us now consider a belief where its truth is less obvious.  Before the work of Copernicus and Galileo in the 16th and 17th centuries, most people (including the Roman Catholic Church) believed that the Sun revolved around the Earth, rather than vice versa as we now know through science.  So the popular belief in that case was false.

This fallacy is also common in marketing e.g. “Brand X vacuum cleaners are the country’s most popular brand; so buy Brand X vacuum cleaners”.  How often have we heard a salesperson try to argue that because a certain product is very popular this year, we should buy it?  Not because it is a good quality product representing value for money, but simply because it is popular?  Weren’t those ‘power balance wrist bands’ also popular before they were exposed as a sham by the ACCC?[3]

For another example, a politician might say ‘Nine out of ten of my constituents oppose the bill, therefore it is bad legislation.’  Now, this might be a political reason for voting against the bill, but it is not a valid argument that the bill is bad legislation.  To validly argue that the bill is bad legislation, the politician should adduce rational arguments against the bill on its merits or lack thereof, rather than merely claim that the bill is politically unpopular.

In philosophy, truth by consensus is the process of taking statements to be true simply because people generally agree upon them.  Philosopher Nigel Warburton argues that the truth by consensus process is not a reliable way of discovering truth.  That there is general agreement upon something does not make it actually true.  There are several reasons for this.

One reason Warburton discusses is that people are prone to wishful thinking.  People can believe an assertion and espouse it as truth in the face of overwhelming evidence and facts to the contrary, simply because they wish that things were so.  Another is that people are gullible, and easily misled.

Another unreliable method of determining truth is by determining the majority opinion of a popular vote.  This is unreliable because on many questions the majority of people are ill-informed.  Warburton gives astrology as an example of this.  He states that while it may be the case that the majority of the people of the world believe that people’s destinies are wholly determined by astrological mechanisms, given that most of that majority have only sketchy and superficial knowledge of the stars in the first place, their views cannot be held to be a significant factor in determining the truth of astrology.  The fact that something ‘is generally agreed’ or that ‘most people believe’ something should be viewed critically, asking the question why that factor is considered to matter at all in an argument over truth.  He states that the simple fact that a majority believes something to be true is unsatisfactory justification for believing it to be true.[4]

In contrast, rational arguments that the claims of astrology are false include firstly, because they are incompatible with science; secondly, because there is no credible causal mechanism by which they could possibly be true; thirdly, because there is no empirical evidence that they are true despite objective testing; and fourthly, because the star signs used by astrologers are all out of kilter with the times of the year and have been so for the last two or three thousand years.

Another example is the claims of so-called ‘alternative medicines’ where judging by their high sales figures relative to prescription medicines, it is quite possible that a majority of the population believe these claims to be true.  Without going into details here, we skeptics have good reasons for believing that many of these claims are false.

Warburton makes a distinction between the fallacy of truth by consensus and the process of democracy in decision making.  Descriptive statements of the way things are, are either true or false – and verifiable true statements are called facts.  Normative statements deal with the way things ought to be, and are neither true nor false.  In a political context, statements of the way things ought to be are known as policies.  Political policies may be described as good or bad, but not true or false.  Democracy is preferable to other political processes not because it results in truth, but because it provides for majority rule, equal participation by multiple special-interest groups, and the avoidance of tyranny.

In summary, the Argument from Popularity fallacy confuses correlation with causality; and thus popularity with truth.  Just because most people believe that a statement is true, it does not logically follow that the statement is in fact true.  With the exception of the demonstrably false claims of astrology and so-called ‘alternative medicines’, popular statements are often more likely to be true than false (‘great minds think alike’); but they are not necessarily true and can sometimes be false.  They are certainly not true merely because they are popular.  This fallacy is purely concerned with the logical validity of arguments and the justification for the truth of propositions.  The identification of this fallacy is not an argument against democracy or whether popular political policies should or should not be pursued.


Clark, J. and Clark, T. (2005) Humbug! The Skeptic’s Field Guide to Spotting Fallacies in Thinking. Nifty Books, Capalaba.

[1] Clark and Clark, 2005.

[2] Fieser and Dowden et al, 2011.

[4] Warburton, 2000.

If you find the information on this blog useful, you might like to consider supporting us.



Filed under Logical fallacies