Fitch’s paradox of knowability is one of the fundamental puzzles of epistemic logic. It provides a challenge to the knowability thesis, which states that every truth is, in principle, knowable. The paradox is that this assumption implies the omniscience principle, which asserts that every truth is known. Essentially, Fitch’s paradox asserts that the existence of an unknown truth is unknowable. So if all truths were knowable, it would follow that all truths are in fact known. The paradox is of concern for verificationist or anti-realist accounts of truth, for which the knowability thesis is very plausible, but the omniscience principle is very implausible.
A formal proof of the paradox is as follows.
Suppose p is a sentence which is an unknown truth; that is, the sentence p is true, but it is not known that p is true. In such a case, the sentence “the sentence p is an unknown truth” is true; and, if all truths are knowable, it should be possible to know that “p is an unknown truth”. But this isn’t possible, because as soon as we know “p is an unknown truth”, we know that p is true, rendering p no longer an unknown truth, so the statement “p is an unknown truth” becomes a falsity. Hence, the statement “p is an unknown truth” cannot be both known and true at the same time. Therefore, if all truths are knowable, the set of “all truths” must not include any of the form “something is an unknown truth”; thus there must be no unknown truths, and thus all truths must be known.
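The prose argument above can also be run symbolically. The following sketch uses a standard epistemic modal logic (the notation is a common reconstruction, not Fitch's original), assuming only that knowledge distributes over conjunction and that knowledge is factive:

```latex
% K\varphi : "it is known (by someone at some time) that \varphi"
% \Diamond : "it is possible that"
\begin{align*}
\text{(KT)}\quad & \varphi \rightarrow \Diamond K\varphi
  && \text{knowability thesis, for every } \varphi \\
\text{(1)}\quad  & p \wedge \neg Kp
  && \text{suppose } p \text{ is an unknown truth} \\
\text{(2)}\quad  & \Diamond K(p \wedge \neg Kp)
  && \text{apply (KT) to the truth in (1)} \\
\text{(3)}\quad  & K(p \wedge \neg Kp) \rightarrow Kp \wedge K\neg Kp
  && K \text{ distributes over } \wedge \\
\text{(4)}\quad  & Kp \wedge K\neg Kp \rightarrow Kp \wedge \neg Kp
  && \text{knowledge is factive: } K\psi \rightarrow \psi \\
\text{(5)}\quad  & \neg\Diamond K(p \wedge \neg Kp)
  && \text{by (3),(4), knowing (1) is contradictory} \\
\text{(6)}\quad  & \neg(p \wedge \neg Kp)
  && \text{(2) and (5) clash, so supposition (1) fails} \\
\text{(7)}\quad  & \varphi \rightarrow K\varphi
  && \text{no unknown truths: every truth is known}
\end{align*}
```

Line (7) is the omniscience principle: granting (KT) and the harmless-looking assumptions about K is enough to derive it.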
The proof has been used to argue against versions of anti-realism committed to the thesis that all truths are knowable. For clearly there are unknown truths; individually and collectively we are non-omniscient. So, by the main result, it is false that all truths are knowable. The result has also been used to draw more general lessons about the limits of human knowledge. Still others have taken the proof to be fallacious, since it collapses an apparently moderate brand of anti-realism into an obviously implausible and naive idealism.
Calling something a “scientific truth” is a double-edged sword. On the one hand it carries a kind of epistemic (how we know) credibility, a quality assurance that a truth has been arrived at in an understandable and verifiable way.
On the other, it seems to suggest science provides one of many possible categories of truth, all of which must be equal or, at least, non-comparable. Simply put, if there’s a “scientific truth” there must be other truths out there. Right?
Let me answer this by reference to the fingernail-on-the-chalkboard phrase I’ve heard a little too often:
“But whose truth?”
If somebody uses this phrase in the context of scientific knowledge, it shows me they’ve conflated several incompatible uses of “truth” with little understanding of any of them.
As is almost always the case, clarity must come before anything else. So here is the way I see truth, shot from the hip.
While philosophers talk about the coherence or correspondence theories of truth, the rest of us have to deal with another, more immediate, division: subjective, deductive (logical) and inductive (in this case, scientific) truth.
This has to do with how we use the word and is a very practical consideration. Just about every problem a scientist or science communicator comes across in the public understanding of “truth” is a function of mixing up these three things.
Subjective truth is what is true about your experience of the world. How you feel when you see the colour red, what ice-cream tastes like to you, what it’s like being with your family, all these are your experiences and yours alone.
In 1974 the philosopher Thomas Nagel published a now-famous paper about what it might be like to be a bat. He points out that even the best chiropterologist in the world, knowledgeable about the mating, feeding and physiology of bats, has no more idea of what it is like to be a bat than you or me.
Similarly, I have no idea what a banana tastes like to you, because I am not you and cannot ever be in your head to feel what you feel (there are arguments regarding common physiology and hence psychology that could suggest similarities in subjective experiences, but these are presently beyond verification).
What’s more, if you tell me your favourite colour is orange, there are absolutely no grounds on which I can argue against this – even if I felt inclined. Why would I want to argue, and what would I hope to gain? What you experience is true for you, end of story.
Deductive truth, on the other hand, is that contained within and defined by deductive logic. Here’s an example:
Premise 1: All Gronks are green. Premise 2: Fred is a Gronk. Conclusion: Fred is green.
Even if we have no idea what a Gronk is, the conclusion of this argument is true if the premises are true. If you think this isn’t the case, you’re wrong. It’s not a matter of opinion or personal taste.
If you want to argue the case, you have to step outside the framework in which deductive logic operates, and that takes you out of rational discussion altogether. We might be better placed to use the language of deduction and just call the conclusion “valid”, but “true” will do for now.
In my classes on deductive logic we talk about truth tables, truth trees, and use “true” and “false” in every second sentence and no one bats (cough) an eyelid, because we know what we mean when we use the word.
Using “true” in science, however, is problematic for much the same reason that using “prove” is problematic (and I have written about that on The Conversation before). This is a function of the nature of inductive reasoning.
Induction works mostly through analogy and generalisation. Unlike deduction, it allows us to draw justified conclusions that go beyond the information contained in the premise. It is induction’s reliance on empirical observation that separates science from mathematics.
In observing one phenomenon occurring in conjunction with another – an electric current and an induced magnetic field, for instance – I generalise that this will always be so. I might even create a model, an analogy of the workings of the real world, to explain it – in this case that of particles and fields.
This then allows me to predict what future events might occur or to draw implications and create technologies, such as developing an electric motor.
And so I inductively scaffold my knowledge, using information I rely upon as a resource for further enquiry. At no stage do I arrive at deductive certainty, but I do enjoy greater degrees of confidence.
I might even speak about things being “true”, but, apart from simple observational statements about the world, I use the term as a manner of speech only to indicate my high level of confidence.
Now, there are some philosophical hairs to split here, but my point is not to define exactly what truth is, but rather to say there are differences in how the word can be used, and that ignoring or conflating these uses leads to a misunderstanding of what science is and how it works.
For instance, the lady that said to me it was true for her that ghosts exist was conflating a subjective truth with a truth about the external world.
I asked her if what she really meant was “it is true that I believe ghosts exist”. At first she was resistant, but when I asked her if it could be true for her that gravity is repulsive, she was obliging enough to accept my suggestion.
Such is the nature of many “it’s true for me” statements, in which the epistemic validity of a subjective experience is misleadingly extended to facts about the world.
Put simply, it smears the meaning of truth so much that the distinctions I have outlined above disappear, as if “truth” only means one thing.
This is generally done with the intent of presenting the unassailable validity of said subjective experiences as a shield for dubious claims about the external world – claiming that homeopathy works “for me”, for instance. Attacking the truth claim is then, if you accept this deceit, equivalent to questioning the genuine subjective experience.
Checkmate … unless you see how the rules have been changed.
It has been a long and painful struggle for science to rise from this cognitive quagmire, separating out subjective experience from inductive methodology. Any attempt to reunite them in the public understanding of science needs immediate attention.
Operating as it should, science doesn’t spend its time just making truth claims about the world, nor does it question the validity of subjective experience – it simply says that subjective experience alone is not enough to ground objective claims that anyone else should believe.
Subjective truths and scientific truths are different creatures, and while they sometimes play nicely together, their offspring are not always fertile.
So next time you are talking about truth in a deductive or scientifically inductive way and someone says “but whose truth?”, tell them a hard truth: it’s not all about them.
The Barbershop paradox was proposed by Lewis Carroll in a three-page essay titled “A Logical Paradox,” which appeared in the July 1894 issue of Mind. The name comes from the ‘ornamental’ short story that Carroll uses to illustrate the paradox (although it had appeared several times in more abstract terms in his writing and correspondence before the story was published). Carroll claimed that it illustrated “a very real difficulty in the Theory of Hypotheticals” in use at the time. Modern logicians would not regard it as a paradox but simply as a logical error on the part of Carroll.
Briefly, the story runs as follows: Uncle Joe and Uncle Jim are walking to the barber shop. There are three barbers who live and work in the shop—Allen, Brown, and Carr—but not all of them are always in the shop. Carr is a good barber, and Uncle Jim is keen to be shaved by him. He knows that the shop is open, so at least one of them must be in. He also knows that Allen is a very nervous man, so that he never leaves the shop without Brown going with him. Uncle Joe insists that Carr is certain to be in, and then claims that he can prove it logically. Uncle Jim demands the proof. Uncle Joe reasons as follows.
Suppose that Carr is out. If Carr is out, then if Allen is also out Brown would have to be in, since someone must be in the shop for it to be open. However, we know that whenever Allen goes out he takes Brown with him, and thus we know as a general rule that if Allen is out, Brown is out. So if Carr is out then the statements “if Allen is out then Brown is in” and “if Allen is out then Brown is out” would both be true at the same time.
Uncle Joe notes that this seems paradoxical; the two “hypotheticals” seem “incompatible” with each other. So, by contradiction, Carr must logically be in.
However, the correct conclusion to draw from the incompatibility of the two “hypotheticals” is that what is hypothesised in them – that Allen is out – must be false under our assumption that Carr is out. Our logic then simply allows us to arrive at the conclusion “If Carr is out, then Allen must necessarily be in”.
In modern logic theory this scenario is not a paradox. The law of implication reconciles what Uncle Joe claims are incompatible hypotheticals. This law states that “if X then Y” is logically identical to “X is false or Y is true” (¬X ∨ Y). For example, given the statement “if you press the button then the light comes on”, it must be true at any given moment that either you have not pressed the button, or the light is on.
In short, the assumption that Carr is out (¬C) does not itself yield a contradiction; it merely necessitates that Allen is in (A), because it is the further supposition ¬A that actually yields the contradiction.
In this scenario, that means Carr doesn’t have to be in, but that if he isn’t in, Allen has to be in.
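Uncle Joe’s reasoning can also be checked by brute force. The sketch below (the variable names are mine) enumerates the eight in/out states of the three barbers, keeps only those consistent with the story’s two constraints – each encoded via the law of implication as “¬X ∨ Y” – and confirms that states with Carr out do exist, and that Allen is in at every one of them:

```python
from itertools import product

# True = the barber is in the shop, False = he is out.
states = [
    (allen, brown, carr)
    for allen, brown, carr in product([True, False], repeat=3)
    if (allen or brown or carr)  # the shop is open, so someone is in
    and (allen or not brown)     # "Allen out => Brown out", i.e. A or not-B
]

# Uncle Joe claims no consistent state has Carr out; the enumeration disagrees.
carr_out = [s for s in states if not s[2]]
print(carr_out)                                # two such states exist
print(all(allen for allen, _, _ in carr_out))  # Allen is in at each: True
```

So ¬C is perfectly consistent with the story; it just forces A, exactly as the correct analysis above concludes.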
A more detailed discussion of this apparent paradox may be found on Wikipedia.
A false dilemma, or false dichotomy, is a logical fallacy that involves presenting two opposing views, options or outcomes in such a way that they seem to be the only possibilities: that is, if one is true, the other must be false, or, more typically, if you do not accept one then the other must be accepted. The reality in most cases is that there are many in-between or other alternative options, not just two mutually exclusive ones.
The logical form of this fallacy is as follows:
Premise 1: Either Claim X is true or Claim Y is true (when claims X and Y could both be false).
Premise 2: Claim Y is false.
Conclusion: Therefore Claim X is true.
This line of reasoning is fallacious because if both claims could be false, then it cannot be inferred that one is true because the other is false. This is made clear by the following example:
Either 1+1=4 or 1+1=12. It is not the case that 1+1=4. Therefore 1+1=12.
This fallacy should not be confused with the Law of Excluded Middle, where ‘true’ or ‘false’ are actually the only possible alternatives for a proposition.
It is worth noting that it is not a false dilemma to present two options out of many if no conclusion is drawn based on their exclusivity. For example ‘you can have tea or coffee’ is not a false dilemma. A fallacious form would require it be presented as an argument such as ‘you don’t want tea, therefore you must want coffee.’
For example, if somebody were to appear to demonstrate psychic abilities, one would commit the fallacy of false dilemma by reasoning as follows: either she’s a fraud or she is truly psychic; she’s not a fraud; so, she must be truly psychic. There is at least one other possible explanation for her claim of psychic abilities: she genuinely thinks she’s psychic but she’s not.
Amid all the dire warnings that machines run by artificial intelligence (AI) will one day take over from humans we need to think more about how we program them in the first place.
The technology may be too far off to seriously entertain these worries – for now – but much of the distrust surrounding AI arises from misunderstandings in what it means to say a machine is “thinking”.
One of the current aims of AI research is to design machines, algorithms, input/output processes or mathematical functions that can mimic human thinking as much as possible.
We want to better understand what goes on in human thinking, especially when it comes to decisions that cannot be justified other than by drawing on our “intuition” and “gut-feelings” – the decisions we can only make after learning from experience.
Consider the human who hires you after first comparing you with the other job applicants in terms of your work history, skills and presentation. This human-manager is able to weigh up the candidates and identify the successful one.
If we can design a computer program that takes exactly the same inputs as the human-manager and can reproduce its outputs, then we can make inferences about what the human-manager really values, even if he or she cannot articulate their decision on who to appoint other than to say “it comes down to experience”.
This kind of research is being carried out today and applied to understand risk-aversion and risk-seeking behaviour of financial consultants. It’s also being looked at in the field of medical diagnosis.
These human-emulating systems are not yet being asked to make decisions, but they are certainly being used to help guide human decisions and reduce the level of human error and inconsistency.
Fuzzy sets and AI
One promising area of research is to utilise the framework of fuzzy sets. Fuzzy sets and fuzzy logic were formalised by Lotfi Zadeh in 1965 and can be used to mathematically represent our knowledge pertaining to a given subject.
In everyday language what we mean when accusing someone of “fuzzy logic” or “fuzzy thinking” is that their ideas are contradictory, biased or perhaps just not very well thought out.
But in mathematics and logic, “fuzzy” is a name for a research area that has quite a sound and straightforward basis.
The starting point for fuzzy sets is this: many decision processes that can be managed by computers traditionally involve truth values that are binary: something is true or false, and any action is based on the answer (in computing this is typically encoded by 0 or 1).
For example, our human-manager from the earlier example may say to human resources:
IF the job applicant is aged 25 to 30
AND has a qualification in philosophy OR literature
THEN arrange an interview.
This information can all be written into a hiring algorithm, based on true or false answers, because an applicant either is between 25 and 30 or is not, they either do have the qualification or they do not.
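As a minimal sketch of that point (the function and field names are my own invention, not a real HR system), the crisp rule encodes directly as boolean tests:

```python
def arrange_interview(age, qualifications):
    """Crisp encoding of the manager's rule: the applicant is aged 25 to 30
    AND holds a qualification in philosophy OR literature."""
    right_age = 25 <= age <= 30
    right_subject = bool({"philosophy", "literature"} & set(qualifications))
    return right_age and right_subject

print(arrange_interview(27, ["philosophy"]))  # True: both conditions hold
print(arrange_interview(32, ["literature"]))  # False: outside the age range
```

Every test here returns a plain true-or-false answer, which is exactly what makes rules like this easy to automate.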
But what if the human-manager is somewhat more vague in expressing their requirements? Instead, the human-manager says:
IF the applicant is tall OR attractive
THEN the salary offered should be higher.
The problem HR faces in encoding these requests into the hiring algorithm is that it involves a number of subjective concepts. Even though height is something we can objectively measure, how tall should someone be before we call them tall?
Attractiveness is also subjective, even if we only account for the taste of the single human-manager.
Grey areas and fuzzy sets
In fuzzy sets research we say that such characteristics are fuzzy. By this we mean that whether something belongs to a set or not, whether a statement is true or false, can gradually increase from 0 to 1 over a given range of values.
One of the hardest things in any fuzzy-based software application is how best to convert observed inputs (someone’s height) into a fuzzy degree of membership, and then further establish the rules governing the use of connectives such as AND and OR for that fuzzy set.
To this day, and likely in years or decades into the future, the rules for this transition are human-defined. For example, to specify how tall someone is, I could design a function that says a 190cm person is tall (with a truth value of 1) and a 140cm person is not tall (or tall with a truth value of 0).
Then from 140cm, for every increase of 5cm in height the truth value increases by 0.1. So a key feature of any AI system is that we, normal old humans, still govern all the rules concerning how values or words are defined. More importantly, we define all the actions that the AI system can take – the “THEN” statements.
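That hand-built definition of “tall” translates directly into a membership function. Here is a sketch assuming exactly the linear ramp just described: 0 at 140cm or below, rising by 0.1 for every 5cm, reaching 1 at 190cm:

```python
def tall(height_cm):
    """Fuzzy degree to which a person of the given height counts as 'tall':
    0.0 at 140cm or below, 1.0 at 190cm or above, linear in between."""
    return max(0.0, min(1.0, (height_cm - 140) / 50))

print(tall(140))  # 0.0 -> not tall at all
print(tall(165))  # 0.5 -> tall "to degree 0.5"
print(tall(190))  # 1.0 -> definitely tall
```

The clamping to the range 0 to 1 is what makes this a membership function rather than an open-ended score.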
An area called “computing with words” takes the idea further by aiming for seamless communication between a human user and an AI computer algorithm.
For the moment, we still need to come up with mathematical representations of subjective terms such as “tall”, “attractive”, “good” and “fast”. Then we need to design a function for combining such comments or commands, followed by another mathematical definition for turning the result we get back into an output like “yes he is tall”.
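One common (though not the only) choice for those combining functions is Zadeh’s original connectives – min for AND, max for OR – followed by a simple thresholding step to turn a degree back into words. The thresholds below are illustrative assumptions, not a standard:

```python
def fuzzy_and(a, b):
    return min(a, b)  # Zadeh's AND: as true as its least-true part

def fuzzy_or(a, b):
    return max(a, b)  # Zadeh's OR: as true as its most-true part

def to_words(degree):
    # Map a degree of 'tall'-membership back into an everyday answer.
    if degree >= 0.8:
        return "yes, he is tall"
    if degree >= 0.4:
        return "he is fairly tall"
    return "no, he is not tall"

# For someone who is tall to degree 0.9 and attractive to degree 0.6:
print(fuzzy_and(0.9, 0.6))  # "tall AND attractive" holds to degree 0.6
print(fuzzy_or(0.9, 0.6))   # "tall OR attractive" holds to degree 0.9
print(to_words(0.9))        # back into words: "yes, he is tall"
```

Other families of connectives (products, bounded sums) exist; the point is only that the combining and the wording steps are both human-designed mathematical definitions.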
In conceiving the idea of computing with words, researchers envisage a time where we might have more access to base-level expressions of these terms, such as the brain activity and readings when we use the term “tall”.
This would be an amazing leap, although mainly in terms of the technology required to observe such phenomena (the number of neurons in the brain, let alone synapses between them, is somewhere near the number of galaxies in the universe).
Even so, designing machines and algorithms that can emulate human behaviour to the point of mimicking communication with us is still a long way off.
In the end, any system we design will behave as it is expected to, according to the rules we have designed and the program that governs it.
An irrational fear?
This brings us back to the big fear of AI machines turning on us in the future.
The real danger is not in the birth of genuine artificial intelligence – that we will somehow manage to create a program that can become self-aware, such as HAL 9000 in the movie 2001: A Space Odyssey or Skynet in the Terminator series.
The real danger is that we make errors in encoding our algorithms or that we put machines in situations without properly considering how they will interact with their environment.
These risks, however, are the same that come with any human-made system or object.
So if we were to entrust, say, the decision to fire a weapon to AI algorithms (rather than just the guidance system), then we might have something to fear.
Not a fear that these intelligent weapons will one day decide of their own accord to turn on us, but rather that we will have programmed them – given a series of subjective options – to decide the wrong thing and so turn on us.
Even if there is some uncertainty about the future of “thinking” machines and what role they will have in our society, a sure thing is that we will be making the final decisions about what they are capable of.
When programming artificial intelligence, the onus is on us (as it is when we design skyscrapers, build machinery, develop pharmaceutical drugs or draft civil laws) to make sure it will do what we really want it to.
Does philosophy make progress? Of course, but it does so differently from, say, science. Here is a brief conceptual history of how philosophy evolved over time, from the all-purpose approach of the ancient Greeks to the highly specialized academic discipline it is today. Written and narrated by philosopher Massimo Pigliucci.