Which is more important, a fact or an opinion on any given subject? It might be tempting to say the fact. But not so fast…
Lately, we find ourselves lamenting the post-truth world, in which facts seem no more important than opinions, and sometimes less so.
We also tend to see this as a recent devaluation of knowledge. But this is a phenomenon with a long history.
As the science fiction writer Isaac Asimov wrote in 1980:
Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge”.
The view that opinions can be more important than facts need not mean the same thing as the devaluing of knowledge. It’s always been the case that in certain situations opinions have been more important than facts, and this is a good thing. Let me explain.
Not all facts are true
To call something a fact is, presumably, to make a claim that it is true. This isn’t a problem for many things, although defending such a claim can be harder than you think.
What we think are facts – that is, those things we think are true – can end up being wrong despite our most honest commitment to genuine inquiry.
It’s not only that facts can change that is a problem. While we might be happy to consider it a fact that Earth is spherical, we would be wrong to do so because it’s actually a bit pear-shaped. Thinking it a sphere, however, is very different from thinking it to be flat.
Asimov expressed this beautifully in his essay The Relativity of Wrong. For Asimov, the person who thinks Earth is a sphere is wrong, and so is the person who thinks the Earth is flat. But the person who thinks that they are equally wrong is more wrong than both.
Geometrical hair-splitting aside, calling something a fact is therefore not a proclamation of infallibility. It is usually used to represent the best knowledge we have at any given time.
It’s also not the knockout blow we might hope for in an argument. Saying something is a fact by itself does nothing to convince someone who doesn’t agree with you. Unaccompanied by any warrant for belief, it is not a technique of persuasion. Proof by volume and repetition – repeatedly yelling “but it’s a fact!” – simply doesn’t work. Or at least it shouldn’t.
Matters of fact and opinion
Then again, calling something an opinion need not mean an escape to the fairyland of wishful thinking. This too is not a knockout attack in an argument. If we think of an opinion as one person’s view on a subject, then many opinions can be solid.
For example, it’s my opinion that science gives us a powerful narrative to help understand our place in the Universe, at least as much as any religious perspective does. It’s not an empirical fact that science does so, but it works for me.
But we can be much clearer in our meaning if we separate things into matters of fact and matters of opinion.
Matters of fact are confined to empirical claims, such as what the boiling point of a substance is, whether lead is denser than water, or whether the planet is warming.
Matters of opinion are non-empirical claims, and include questions of value and of personal preference, such as whether it’s OK to eat animals, and whether vanilla ice cream is better than chocolate. Ethics is an exemplar of a system in which matters of fact cannot by themselves decide courses of action.
Matters of opinion can be informed by matters of fact (for example, finding out that animals can suffer may influence whether I choose to eat them), but ultimately they are not answered by matters of fact (why is it relevant if they can suffer?).
Backing up the facts and opinions
Opinions are not just pale shadows of facts; they are judgements and conclusions. They can be the result of careful and sophisticated deliberation in areas for which empirical investigation is inadequate or ill-suited.
While it’s nice to think of the world so neatly divided into matters of fact and matters of opinion, it’s not always so clinical in its precision. For example, it is a fact that I prefer vanilla ice cream over chocolate. In other words, it is apparently a matter of fact that I am having a subjective experience.
But we can heal that potential rift by further restricting matters of fact to those things that can be verified by others.
While it’s true that my ice cream preference could be experimentally indicated by observing my behaviour and interviewing me, it cannot be independently verified by others beyond doubt. I could be faking it.
But we can all agree in principle on whether the atmosphere contains more nitrogen or carbon dioxide because we can share the methodology of inquiry that gives us the answer. We can also agree on matters of value if the case for a particular view is rationally persuasive.
Facts and opinions need not be positioned in opposition to each other, as they have complementary functions in our decision-making. In a rational framework, they are equally useful. But that’s just my opinion – it’s not a fact.
The motto of the Royal Society, Britain’s and perhaps the world’s oldest scientific society, is “nullius in verba”, which it says translates as “take nobody’s word for it”.
This is a rejection of the idea that truth can be sought through authority. It is a call to turn to experimentation and direct engagement with the physical world to discover truth. A noble sentiment indeed.
It’s also one of the key arguments used by deniers of climate science in attempts to refute both that the world is warming and that this warming is a result of human activity (anthropogenic global warming, or AGW).
This is a common approach, exemplified by Australian Senator Malcolm Roberts in his many interviews on the subject.
It gives deniers an excuse to reject the overwhelming endorsement of science organisations around the world, including the Royal Society itself, and academies of science from more than 80 other countries, that AGW is a reality.
The argument is simple, and goes a bit like this. Science does not work by appeal to authority, but rather by the acquisition of experimentally verifiable evidence. Appeals to scientific bodies are appeals to authority, so should be rejected.
It is important to understand that the Royal Society was formed in 1660 in the shadow of a millennium of near-absolute church authority, including the general acceptance of Aristotelian natural philosophy.
The rebellion against this authority was also a celebration of the freedom to elevate the credibility of scientific exploration over that of church teachings and other accepted dogma.
Importantly, the authority to which the Royal Society’s motto alludes was a non-scientific one. The motto represents the superiority of verifiable empirical claims over claims driven by religious or political ideology. No motto could better represent the optimism of the times.
It is also important to understand that much of the science then undertaken was rather crude by modern standards and, by its reliance on very basic technology, was verifiable by individuals, or at least small groups of individuals.
The science of the 21st century is in most areas far too complex to be understood, let alone experimentally verified, by any one person. Science is now a vast collaborative web of information characterised by the dynamic interplay and testing of ideas on a global scale.
The sharing of experimental results and the collective scrutiny of ideas forge deep and complex understandings. Teams of scientists from a range of specialities are often required to interpret and use this knowledge.
The suggestion that a subject as complex as global warming, for example, could be verified by a single person, untrained and untutored in the norms of scientific inquiry, betrays a staggering ignorance about the nature of modern science.
It is also arrogant in its assumption that something not immediately obvious to oneself cannot be the case.
The non-fallacy of appealing to authority
It’s also worth pointing out that the recourse to authority is often presented as a fallacy of reasoning, the so-called “appeal to authority” fallacy.
But this is not the case. The fallacy would be more correctly named the “appeal to false authority” – for example when celebrities who are famous for their sporting or entertainment achievements are cited in support of a particular medical treatment.
Appeals to appropriate authorities, such as experts in their fields, are one of the glues that hold our technological society together. We go to our doctor for her expertise and we are happy to take her advice without the insistence that the efficacy of potential treatments be demonstrated to us there and then.
Engineers build impressively tall buildings, pilots fly incredibly complex machines, and business experts advise on financial markets. All this expertise is confidently assimilated into our lives because we recognise its value and legitimacy.
It is not fallacious reasoning to accept expert advice. We rely on the authority of experts for quality control in many areas, including the peer-review process of science and other academic disciplines.
Assuming that the motto of the Royal Society suggests we should not listen to the collective wisdom of scientists because science is not about respecting expertise is simply indefensible.
In fact, the role of many such societies in the 17th and 18th centuries was to act as a conduit between scientists and governments for the provision of expert advice.
If legitimate authorities are not to be consulted, presumably there is no point in having scientists around at all, as each person would need to verify any claim on their own terms and with their own resources. That would mean a speedy decline into very dark times indeed.
Deniers of climate science such as Senator Roberts are among those most in violation of the creed “nullius in verba”. Their continued insistence on “empirical evidence” while simultaneously rejecting it (usually through invoking some conspiracy theory) suggests an immature rationality at best, and outright duplicity at worst.
Their refusal to accept empirically verified evidence because it goes against their existing beliefs is the very stuff against which the Royal Society rebelled.
They may have a voice, but they have no authority in this debate.
Claims that the “the science isn’t settled” with regard to climate change are symptomatic of a large body of ignorance about how science works.
So what is the scientific method, and why do so many people, sometimes including those trained in science, get it so wrong?
The first thing to understand is that there is no one method in science, no one way of doing things. This is intimately connected with how we reason in general.
Science and reasoning
Humans have two primary modes of reasoning: deduction and induction. When we reason deductively, we tease out the implications of information already available to us.
For example, if I tell you that Will is between the ages of Cate and Abby, and that Abby is older than Cate, you can deduce that Will must be older than Cate.
That answer was embedded in the problem, you just had to untangle it from what you already knew. This is how Sudoku puzzles work. Deduction is also the reasoning we use in mathematics.
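That untangling can even be mechanised. In this minimal sketch (the 1–9 age range and the function names are invented purely for illustration), a conclusion counts as deductively valid when it holds in every scenario consistent with the premises:

```python
# Premises: Abby is older than Cate, and Will's age lies between theirs.
# A conclusion follows deductively if it is true in EVERY world where
# the premises hold -- no new information is required.

def premises_hold(cate, will, abby):
    """Check a candidate age assignment against the two premises."""
    return abby > cate and min(cate, abby) < will < max(cate, abby)

# Enumerate small candidate age assignments (1-9 is an arbitrary range).
worlds = [(c, w, a) for c in range(1, 10)
          for w in range(1, 10)
          for a in range(1, 10)]
premise_worlds = [trio for trio in worlds if premises_hold(*trio)]

# The conclusion "Will is older than Cate" holds in every such world.
assert premise_worlds  # the premises are satisfiable
assert all(will > cate for cate, will, abby in premise_worlds)
```

Nothing new is discovered here; the brute-force check merely makes explicit what was already contained in the premises, which is exactly what deduction does.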
Inductive reasoning goes beyond the information contained in what we already know and can extend our knowledge into new areas. We induce using generalisations and analogies.
Generalisations include observing regularities in nature and imagining they are everywhere uniform – this is, in part, how we create the so-called laws of nature.
Generalisations also create classes of things, such as “mammals” or “electrons”. We also generalise to define aspects of human behaviour, including psychological tendencies and economic trends.
Analogies make claims of similarities between two things, and extend this to make new knowledge.
For example, if I find a fossilised skull of an extinct animal that has sharp teeth, I might wonder what it ate. I look for animals alive today that have sharp teeth and notice they are carnivores.
Reasoning by analogy, I conclude that the animal was also a carnivore.
Using induction and inferring to the best possible explanation consistent with the evidence, science teaches us more about the world than we could simply deduce.
Science and uncertainty
Most of our theories or models are inductive analogies with the world, or parts of it.
If inputs to my particular theory produce outputs that match those of the real world, I consider it a good analogy, and therefore a good theory. If they don’t match, then I must reject the theory, or refine or redesign it to make it more analogous.
If I get many results of the same kind over time and space, I might generalise to a conclusion. But no amount of success can prove me right. Each confirming instance only increases my confidence in my idea. As Albert Einstein famously said:
No amount of experimentation can ever prove me right; a single experiment can prove me wrong.
Einstein’s general and special theories of relativity (which are models and therefore analogies of how he thought the universe works) have been supported by experimental evidence many times under many conditions.
We have great confidence in the theories as good descriptions of reality. But they cannot be proved correct, because proof is a creature that belongs to deduction.
The hypothetico-deductive method
Science also works deductively through the hypothetico-deductive method.
It goes like this. I have a hypothesis or model that predicts that X will occur under certain experimental conditions. Experimentally, X does not occur under those conditions. I can deduce, therefore, that the theory is flawed (assuming, of course, we trust the experimental conditions that produced not-X).
Under these conditions, I have proved that my hypothesis or model is incorrect (or at least incomplete). I reasoned deductively to do so.
But if X does occur, that does not mean I am correct, it just means that the experiment did not show my idea to be false. I now have increased confidence that I am correct, but I can’t be sure.
If one day experimental evidence that was beyond doubt was to go against Einstein’s predictions, we could deductively prove, through the hypothetico-deductive method, that his theories are incorrect or incomplete. But no number of confirming instances can prove he is right.
That an idea can be tested by experiment, that there can be experimental outcomes (in principle) that show the idea is incorrect, is what makes it a scientific one, at least according to the philosopher of science Karl Popper.
As an example of an untestable, and hence unscientific position, take that held by Australian climate denialist and One Nation Senator Malcolm Roberts. Roberts maintains there is no empirical evidence of human-induced climate change.
Yet his claim that human-induced climate change is not occurring cannot be put to the test as he would not accept any data showing him wrong. He is therefore not acting scientifically. He is indulging in pseudoscience.
Settled does not mean proved
One of the great errors in the public understanding of science is to equate settled with proved. While Einstein’s theories are “settled”, they are not proved. But to plan for them not to work would be utter folly.
As the philosopher John Dewey put it, in scientific inquiry the criterion of what is taken to be settled, or to be knowledge, is being so settled that it is available as a resource in further inquiry; not being settled in such a way as not to be subject to revision in further inquiry.
Those who demand the science be “settled” before we take action are seeking deductive certainty where we are working inductively. And there are other sources of confusion.
One is that simple statements about cause and effect are rare since nature is complex. For example, a theory might predict that X will cause Y, but that Y will be mitigated by the presence of Z and not occur at all if Q is above a critical level. To reduce this to the simple statement “X causes Y” is naive.
Another is that even though some broad ideas may be settled, the details remain a source of lively debate. For example, that evolution has occurred is certainly settled by any rational account. But some details of how natural selection operates are still being fleshed out.
To confuse the details of natural selection with the fact of evolution is much like quibbling about dates and exact temperatures in climate modelling when it is very clear that the planet is warming in general.
When our theories are successful at predicting outcomes, and form a web of higher level theories that are themselves successful, we have a strong case for grounding our actions in them.
The mark of intelligence is to progress in an uncertain world and the science of climate change, of human health and of the ecology of our planet has given us orders of magnitude more confidence than we need to act with certitude.
Demanding deductive certainty before committing to action does not make us strong, it paralyses us.
It’s not uncommon to hear people applaud figures such as Donald Trump and Pauline Hanson because, after all, they “speak their mind”. But what is so good about speaking your mind if it’s a jumbled mess of self-contradiction?
Even if the stream-of-consciousness ramblings of Trump and Hanson, as two examples, are generally incoherent, could there be any good points worth exploring buried under the intellectual rubble? Either way, should we be listening?
Let me make the case for why these views should be heard, with attention to specific contexts and principles.
The Enlightenment tradition of public reason highlighted the desirability for those in the public arena, and particularly those holding or vying for power, to spell out their thinking so that we can make up our own minds based on a rational analysis of the case rather than a simple appeal to emotions.
A necessary condition of this is that people not only speak their minds, but must lay out the reasoned argument that leads them to their position. It is the argument, not just the end position, that demands evaluation, for only through this process can we establish the credibility of the end point.
This requirement for a common language of rationality is, we hope, what leads to the best outcomes in the long run. It protects us from leaders acting on whims or in their own interests.
It’s also a bulwark against a world where only shouted slogans and appeals to fear make up the substance of public discourse. A world W.B. Yeats glimpsed in his poem The Second Coming when he wrote of a time in which:
The best lack all conviction, while the worst are full of passionate intensity.
Divergence of opinion, in which people can simply speak their minds, and hopefully their thinking, is desirable. But this divergence must be followed by a phase of convergence in which alternative views are evaluated and consequently progressed or discarded based on collaboratively established norms of effective reasoning.
Each time we hear a poorly argued view, it should further inoculate us against accepting that view.
If arguments for particular positions with relevance to public life ought to be exposed to public scrutiny, they must therefore be listened to and seriously engaged with by at least some people some of the time.
Listen for only so long
We do not, however, have the responsibility to elevate a view beyond the point it can attain through its own persuasiveness. Nor are we obliged to keep giving it our attention after its credibility is found wanting.
Appeals for another hearing without fresh arguments or evidence have no inherent right to be further entertained. Such is the nature of debate among young-Earth creationists, anti-vaccination advocates and climate change deniers, wherein the same old, constantly refuted arguments come up again for another desperate gasp of public air.
It is fine to insist that an argument be evaluated on the proving ground of public reason. But it is an offence against that same principle to demand it stay on the playing field once it has been effectively refuted. A sure test of this unwarranted persistence is the degree to which reasoned argument is replaced by tub-thumping, fear-mongering and appeals to the status quo.
People are free to keep saying what they like but, as I have written before, they should not mistake the right to speak with the right to be heard once their case has already failed to convince.
The debate is therefore not silenced, but reaches closure through established, socially moderated processes of analysis and evaluation. All else is cheer-leading in an attempt to convince others that you are still on the field. But the rest of us are entitled to just go home.
Who decides what becomes public?
This all sounds quite rational, but who are the gatekeepers of the public arena? This is a complex issue. In an ideal world, the entry ticket would be a reasoned case in the public interest, but too many box seats have been pre-sold to vested interests.
So we see media companies such as News Corp pushing arguments against climate science that have long been discredited. And across the board news items and personalities that are sensational rather than significant are placed front and centre.
Media coverage of those speaking publicly is always a decision, and it’s a decision that exposes bias. Not just for who is heard, but also for who is not heard.
We are not obliged to give someone attention, let alone credibility, simply because they are speaking in public. The Enlightenment principles of public reasoning are conditional, and too often these conditions are not met or simply not understood.
But our acceptance and our rejection of views should always be a reflective practice, measured against long-established norms of rationality.
No one should be silenced, but that doesn’t mean everyone needs to be listened to.
When a group of Australians was asked why they believed climate change was not happening, about one in three (36.5%) said it was “common sense”, according to a report published last year by the CSIRO. This was the most popular reason for their opinion, with only 11.3% saying their belief that climate change was not happening was based on scientific research.
Interestingly, the same study found one in four (25.5%) cited “common sense” for their belief that climate change was happening, but was natural. And nearly one in five (18.9%) said it was “common sense” that climate change was happening and it was human-induced.
It seems the greater the rejection of climate science, the greater the reliance on common sense as a guiding principle.
But what do we mean by an appeal to common sense? Presumably it’s an appeal to rationality of some sort, perhaps a rationality that forms the basis of more complex reasoning. Whatever it is, we might understand it better by considering a few things about our psychology.
It’s only rational
It’s an interesting phenomenon that no one laments his or her lack of rationality. We might complain of having a poor memory, or of being no good at maths, but no one thinks they are irrational.
Worse than this, we all think we’re the exemplar of the rational person (go on, admit it) and, if only everyone could see the world as clearly as we do, then all would be well.
Rather than being thought of as the type of reasoning everyone would converge on after thoughtful reflection, however, common sense too often just means the kind of sense we individually have. And anyone who agrees with us must also, logically, have it.
As Albert Einstein reportedly said, common sense is actually nothing more than a deposit of prejudices laid down in the mind prior to the age of eighteen.
In other words, common sense is indeed very common, it’s just that we all have a different idea of what it is.
Thinking that feels right
The appeal to common sense, therefore, is usually nothing more than an appeal to thinking that just feels right. But what feels right to one person may not feel right to another.
When we say to each other “that sounds right”, or “I like the sound of that”, we are generally not testing someone’s argument for validity and soundness as much as seeing if we simply like their conclusion.
Whether it feels right is usually a reflection of the world view and ideologies we have internalised, and that frame how we interact with new ideas. When new ideas are in accord with what we already believe, they are more readily accepted. When they are not, they, and the arguments that lead to them, are more readily rejected.
We too often mistake this automatic compatibility testing of new ideas with existing beliefs as an application of common sense. But, in reality, it is more about judging than thinking.
As the psychologist and Nobel laureate Daniel Kahneman notes in his book Thinking, Fast and Slow, when we arrive at conclusions in this way, the outcomes also feel true, regardless of whether they are. We are not psychologically well equipped to judge our own thinking.
We are also highly susceptible to a range of cognitive biases, such as the availability heuristic, which privileges the first things that come to mind when we make decisions or weigh evidence.
One way we can check our internal biases and inconsistencies is through the social verification of knowledge, in which we test our ideas in a rigorous and systematic way to see if they make sense not just to us, but to other people. The outstanding example of this socially shared cognition is science.
It’s important to realise that science is not about common sense. Nowhere is this more evident than in the worlds of quantum mechanics and relativity, in which our common sense intuitions are hopelessly inadequate to deal with quantum unpredictability and space-time distortions.
But our common sense fails us even in more familiar territory. For centuries, it seemed to people that the Earth could not possibly be moving, and must therefore be at the centre of the universe.
Many students still assume that an object in motion through space must have a constant force acting on it, an idea that contradicts Newton’s first law. Some people think that the Earth has gravity because it spins.
And, to return to my opening comment, some people think that their common sense applied to observations of the weather carries more weight on climate change than the entire body of scientific evidence on the subject.
Science is not the embodiment of individual common sense, it is the exemplar of rational collaboration. These are very different things.
It is not that individual scientists are immune from the cognitive biases and tendencies to fool themselves that we are all subject to. It is rather that the process of science produces the checks and balances that prevent these individual flaws from flourishing as they do in some other areas of human activity.
In science, the highest unit of cognition is not the individual, it is the community of scientific enquiry.
Thinking well is a social skill
That does not mean that individuals are not capable of excellent thinking, nor does it mean no individual is rational. But the extent to which individuals can do this on their own is a function of how well integrated they are with communities of systematic inquiry in the first place. You can’t learn to think well by yourself.
In matters of science at least, those who value their common sense over methodological, collaborative investigation imagine themselves to be more free in their thinking, unbound by involvement with the group, but in reality they are tightly bound by their capabilities and perspectives.
We are smarter together than we are individually, and perhaps that’s just common sense.
Belief in a flat Earth seems a bit like the attempt to eradicate polio – just when you think it’s gone, a pocket of resistance appears. But the “flat Earthers” have always been with us; it’s just that they usually operate under the radar of public awareness.
Now the rapper B.o.B has given the idea prominence through his tweets and the release of his single Flatline, in which he not only says the Earth is flat, but mixes in a slew of other weird and wonderful ideas.
These include the notions that the world is controlled by lizard people, that certain celebrities are cloned, that Freemasons manipulate our lives, that the sun revolves around the Earth and that the Illuminati control the new world order. Not bad for one song.
Even ignoring that these ideas are inconsistent (are we run by lizards, the Freemasons or the Illuminati?), what would inspire such a plethora of delusions? The answer is both straightforward, in that it is reasonably clear in psychological terms, and problematic, in that it can be hard to fix.
Making our own narratives
Humans are, above all things, story-telling animals. It is impossible to live our lives without constructing narratives. I could not present a word pair such as (cage, bird) without you joining them in a narrative or image. Same with (guitar, hand) or (river, bridge). Even when we read seemingly unrelated word pairs such as (pensioner, wardrobe), our brains actively try to match the two (and you’re still doing it).
The stories that define us as a culture, a group or as a species are often complex and multifaceted. They speak of many things, including creation, nature, community and progress.
We create stories for two reasons. The first is to provide explanatory power, to make causal sense of the world around us and help navigate through the landscapes of our lives. The second reason is to give us meaning and purpose.
Not only do we understand our world through stories, we understand our place in it. The stories can be religious, cultural or scientific, but serve the same purpose.
In science, our stories are developed over time and build on the work of others. The narrative of evolution, for example, provides breathtaking explanatory power. Without it, the world is simply a kaleidoscope of form and colour. With it, each organism has function and purpose.
As the geneticist Theodosius Dobzhansky famously put it, nothing in biology makes sense except in the light of evolution.
Through evolution, we have developed an understanding of how we fit into the scheme of life, and the vast and deep history of our planet. For many of us, this knowledge provides meaning and an appreciation of the fact of our existence.
Similarly, the story of our solar system’s formation is rich and compelling, and includes the explanation for why the Earth is, in fact, more or less spherical.
So why would someone reject all this?
One reason might be that accepting mainstream scientific findings necessitates rejecting an existing narrative. Such is the case for evolution within fundamentalist interpretations of the Bible.
For the literally religious, accepting evolution necessitates rejecting their world view. It is not about weighing scientific evidence, it is about maintaining the coherence and integrity of their narrative. The desperate and unsuccessful search for evidence to contradict evolution by young Earth creationists is a manifestation of this attempt at ideological purification.
Another reason to reject scientific narratives is that we feel we do not have meaning within them, or we do not belong to the community that created them.
As I’ve said elsewhere concerning conspiracy theories, in a world in which there is so much knowledge, and in which we individually hold so little of it, it is sometimes difficult to see ourselves as significant.
What’s more, science, it turns out, is hard. So if we want to own this narrative, it might take a bit of work.
Freedom from rationality
It is therefore tempting to find a way of thinking about the world that both dismisses the necessity of coming to grips with science, and restores us to a privileged social position.
Rejecting science and embracing an alternative view, such as the Earth being flat, moves the individual from the periphery of knowledge and understanding to a privileged position among those who know the “truth”.
In his lyrics, the rapper B.o.B calls himself "free thinking". In this phrasing we see a glimpse of the warrant he gives himself to reject science, considering it a "cult".
He appeals instead to his common sense to establish that the Earth must be flat. The appeal to common sense is a characteristic way of claiming to be rational while denying the collective rationality of the scientific community (and a typical argument in climate denial).
It’s also about recapturing a feeling of independence and control. We know from research that there is a correlation between feeling a lack of control in your life and belief in conspiracy theories.
If we can rise above the tide of mainstream thinking and find a place from which we can hold a unique and controversial view, we might hope to be more significant and find a purpose to which we can lend our talents.
Coming back from the edge
So how could we engage someone with such beliefs, with a view to changing their minds? That's no easy task, but two things are important.
The first is to have both the facts and their means of verification at hand – after all, you need something to point to. Sometimes, if the narrative is weak or in tension, that might do the job.
The second thing, because facts are often not enough, is to understand the style and depth of the narrative an individual has developed, and the reasons it’s developed as it has. It’s only from that point that progress can be made against otherwise intractable opposition to collective wisdom.
But why bother? Why not let rappers rap, preachers preach and deniers deny? It might seem that we are just dealing with a fringe on the edge of the rational (or literal) world. But, of course, in the case of things such as vaccination and climate change, the consequences of inaction against these views are potentially damaging.
Either way, we should at least stand up for knowledge that has been hard won through collective endeavours over generations and individual lives dedicated to its pursuit.
Because if all views are equal then all views are worthless, and that’s something none of us should accept.
The idea of a thinking machine is an amazing one. It would be like humans creating artificial life, only more impressive because we would be creating consciousness. Or would we?
It’s tempting to think that a machine that could think would think like us. But a bit of reflection shows that’s not an inevitable conclusion.
To begin with, we’d better be clear about what we mean by “think”. A comparison with human thinking might be intuitive, but what about animal thinking? Does a chimpanzee think? Does a crow? Does an octopus?
There may even be alien intelligences that we would not recognise as such because they are so radically different from us. Perhaps we could pass each other in close proximity, each unaware that the other existed, having no way to engage.
Certainly animals other than humans have cognitive abilities geared towards understanding tools and causal relationships, communication, and even to recognising directed and purposeful thinking in others. We’d probably consider any or all of that thinking.
And let’s face it, if we built a machine that did all the above, we’d be patting ourselves on the back and saying “mission accomplished”. But could a machine go a step further and be like a human mind? What’s more, how would we know if it did?
Just because a computer acts like it has a mind, it doesn’t mean it must have one. It might be all show and no substance, an instance of a philosophical zombie.
It was this notion that motivated the British codebreaker and mathematician Alan Turing to devise his famous "Turing test", in which a computer converses with a human through a screen and, more often than not, leaves the human unable to tell whether it is a computer. For Turing, all that mattered was behaviour; there was no computational "inner life" to be concerned about.
But this inner life matters to some of us. The philosopher Thomas Nagel said that there was “something that it is like” to have conscious experiences. There’s something that it is like to see the colour red, or to go water skiing. We are more than just our brain states.
Could there ever be "something that it's like" to be a thinking machine? In an imagined conversation with the first intelligent machine, a human might ask "Are you conscious?", to which it might reply, "How would I know?".
Is thinking just computation?
Under the hood of computer thinking, as we currently imagine it, is sheer computation. It’s about calculations per second and the number of potential computational pathways.
But we are not at all sure that thinking or consciousness is a function of computation, at least the way a binary computer does it. Could thinking be more than just computation? What else is needed? And if it is all about computation, why is the human brain so bad at it?
Most of us are flat out multiplying a couple of two-digit numbers in our heads, let alone performing trillions of calculations a second. Or is there some deep processing of data that goes on below our awareness that ultimately results in our arithmetically impaired consciousness (the argument of so-called Strong AI)?
Generally speaking, what computers are good at, like raw data manipulation, humans are quite bad at; and what computers are bad at, such as language, poetry, voice recognition, interpreting complex behaviour and making holistic judgements, humans are quite good at.
If the analogy between human and computer “thinking” is so bad, why expect computers to eventually think like us? Or might computers of the future lose their characteristic arithmetical aptitude as the full weight of consciousness emerges?
Belief, doubt and values
Then we have words like “belief” and “doubt” that are characteristic of human thinking. But what could it possibly mean for a computer to believe something, apart from the trivial meaning that it acted in ignorance of the possibility that it could be wrong? In other words, could a computer have genuine doubt, and then go ahead and act anyway?
When it comes to questions of value, questions about what we think is important in life and why, it’s interesting to consider two things. The first is if a thinking computer could be capable of attributing value to anything at all. The second is that if it could attribute value to anything, what would it choose? We’d want to be a bit careful here, it seems, even without getting into the possibility of mechanical free will.
It would be nice to program a human-style value system into computers. But, on the one hand, we aren't quite sure what that is or how it could be done, and, on the other, if computers started programming themselves they may decide otherwise.
While it’s great fun to think about all this, we should spend a bit of time trying to understand what we want thinking computers to be. And maybe a bit more time should be spent trying to understand ourselves before we branch out.
How do you know the people billed as science experts that you see, hear and read about in the media are really all that credible? Or have they been included just to create a perception of balance in the coverage of an issue?
It's a problem for any media organisation, and something the BBC Trust is trying to address in its latest report on science impartiality in programming.
As part of ongoing training, staff, particularly in non-news programs, were told that impartiality is not just about including a wide range of views on an issue, as this can lead to a “false balance”. This is the process of providing a platform for people whose views do not accord with established or dominant positions simply for the sake of seeming “balanced”.
It’s understandable that such false balance could grow from a desire to seem impartial, and particularly so since public broadcasters such as the BBC and the ABC in Australia are sensitive to claims of imbalance or bias.
Couple this with the need to negotiate the difficult ground of expert opinion, authentic balance and audience expectation, not to mention the always delicate tension between the imperatives of news and entertainment, and it hardly seems surprising that mistakes are made. An investigation this year found the ABC breached its own impartiality standards in its Catalyst program last year on statins and heart disease.
Finding the right balance
How then can journalists decide the best way to present a scientific issue to ensure accurate representation of the views of the community of experts? Indeed, how can any of us determine if what we are seeing in the media is balanced or a misrepresentation of expert opinion?
As I have written elsewhere, it is important to not confuse the right to be heard with an imagined right to be taken seriously. If an idea fails to survive in the community of experts, its public profile should diminish in proportion to its failure to generate consensus within that community.
A common reply to this is that science isn't about consensus, it's about the truth. This is so, but to treat the existence of a consensus as evidence of error is fallacious reasoning.
While it’s true that some presently accepted notions have in the past been peripheral, the idea that simply being against the majority view equates to holding your intellectual ground in the best tradition of the enlightenment is ludicrous.
If all views are equal, then all views are worthless.
Were I to propose an idea free of testing or argument, I could not reasonably expect my idea to be as credible as those subject to rigorous experimentation and collaborative review. If such equality did exist then progress would be impossible, since progress is marked by the testing and rejection of ideas.
Defining an expert
In the case of science, this testing is the process of experimentation, data analysis and peer review. So if someone – scientist or otherwise – has not worked and published in an area, then they are not an expert in that area.
The first imperative for a journalist covering any story is to determine exactly in what field the issue best sits and then to seek advice from people who work and publish in that field.
Knowing how the issue fits into the broader picture of scientific investigation is very useful in determining this. It is one of the reasons that good science journalism follows from having journalists with some training in science.
Such a selection process, performed transparently, is an excellent defence against charges of bias.
Avoiding false balance
False balance can also be created by assuming that a person from outside the field (a non-expert) will somehow have a perspective that will shed light on an issue, that the real expert is too “caught up in the details” to be objective.
But suggesting that an expert is naive usually indicates an attempt at discrediting rather than truth seeking. Credibility is more about process than authority, and to be a recognised expert is to work within the process of science.
Also, if a piece of science is being criticised, we should ask if the criticism itself has been published. It’s not enough that someone with apparent authority casts doubt as this is simply an appeal to authority – an appeal that critics of mainstream science themselves use as a warrant to reject consensus.
A second journalistic imperative would be to recognise that not all issues are binary.
The metaphor that a coin has two sides is a powerful one, and the temptation to look at both sides of an issue is naturally strong. But the metaphor also assumes an equal weighting, and that both sides present the same space for discussion.
Proof and evidence
When an issue is genuinely controversial, the burden of proof is shared between opposing views. When a view is not mainstream, say that scientists are engaged in a conspiracy to defraud the public, the burden of proof sits with those promoting that view.
What can be asserted without evidence can also be dismissed without evidence.
Attempting to dishonestly shift the burden of proof is a common device in the push to have young earth creationism taught in science classrooms.
The idea of “teaching both sides” or that students should be allowed to make up their own minds seems again like a recourse to the most basic ideas of a liberal education, but is in reality an attempt to bypass expert consensus, to offload the burden of proof rather than own it.
The fact is that for issues such as creationism, vaccination and human-induced climate change, it's not about journalists suppressing views, it's about quality control of information.
Stay with the issue
A classic means of muddying the waters is to employ straw man arguments, in which the point at issue is changed to one more easily defended or better suited to a particular interest. Politicians are adept at doing this, dodging hard questions with statements like “the real issue is” or “what’s important to people is”.
An expert versus who?
Deniers of climate science often change the issue from global warming to whether or not consensus is grounds for acceptance (it alone is not, of course), or focus on whether a particular person is credible rather than discuss the literature at large.
The anti-vaccine lobby talks about “choice” rather than efficacy of health care.
Young earth creationists talk about the right to express all views rather than engage with the science. Politicians talk about anything except the question they were asked.
The third imperative, therefore, is to be very clear as to what the article or interview is about and stick to that topic. Moving off topic negates the presence of the experts (the desired effect) and gives unsubstantiated claims prominence.
The impartiality checklist
The best method of dealing with cranks, conspiracy theorists, ideologues and those with a vested interest in a particular outcome is the best method for science reporting in general:
insist on expertise
recognise where the burden of proof sits
stay focused on the point at issue.
If the media sticks to these three simple rules when covering science issues, impartiality and balance can be justifiably asserted.
Correction: This article was amended on July 17, 2014 to include a report of the BBC’s denial that a climate change sceptic was banned from the public broadcaster.
As scientists, one of our responsibilities should be to promote clarity. A lot of problems are caused by an incorrect or incomplete understanding of terms we regularly, and even lovingly, use.
When I use the word “evidence”, what I think I mean is a function of many things, not least my education in science and philosophy.
It’s also the product of many discussions with people about science, superstition, psychology, pseudoscience and subjectivity.
These discussions have added nuance to my understanding of the nature of evidence. They’ve also alerted me to the fact this nature changes in certain circumstances and through certain worldviews. In other words, what I intend to say is sometimes heard as something else entirely.
This type of miscommunication can be bad enough when dealing with someone who isn’t using the terms in a scientific way, but it’s particularly frustrating when it happens when talking to teachers and communicators of science.
I’d like to take a shot, then, at defining some key terms in the name of clarity.
People might think a scientific law is about the highest sort of truth you can get; they might think something "proven" scientifically has the status of certainty, which is to say it's always true: nature will always behave in accordance with this law.
While in some way accurate, that interpretation is fundamentally flawed. It conflates (or worse, ignores) important concepts and creates a brittleness in the public conception of science that erodes confidence and trust.
First and foremost, laws in science are seldom proven: they are demonstrated, and they are demonstrated because they are demonstrable, which is to say they are descriptive.
Newton’s inverse square law of gravity outlines how the force of gravity between two massive objects varies with distance. Basically, if you double the distance, the force is reduced by a factor of four. Triple it and the force reduces by a factor of nine, and so on.
The same relationship with distance holds for the intensity of omnidirectional radiation, such as light from a point source. What's significant about a law like this is that while it describes the effect, it does not really explain it.
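The arithmetic of the inverse square law can be sketched in a few lines of Python. This is purely illustrative; the function name and the unit-distance strength are mine, and no particular physical constants are assumed:

```python
def inverse_square(strength_at_unit_distance, r):
    """Strength of an inverse-square quantity (gravitational force,
    light intensity) at distance r, given its strength at distance 1."""
    return strength_at_unit_distance / r ** 2

base = inverse_square(1.0, 1.0)

# Doubling the distance reduces the strength to a quarter
print(inverse_square(1.0, 2.0) / base)  # 0.25

# Tripling the distance reduces it to a ninth
print(inverse_square(1.0, 3.0) / base)  # roughly 0.111
```

Note that the function only describes how strength varies with distance; like Newton's law itself, it says nothing about why the relationship holds.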
Newton himself was famously silent on the question of what gravity was and why it would behave this way. To get an explanation of what gravity is, we needed Einstein. And we needed a theory.
General relativity explains the phenomena associated with gravity by postulating that the presence of mass warps, and hence affects movement through, space-time. This theory – or model – of how the universe works, when “run” through the process of mathematical calculation, produces outcomes that correspond to possible states of the world.
These states are checked against reality to test their veracity. The more times the model produces results that agree with observation, the more confidence we have in the model as an accurate representation of how the world works.
The example above shows nicely the difference between a model and a law: the former is a representation of reality, the latter a descriptive account.
It’s worth noting, of course, that “model” can be both a noun and a verb (and sometimes both at once). We can build a model of the solar system, or we can model weather on a computer. Either way, the terminology holds.
To put this another way, a law describes what happens and to what degree, but if we want to find out why it happens we need a theory – a model that represents reality.
A model can give us a more satisfying insight into the possible mechanisms of the universe – it’s an analogy (for rarely is it completely accurate) that betters our comprehension, as analogies are designed to do.
Both theories and laws have predictive power and are subject to being refined, falsified or confirmed, although in the case of laws, refinement is best done in the light of theoretical change (that is, by explaining the law through the theory or model).
Observing the law
We generalise to laws through observation, and support our generalisations with theoretical understanding. But it can be very tricky to determine that something is true in all cases (we can’t test the potential law in all possible places and at all possible times) or just happens to be true every time we check.
When stating something is universally true (even if parameters need to be defined), we must be very careful to determine whether we mean it’s true because it must be that way, or simply because it happens to be that way.
It may be a necessary condition of the universe that all like charges repel each other. But what about a generalisation such as “all posters are held up by drawing pins”?
The posters in my room and all those in my building are held up by drawing pins, but this hardly seems a necessary condition of posters: surely something else would do the job just as well. These are extreme examples, but many “laws” of nature may not be necessary laws – which seems to suggest they really shouldn’t be called laws in the first place.
Calling something a law certainly does not mean it is unchallengeable.
Laws do not develop from theories. To put it another way, theories do not become laws. I have thrown out science textbooks from several schools because they outline an unrealistic progression: from hypothesis to theory to law.
These three concepts are different creatures, and one does not morph into the other. One of the most significant misunderstandings in science exists because of this type of thinking.
In as much as science can make us sure of anything, we are sure evolution occurred in the manner generally accepted by evolutionary biologists; it is a fact about the world.
Darwin, as is generally known, developed a theory – a model – to explain evolution. This model is natural selection. It’s unfortunate that the lovely phrase “the theory of evolution by natural selection” has been truncated into the misleading, inaccurate, confusing and very wrong phrase “the theory of evolution” – including on this very website.
The “theory of evolution” is wrong for two reasons (when scientists use it they know of what they speak, but this is not my point). First, evolution is not the model – natural selection is. So we immediately conflate two very different ideas – that of evolution and the model of natural selection.
When added to the mistaken belief that theories become laws, adherents of young Earth creationism (for there are really no other serious opposers of evolution) can portray evolution as a tentative conclusion, akin to a vague, hand-waving notion. This is the kind of thinking that culminated in Ronald Reagan's famous dismissal of evolution as "only a theory".
The consequences for both the teaching of evolution and the credibility of science are enormous. And yet I have never seen a defender of science articulate this misunderstanding.
Just as a theory is a model, and a law is a generalisation, a hypothesis is a statement about the world that could be true or false.
Moreover, the statement must be testable, which means it must be falsifiable, or inherently disprovable.
Phrased like this, hypotheses seem to have more in common with laws than they do with theories, considering that Newton could easily have hypothesised the inverse square law of gravity without going through any theoretical modelling of gravity.
But, of course, the creative act of devising a model of the universe, or a part of it, is to hypothesise that the world is really like that, and the hypothesis becomes that the model is an accurate representation.
Hypotheses, then, are ways of talking about building theories and laws, but not in the common way of theories being intermediate between hypotheses and laws.
While hypotheses can stand alone or inform both theories and laws, the interplay in practice between various hypotheses, theories and laws is web-like and complex and exists at nearly every level of operation from the experiment of the day to the paradigm of the century.
The idea of a hypothesis-to-theory-to-law progression is seriously flawed, and this needs to be articulated as the root cause of much misunderstanding.
“Prove” comes from the Latin probare, meaning “to test”. It’s also the origin of the word “probe”.
An older term – “proving ground” – for a testing area or trial shows we have not entirely lost that interpretation. But in the everyday use of the term, “proof” has come to indicate certitude.
What remains poorly understood is that “proof”, as such, is a deductive creature that really does not sit comfortably in science (at least not in an affirming sense). In mathematics a proof conveys that, within the bounds of the axioms in use, there is a truth to be discovered or a certainty to be expressed.
For its theoretical claims, and indeed for its laws, inductive science can only boast confirming instances.
Einstein often spoke of the exquisite sensitivity of his theories to falsification, saying that it would not matter how many times experiment agreed with him; his theories had only to disagree with experiment once to be proven wrong (granting, of course, the validity of the experiment, as the recent faster-than-light neutrino drama showed).
The simple fact that we can never test his theories under all conditions in all places at all times creates conclusions that are tentative, even though the level of confidence may be very high.
We may “prove” facts about the world, such as Earth being more or less spherical, but this does not extend to our laws and theories to the extent we might like to think.
So proof works best in science to falsify, not to affirm, though this is the opposite of common belief.
If we are clear on the above, we have a better appreciation of what makes an idea scientific, as opposed to pseudo-scientific.
We know that the best scientific hypotheses and theories are those with great explanatory power and high sensitivity to falsification, and that these are often the results of highly creative thinking, as are the experimental attempts to confirm or falsify them.
This is a very beautiful idea, but one that can’t be appreciated unless you know science does not spend its time stamping into place dry facts about the world, but grows as a vigorous and exhilarating human enterprise showcasing the best of collective human achievement.
Clarifying these ideas will, I hold, go a very long way towards increasing people's understanding of science and their confidence in scientific findings.
Calling something a “scientific truth” is a double-edged sword. On the one hand it carries a kind of epistemic (how we know) credibility, a quality assurance that a truth has been arrived at in an understandable and verifiable way.
On the other, it seems to suggest science provides one of many possible categories of truth, all of which must be equal or, at least, non-comparable. Simply put, if there’s a “scientific truth” there must be other truths out there. Right?
Let me answer this by reference to the fingernail-on-the-chalkboard phrase I’ve heard a little too often:
“But whose truth?”
If somebody uses this phrase in the context of scientific knowledge, it shows me they’ve conflated several incompatible uses of “truth” with little understanding of any of them.
As is almost always the case, clarity must come before anything else. So here is the way I see truth, shot from the hip.
While philosophers talk about the coherence or correspondence theories of truth, the rest of us have to deal with another, more immediate, division: subjective, deductive (logical) and inductive (in this case, scientific) truth.
This has to do with how we use the word and is a very practical consideration. Just about every problem a scientist or science communicator comes across in the public understanding of “truth” is a function of mixing up these three things.
Subjective truth is what is true about your experience of the world. How you feel when you see the colour red, what ice-cream tastes like to you, what it’s like being with your family, all these are your experiences and yours alone.
In 1974 the philosopher Thomas Nagel published a now-famous paper about what it might be like to be a bat. He points out that even the best chiropterologist in the world, knowledgeable about the mating, eating, breeding, feeding and physiology of bats, has no more idea of what it is like to be a bat than you or me.
Similarly, I have no idea what a banana tastes like to you, because I am not you and cannot ever be in your head to feel what you feel (there are arguments regarding common physiology and hence psychology that could suggest similarities in subjective experiences, but these are presently beyond verification).
What’s more, if you tell me your favourite colour is orange, there are absolutely no grounds on which I can argue against this – even if I felt inclined. Why would I want to argue, and what would I hope to gain? What you experience is true for you, end of story.
Deductive truth, on the other hand, is that contained within and defined by deductive logic. Here’s an example:
Premise 1: All Gronks are green.
Premise 2: Fred is a Gronk.
Conclusion: Fred is green.
Even if we have no idea what a Gronk is, the conclusion of this argument is true if the premises are true. If you think this isn’t the case, you’re wrong. It’s not a matter of opinion or personal taste.
If you want to argue the case, you have to step out of the logical framework in which deductive logic operates, and this invalidates rational discussion. We might be better placed using the language of deduction and just call it “valid”, but “true” will do for now.
In my classes on deductive logic we talk about truth tables, truth trees, and use “true” and “false” in every second sentence and no one bats (cough) an eyelid, because we know what we mean when we use the word.
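For readers who haven't met one, a truth table simply lists every possible assignment of true and false to the variables in a formula, along with the formula's resulting truth value. A minimal sketch in Python, with function names of my own choosing for illustration:

```python
from itertools import product

def truth_table(variables, formula):
    """Return one row per True/False assignment to the variables,
    paired with the truth value of the formula under that assignment."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        rows.append((assignment, formula(**assignment)))
    return rows

# The material conditional "if p then q", defined as (not p) or q.
# Only the row where p is True and q is False makes it false.
for assignment, value in truth_table(["p", "q"], lambda p, q: (not p) or q):
    print(assignment, value)
```

The point of the exercise is the one made above: within this framework, "true" and "false" are fully determined by the rules, with no room for opinion.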
Using “true” in science, however, is problematic for much the same reason that using “prove” is problematic (and I have written about that on The Conversation before). This is a function of the nature of inductive reasoning.
Induction works mostly through analogy and generalisation. Unlike deduction, it allows us to draw justified conclusions that go beyond the information contained in the premise. It is induction’s reliance on empirical observation that separates science from mathematics.
In observing one phenomenon occurring in conjunction with another – an electric current and an induced magnetic field, for instance – I generalise that this will always be so. I might even create a model, an analogy of the workings of the real world, to explain it – in this case that of particles and fields.
This then allows me to predict what future events might occur or to draw implications and create technologies, such as developing an electric motor.
And so I inductively scaffold my knowledge, using information I rely upon as a resource for further enquiry. At no stage do I arrive at deductive certainty, but I do enjoy greater degrees of confidence.
I might even speak about things being “true”, but, apart from simple observational statements about the world, I use the term as a manner of speech only to indicate my high level of confidence.
Now, there are some philosophical hairs to split here, but my point is not to define exactly what truth is, but rather to say there are differences in how the word can be used, and that ignoring or conflating these uses leads to a misunderstanding of what science is and how it works.
For instance, the woman who said to me that it was true for her that ghosts exist was conflating a subjective truth with a truth about the external world.
I asked her if what she really meant was “it is true that I believe ghosts exist”. At first she was resistant, but when I asked her if it could be true for her that gravity is repulsive, she was obliging enough to accept my suggestion.
Such is the nature of many “it’s true for me” statements, in which the epistemic validity of a subjective experience is misleadingly extended to facts about the world.
Put simply, it smears the meaning of truth so much that the distinctions I have outlined above disappear, as if “truth” only means one thing.
This is generally done with the intent of presenting the unassailable validity of subjective experience as a shield for dubious claims about the external world, such as claiming that homeopathy works "for me". Attacking the truth claim is then, if you accept this deceit, equivalent to questioning the genuine subjective experience.
Checkmate … unless you see how the rules have been changed.
It has been a long and painful struggle for science to rise from this cognitive quagmire, separating out subjective experience from inductive methodology. Any attempt to reunite them in the public understanding of science needs immediate attention.
Operating as it should, science doesn't spend its time making truth claims about the world, nor does it question the validity of subjective experience; it simply says such experience is not enough to ground objective claims that anyone else should believe.
Subjective truths and scientific truths are different creatures, and while they sometimes play nicely together, their offspring are not always fertile.
So next time you are talking about truth in a deductive or scientifically inductive way and someone says "but whose truth?", tell them a hard truth: it's not all about them.