Tag Archives: Dunning-Kruger effect

Why anti-vaxxers get it so wrong

By Tim Harding

The inability to accurately appraise one’s own knowledge is a cognitive bias known as the Dunning-Kruger effect, first identified in social psychology experiments conducted in 1999. The effect occurs when a person’s lack of knowledge about a particular subject leads them to inaccurately gauge their expertise in that subject: ignorance of one’s own ignorance can lead people who lack knowledge on a topic to think of themselves as more expert than those who are comparatively better informed.

A recent study published in the peer-reviewed journal Social Science & Medicine (and summarised in The Conversation) demonstrated that at least some anti-vaccination views are based on the Dunning-Kruger effect. The study found that 71 per cent of those who strongly endorse misinformation about the link between vaccines and autism feel that they know as much as or more than medical experts about the causes of autism, compared to only 28 per cent of those who most strongly reject that misinformation.

The researchers found that nearly a third (30 per cent) of people who think that they know more than medical experts about the causes of autism strongly support giving parents the latitude not to vaccinate their children. By contrast, only 16 per cent of those who do not claim to know more than medical professionals feel the same way.

The study also found that people who think they know more than medical experts are more likely to trust information about vaccines from non-expert sources, such as celebrities. These individuals are also more likely to support a strong role for non-experts in the process of making policies about vaccines and vaccination.

Whilst these recent research findings may not come as a surprise to seasoned skeptics, we now have empirical evidence to explain why at least some anti-vaccination views are so irrational.


Filed under Logical fallacies, Reblogs

David Dunning on the Dunning-Kruger Effect

The Dunning–Kruger effect is a cognitive bias in which low-ability individuals suffer from illusory superiority, mistakenly assessing their ability as much higher than it really is. Dunning and Kruger attributed this bias to a metacognitive inability of those of low ability to recognize their ineptitude and evaluate their ability accurately. Their research also suggests corollaries: high-ability individuals may underestimate their relative competence and may erroneously assume that tasks which are easy for them are also easy for others. In Dr. Dunning’s own words:

“In 1999, in the Journal of Personality and Social Psychology, my then graduate student Justin Kruger and I published a paper that documented how, in many areas of life, incompetent people do not recognize — scratch that, cannot recognize — just how incompetent they are, a phenomenon that has come to be known as the Dunning-Kruger effect. Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack…”

“What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge.”

“Some of our most stubborn misbeliefs arise … from the very values and philosophies that define who we are as individuals. Each of us possesses certain foundational beliefs — narratives about the self, ideas about the social order—that essentially cannot be violated: To contradict them would call into question our very self-worth. As such, these views demand fealty from other opinions. And any information that we glean from the world is amended, distorted, diminished, or forgotten in order to make sure that these sacrosanct beliefs remain whole and unharmed.”


Filed under Quotations

One Nation’s Malcolm Roberts is in denial about the facts of climate change

The Conversation

John Cook, The University of Queensland

The notion that climate science denial is no longer a part of Australian politics was swept away yesterday by One Nation Senator-Elect Malcolm Roberts.

In his inaugural press conference, Roberts claimed that “[t]here’s not one piece of empirical evidence anywhere, anywhere, showing that humans cause, through CO₂ production, climate change”.

He also promoted conspiracy theories claiming that the CSIRO and the Bureau of Meteorology are corrupt accomplices in a climate conspiracy driven by the United Nations.

His claims conflict with many independent lines of evidence for human-caused global warming. Coincidentally, the University of Queensland is releasing a free online course this month examining the psychology and techniques of climate science denial. The very first video lecture addresses Roberts’ central claim, summarising the empirical evidence that humans are causing climate change.

Consensus of Evidence (from Denial101x course)

Scientists have observed various human fingerprints in recent climate change, documented in many peer-reviewed scientific papers.

Satellites measure less heat escaping to space at the exact wavelengths at which CO₂ absorbs energy. The upper atmosphere is cooling at the same time that the lower atmosphere is warming – a distinct pattern unique to greenhouse warming. Human activity is also changing the very structure of the atmosphere.

Human fingerprints in climate change. Skeptical Science

Not only do these unique fingerprints confirm humanity’s role in recent climate change, they also rule out other potential natural contributors. If the Sun caused global warming, we would expect to see days warming faster than nights, and summers warming faster than winters.

Instead we observe the opposite: nights are warming faster than days, and winters are warming faster than summers, which is a greenhouse pattern predicted by John Tyndall as long ago as 1859.

Similarly, if global warming were caused by internal variability, we would expect to see heat shuffling around the climate system with no net build-up. Instead, scientists observe our climate system accumulating heat at a rate of more than four atomic bombs per second.

Climate patterns confirm human causation and rule out natural causes. Skeptical Science
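
As a rough sanity check on the “four atomic bombs per second” figure, the back-of-the-envelope calculation below reproduces it from commonly cited quantities. This is a sketch, not part of the original article: the planetary energy imbalance (about 0.6 watts per square metre) and the Hiroshima yield (about 6.3 × 10¹³ joules) are assumed values.

```python
import math

# Back-of-the-envelope check of the "four atomic bombs per second" figure.
# The inputs below are assumptions, not values taken from the article:
# published estimates of Earth's energy imbalance are roughly 0.5-1 W/m^2.
EARTH_RADIUS_M = 6.371e6        # mean Earth radius (m)
IMBALANCE_W_PER_M2 = 0.6        # assumed planetary energy imbalance (W/m^2)
HIROSHIMA_YIELD_J = 6.3e13      # ~15 kilotons of TNT, the usual yardstick

surface_area_m2 = 4 * math.pi * EARTH_RADIUS_M ** 2       # ~5.1e14 m^2
heating_rate_w = IMBALANCE_W_PER_M2 * surface_area_m2     # joules per second
bombs_per_second = heating_rate_w / HIROSHIMA_YIELD_J

print(f"Heat accumulating at {heating_rate_w:.2e} W")
print(f"Equivalent to {bombs_per_second:.1f} Hiroshima bombs per second")
# Output: roughly 4.9 bombs per second, consistent with the article's figure
```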

Our scientific understanding grows stronger when many independent lines of evidence all point to a single, consistent conclusion. In the case of climate change, the “consensus of evidence” has led 97% of climate scientists to agree that humans are causing global warming.

The scientific consensus on climate change has also been endorsed by many scientific organisations all over the world, including the national science academies of 80 countries.

National Academies of Science endorsing human-caused global warming. Skeptical Science

Is it a conspiracy?

How does one dismiss a global scientific consensus built on a robust body of empirical evidence?

There are five characteristics of science denial. These common traits are seen when people reject climate science, the benefits of vaccination, or the research linking smoking to cancer.

The techniques of denial are: fake experts; logical fallacies; impossible expectations; cherry picking; and conspiracy theories. These are summarised in the acronym FLICC.

The five characteristics of science denial (from Denial101x course)

Climate science denial and conspiratorial thinking are often found together. A well-known example is that of Donald Trump, who has dismissed climate change by blaming it on a Chinese conspiracy.

Tweet by Donald J. Trump @realDonaldTrump

Several studies have linked climate science denial and conspiratorial thinking. If a person disagrees with a global scientific consensus, they’ll typically believe that the scientists are all engaging in a conspiracy to deceive them.

Malcolm Roberts’ conspiracy theories have been well documented and were once again on offer in yesterday’s speech. He espouses a conspiracy that encompasses the CSIRO, Bureau of Meteorology, international banking families, the United Nations and Al Gore.

Unfortunately, I am not optimistic that the evidence for human-caused global warming will persuade Malcolm Roberts. Psychological research tells us that scientific evidence is largely ineffective on those who dismiss climate science with conspiracy theories.

My own research found that communicating the science of climate change to those who exhibit conspiratorial thinking can even be counterproductive, activating their distrust of scientists and strengthening their denial of the evidence.

Furthermore, conspiratorial thinking is self-sealing. When conspiracy theorists are presented evidence that there is no conspiracy, they often respond by broadening the conspiracy to include that evidence. In other words, they interpret evidence against a conspiracy as evidence for the conspiracy.

Our course on climate science denial will be much more useful to those who are open to scientific evidence and curious about the research into the causes and impacts of climate change and the psychology of climate science denial.

John Cook, Climate Communication Research Fellow, Global Change Institute, The University of Queensland

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Filed under Reblogs

Why we need to listen to the real experts in science

The Conversation

By Michael Clarke, La Trobe University and Susan Lawler, La Trobe University

If we want to use scientific thinking to solve problems, we need people to appreciate evidence and heed expert advice.

But the Australian suspicion of authority extends to experts, and this public cynicism can be manipulated to shift the tone and direction of debates. We have seen this happen in arguments about climate change.

This goes beyond the tall poppy syndrome. Disregard for experts who have spent years studying critical issues is a dangerous default position. The ability of our society to make decisions in the public interest is handicapped when evidence and thoughtfully presented arguments are ignored.

Anyone can claim to be an expert these days. Flickr/Alan Cleaver, CC BY

So why is science not used more effectively to address critical questions? We think there are several contributing factors, including the rise of Google experts and the limited skill set of scientists themselves. We also believe we need non-scientists to help us communicate with and serve the public better.

At a public meeting recently, when a well-informed and feisty elderly participant asked a question that referred to some research, a senior public servant replied: “Oh, everyone has a scientific study to justify their position, there is no end to the studies you could cite, I am sure, to support your point of view.”

This is a cynical stance, one in which there are no absolute truths and everyone’s opinion must be treated as equally valid. In this intellectual framework, the findings of science can easily be dismissed as just one of many conflicting views of reality.

In our view, such a stance is dangerous.

When scientists disagree with one another, as they must to ensure progress in their field, it is easy to argue that it is not possible to distinguish between conflicting hypotheses. But scientists always agree that critical thinking done well eventually leads to better understanding and superior solutions. Not all opinions are equal.

If you are flying in an airplane at 30,000 feet, you will not be content with just any scientific study about whether the wing will stay on the plane. Most people will want to put their trust in the calculations of an expert aeronautical engineer who understands the physics of stresses on the wing.

So why do we not want to trust experts in bushfire management or climate change? Because most people are happier with experts whose conclusions fit their own ideas.

This encourages people to express their opinions, and the internet allows those opinions to reach a wide audience. This makes for interesting times, but not always for effective solutions.

Google experts

The internet is filled with information and ideas. Everyone can quickly find “answers”, and this means that everyone is an “expert”.

But using Google to find the answer to Trivial Pursuit questions is not the same as researching a complex question. Experts do have skills, and one of those is the ability to use high-quality sources, up-to-date theoretical frameworks, and critical thinking grounded in their experience in a particular field. This is why an expert’s answers are going to be more accurate and more nuanced than a novice’s.

For example, people who use Dr Google to diagnose their symptoms before visiting an actual doctor sometimes ask to be tested for diseases they do not have, or waste time seeking a second opinion because they are convinced that their “research” has led them to the correct diagnosis. If it were really that easy, would doctors have to spend all those years in medical school?

There is another problem, known as the Dunning-Kruger effect: “people who lack the knowledge or wisdom to perform well are often unaware of this fact”.

In other words, people who think all answers can be found on Google are likely to be unaware of the effort involved in solving complex problems, or why years of specialist training might help.

This is arguably more dangerous than complete ignorance because, unlike Donald Rumsfeld, they don’t even know what they don’t know.

Easy access to huge volumes of confusing information sits very comfortably in a post-modern world. Unfortunately, the outcome is that most people are reluctant to do the intellectual hard work of sifting through competing hypotheses. So how are we to engage in robust scientific debates in such a public arena?

Science is not enough

It has been said many times that scientists need to communicate their research more broadly. The challenges are well known: peer-reviewed scientific publications are necessary for our careers, and time spent engaging with the public is time away from the field, our computers and our laboratory benches.

Nevertheless, if we hope to influence government policy we cannot assume that the implications of our research will be understood by those who most need to know what we are doing.

Reaching out to busy bureaucrats and politicians is not something that comes naturally to scientists. To turn science into policy we need a diverse team of people with different but complementary skills who share a commitment to the task.

Skills that are not commonly found in scientists may be found in political scientists, lawyers, sociologists, public relations companies, the arts community and the media.

Forming relationships with people who can translate our findings into something that cannot be ignored may be critical to success.

Consider what we are up against: lobby groups with deep pockets have come up with brilliant assaults on the thoughtful management of our environment.

“Cutting Green Tape” or “No fuels, no fire” – these clever bits of spin threaten decades of rigorous research and policy development. This is not a failure of science, but a triumph of imagination. We have been dramatically out-manoeuvred, shown to be amateurs, in the world of presenting competing ideas.

At a recent fire forum we learned that current policy is: “Based on science, but driven by values.” This means that despite the best evidence, the values of our current society will decide when to act. This introduces another definition of truth-seeking, based on who makes the best argument in a political or legal process.

Science is meant to be done dispassionately and objectively, so scientists are not well equipped to participate in debates about values. This is the realm of ethicists, philosophers, artists and theologians.

But if we are passionate about applying the lessons learned from our research, we will need marketers, lobbyists, communication experts, accountants and economists. A multi-disciplinary team is required to convince society to change.

Perhaps the people with these complementary skills will be able to help break down the anti-intellectualism we face, for the benefit of all.


This is based on an address delivered by Professor Michael Clarke at the 2nd Biodiversity Forum held at the Royal Society of Victoria, Melbourne in 2014.

This article was originally published on The Conversation. (Reblogged with permission). Read the original article.


Filed under Reblogs

Are you a poor logician? Logically, you might never know

The Conversation

By Stephan Lewandowsky, University of Bristol and Richard Pancost, University of Bristol

This is the second article in a series, How we make decisions, which explores our decision-making processes. How well do we consider all factors involved in a decision, and what helps and what holds us back?


It is an unfortunate paradox: if you’re bad at something, you probably also lack the skills to assess your own performance. And if you don’t know much about a topic, you’re unlikely to be aware of the scope of your own ignorance.

Type any keyword into a scientific search engine and a staggering number of published articles appears. “Climate change” yields 238,000 hits; “tobacco lung cancer” returns 14,500; and even the largely unloved Arion ater has earned a respectable 245 publications.

Experts are keenly aware of the vastness of the knowledge landscape in their fields. Ask any scholar and they will likely acknowledge how little they know relative to what is knowable – a realisation that may date back to Confucius.

Here is the catch: to know how much more there is to know requires knowledge to begin with. If you start without knowledge, you also do not know what you are missing out on.

This paradox gives rise to a famous result in experimental psychology known as the Dunning-Kruger effect. Named after Justin Kruger and David Dunning, it refers to a study they published in 1999. They showed that the more poorly people actually performed, the more they over-estimated their own performance.

People whose logical ability was in the bottom 12% (so that 88 out of 100 people performed better than they did) judged their own performance to be among the top third of the distribution. Conversely, the outstanding logicians who outperformed 86% of their peers judged themselves to be merely in the top quarter (roughly) of the distribution, thereby underestimating their performance.
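
To make the size of that miscalibration concrete, the minimal sketch below converts the reported figures into percentile gaps. It is an illustration only, not part of the study: the “perceived” values (67th and 75th percentiles) are approximate readings of “top third” and “top quarter”.

```python
# Illustration of the self-assessment gaps described above.
# The perceived percentiles are assumptions: ~67th for "top third"
# and ~75th for "top quarter"; the actual percentiles are from the text.
groups = {
    "bottom performers": {"actual": 12, "perceived": 67},
    "top performers":    {"actual": 86, "perceived": 75},
}

for name, g in groups.items():
    gap = g["perceived"] - g["actual"]
    direction = "over-estimate" if gap > 0 else "under-estimate"
    print(f"{name}: actual {g['actual']}th percentile, "
          f"perceived ~{g['perceived']}th -> {direction} by {abs(gap)} points")
# bottom performers over-estimate by ~55 points; top performers
# under-estimate by ~11 points: the signature Dunning-Kruger asymmetry
```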

John Cleese argues that this effect is responsible not only for Hollywood but also for the actions of some mainstream media.

Ignorance is associated with exaggerated confidence in one’s abilities, whereas experts are unduly tentative about their performance. This basic finding has been replicated numerous times in many different circumstances. There is very little doubt about its status as a fundamental aspect of human behaviour.

Confidence and credibility

Here is the next catch: in the eyes of others, what matters most in judging a person’s credibility is their confidence. Research into the credibility of expert witnesses has identified the expert’s projected confidence as the most important determinant of judged credibility. Nearly half of people’s judgements of credibility can be explained by how confident the expert appears, more than by any other variable.

Does this mean that the poorest-performing — and hence most over-confident — expert is believed more than the top performer whose displayed confidence may be a little more tentative? This rather discomforting possibility cannot be ruled out on the basis of existing data.

But even short of this extreme possibility, the data on confidence and expert credibility give rise to another concern. In contested arenas, such as climate change, the Dunning-Kruger effect and its flow-on consequences can distort public perceptions of the true scientific state of affairs.

To illustrate, there is an overwhelming scientific consensus that greenhouse gas emissions from our economic activities are altering the Earth’s climate. This consensus is expressed in more than 95% of the scientific literature and it is shared by a similar fraction (97-98%) of publishing experts in the area. In the present context, it is relevant that research has found that the “relative climate expertise and scientific prominence” of the few dissenting researchers “are substantially below that of the convinced researchers”.

Guess who, then, would be expected to appear particularly confident when they are invited to expound their views on TV, owing to the media’s failure to recognise (false) balance as (actual) bias? Yes, it’s the contrarian blogger who is paired with a climate expert in “debating” climate science and who thinks that hot brick buildings contribute to global warming.

‘I’m not an expert, but…’

How should actual experts — those who publish in the peer-reviewed literature in their area of expertise — deal with the problems that arise from Dunning-Kruger, the media’s failure to recognise “balance” as bias, and the fact that the public uses projected confidence as a cue for credibility?

Speaker of the US House of Representatives John Boehner admitted earlier this year he wasn’t qualified to comment on climate change.

We suggest two steps based on research findings.

The first focuses on the fact of a pervasive scientific consensus on climate change. As one of us has shown, the public’s perception of that consensus is pivotal in determining their acceptance of the scientific facts.

When people recognise that scientists agree on the climate problem, they too accept the existence of the problem. It is for this reason that Ed Maibach and colleagues, from the Center for Climate Change Communication at George Mason University, have recently called on climate scientists to set the record straight and inform the public that there is a scientific consensus that human-caused climate change is happening.

One might object that “setting the record straight” constitutes advocacy. We do not agree; sharing knowledge is not advocacy and, by extension, neither is sharing the strong consensus behind that knowledge. In the case of climate change, it simply informs the public of a fact that is widely misrepresented in the media.

The public has a right to know that there is a scientific consensus on climate change. How the public uses that knowledge is up to them. The line to advocacy would be crossed only if scientists articulated specific policy recommendations on the basis of that consensus.

The second step to introducing accurate scientific knowledge into public debates and decision-making pertains precisely to the boundary between scientific advice and advocacy. This is a nuanced issue, but some empirical evidence in a natural-resource management context suggests that the public wants scientists to do more than just analyse data and leave policy decisions to others.

Instead, the public wants scientists to work closely with managers and others to integrate scientific results into management decisions. This opinion appears to be equally shared by all stakeholders, from scientists to managers and interest groups.

Advocacy or understanding?

In a recent article, we wrote that “the only unequivocal tool for minimising climate change uncertainty is to decrease our greenhouse gas emissions”. Does this constitute advocacy, as portrayed by some commenters?

It is not. Our statement is analogous to arguing that “the only unequivocal tool for minimising your risk of lung cancer is to quit smoking”. Both statements are true. Both identify a link between a scientific consensus and a personal or political action.

Neither statement, however, advocates any specific response. After all, a smoker may gladly accept the risk of lung cancer if the enjoyment of tobacco outweighs the spectre of premature death — but the smoker must make an informed decision based on the scientific consensus on tobacco.

Likewise, the global public may decide to continue with business as usual, gladly accepting the risk to their children and grandchildren – but they should do so in full knowledge of the risks that arise from the existing scientific consensus on climate change.

Some scientists do advocate for specific policies, especially if their careers have evolved beyond simply conducting science and if they have taken new or additional roles in policy or leadership.

Most of us, however, carefully limit our statements to scientific evidence. In those cases, it is vital that we challenge spurious accusations of advocacy, because such claims serve to marginalise the voices of experts.

Portraying the simple sharing of scientific knowledge with the public as an act of advocacy has the pernicious effect of silencing scientists or removing their expert opinion from public debate. The consequence is that scientific evidence is lost to the public and is lost to the democratic process.

But in one specific way we are advocates. We advocate that our leaders recognise and understand the evidence.

We believe that sober policy decisions on climate change cannot be made when politicians claim that they are not scientists while also erroneously claiming that there is no scientific consensus.

We advocate that our leaders are morally obligated to make and justify their decisions in light of the best available scientific, social and economic understanding.



Stephan Lewandowsky receives funding from the Royal Society, from the World University Network (WUN), and from the ‘Great Western 4’ (GW4) consortium of English universities.

Richard Pancost receives funding from RCUK, the EU and the Leverhulme Trust.

This article was originally published on The Conversation. (Republished with permission). Read the original article.


Filed under Reblogs