Tag Archives: Rod Lamberts

Book review: The Death of Expertise

The Conversation

A new book expresses concern that the ‘average American’ has base knowledge so low that it is now plummeting to ‘aggressively wrong’. Image: Shutterstock

Rod Lamberts, Australian National University

I have to start this review with a confession: I wanted to like this book from the moment I read the title. And I did. Tom Nichols’ The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters is a motivating – if at times slightly depressing – read.

In the author’s words, his goal is to examine:

… the relationship between experts and citizens in a democracy, why that relationship is collapsing, and what all of us, citizens and experts, might do about it.

This resonates strongly with what I see playing out around the world almost every day – from the appalling state of energy politics in Australia, to the frankly bizarre condition of public debate on just about anything in the US and the UK.

Nichols’ focus is on the US, but the parallels with similar nations are myriad. He expresses a deep concern that “the average American” has base knowledge so low it has crashed through the floor of “uninformed”, passed “misinformed” on the way down, and is now plummeting to “aggressively wrong”. And this is playing out against a backdrop in which people don’t just believe “dumb things”, but actively resist any new information that might threaten these beliefs.

He doesn’t claim this situation is new, per se – just that it seems to be accelerating, and proliferating, at eye-watering speed.

Intimately entwined with this, Nichols mourns the decay of our ability to have constructive, positive public debate. He reminds us that we are increasingly in a world where disagreement is seen as a personal insult. A world where argument means conflict rather than debate, and ad hominem is the rule rather than the exception.

Again, this is not necessarily a new issue – but it is certainly a growing one.

Book cover image: Oxford University Press

The book covers a broad and interconnected range of topics related to its key subject matter. It considers the contrast between experts and citizens, and highlights how the antagonism between these roles has been both caused and exacerbated by the exhausting and often insult-laden nature of what passes for public conversations.

Nichols also reflects on changes in the mediating influence of journalism on the relationship between experts and “citizens”. He reminds us of the ubiquity of Google and its role in reinforcing the conflation of information, knowledge and experience.

His chapter on the contribution of higher education to the ailing relationship between experts and citizens particularly appeals to me as an academic. Two of his points here exemplify academia’s complicity in diminishing this relationship.

Nichols outlines his concern about the movement to treat students as clients, and the consequent over-reliance on the efficacy and relevance of student assessment of their professors. While not against “limited assessment”, he believes:

Evaluating teachers creates a habit of mind in which the layperson becomes accustomed to judging the expert, despite being in an obvious position of having inferior knowledge of the subject material.

Nichols also asserts this student-as-customer approach to universities is accompanied by an implicit, and also explicit, nurturing of the idea that:

Emotion is an unassailable defence against expertise, a moat of anger and resentment in which reason and knowledge quickly drown. And when students learn that emotion trumps everything else, it is a lesson they will take with them for the rest of their lives.

The pervasive attacks on experts as “elitists” in US public discourse receive little sympathy in this book (nor should they). Nichols sees these assaults as rooted not so much in ignorance as in:

… unfounded arrogance, the outrage of an increasingly narcissistic culture that cannot endure even the slightest hint of inequality of any kind.

Linked to this, he sees a confusion in the minds of many between basic notions of democracy in general, and the relationship between expertise and democracy in particular.

Democracy is, Nichols reminds us, “a condition of political equality”: one person, one vote, all of us equal in the eyes of the law. But in the US at least, he feels people:

… now think of democracy as a state of actual equality, in which every opinion is as good as any other on almost any subject under the sun. Feelings are more important than facts: if people think vaccines are harmful … then it is “undemocratic” and “elitist” to contradict them.

The danger, as he puts it, is that a temptation exists in democratic societies to become caught up in “resentful insistence on equality”, which can turn into “oppressive ignorance” if left unchecked. I find it hard to argue with him.

Nichols acknowledges that his arguments expose him to the very real danger of looking like yet another pontificating academic, bemoaning the dumbing down of society. It’s a practice common among many in academia, and one that is often code for our real complaint: that people won’t just respect our authority.

There are certainly places where a superficial reader would be tempted to accuse him of this. But to them I suggest taking the time to consider more closely the contexts in which he presents his arguments.

This book does not simply point the finger at “society” or “citizens”: there is plenty of critique of, and advice for, experts. Among many suggestions, Nichols offers four explicit recommendations.

  • First, experts should strive to be more humble.
  • Second, be ecumenical – and by this Nichols means experts should vary their information sources, especially where politics is concerned, and not fall into the same echo chamber that many others inhabit.
  • Third, be less cynical. Here he counsels against assuming people are intentionally lying, misleading or wilfully trying to cause harm with assertions and claims that clearly go against solid evidence.
  • Finally, he cautions us all to be more discriminating – to check sources scrupulously for veracity and for political motivations.

In essence, this last point admonishes experts to mindfully counteract the potent lure of confirmation bias that plagues us all.

It would be very easy for critics to cherry-pick elements of this book and present them out of context, to see Nichols as motivated by a desire to feather his own nest and reinforce his professional standing: in short, to accuse him of being an elitist. Sadly, this would be a prime example of exactly what he is decrying.

To these people, I say: read the whole book first. If it makes you uncomfortable, or even angry, consider why.

Have a conversation about it and formulate a coherent argument to refute the positions with which you disagree. Try to resist the urge to dismiss it out of hand or attack the author himself.

I fear, though, that as is common with a treatise like this, the people who might most benefit are the least likely to read it. And if they do, they will take umbrage at the minutiae, and then dismiss or attack it.

Unfortunately, we haven’t worked out how to change that. But to those so inclined, reading this book should have you nodding along, comforted at least that you are not alone in your concern that the role of expertise is in peril.

Rod Lamberts, Deputy Director, Australian National Centre for Public Awareness of Science, Australian National University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Why don’t people get it? Seven ways that communicating risk can fail

The Conversation

Rod Lamberts, Australian National University

Many public conversations we have about science-related issues involve communicating risks: describing them, comparing them and trying to inspire action to avoid or mitigate them.

Just think about the ongoing stream of news and commentary on health, alternative energy, food security and climate change.

Good risk communication points out where we are doing hazardous things. It helps us better navigate crises. It also allows us to pre-empt and avoid danger and destruction.

But poor risk communication does the opposite. It creates confusion, helplessness and, worst of all, pushes us to actively work against each other even when it’s against our best interests to do so.

So what’s happening when risk communications go wrong?

People are just irrational and illogical

If you’re science-informed – or at least science-positive – you might confuse being rational with using objective, science-based evidence.

To think rationally is to base your thinking in reason or logic. But a conclusion that’s logical doesn’t have to be true. You can link flawed, false or unsubstantiated premises to come up with a logical-but-scientifically-unsubstantiated answer.

For example, in Australia a few summers back there was an increase in the number of news reports of sharks attacking humans. This led to some dramatic shark baiting and culling. The logic behind this reaction was something like:

  1. there have been more reports of shark attacks this year than before
  2. more reports means more shark attacks are happening
  3. more shark attacks happening means the risk of shark attack has increased
  4. we need to take new measures to keep sharks away from places humans swim to protect us from this increased risk.

You can understand the reasoning here, but it’s likely to have been based on flawed premises, such as not realising that one shark attack was not systematically linked to another (for example, some happened on different sides of the country). People here saw connections between events that probability suggests were actually random.

Prove it’s safe or we’ll say no

If people are already nervous about – or actively against – a risky proposition, one reaction is to demand proof of safety. But safety is a relative term and risk calculation doesn’t work that way.

To demand proof of safety is to demand certainty, and such a demand is scientifically impossible. Uncertainty is at the heart of the scientific method. Or rather, qualifying and communicating degrees of uncertainty is.

In reality, we live in a world where we have to agree on what constitutes acceptable risk, because we simply can’t provide proof of safety. To use an example I’ve noted before, we can’t prove orange juice is 100% safe, yet it remains defiantly on our supermarket shelves.

Don’t worry, this formula will calm your fears

You may have seen this basic risk calculation formula:

Risk (or hazard) = (the probability of something happening) × (the consequences of it happening)

This works brilliantly for insurance assessors and lab managers, but it quickly falls over when you use it to explain risk in the big bad world.

Everyday reactions to how bad a risk seems are more often ruled by the formula (hazard) × (outrage), where “outrage” is fuelled by non-technical, socially-driven matters.

Basically, the more outraged (horrified, frightened) we are by the idea of something happening, the more likely we are to consider it unacceptable, regardless of how statistically unlikely it might be.
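
To make the contrast concrete, here is a minimal sketch in Python using invented numbers chosen purely for illustration (nothing below comes from the article): the textbook calculation can hand two very different-feeling risks the same score, which is exactly the gap the outrage formula tries to capture.

```python
# Toy illustration only: the probabilities and severity ratings are assumptions, not real statistics.

def technical_risk(probability, consequence):
    # Classic assessor's formula: risk = probability x consequence.
    return probability * consequence

# Consequence rated on an arbitrary 0-100 severity scale (hypothetical values).
shark_attack  = technical_risk(probability=0.000001, consequence=100)   # rare but horrifying
minor_sunburn = technical_risk(probability=0.1,      consequence=0.001) # common but trivial

print(f"shark attack : {shark_attack:.6f}")   # 0.000100
print(f"minor sunburn: {minor_sunburn:.6f}")  # 0.000100

# Both risks get an identical technical score, yet one provokes vastly more
# outrage than the other, so the everyday, felt reaction tracks
# (hazard) x (outrage) rather than (probability) x (consequence).
```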

The shark attack example serves here, too. The consequences of being attacked by a shark are outrageous, and this horror colours our ability to keep the technical likelihood of an attack in perspective. The emotional reality of our feelings of outrage eclipses technical, detached risk calculations.

Significant means useful

Everyone who’s worked with statistics knows that statistical significance can be a confusing idea. For example, one study looked at potential links between taking aspirin every day and the likelihood of having a heart attack.

Among the 22,000 people in the study, those who took daily aspirin were less likely to have a heart attack than those who didn’t, and the result was statistically significant.

Sounds like something worth paying attention to, until you discover that the difference in the likelihood of having a heart attack between those who were taking aspirin every day and those who weren’t was less than 1%.

Significance ain’t always significant.
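
To see how a “significant” result can sit on top of a tiny absolute difference, here is a small sketch using a standard two-proportion z-test with invented counts at roughly the study’s scale (about 11,000 people per group). The counts are assumptions for illustration, not the trial’s actual data.

```python
import math

def two_proportion_z_test(events_a, n_a, events_b, n_b):
    # Standard two-sided z-test for a difference between two proportions.
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal approximation
    return p_a, p_b, z, p_value

# Hypothetical counts: 200 heart attacks among 11,000 people not taking aspirin,
# 110 among 11,000 taking it daily. Not the real study data.
p_control, p_aspirin, z, p = two_proportion_z_test(200, 11000, 110, 11000)

print(f"rate without aspirin: {p_control:.2%}")              # ~1.82%
print(f"rate with aspirin   : {p_aspirin:.2%}")              # ~1.00%
print(f"absolute difference : {p_control - p_aspirin:.2%}")  # well under 1%
print(f"z = {z:.2f}, two-sided p ~ {p:.0e}")                 # highly 'significant'
```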

Surely everyone understands percentages

It’s easy to appreciate that complex statistics and formulae aren’t the best tools for communicating risk beyond science-literate experts. But perhaps simple numbers – such as percentages – could help remove some of the confusion when talking about risk?

We see percentages everywhere – from store discounts, to weather forecasts telling you how likely it is to rain. But percentages can easily confuse, or at least slow people down.

Take this simple investment decision example. If you were offered a choice between the following three opportunities, which would you take?

  1. have your bank balance raised by 50% and then cut by 50%
  2. have your bank balance cut by 50% and then raised by 50%
  3. have your bank balance remain where it is

You probably got this right. But perhaps you didn’t. Or perhaps it took you longer than you’d expected to think it through. Don’t feel bad. (The answer is at the end of this article.)

I have used this in the classroom, and even science-literate university students can get it wrong, especially if they are asked to decide quickly.

Now imagine if these basic percentages were all you had to make a real, life-or-death decision (while under duress).

Just a few simple numbers could be helpful, couldn’t they?

Well actually, not always. Research into a phenomenon known as anchoring and adjustment shows that the mere presence of numbers can affect how likely or common we estimate something might be.

In one such study, people were asked one of the following two questions:

  1. how many headaches do you have a month: 0, 1, 2?
  2. how many headaches do you have a month: 5, 10, 15?

Estimates were higher for responses to the second question, simply because the numbers used in the question to prompt their estimates were higher.

At least the experts are evidence-based and rational

Well, not necessarily. It turns out experts can be just as prone to the influences of emotion and the nuances of language as we mere mortals.

In a classic study from 1982, participants were asked to imagine they had lung cancer and were told they would be given a choice of two therapies: radiation or surgery.

They were then informed either (a) that 32% of patients were dead one year after radiation, or (b) that 68% of patients were alive one year after radiation. After this they were asked to hypothetically choose a treatment option.

About 44% of the people who were told the survival statistic chose radiation, compared to only 18% of those who were told the death statistic, even though the percentages reflected the same story about surviving radiation treatment.

What’s most intriguing here is that these kinds of results were similar even when research participants were doctors.

So what can we do?

By now, science-prioritising, reason-loving, evidence-revering readers might be feeling dazed, even a little afraid.

If we humans, who rely on emotional reactions to assess risks, can be confused even by simple numbers, and are easily influenced by oddities of language, what hope is there for making serious progress when trying to talk about huge risky issues such as climate change?

First, don’t knock emotion-driven, instinct-based risk responses: they’re useful. If you’re surfing and you notice a large shadow lurking under your board, it might be better to assume it’s a shark and act accordingly.

Yes it was probably your board’s shadow, and yes you’ll feel stupid for screaming and bolting for land. But better to assume it was a shark and be wrong, than assume it was your shadow and be wrong.

But emotion-driven reactions to large, long-term risks are less useful. When assessing these risks, we should resist our gut reactions and try not to be immediately driven by how a risk feels.

We should step back and take a moment to assess our own responses, give ourselves time to respond in a way that incorporates where the evidence leads us. It’s easy to forget that it’s not just our audiences – be they friends or family, colleagues or clients – who are geared to respond to risks like a human: it’s us as well.

With a bit of breathing space, we can try and see how the tricks and traps of risk perception and communication might be influencing our own judgement.

Perhaps you’ve logically linked flawed premises, or have been overly influenced by a specific word or turn of phrase. It could be your statistical brain has been overwhelmed by outrage, or you tried to process some numbers a little too quickly.

If nothing else, at least be wary of shouting “Everyone’s gotta love apples!” if you’re trying to communicate with a room full of orange enthusiasts. Talking at cross-purposes or simply slamming opposing perspectives on a risk is probably the best way to destroy any risk communication effort – well before these other quirks of being human even get a chance to mess it up.


Answer: Assume you start with $100. Options 1 and 2 leave you with $75, option 3 leaves you with your original $100. Note that no option puts you in a better position.
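
For anyone who wants to verify that arithmetic, here is a quick check, assuming the same $100 starting balance used in the answer:

```python
start = 100.0  # assumed starting balance, as in the answer above

option_1 = start * 1.5 * 0.5  # raised by 50%, then cut by 50%
option_2 = start * 0.5 * 1.5  # cut by 50%, then raised by 50%
option_3 = start              # left untouched

print(option_1, option_2, option_3)  # 75.0 75.0 100.0
# The 50% cut and the 50% rise apply to different balances, so options 1 and 2
# both end below the starting point.
```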

Rod Lamberts, Deputy Director, Australian National Centre for Public Awareness of Science, Australian National University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Should scientists engage with pseudo-science or anti-science?

The Conversation

Rod Lamberts, Australian National University and Will J Grant, Australian National University

The ABC’s flagship science journalism TV programme, Catalyst, has riled the scientific community once again. And, in a similar vein to Catalyst’s controversial 2013 report on the link between statins, cholesterol and heart disease, it has now turned its quasi-scientific attention to a supposed new peril.

Its “Wi-Fried?” segment last week raised concerns about the ever-increasing “electronic air pollution” that surrounds us in our daily lives, exploiting a number of age-old, fear-inspiring tropes.

There are already plenty of robust critiques of the arguments and evidence, so exploring where they got the science wrong is not our goal.

Instead, we’re interested in using the segment as inspiration to revisit an ongoing question about scientists’ engagement with the public: how should the scientific community respond to issues like this?

Should scientists dive in and engage head-on, appearing face-to-face with those they believe do science a disservice? Should they shun such engagement and redress bad science after the fact in other forums? Or should they disengage entirely and let the story run its course?

There are many examples of what scientists could do, but to keep it simple we focus here just on the responses to “Wi-Fried” by two eminent professors, Simon Chapman and Bernard Stewart, both of whom declined to be a part of the ABC segment, and use this case to consider what scientists should do.

Just say no

In an interview about their decision to not participate, Chapman and Stewart independently expressed concerns about the evidence, tone and balance in the “Wi-Fried” segment. According to Chapman it “contained many ‘simply wrong’ claims that would make viewers unnecessarily afraid”.

Stewart labelled the episode “scientifically bankrupt” and “without scientific merit”. He added:

I think the tone of the reporting was wrong, I think that the reporter did not fairly draw on both sides, and I use the word “sides” here reluctantly.

Indeed, in situations like this, many suggest that by appearing in the media alongside people who represent fringe thinkers and bad science, respected experts lend them unwarranted credibility and legitimacy.

Continuing with this logic, association with such a topic would mean implicitly endorsing poor science and bad reasoning, and contribute to an un-evidenced escalation of public fears.

But is it really that straightforward?

The concerns Chapman and Stewart expressed about the show could equally be used to argue that experts in their position should have agreed to be interviewed, if only to present a scientifically sound position to counter questionable claims.

In this line, you could easily argue it’s better for experts to appear whenever and wherever spurious claims are raised, the better to immediately refute and dismiss them.

On the other hand, if scientific experts refuse to engage with “scientifically bankrupt” arguments, this could send a more potent message: that the fringe claims are irrelevant, not even worth wasting the time to refute. So this would mean they shouldn’t engage with this kind of popular science story.

On the third hand, their refusal to engage could be re-framed to characterise the experts as remote, arrogant or even afraid, casting doubt on the veracity of the scientific position. So to avoid this impression, experts should engage.

But wait, there’s more.

Participation in these kinds of popular science shows could also tarnish the reputation of the expert. But not appearing means missing the opportunity to thwart the potential harm caused by fringe, false or non-scientific claims.

And what about an expert’s obligation to defend their science, to set the record straight, and to help ensure people are not misled by poor evidence and shonky reasoning? Is this best done by engaging directly with dubious media offerings like “Wi-Fried”, or should relevant experts find other venues?

Should scientists engage anti-science?

Well, this depends on what they think they might achieve. And if one thing stands out in all the to-ing and fro-ing over what scientists should do in such cases, it’s this: the majority of proponents both for and against getting involved seem convinced that popular representations of science will change people’s behaviour.

But there is rarely any hard evidence presented in the myriad “scientists should” arguments out there. Sticking with the Catalyst example, there is really only one far-from-convincing study from 2013 suggesting the show has such influence.

If you really want to make a robust, evidence-based decision about what experts should do in these situations, don’t start with the science being discussed. In the case of Catalyst, you’d start with research on the show’s relationship with its audience(s).

  • What kinds of people watch Catalyst?
  • Why do they watch it?
  • To what extent are their attitudes influenced by the show?
  • If their attitudes are actually influenced, how long does this influence last?
  • If this influence does last, does it lead people to change their behaviours accordingly?

Of course, we applaud the motives of people who are driven to set the scientific record straight, especially those who are genuinely concerned about public welfare.

But to simply assume, without solid evidence, that programmes like Catalyst push people into harmful behaviour changes is misguided at best. At worst, it’s actually bad science.

Rod Lamberts, Deputy Director, Australian National Centre for Public Awareness of Science, Australian National University and Will J Grant, Researcher / Lecturer, Australian National Centre for the Public Awareness of Science, Australian National University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.

 
