Tag Archives: risk assessment

Consent to risk fallacy

A common argument against counter-terrorism measures is that more people are killed each year in road accidents than by terrorists.  While this statistic may be true, the comparison is a false analogy and a red herring as an argument against counter-terrorism. It also ignores the fact that counter-terrorism deters and prevents more terrorist attacks than are eventually carried out.

This fallacious argument can be generalised as follows: ‘More people are killed by (fill-in-the-blank) than by terrorists, so why should we worry about terrorism?’  In recent media debates, the ‘blank’ has included not only road accidents, but also deaths from falling fridges and bathtub drownings.  However, for current purposes let us assume that more people do die from road accidents than would have died from either prevented or successful terrorist attacks.

Most people are aware that whenever we travel in a car there is a small but finite risk of being injured or killed.  Yet this risk does not keep us away from cars.  We intuitively make an informal risk assessment that the level of this risk is acceptable in the circumstances.  In other words, we consent to take the risk of travelling in cars, because we judge that the benefits of car transport outweigh the low risk of an accident.

On the other hand, in western countries we do not consent to take the risk of being murdered by terrorists, unless we deliberately decide to visit a terrorist-prone area like Syria, northern Iraq or the southern Philippines.  A terrorist attack could occur anywhere in the West, so unlike the road accident analogy, there is no real choice a citizen can make to consent or not consent to the risk of a terrorist attack.

The Consent to risk fallacy omits this critical factor of choice from the equation, so the analogy between terrorism and road accidents is false.

 

14 Comments

Filed under Logical fallacies

Why don’t people get it? Seven ways that communicating risk can fail

The Conversation

Rod Lamberts, Australian National University

Many public conversations we have about science-related issues involve communicating risks: describing them, comparing them and trying to inspire action to avoid or mitigate them.

Just think about the ongoing stream of news and commentary on health, alternative energy, food security and climate change.

Good risk communication points out where we are doing hazardous things. It helps us better navigate crises. It also allows us to pre-empt and avoid danger and destruction.

But poor risk communication does the opposite. It creates confusion, helplessness and, worst of all, pushes us to actively work against each other even when it’s against our best interests to do so.

So what’s happening when risk communications go wrong?

People are just irrational and illogical

If you’re science-informed – or at least science-positive – you might confuse being rational with using objective, science-based evidence.

To think rationally is to base your thinking in reason or logic. But a conclusion that’s logical doesn’t have to be true. You can link flawed, false or unsubstantiated premises to come up with a logical-but-scientifically-unsubstantiated answer.

For example, in Australia a few summers back there was an increase in the number of news reports of sharks attacking humans. This led to some dramatic shark baiting and culling. The logic behind this reaction was something like:

  1. there have been more reports of shark attacks this year than before
  2. more reports means more shark attacks are happening
  3. more shark attacks happening means the risk of shark attack has increased
  4. we need to take new measures to keep sharks away from places humans swim to protect us from this increased risk.

You can understand the reasoning here, but it’s likely to have been based on flawed premises, such as not realising that one shark attack was not systematically linked to another (for example, some happened on different sides of the country). People saw connections between events that probability suggests were actually random.
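This “random clustering” effect is easy to demonstrate. Here is a minimal simulation sketch (the attack rate is an assumed illustrative figure, not real shark statistics): events that occur independently at a constant average rate still produce occasional year-on-year spikes that look like trends.

```python
import math
import random

# Assumed illustrative rate: two attacks per year on average (not real data).
RATE = 2.0
YEARS = 30

def poisson(lam):
    """Draw one Poisson-distributed count (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

random.seed(1)
counts = [poisson(RATE) for _ in range(YEARS)]
print(counts)
# A typical run shows years with 4 or 5 "attacks" next to years with none,
# even though the underlying rate never changed. A spike in reports is,
# by itself, weak evidence that the true risk has increased.
```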

Prove it’s safe or we’ll say no

If people are already nervous about – or actively against – a risky proposition, one reaction is to demand proof of safety. But safety is a relative term and risk calculation doesn’t work that way.

To demand proof of safety is to demand certainty, and such a demand is scientifically impossible. Uncertainty is at the heart of the scientific method. Or rather, qualifying and communicating degrees of uncertainty is.

In reality, we live in a world where we have to agree on what constitutes acceptable risk, because we simply can’t provide proof of safety. To use an example I’ve noted before, we can’t prove orange juice is 100% safe, yet it remains defiantly on our supermarket shelves.

Don’t worry, this formula will calm your fears

You may have seen this basic risk calculation formula:

Risk (or hazard) = (the probability of something happening) × (the consequences of it happening)

This works brilliantly for insurance assessors and lab managers, but it quickly falls over when you use it to explain risk in the big bad world.

Everyday reactions to how bad a risk seems are more often ruled by the formula (hazard) × (outrage), where “outrage” is fuelled by non-technical, socially-driven matters.

Basically, the more outraged (horrified, frightened) we are by the idea of something happening, the more likely we are to consider it unacceptable, regardless of how statistically unlikely it might be.
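To see how the two formulas can diverge, here is a toy comparison (the scores are invented purely for illustration, and the 0–10 scales are arbitrary):

```python
# Illustrative only: the scores below are invented to show how the two
# formulas can rank the same hazards differently (arbitrary 0-10 scales).
hazards = {
    # name: (probability, consequences, outrage)
    "car travel":   (8, 6, 1),   # common and serious, but familiar
    "shark attack": (1, 9, 10),  # rare but horrifying
}

for name, (prob, cons, outrage) in hazards.items():
    technical = prob * cons          # risk = probability x consequences
    perceived = technical * outrage  # everyday reaction: hazard x outrage
    print(f"{name:13s} technical={technical:3d}  perceived={perceived:3d}")

# car travel    technical= 48  perceived= 48
# shark attack  technical=  9  perceived= 90
# The technical formula ranks car travel as the larger risk;
# weighting by outrage reverses the order.
```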

The shark attack example serves here, too. The consequences of being attacked by a shark are outrageous, and this horror colours our ability to keep the technical likelihood of an attack in perspective. The emotional reality of our outrage eclipses technical, detached risk calculations.

Significant means useful

Everyone who’s worked with statistics knows that statistical significance can be a confusing idea. For example, one study looked at potential links between taking aspirin every day and the likelihood of having a heart attack.

Among the 22,000 people in the study, those who took daily aspirin were less likely to have a heart attack than those who didn’t, and the result was statistically significant.

Sounds like something worth paying attention to, until you discover that the difference in the likelihood of having a heart attack between those who were taking aspirin every day and those who weren’t was less than 1%.

Significance ain’t always significant.
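This gap between statistical and practical significance is easy to reproduce. The sketch below runs a standard two-proportion z-test on invented counts that merely mimic a large trial of this kind (they are not the actual study data):

```python
import math

# Invented counts that mimic a large two-arm trial (not the real study):
# 11,000 per arm; roughly 1.3% vs 1.9% heart attack rates.
aspirin_events, aspirin_n = 143, 11000
placebo_events, placebo_n = 209, 11000

p1 = aspirin_events / aspirin_n
p2 = placebo_events / placebo_n
pooled = (aspirin_events + placebo_events) / (aspirin_n + placebo_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / aspirin_n + 1 / placebo_n))
z = (p2 - p1) / se

print(f"rates: {p1:.2%} vs {p2:.2%}")
print(f"absolute difference: {p2 - p1:.2%}")  # well under 1%
print(f"z = {z:.2f}")  # |z| > 1.96 => "significant" at the 5% level
# With samples this large, a difference far too small to matter clinically
# still comes out highly statistically significant.
```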

Surely everyone understands percentages

It’s easy to appreciate that complex statistics and formulae aren’t the best tools for communicating risk beyond science-literate experts. But perhaps simple numbers – such as percentages – could help remove some of the confusion when talking about risk?

We see percentages everywhere – from store discounts, to weather forecasts telling you how likely it is to rain. But percentages can easily confuse, or at least slow people down.

Take this simple investment decision example. If you were offered a choice between the following three opportunities, which would you take?

  1. have your bank balance raised by 50% and then cut by 50%
  2. have your bank balance cut by 50% and then raised by 50%
  3. have your bank balance remain where it is

You probably got this right. But perhaps you didn’t. Or perhaps it took you longer than you’d expected to think it through. Don’t feel bad. (The answer is at the end of this article.)

I have used this in the classroom, and even science-literate university students can get it wrong, especially if they are asked to decide quickly.

Now imagine if these basic percentages were all you had to make a real, life-or-death decision (while under duress).

Just a few simple numbers could be helpful, couldn’t they?

Well actually, not always. Research into a phenomenon known as anchoring and adjustment shows that the mere presence of numbers can affect how likely or common we estimate something might be.

In this study, people were asked one of the following two questions:

  1. how many headaches do you have a month: 0, 1, 2?
  2. how many headaches do you have a month: 5, 10, 15?

Estimates were higher for responses to the second question, simply because the numbers used in the question to prompt their estimates were higher.

At least the experts are evidence-based and rational

Well, not necessarily. It turns out experts can be just as prone to the influences of emotion and the nuances of language as we mere mortals.

In a classic study from 1982, participants were asked to imagine they had lung cancer and were told they would be given a choice of two therapies: radiation or surgery.

They were then informed either (a) that 32% of patients were dead one year after radiation, or (b) that 68% of patients were alive one year after radiation. After this they were asked to hypothetically choose a treatment option.

About 44% of the people who were told the survival statistic chose radiation, compared to only 18% of those who were told the death statistic, even though the percentages reflected the same story about surviving radiation treatment.

What’s most intriguing here is that these kinds of results were similar even when research participants were doctors.

So what can we do?

By now, science-prioritising, reason-loving, evidence-revering readers might be feeling dazed, even a little afraid.

If we humans, who rely on emotional reactions to assess risks, can be confused even by simple numbers, and are easily influenced by oddities of language, what hope is there for making serious progress when trying to talk about huge risky issues such as climate change?

First, don’t knock emotion-driven, instinct-based risk responses: they’re useful. If you’re surfing and you notice a large shadow lurking under your board, it might be better to assume it’s a shark and act accordingly.

Yes, it was probably your board’s shadow, and yes, you’ll feel stupid for screaming and bolting for land. But better to assume it was a shark and be wrong than to assume it was just a shadow and be wrong.

But emotion-driven reactions to large, long-term risks are less useful. When assessing these risks, we should resist our gut reactions and try not to be immediately driven by how a risk feels.

We should step back and take a moment to assess our own responses, give ourselves time to respond in a way that incorporates where the evidence leads us. It’s easy to forget that it’s not just our audiences – be they friends or family, colleagues or clients – who are geared to respond to risks like a human: it’s us as well.

With a bit of breathing space, we can try and see how the tricks and traps of risk perception and communication might be influencing our own judgement.

Perhaps you’ve logically linked flawed premises, or have been overly influenced by a specific word or turn of phrase. It could be your statistical brain has been overwhelmed by outrage, or you tried to process some numbers a little too quickly.

If nothing else, at least be wary of shouting “Everyone’s gotta love apples!” if you’re trying to communicate with a room full of orange enthusiasts. Talking at cross-purposes or simply slamming opposing perspectives on a risk is probably the best way to destroy any risk communication effort – well before these other quirks of being human even get a chance to mess it up.


Answer: Assume you start with $100. Options 1 and 2 leave you with $75, option 3 leaves you with your original $100. Note that no option puts you in a better position.
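To make the arithmetic explicit, here is a minimal worked example, starting from the same $100 balance:

```python
balance = 100.0

option1 = balance * 1.5 * 0.5  # raised 50%, then cut 50% -> 75.0
option2 = balance * 0.5 * 1.5  # cut 50%, then raised 50% -> 75.0
option3 = balance              # unchanged -> 100.0

print(option1, option2, option3)
# Multiplication commutes, so options 1 and 2 are identical (75.0),
# and both lose money: +50% multiplies by 1.5, but -50% multiplies
# by 0.5, and 1.5 * 0.5 = 0.75 < 1. Equal percentage moves do not
# cancel because each applies to a different base.
```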

Rod Lamberts, Deputy Director, Australian National Centre for the Public Awareness of Science, Australian National University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.

1 Comment

Filed under Reblogs

The Fallacy of Faulty Risk Assessment

by Tim Harding

(An edited version of this essay was published in The Skeptic magazine, September 2016, Vol 36 No 3)

Australian Skeptics have tackled many false beliefs over the years, often in co-operation with other organisations.  We have had some successes – for instance, belief in homeopathy finally seems to be on the wane.  Nevertheless, false beliefs about vaccination and fluoridation just won’t lie down and die – despite concerted campaigns by medical practitioners, dentists, governments and more recently the media.  Why are these beliefs so immune to evidence and arguments?

There are several possible explanations for the persistence of these false beliefs.  One is denialism – the rejection of established facts in favour of personal opinions.  Closely related are conspiracy theories, which typically allege that facts have been suppressed or fabricated by ‘the powers that be’, in an attempt by denialists to explain the discrepancies between their opinions and the findings of science.  A third possibility is an error of reasoning or fallacy known as Faulty Risk Assessment, which is the topic of this article.

Before going on to discuss vaccination and fluoridation in terms of this fallacy, I would like to talk about risk and risk assessment in general.

What is risk assessment?

Hardly anything we do in life is risk-free. Most people are aware that whenever we travel in a car or even walk along a footpath there is a small but finite risk of being injured or killed.  Yet this risk does not keep us away from roads.  We intuitively make an informal risk assessment that the level of this risk is acceptable in the circumstances.

In more formal terms, ‘risk’ may be defined as the probability or likelihood of something bad happening multiplied by the cost of the consequences if it does happen.  Risk analysis is the process of discovering what risks are associated with a particular hazard, including the mechanisms that cause the hazard, then estimating the likelihood that the hazard will occur and the consequences if it does occur.

Risk assessment is the determination of the acceptability of risk using two dimensions of measurement – the likelihood of an adverse event occurring; and the severity of the consequences if it does occur, as illustrated in the diagram below.  (This two-dimensional risk assessment is a conceptually useful way of ranking risks, even if one or both of the dimensions cannot be measured quantitatively).

[Diagram: risk assessment matrix, plotting likelihood of an adverse event against severity of consequences]
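A minimal sketch of how such a two-dimensional assessment can be expressed in code (the category scales and decision thresholds here are illustrative assumptions, not taken from any standard):

```python
# Qualitative two-dimensional risk assessment, as described above.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4}
CONSEQUENCE = {"minor": 1, "moderate": 2, "major": 3, "catastrophic": 4}

def assess(likelihood, consequence):
    l = LIKELIHOOD[likelihood]
    c = CONSEQUENCE[consequence]
    # Catastrophic consequences dominate: even a rare event can justify
    # preventative action (the desalination plant example below).
    if c == 4 or l * c >= 9:
        return "unacceptable: preventative action justified"
    if l * c >= 6:
        return "tolerable: manage and monitor the risk"
    return "acceptable: take the risk"

print(assess("rare", "catastrophic"))  # unacceptable: preventative action justified
print(assess("likely", "minor"))       # acceptable: take the risk
```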

By way of illustration, the likelihood of something bad happening could be very low, but the consequences could be unacceptably high – enough to justify preventative action.  Conversely, the likelihood of an event could be higher, but the consequences could be low enough to justify ‘taking the risk’.

In assessing the consequences, consideration needs to be given to the size of the population likely to be affected, and the severity of the impact on those affected.  This will provide an indication of the aggregate effect of an adverse event.  For example, ‘high’ consequences might include significant harm to a small group of affected individuals, or moderate harm to a large number of individuals.

A fallacy is committed when a person focuses on the risks of an activity and ignores its benefits, and/or takes into account one dimension of risk assessment while overlooking the other.

To give a practical example of a one-dimensional risk assessment, the desalination plant to augment Melbourne’s water supply has been called a ‘white elephant’ by some people, because it has not been needed since the last drought broke in March 2010.  But this criticism ignores the catastrophic consequences that could have occurred had the drought not broken.  In June 2009, Melbourne’s water storages fell to 25.5% of capacity, the lowest level since the huge Thomson Dam began filling in 1984.  This downward trend could have continued at that time, and could well be repeated during the inevitable next drought.


Melbourne’s desalination plant at Wonthaggi

No responsible government could afford to ‘take the risk’ of a major city of more than four million people running out of water.  People in temperate climates can survive without electricity or gas, but are likely to die of thirst in less than a week without water, not to mention the hygiene crisis that would occur without washing or toilet flushing.  The failure to safeguard the water supply of a major city is one of the most serious derelictions of government responsibility imaginable.

Turning now to the anti-vaccination and anti-fluoridation movements: both commit the fallacy of Faulty Risk Assessment.  They focus on the very tiny likelihood of adverse side effects without considering the major benefits to public health from vaccination and the fluoridation of public water supplies, and the potentially severe consequences of not vaccinating or fluoridating.

Vaccination risks

The benefits of vaccination far outweigh its risks for all of the diseases for which vaccines are available.  This includes influenza, pertussis (whooping cough), measles and tetanus – not to mention the terrible diseases that vaccination has eradicated from, or made rare in, Australia, such as smallpox, polio, diphtheria and tuberculosis.

As fellow skeptic Dr. Rachael Dunlop puts it:  ‘In many ways, vaccines are a victim of their own success, leading us to forget just how debilitating preventable diseases can be – not seeing kids in calipers or hospital wards full of iron lungs means we forget just how serious these diseases can be.’

No adult or teenager has ever died or become seriously ill in Australia from the side effects of vaccination; yet large numbers of people have died from the lack of vaccination.  The notorious Wakefield allegation in 1998 of a link between vaccination and autism has been discredited, retracted and found to be fraudulent.  Further evidence comes from a recently published exhaustive review examining 12,000 research articles covering eight different vaccines which also concluded there is no link between vaccines and autism.

According to Professor C Raina MacIntyre of UNSW, ‘Influenza virus is a serious infection, which causes 1,500 to 3,500 deaths in Australia each year.  Death occurs from direct viral effects (such as viral pneumonia) or from complications such as bacterial pneumonia and other secondary bacterial infections. In people with underlying coronary artery disease, influenza may also precipitate heart attacks, which flu vaccine may prevent.’

In 2010, increased rates of high fever and febrile convulsions were reported in children under 5 years of age after they were vaccinated with the Fluvax vaccine.  This vaccine has not been registered for use in this age group since late 2010 and therefore should not be given to children under 5 years of age. The available data indicate that there is a very low risk of fever, which is usually mild and transient, following vaccination with the other vaccine brands.  Any of these other vaccines can be used in children aged 6 months and older.

Australia was declared measles-free in 2005 by the World Health Organization (WHO) – before we stopped being so vigilant about vaccinating and outbreaks began to reappear.  The impact of vaccine complacency can be observed in the 2013 measles epidemic in Wales, where there were over 800 cases and one death, and many of those presenting were in the age group that missed out on MMR vaccination following the Wakefield scare.

After the link to autism was disproven, many anti-vaxers shifted the blame to thiomersal, a mercury-containing component of relatively low toxicity to humans.  Small amounts of thiomersal were used as a preservative in some vaccines, but not the MMR vaccine.  Thiomersal was removed from all scheduled childhood vaccines in 2000.

In terms of risk assessment, Dr. Dunlop has pointed out that no vaccine is 100% effective and vaccines are not an absolute guarantee against infection. So while it’s still possible to get the disease you’ve been vaccinated against, disease severity and duration will be reduced.  Those who are vaccinated have fewer complications than people who aren’t.  With pertussis (whooping cough), for example, severe complications such as pneumonia and encephalitis (brain inflammation) occur almost exclusively in the unvaccinated.  So since the majority of the population is vaccinated, it follows that most people who get a particular disease will be vaccinated, but critically, they will suffer fewer complications and long-term effects than those who are completely unprotected.

Fluoridation risks

Public water fluoridation is the adjustment of the natural levels of fluoride in drinking water to a level that helps protect teeth against decay.  In many (but not all) parts of Australia, reticulated drinking water has been fluoridated since the early 1960s.

The benefits of fluoridation are well documented.  In November 2007, the NHMRC completed a review of the latest scientific evidence in relation to fluoride and health.  Based on this review, the NHMRC recommended community water fluoridation programs as the most effective and socially equitable community measure for protecting the population from tooth decay.  The scientific and medical support for the benefits of fluoridation certainly outweighs the claims of the vocal minority against it.

Fluoridation opponents over the years have claimed that putting fluoride in water causes health problems, is too expensive and is a form of mass medication.  Some conspiracy theorists go as far as to suggest that fluoridation is a communist plot to lower children’s IQ.  Yet, there is no evidence of any adverse health effects from the fluoridation of water at the recommended levels.  The only possible risk is from over-dosing water supplies as a result of automated equipment failure, but there is inline testing of fluoride levels with automated water shutoffs in the remote event of overdosing.  Any overdose would need to be massive to have any adverse effect on health.  The probability of such a massive overdose is extremely low.

Tooth decay remains a significant problem. In Victoria, for instance, more than 4,400 children under 10, including 197 two-year-olds and 828 four-year-olds, required general anaesthetic in hospital for the treatment of dental decay during 2009-10.  Indeed, 95% of all preventable dental admissions to hospital for children up to nine years old in Victoria are due to dental decay. Children under ten in non-optimally fluoridated areas are twice as likely to require a general anaesthetic for treatment of dental decay as children in optimally fluoridated areas.

As fellow skeptic and pain management specialist Dr. Michael Vagg has said, “The risks of general anaesthesia for multiple tooth extractions are not to be idly contemplated for children, and far outweigh the virtually non-existent risk from fluoridation.”  So in terms of risk assessment, the risks from not fluoridating water supplies are far greater than the risks of fluoridating.

Implications for skeptical activism

Anti-vaxers and anti-fluoridationists who are motivated by denialism and conspiracy theories tend to believe whatever they want to believe, and dogmatically so.  Thus evidence and arguments are unlikely to have much influence on them.

But not all anti-vaxers and anti-fluoridationists fall into this category.  Some may have been misled by false information, and thus could possibly be open to persuasion if the correct information is provided.

Others might even be aware of the correct information, but are assessing the risks fallaciously in the ways I have described in this article.  Their errors are not ones of fact, but errors of reasoning.  They too might be open to persuasion if education about sound risk assessment is provided.

I hope that analysing the false beliefs about vaccination and fluoridation from the perspective of the Faulty Risk Assessment Fallacy has provided yet another weapon in the skeptical armoury against these false beliefs.

References

Rachael Dunlop (2015) Six myths about vaccination – and why they’re wrong. The Conversation, Parkville.

C Raina MacIntyre (2016) Thinking about getting the 2016 flu vaccine? Here’s what you need to know. The Conversation, Parkville.

Mike Morgan (2012) How fluoride in water helps prevent tooth decay.  The Conversation, Parkville.

Michael Vagg (2013) Fluoride conspiracies + activism = harm to children. The Conversation, Parkville.

Government of Victoria (2014) Victorian Guide to Regulation. Department of Treasury and Finance, Melbourne.

If you find the information on this blog useful, you might like to consider supporting us.

Make a Donation Button

Leave a comment

Filed under Essays and talks

Mobile phone health alarmists bereft of credible arguments

The Conversation
Simon Chapman, University of Sydney

In May this year, I was lead author of a paper published in Cancer Epidemiology, which looked at the incidence of brain cancer in Australia between 1982 and 2012.

The first mobile phone call was made in Australia in 1987 and today their use is all but universal.

Cancer is a notifiable disease: all newly diagnosed cases are gathered from doctors by state cancer registries and nationally aggregated by the Australian Institute of Health and Welfare in publicly available data.

I summarised our study in this column, which to date has had more than 44,700 readers.


We found that with extremely high proportions of the population having used mobile phones across some 20-plus years (from about 9% in 1993 to about 90% today), age-adjusted brain cancer rates have flatlined over nearly 30 years.

There were significant increases in brain cancer incidence only in those aged 70 years or more. But the increase in this age group began from 1982, before the introduction of mobile phones in 1987 and so cannot be explained by it.

The most likely explanation of the rise in this older age group was improved diagnosis that happened with the introduction of imaging machines that (for example) could more accurately diagnose some strokes as brain cancers.

In the days and weeks after publication, our paper received massive global news and social media attention, achieving an Altmetric score of 835. Judged against the most media-covered research in all fields in 2015, this would have put it just outside the 100 highest Altmetric scores had we published it last year (2016 figures are published early next year).

It also drew the ire of the close-knit international network of mobile phone and wifi alarmists, who are utterly convinced mobile phones are deadly and won’t hear otherwise. Their opening salvo was to accuse me of being an undeclared phone industry stooge.

In 1997 I had been given a small grant by AMTA, the Australian Mobile Telecommunications Association, to conduct a national survey of how many mobile phone users had ever used their phone to call emergency services such as ambulance, police and fire. Large proportions of people had done so, probably saving many lives by alerting these services far more quickly than when having to find a landline.

I didn’t report this because I got the one-off grant 19 years ago, and all reputable journals and research agencies rule that competing interests are not lifelong but typically expire between one and three years after such support has ended. The grant also had nothing to do with cancer.

I also got a series of mostly verbally incontinent emails. One, from an excitable correspondent in Swaziland, insisted that I answer his many eureka-moment insights into why what we had published was wrong in every respect. We should withdraw our paper, he demanded, and tell the world we were wrong.

Predictably, several wrote to Cancer Epidemiology, setting out a litany of our egregious errors and failures to understand that an epidemic of brain cancer, comparable to the deluge of smoking-caused cancers, was just around the corner. Three of these were published this week with our response (open access until October 20, 2016).

The three letters were written by five individuals, three of whom are affiliated with a non-accredited Environmental Health Trust, headed by Dr Devra Davis, the alarmist doomsayer who featured in the much-criticised ABC Catalyst program which has now been withdrawn.

Assuming they got their heads together to rain blows on our heretical findings, it was amusing to see the barely audible blanks they decided to fire.

Their main arguments were:

‘It’s too soon to see an epidemic of brain cancer’

One argued that several decades of widespread phone use were needed before increases in cancer might be seen. She seemed intent on diminishing the number of years that large numbers of Australians have used mobile phones, in order to preserve her argument. She argued that only the last nine years of data, from 2001 when mobile subscriptions reached 50% of the population, ought to be considered in any analysis. And nine years was not nearly enough.

But by 1996, some 20% of Australian adults (some 2.9 million people) were using mobile phones. Apparently we ought to have joined her in seeing this as a trivial exposed population, unworthy of consideration. Quite obviously, there is no alleged carcinogen to which 20% of the population is exposed for which any credible scientist would seriously maintain that such widespread exposure should be ignored in assessing population attributable risk.

Further, in one of the studies cited in a review published by our critics, excess risks of brain cancer from mobile phone use are argued to occur after as little as five to ten years of use. These critics even suggested in the same paper that the international INTERPHONE study may point to a cancer “promotion effect”, with as few as one to four years of use being dangerous.

We concluded that:

This therefore looks like an argument trying to walk on both sides of the street: if short latency periods show excess risks they are deemed to be credible, while if they show no excess (as with our study) they are to be dismissed.

‘Various case-control studies show evidence of increased risk’

Case-control studies in this field have been criticised because they rely on users’ recall of the extent of phone use going back many years. Just try recalling your own mobile phone use in, for example, 2003 and you will immediately understand how data obtained this way are hugely problematic.

Moreover, people with brain cancer often have memory loss. And if you have brain cancer, are part of a study considering its cause, and have been exposed to frequent claims about the hypothesis that mobile phone use causing brain cancer, the likelihood of recall bias resulting in recall of high mobile phone use is probably going to increase.

The strength of our study was the ability to look at all cases of brain cancer in Australia in the 29 years since the first call was made here. The inconvenient fact for the alarmists is that there has been no significant increase in brain cancer in either men or women compatible with the mobile phone hypothesis.

‘Decreased use of X-rays is masking an increase in cancers caused by mobiles’

Perhaps the silliest argument thrown at us was an unreferenced hypothesis that “discontinued or reduced use of established carcinogens such as X-rays” may have reduced the incidence of brain cancer from such exposures while, simultaneously, the rise of mobile phone use replaced those cases, thereby explaining the largely flat-line incidence across our data period.

This hypothesis would need to account for how reductions in a very uncommon radiation exposure (full head X-rays) could ever produce a fall in brain cancer incidence that exactly offsets the rise which, on their account, daily exposure of most of the population to an alleged carcinogen would add.

Our Swaziland critic finished one of his missives writing that “it behooves you, as a scientist, to take note of fatal errors in your work.” It would “behoove” mobile phone alarmists to stop unnecessarily alarming people with their weak arguments.

Simon Chapman, Emeritus Professor in Public Health, University of Sydney

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.
 

Leave a comment

Filed under Reblogs

Why we need a new science of safety

The Conversation

Philip Thomas, University of Bristol

It is often said that our approach to health and safety has gone mad. But the truth is that it needs to go scientific. Managing risk is ultimately linked to questions of engineering and economics. Can something be made safer? How much will that safety cost? Is it worth that cost?

Decisions under uncertainty can be explained using utility, a concept introduced by Swiss mathematician Daniel Bernoulli 300 years ago, to measure the amount of reward received by an individual. But the element of risk will still be there. And where there is risk, there is risk aversion.

Risk aversion itself is a complex phenomenon, as illustrated by psychologist John W. Atkinson’s 1950s experiment, in which five-year-old children played a game of throwing wooden hoops around pegs, with rewards based on successful throws and the varying distances the children chose to stand from the pegs.

The risk-confident stood a challenging but realistic distance away, but the risk averse children fell into two camps. Either they stood so close to the peg that success was almost guaranteed or, more perplexingly, positioned themselves so far away that failure was almost certain. Thus some risk averse children were choosing to increase, not decrease, their chance of failure.

So clearly high aversion to risk can induce some strange effects. These might be unsafe in the real world, as testified by author Robert Kelsey, who said that during his time as a City trader, “bad fear” in the financial world led to either “paralysis… or nonsensical leaps”. Utility theory predicts a similar effect, akin to panic, in a large organisation if the decision maker’s aversion to risk gets too high. At some point it is not possible to distinguish the benefits of implementing a protection system from those of doing nothing at all.

So when it comes to human lives, how much money should we spend on making them safe? Some people prefer not to think about the question, but those responsible for industrial safety or health services do not have that luxury. They have to ask themselves the question: what benefit is conferred when a safety measure “saves” a person’s life?

The answer is that the saved person is simply left to pursue their life as normal, so the actual benefit is the restoration of that person’s future existence. Since we cannot know how long any particular person is going to live, we do the next best thing and use measured historical averages, as published annually by the Office for National Statistics. The gain in life expectancy that the safety measure brings about can be weighed against the cost of that safety measure using the Judgement value, which mediates the balance using risk aversion.

The Judgement (J) value is the ratio of the actual expenditure to the maximum reasonable expenditure. A J-value of two suggests that twice as much is being spent as is reasonably justified, while a J-value of 0.5 implies that safety spend could be doubled and still be acceptable. It is a ratio that throws some past safety decisions into sharp relief.

For example, a few years ago energy firm BNFL authorised a nuclear clean-up plant with a J-value of over 100, while at roughly the same time the medical quango NICE was asked to review the economic case for three breast cancer drugs found to have J-values of less than 0.05.
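Since only the ratio matters, the J-value comparison is simple to express in code. The spends below are hypothetical figures chosen so the ratios match the two cases just mentioned:

```python
def j_value(actual_spend, max_reasonable_spend):
    """J = actual expenditure / maximum reasonable expenditure.
    J > 1 suggests overspending on safety; J < 1 suggests more
    spending would still be justified."""
    return actual_spend / max_reasonable_spend

# Hypothetical spends (arbitrary millions; only the ratio matters):
print(j_value(500.0, 5.0))  # 100.0 -- the nuclear clean-up plant case
print(j_value(0.25, 5.0))   # 0.05  -- the breast cancer drugs case
```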


The Government of the time seemed happy to sanction spending on a plant that might just prevent a cancer, but wanted to think long and hard about helping many women actually suffering from the disease. A new and objective science of safety is clearly needed to provide the level playing field that has so far proved elusive.

Putting a price on life

Current safety methods are based on the “value of a prevented fatality” or VPF. It is the maximum amount of money considered reasonable to pay for a safety measure that will reduce by one the expected number of preventable premature deaths in a large population. In 2010, that value was calculated at £1.65m.

This figure simplistically applies equally to a 20-year-old and a 90-year-old, and is in widespread use in the road, rail, nuclear and chemical industries. Some (myself included) argue that the method used to reach this figure is fundamentally flawed.
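As a sketch of how the VPF is conventionally applied (the casualty figure below is an assumed example; the £1.65m is the 2010 value quoted above):

```python
VPF_GBP = 1_650_000  # 2010 UK "value of a prevented fatality", as above

def max_reasonable_spend(expected_fatalities_prevented):
    # Conventional rule: a safety measure is justified up to
    # VPF x (expected number of premature deaths it prevents).
    # Note the criticism above: this is age-blind, valuing a
    # 20-year-old and a 90-year-old identically.
    return VPF_GBP * expected_fatalities_prevented

# Assumed example: a measure expected to prevent 3 deaths over its life.
print(f"GBP {max_reasonable_spend(3):,}")  # GBP 4,950,000
```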

In the modern industrial world, however, we are all exposed to dangers at work and at home, on the move and at rest. We need to feel safe, and this comes at a cost. The problems and confusions associated with current methods reinforce the urgent need to develop a new science of safety. Not to do so would be too much of a risk.

Philip Thomas, Professor of Risk Management, University of Bristol

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.

Leave a comment

Filed under Reblogs

Melbourne’s desalination plant is just one part of drought-proofing water supply

The Conversation

Stephen Gray, Victoria University

Water has now been ordered from the Victorian desalination plant. The plant was built at the end of the millennium drought to provide security against future droughts. But once it was built, it rained and rained.

Since then many have seen the desalination plant as a white elephant – an unnecessary expense that has burdened Victoria with debt. Indeed, it seems to have been demonised as something evil.

However, with dry weather over the past two years, water storages have begun to decline, both in Melbourne and particularly in regional Victoria. The desalination plant was built to be used in times of water shortages, and the Victorian government has now deemed it time to order water.

The order is being made to reduce the possibility of water restrictions in Melbourne and to negate the need for Melbourne to take water from the Goulburn system and so allow more water to be made available to localities such as Bendigo and Ballarat.

The cost of drought

Desalination membranes at Melbourne’s desalination plant. (Image: Stephen Gray)

It has now been six years since the millennium drought ended and it can be hard for city residents to remember the impact of drought on their lives.

To give one example, we conducted a study on the social impact of water restrictions on sports grounds during the drought. This study found that 70% of people used sports grounds, either for organised sport or informal relaxation, and that all users were adversely affected by the drought.

The most severely affected were those at women’s, disabled and junior sporting clubs, which were of low priority for irrigation. These groups were forced either to cancel their activities because of the hard playing surfaces or to reschedule their events and find other locations to play their sport. This became a major disruption to the lives of many people during the 13-year drought.

This was just one way that water restrictions and drought affected our lives. When water storage levels were etched in our minds through public billboards and television weather reports, neighbours were asked to report people who used water contrary to the restrictions, car washes became a growth industry, and communities were parched and brown. I am now enjoying my garden, which has sprung to life in recent years, and will be happy if we can avoid such water restrictions again.

For regional Victorians the impact of drought was greater. For them, the reminders of drought are already to the fore following several dry seasons. Parts of Victoria have received less than 50% of average annual rainfall for the past two years. Farmers are reducing the number of cattle on the land.

Water from the desalination plant will be delivered to regional Victoria via the state’s rivers and pipe networks that make up the water grid.

Alternatives?

Some may argue that other sources of water such as dams, storm water harvesting and water recycling would have been better alternatives. However, all require significant investment and none are likely to be fully utilised during wetter periods. This has been one criticism of desalination, but is simply an outcome of reducing risk in a variable climate.

Storm water harvesting is often promoted as being a cheaper alternative to desalination, but a recent water industry article on the cost of such harvesting has estimated costs of A$10-25 per kilolitre (1,000 litres) when used as a substitute for drinking water. This compares to costs of A$2-3 per kL for desalinated water. Turning the desalination plant on adds up to an extra A$12 a year on Victorian water bills.
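Using the per-kilolitre figures quoted above (taking midpoints of the ranges) and an assumed round annual volume purely for illustration, the gap is easy to quantify:

```python
# Cost per kilolitre from the figures quoted above (midpoints of ranges).
COST_PER_KL = {
    "stormwater harvesting": 17.5,  # midpoint of A$10-25/kL
    "desalination": 2.5,            # midpoint of A$2-3/kL
}

ASSUMED_ORDER_KL = 50_000_000  # assumed 50 GL annual volume, for illustration

for source, cost in COST_PER_KL.items():
    print(f"{source}: A${cost * ASSUMED_ORDER_KL / 1e6:,.0f}m per year")
# stormwater harvesting: A$875m per year
# desalination: A$125m per year
# On these figures, desalinated water is roughly 7x cheaper per kL.
```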

Perhaps one alternative that is worth considering is recycling waste water for drinking water supplies. This is currently against Victorian government policy, which the two major parties support.

Victoria uses recycled water for irrigation, in toilets and clothes washing, but such schemes require homes to receive water through a second pipe. These second-pipe systems are more costly to build and manage.

Recycling water for drinking avoids these costs, as you simply use the same pipe that supplies water from dams. The technology to recycle water for drinking is also well established and can be delivered to existing homes.

Given Melbourne has access to desalinated water that we are only just starting to use, it is unlikely Melbourne will need to consider recycling water for drinking in the near future. However, regional communities may like to have this option, and I believe this option should be allowed.

Australia’s rainfall patterns are among the most variable in the world, and prolonged periods of dry weather are normal. However, climate change predictions indicate longer, more severe periods of dry weather.

Indeed, one of my climate change colleagues has suggested that climate change does not occur as a constant, slow progression, but rather through step changes. If this is the case, then the next drought will be more severe and the need for climate-independent water supplies more pressing.

Faced with this scenario, the desalination plant is a good investment and we should use it when it is needed.

Stephen Gray, Director of the Institute for Sustainability and Innovation, Victoria University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.

 

1 Comment

Filed under Reblogs

Climate change risk assessment


 

2 Comments

Filed under Videos

False equivalence

In some ways the False equivalence fallacy is the direct opposite of a False dilemma.

False equivalence is an informal fallacy that describes a situation where there is an apparent similarity between two things, but in fact they are not equivalent.  The two things may share some common characteristics, but they have important differences that are overlooked for the purposes of the argument.

The pattern of the fallacy often looks like this: if A has characteristics c and d, and B has characteristics d and e, then since they both have characteristic d, A and B are equivalent. In practice, often only a passing similarity is required between A and B for this fallacy to be committed.

The following statements are examples of false equivalence:

‘They’re both soft, cuddly pets. There’s no difference between a cat and a dog.’

‘We all bleed red. We’re all no different from each other.’

‘Hitler, Stalin and Mao were evil atheists; therefore all atheists are evil.’

A more complex example is where somebody claims that because more Australians are killed by sharks or in road accidents than by terrorism, we should not do anything to stop terrorism. This example ignores the fact that terrorist acts are prevented by doing something, such as surveillance and intelligence.  We also choose to take the risks of swimming in the ocean and driving in cars, but we cannot avoid the risk of terrorism no matter what we do.

False equivalence is occasionally committed in politics, where one political party will accuse its opponents of having performed equally wrong actions, usually as a red herring in an attempt to deflect criticism of its own behaviour. Two wrongs don’t make a right.

On the other hand, politicians might accuse journalists of False equivalence in their reporting of political controversies if the stories are perceived to assign equal blame to opposing parties.  However, False equivalence should not be confused with False balance – the media phenomenon of presenting two sides of an argument equally in disregard of the merit or evidence on a subject (a form of argument to moderation).

Moral equivalence is a special case of False equivalence where it is falsely claimed, often for ideological motives, that both sides are equally to blame for a war or other international conflict. The historical evidence shows that this is rarely the case.

Another special case of False equivalence is Political correctness, which may be defined as language, ideas, policies or behaviour that seeks to minimise social offence in relation to occupation, gender, race, culture, sexual orientation, religion or belief, disability and age, to such an excessive extent that it inhibits free speech.

Leave a comment

Filed under Logical fallacies

Faulty risk assessment

by Tim Harding B.Sc

‘Risk’ may be defined as the probability of something bad happening multiplied by the cost of the consequences if it does happen.  Risk analysis is the process of discovering what risks are associated with a particular hazard, including the mechanisms that cause the hazard, then estimating the probability that the hazard will occur and its consequences.

Risk assessment is the determination of the acceptability of risk in two dimensions – the likelihood of an adverse event occurring; and the severity of the consequences if it does occur,[1] as illustrated in the diagram below.

[Diagram: risk assessment matrix, plotting likelihood of an adverse event against severity of consequences]

By way of illustration, the likelihood of something bad happening could be very low, but the consequences could be unacceptably high – enough to justify preventative action.  Conversely, the likelihood of an event could be higher, but the consequences could be low enough to justify ‘taking the risk’.

In assessing the consequences, consideration needs to be given to the size of the population likely to be affected, and the severity of the impact on those affected.  This will provide an indication of the aggregate effect of an adverse event. For example, ‘major’ consequences might include significant harm to a small group of affected individuals, or moderate harm to a large number of individuals.[2]

A fallacy is committed when a person focuses on risks in isolation from benefits, or takes into account one dimension of risk assessment without the other dimension.  To give a practical example, the new desalination plant to augment Melbourne’s water supply has been called a ‘white elephant’ by some people, because it has not been needed since the last drought broke. But this criticism ignores the catastrophic consequences that could have occurred had the drought not broken. In June 2009, Melbourne’s water storages fell to 25.5% of capacity, the lowest level since the huge Thomson Dam began filling in 1984. This downward trend could have continued at that time, and could well be repeated during the next drought.


Melbourne’s desalination plant at Wonthaggi

No responsible government could afford to ‘take the risk’ of a major city of 4 million people running out of water.  People in temperate climates can survive without electricity or gas, but are likely to die of thirst in less than a week without water, not to mention the hygiene crisis that would occur without washing or toilet flushing.  The failure to safeguard the water supply of a major city is one of the most serious derelictions of government responsibility imaginable.

A similar example of fallacious reasoning is in the area of climate change, where the public debate wrongly focusses on whether the science is true or false, rather than on the risks and consequences of it being true or false. This video explains the fallacy quite well.

Other examples of this fallacy are committed by the anti-vaccination and anti-fluoridation movements, often accompanied by conspiracy theories.  They both focus on the very tiny likelihood of adverse side effects without considering the major benefits to public health from the vaccination of children and the fluoridation of public water supplies.  No adult or teenager has ever died or become seriously ill in Australia from the side effects of vaccination or fluoridation [3]; yet large numbers of people have died from the lack of vaccination.[4] The allegation of a link between vaccination and autism has been discredited, retracted and found to be fraudulent.  The benefits of fluoridation are well documented. The risks of general anaesthesia for multiple tooth extractions are not to be idly contemplated for children, and far outweigh the virtually nonexistent risk from fluoridation.[5]


[1] This is based on the Australian/New Zealand Standard for Risk Management.

[2] State Government of Victoria (2007) Victorian Guide to Regulation 2nd edition. Department of Treasury and Finance, Melbourne.

[3] In 2010, increased rates of high fever and febrile convulsions were reported in children under 5 years of age after they were vaccinated with the bioCSL Fluvax® vaccine. bioCSL Fluvax® has not been registered for use in this age group since late 2010 and therefore should not be given to children under 5 years of age. The available data indicate that there is a very low risk of fever, which is usually mild and transient, following vaccination with the other vaccine brands: Agrippal®; Fluarix®; Influvac®; and Vaxigrip®.  Any of these vaccines can be used in children aged 6 months and older. This and further information on flu vaccination is available here.

[4] The former Commonwealth Chief Medical Officer, Prof. Jim Bishop has argued that the flu vaccination program “changed dramatically the flu outlook for this country”, with admissions to intensive care from swine flu falling from 681 in 2009 to just 60 in 2010, and hospitalisations dropping from nearly 5000 to 600. Swine flu killed 191 Australians in 2009 and 36 in 2010. In contrast, seasonal flu killed 1796 Australians that year – but, unlike swine flu, the victims were mainly the frail and elderly. Prof. Bishop cautioned that one in every three hospital patients were “perfectly fit and well” before they caught swine flu, which was severe in pregnant women, teenagers who had lost their innate childhood immunity and indigenous people who tend to suffer underlying health problems. Three pregnant women died of swine flu, and 280 ended up in intensive care.  

[5] https://theconversation.com/fluoride-conspiracies-activism-harm-to-children-17723

If you find the information on this blog useful, you might like to consider supporting us.

Make a Donation Button

3 Comments

Filed under Logical fallacies