Tag Archives: probability

Explainer: how our understanding of risk is changing

The Conversation

Robert Hoffmann, RMIT University and Adrian R. Camilleri, RMIT University

Under the traditional notion of risk, people react predictably based on how risk-tolerant they are. Here, risk is calculated by combining the probability of something occurring (such as rolling a six with a pair of dice) with the value of the outcome (how much you have wagered).

But our understanding of risk is changing. We now know that a whole host of factors, from your personal history to your mood and age, all come to bear on how you perceive and take on risk.

For example, it is well known that investing in relatively high-risk stocks yields much higher returns than investing in relatively low-risk bonds. Over the last 100 years, US stocks have produced an average annual real return of around 8%, compared with about 1% for relatively riskless securities.

But investors favour the safer securities despite the lower returns. This is called the equity premium puzzle. Put simply, more people should be investing in stocks.


Read more: Financial gamble? My brain made me do it


One influential theory, prospect theory, suggests that people overweight small probabilities and feel losses more keenly than equivalent gains. This means that rare events, such as winning the lottery, are treated as much more likely than they really are.

It also means that people won’t participate in a coin flip where they win $10 for heads but lose $10 for tails. Most people will only play if they win $20 on heads and lose $10 for tails.
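
To see how loss aversion can produce this pattern, here is a minimal sketch (in Python) that values the coin flip using an assumed loss-aversion coefficient of 2, a commonly cited ballpark figure rather than anything specified in the research described here:

```python
# Loss-aversion sketch: losses are weighted more heavily than equivalent gains.
# The coefficient of 2 is an assumed, commonly cited ballpark value.
def perceived_value(gain, loss, p_gain=0.5, loss_aversion=2.0):
    """Subjective value of a gamble when losses loom larger than gains."""
    return p_gain * gain - (1 - p_gain) * loss_aversion * loss

print(perceived_value(10, 10))  # -5.0: win $10 / lose $10 feels like a losing bet
print(perceived_value(20, 10))  #  0.0: win $20 / lose $10 is roughly the break-even point
```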

What influences risk

Research now shows that a whole range of factors influence risk-taking and perceptions. Sleep deprivation, for example, increases risk-taking as it depletes the ability to think logically.

Life experience, such as having lived through a recession or boom, also has an impact. For instance, those who lived through a recession are more pessimistic about future stock market returns and are less likely to own stocks.

This is because people make decisions based on their own (limited) experience rather than on an objective analysis of risk and return, or of the present and the future.

Other research suggests that people often rely on simple rules of thumb when choosing between options with uncertain outcomes. When it comes to investing, this means that people may simply choose the highest possible gain or smallest possible loss, ignoring probability altogether.

Your emotional state also affects risk perception, as does how you found out about the risk (did your friend tell you or did you find out from the news?). Researchers have even found the source of the risk is important to our perceptions. For instance, people perceive risk differently if the threat is coming from another person (in the form of humiliation or disapproval etc) rather than an impersonal force.

There are also cultural differences in risk attitudes. For example, in one study, Chinese people were found to be more risk-tolerant than Americans, contrary to what both groups believed. Some researchers have even constructed a world map based on risk attitudes.

Finally, people tend to perceive risks along two emotional dimensions. “Dread risk” is associated with a lack of control and consequences that are likely to be catastrophic or fatal. “Unknown risk” relates to new, involuntary activities that are poorly understood.

As you can see in the following chart, terrorism is high on both dimensions, guns are high only on dread risk, and genetically modified foods are high only on unknown risk.

Unknown vs Dread Risks. Author provided.

Recent behavioural economics thinking helps us to explain the equity premium puzzle: investors hate losses more than equivalent gains and tend to take too short-term a view of their investments. As a result, investors over-react to short-term losses and refuse to invest in high-risk stocks unless these have an extremely high return potential.

In a striking demonstration of this phenomenon, researchers recently carried out a 14-day experiment in which investors participated in a beta test of a trading platform. Half the investors had access to second-by-second changes in price and portfolio value. The other half could only see these changes every four hours.

The key finding was that investors with infrequent price information invested around 33% more in risky assets than investors with frequent information. As a result, those with infrequent information earned 53% higher profits.

Traditional economics suggests that more information is a good thing. These results reveal that more information actually hurts investment performance. This is because more information allows investors more opportunity to over-react to short-term losses.

What this all means

Our new understanding of risk has implications for our understanding of such varied contexts as housing markets, labour markets, customer reactions to services, choices between medical treatments, and even the winners of US presidential elections.

It also turns out that the typical risk-taker looks remarkably like the stereotypical Wall Street trader: young, smart, male, affluent and educated.

Funnily enough, even physical features like height and the ratio of your second and fourth fingers are predictive of risk-taking, because of the underlying genetic and hormonal factors that affect risk attitudes.

Insurance companies increasingly profile people using behavioural insights to work out premiums that reflect policy risks. One day finger ratio scans and personality tests may become a standard part of policy application forms.

Robert Hoffmann, Professor of Economics, RMIT University and Adrian R. Camilleri, Lecturer in Marketing, RMIT University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Filed under Reblogs

Paradoxes of probability and other statistical strangeness

The Conversation

Statistics and probability can sometimes yield mind bending results. Shutterstock

Stephen Woodcock, University of Technology Sydney

Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.


You don’t have to wait long to see a headline proclaiming that some food or behaviour is associated with either an increased or a decreased health risk, or often both. How can it be that seemingly rigorous scientific studies can produce opposite conclusions?

Nowadays, researchers can access a wealth of software packages that can readily analyse data and output the results of complex statistical tests. While these are powerful resources, they also make it easier for people without a full statistical understanding to misread the subtleties within a dataset and draw wildly incorrect conclusions.

Here are a few common statistical fallacies and paradoxes and how they can lead to results that are counterintuitive and, in many cases, simply wrong.


Simpson’s paradox

What is it?

This is where trends that appear within different groups disappear when data for those groups are combined. When this happens, the overall trend might even appear to be the opposite of the trends in each group.

One example of this paradox is where a treatment can be detrimental in all groups of patients, yet can appear beneficial overall once the groups are combined.

How does it happen?

This can happen when the sizes of the groups are uneven. A trial with careless (or unscrupulous) selection of the numbers of patients could conclude that a harmful treatment appears beneficial.

Example

Consider the following double blind trial of a proposed medical treatment. A group of 120 patients (split into subgroups of sizes 10, 20, 30 and 60) receive the treatment, and 120 patients (split into subgroups of corresponding sizes 60, 30, 20 and 10) receive no treatment.

The overall results make it look like the treatment was beneficial to patients, with a higher recovery rate for patients with the treatment than for those without it.

The Conversation, CC BY-ND

However, when you drill down into the various groups that made up the cohort in the study, you see that in every group of patients the recovery rate was 50% higher for patients who had no treatment.

The Conversation, CC BY-ND

But note that the size and age distribution of each group is different between those who took the treatment and those who didn’t. This is what distorts the numbers. In this case, the treatment group is disproportionately stacked with children, whose recovery rates are typically higher, with or without treatment.
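
A minimal numerical sketch of how this reversal arises, using the subgroup sizes described above but with hypothetical recovery rates (the article's actual figures appear in its charts and are not reproduced here):

```python
# Simpson's paradox sketch: hypothetical recovery rates chosen so that the
# untreated rate is 50% higher in every subgroup, yet the treated group
# looks better overall because of the uneven subgroup sizes.
subgroups = [
    # (treated_n, treated_recovery_rate, untreated_n, untreated_recovery_rate)
    (10, 0.10, 60, 0.15),
    (20, 0.20, 30, 0.30),
    (30, 0.40, 20, 0.60),
    (60, 0.60, 10, 0.90),
]

treated_recovered = sum(n * r for n, r, _, _ in subgroups)
untreated_recovered = sum(n * r for _, _, n, r in subgroups)

print(f"Treated overall:   {treated_recovered / 120:.1%}")    # ~44% recover
print(f"Untreated overall: {untreated_recovered / 120:.1%}")  # ~33% recover
```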


Base rate fallacy

What is it?

This fallacy occurs when we disregard important information when making a judgement on how likely something is.

If, for example, we hear that someone loves music, we might think it’s more likely they’re a professional musician than an accountant. However, there are many more accountants than there are professional musicians. Here we have neglected that the base rate for the number of accountants is far higher than the number of musicians, so we were unduly swayed by the information that the person likes music.

How does it happen?

The base rate fallacy arises when the base rate for one option is substantially higher than for another, but we ignore it and let more specific but less telling information sway our judgement.

Example

Consider testing for a rare medical condition, such as one that affects only 4% (1 in 25) of a population.

Let’s say there is a test for the condition, but it’s not perfect. If someone has the condition, the test will correctly identify them as being ill around 92% of the time. If someone doesn’t have the condition, the test will correctly identify them as being healthy 75% of the time.

So if we test a group of people, and find that over a quarter of them are diagnosed as being ill, we might expect that most of these people really do have the condition. But we’d be wrong.


In a typical sample of 300 patients, for every 11 people correctly identified as unwell, a further 72 are incorrectly identified as unwell. The Conversation, CC BY-ND

According to our numbers above, of the 4% of patients who are ill, almost 92% will be correctly diagnosed as ill (that is, about 3.67% of the overall population). But of the 96% of patients who are not ill, 25% will be incorrectly diagnosed as ill (that’s 24% of the overall population).

What this means is that of the approximately 27.67% of the population who are diagnosed as ill, only around 3.67% actually are. So of the people who were diagnosed as ill, only around 13% (that is, 3.67%/27.67%) actually are unwell.
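
The same calculation, written out as a short sketch using the prevalence, sensitivity and specificity quoted above:

```python
# Base-rate sketch: what fraction of positive diagnoses are genuinely ill?
prevalence = 0.04    # 4% of the population have the condition
sensitivity = 0.92   # P(test says "ill" | actually ill)
specificity = 0.75   # P(test says "healthy" | actually healthy)

true_positives = prevalence * sensitivity               # about 3.67% of everyone
false_positives = (1 - prevalence) * (1 - specificity)  # 24% of everyone
positive_rate = true_positives + false_positives        # about 27.67% test positive

print(f"Diagnosed as ill:              {positive_rate:.2%}")
print(f"Actually ill, given diagnosis: {true_positives / positive_rate:.1%}")  # ~13%
```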

Worryingly, when a famous study asked general practitioners to perform a similar calculation to inform patients of the correct risks associated with mammogram results, just 15% of them did so correctly.


Will Rogers paradox

What is it?

This occurs when moving something from one group to another raises the average of both groups, even though no values actually increase.

The name comes from the American comedian Will Rogers, who joked that “when the Okies left Oklahoma and moved to California, they raised the average intelligence in both states”.

Former New Zealand Prime Minister Rob Muldoon provided a local variant on the joke in the 1980s, regarding migration from his nation into Australia.

How does it happen?

When a datapoint is reclassified from one group to another, if the point is below the average of the group it is leaving, but above the average of the one it is joining, both groups’ averages will increase.

Example

Consider the case of six patients whose life expectancies (in years) have been assessed as being 40, 50, 60, 70, 80 and 90.

The patients who have life expectancies of 40 and 50 have been diagnosed with a medical condition; the other four have not. This gives an average life expectancy within diagnosed patients of 45 years and within non-diagnosed patients of 75 years.

If an improved diagnostic tool is developed that detects the condition in the patient with the 60-year life expectancy, then the average within both groups rises by 5 years.

The Conversation, CC BY-ND
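
A quick sketch that verifies the arithmetic of this reclassification:

```python
# Will Rogers sketch: moving the 60-year patient raises both group averages.
diagnosed = [40, 50]
not_diagnosed = [60, 70, 80, 90]

def mean(xs):
    return sum(xs) / len(xs)

print(mean(diagnosed), mean(not_diagnosed))  # 45.0 75.0

# The improved test reclassifies the patient with a 60-year life expectancy.
not_diagnosed.remove(60)
diagnosed.append(60)

print(mean(diagnosed), mean(not_diagnosed))  # 50.0 80.0 -- both rise by 5 years
```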

Berkson’s paradox

What is it?

Berkson’s paradox can make it look like there’s an association between two independent variables when there isn’t one.

How does it happen?

This happens when we have a data set containing two independent variables, which means they should be entirely unrelated. But if we only look at a subset of the whole population, it can look like there is a negative trend between the two variables.

This can occur when the subset is not an unbiased sample of the whole population. It has been frequently cited in medical statistics. For example, if patients only present at a clinic with disease A, disease B or both, then even if the two diseases are independent, a negative association between them may be observed.

Example

Consider the case of a school that recruits students based on both academic and sporting ability. Assume that these two skills are totally independent of each other. That is, in the whole population, an excellent sportsperson is just as likely to be strong or weak academically as is someone who’s poor at sport.

If the school admits only students who are excellent academically, excellent at sport or excellent at both, then within this group it would appear that sporting ability is negatively correlated with academic ability.

To illustrate, assume that every potential student is ranked on both academic and sporting ability from 1 to 10. There is an equal proportion of people in each band for each skill. Knowing a person’s band in either skill does not tell you anything about their likely band in the other.

Assume now that the school only admits students who are at band 9 or 10 in at least one of the skills.

If we look at the whole population, the average academic rank of the weakest sportspeople and of the best sportspeople is the same (5.5).

However, within the set of admitted students, the average academic rank of the elite sportsperson is still that of the whole population (5.5), but the average academic rank of the weakest sportsperson is 9.5, wrongly implying a negative correlation between the two abilities.

The Conversation, CC BY-ND
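
A small simulation sketch of the school example: the two skill bands are generated independently, yet among students admitted with a 9 or 10 in at least one skill the correlation turns clearly negative.

```python
# Berkson's paradox sketch: two independent abilities appear negatively
# correlated once we condition on admission (band 9 or 10 in at least one).
import random

random.seed(0)
students = [(random.randint(1, 10), random.randint(1, 10)) for _ in range(100_000)]
admitted = [(a, s) for a, s in students if a >= 9 or s >= 9]

def pearson(pairs):
    """Pearson correlation coefficient of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

print(f"All students:      r = {pearson(students):+.2f}")  # close to 0
print(f"Admitted students: r = {pearson(admitted):+.2f}")  # clearly negative
```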

Multiple comparisons fallacy

What is it?

This is where unexpected trends can occur through random chance alone in a data set with a large number of variables.

How does it happen?

When looking at many variables and mining for trends, it is easy to overlook how many possible trends you are testing. For example, with 1,000 variables, there are almost half a million (1,000×999/2) potential pairs of variables that might appear correlated by pure chance alone.

While each pair is extremely unlikely to look dependent, the chances are that from the half million pairs, quite a few will look dependent.

Example

The Birthday paradox is a classic example of the multiple comparisons fallacy.

In a group of 23 people (assuming each of their birthdays is an independently chosen day of the year with all days equally likely), it is more likely than not that at least two of the group have the same birthday.

People often disbelieve this, recalling that it is rare that they meet someone who shares their own birthday. If you just pick two people, the chance they share a birthday is, of course, low (roughly 1 in 365, which is less than 0.3%).

However, with 23 people there are 253 (23×22/2) pairs of people who might have a common birthday. So by looking across the whole group you are testing to see if any one of these 253 pairings, each of which independently has a 0.3% chance of coinciding, does indeed match. These many possibilities of a pair actually make it statistically very likely for coincidental matches to arise.

For a group of as few as 40 people, it is more than eight times as likely as not that there is a shared birthday.

The probability of no shared birthdays drops as the number of people in a group increases. The Conversation, CC BY-ND
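
The exact figures are easy to check with a short sketch: multiply together the probabilities that each successive person avoids every birthday seen so far.

```python
# Birthday paradox sketch: exact probability of at least one shared birthday.
def shared_birthday_probability(n, days=365):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

print(f"{shared_birthday_probability(23):.1%}")  # ~50.7%: more likely than not
print(f"{shared_birthday_probability(40):.1%}")  # ~89.1%: over 8x as likely as not
```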

Stephen Woodcock, Senior Lecturer in Mathematics, University of Technology Sydney

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Filed under Reblogs

Conjunction Fallacy

One of the central assumptions of mainstream economics has been that people make rational choices. As a challenge to this assumption, Nobel Prize-winning behavioural economist Prof. Daniel Kahneman gives an example in which some Americans were offered a choice of insurance against their own death in a terrorist attack while on a trip to Europe, or insurance that would cover death of any kind on the trip. People were willing to pay more for the former insurance, even though ‘death of any kind’ includes ‘death in a terrorist attack’.

This is an instance of the Conjunction Fallacy, which rests on the false assumption that specific conditions are more probable than general ones. The fallacy usually stems from treating the options as mutually exclusive alternatives, rather than recognising that one (death in a terrorist attack) is a subset of the other (death of any kind).

The logical form of this fallacy is:

Premise: X is a subset of Y.

Conclusion: Therefore, X is more probable than Y.

The probability of a conjunction is never greater than the probability of its conjuncts. In other words, the probability of two things being true can never be greater than the probability of one of them being true, since in order for both to be true, each must be true. However, when people are asked to compare the probabilities of a conjunction and one of its conjuncts, they sometimes judge that the conjunction is more likely than one of its conjuncts. This seems to happen when the conjunction suggests a scenario that is more easily imagined than the conjunct alone.
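
A small numerical sketch of the insurance example, using made-up illustrative probabilities (they are not figures from Kahneman's study):

```python
# Conjunction sketch with made-up illustrative probabilities.
p_death_any_cause = 1e-4  # assumed chance of dying, from any cause, on the trip
p_terror_share = 0.05     # assumed share of those deaths due to terrorism

p_death_in_terror_attack = p_death_any_cause * p_terror_share

# The conjunction ("dies AND the cause is a terrorist attack") can never be
# more probable than the broader event ("dies from any cause").
assert p_death_in_terror_attack <= p_death_any_cause
print(p_death_any_cause, p_death_in_terror_attack)  # 0.0001 versus 5e-06
```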

Interestingly, Kahneman discovered in earlier experiments that statistical sophistication made little difference in the rates at which people committed the conjunction fallacy. This suggests that it is not enough to teach probability theory alone, but that people need to learn directly about the conjunction fallacy in order to counteract the strong psychological effect of imaginability.


Filed under Logical fallacies

Why should we place our faith in science?

The Conversation

Jonathan Keith, Monash University

Most of us would like to think scientific debate does not operate like the comments section of online news articles. These are frequently characterised by inflexibility, truculence and expostulation. Scientists are generally a little more civil, but sometimes not much so!

There is a more fundamental issue here than politeness, though. Science has a reputation as an arbiter of fact above and beyond just personal opinion or bias. The term “scientific method” suggests there exists an agreed upon procedure for processing evidence which, while not infallible, is at least impartial.

So when even the most respected scientists arrive at different, deeply held convictions from the same evidence, it undermines the perceived impartiality of the scientific method. It demonstrates that science involves an element of subjective or personal judgement.

Yet personal judgements are not mere occasional intruders on science, they are a necessary part of almost every step of reasoning about evidence.

Among the judgements scientists make on a daily basis are: what evidence is relevant to a particular question; what answers are admissible a priori; which answer does the evidence support; what standard of evidence is required (since “extraordinary claims require extraordinary evidence”); and is the evidence sufficient to justify belief?

Another judgement scientists make is whether the predictions of a model are sufficiently reliable to justify committing resources to a course of action.

We do not have universally agreed procedures for making any of these judgements. This should come as no surprise. Evidence is something experienced by persons, and a person is thus essential to relating evidence to the abstractions of a scientific theory.

This is true regardless of how directly the objects of a theory are experienced, whether we observe a bird in flight or only its shadow on the ground. Ultimately it is the unique neuronal configurations of an individual brain that determine how what we perceive influences what we believe.

Induction, falsification and probability

Nevertheless, we can ask: are there forms of reasoning about evidence that do not depend on personal judgement?

Induction is the act of generalising from particulars. It interprets a pattern observed in specific data in terms of a law governing a wider scope.

But induction, like any form of reasoning about evidence, demands personal judgement. Patterns observed in data invariably admit multiple alternative generalisations. And which generalisation is appropriate, if any, may come down to taste.

Many of the points of contention between Richard Dawkins and the late Stephen Jay Gould can be seen in this light. For example, Gould thought Dawkins too eager to attribute evolved traits to the action of natural selection in cases where contingent survival provides an alternative, and (to Gould) preferable, explanation.

One important statement of the problem of induction was made by 18th-century philosopher David Hume. He noted the only available justification for inductive reasoning is that it works well in practice. But this itself is an inductive argument, and thus “taking that for granted, which is the very point in question”.

Karl Popper wanted science to be based on the deductive reasoning of falsificationism rather than the inductive reasoning of verificationism. Lucinda Douglas-Menzies/Wikimedia

Hume thought we had to accept this circularity, but philosopher of science Karl Popper rejected induction entirely. Popper argued that evidence can only falsify a theory, never verify it. Scientific theories are thus only ever working hypotheses that have withstood attempts at falsification.

This characterisation of science has not prevailed, mainly because science has not historically proceeded in this manner, nor does it today. Thomas Kuhn observed that:

No process yet disclosed by the historical study of scientific development at all resembles the methodological stereotype of falsification by direct comparison with nature.

Scientists cherish their theories, having invested so much of their personal resources in them. So when a seemingly contradictory datum emerges, they are inclined to make minor adjustments rather than reject core tenets. As physicist Max Planck observed (before Popper or Kuhn):

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.

Falsification also ignores the relationship between science and engineering. Technology stakes human lives and personal resources on the reliability of scientific theories. We could not do this without strong belief in their adequacy. Engineers thus demand more from science than a working hypothesis.

Some philosophers of science look to probabilistic reasoning to place science above personal judgement. Prominent proponents of such approaches include Elliott Sober and Edwin Thompson Jaynes. By these accounts one can compare competing scientific theories in terms of the likelihood of the observed evidence under each.

However, probabilistic reasoning does not remove personal judgement from science. Rather, it channels it into the design of models. A model, in this sense, is a mathematical representation of the probabilistic relationships between theory and evidence.

As someone who designs such models for a living, I can tell you the process relies heavily on personal judgement. There are no universally applicable procedures for model construction. Consequently, the point at issue in scientific controversies may be precisely how to model the relationship between theory and evidence.

What is (and isn’t) special about science

Does acknowledging the role played by personal judgement erode our confidence in science as a special means of acquiring knowledge? It does, if what we thought was special about science is that it removes the personal element from the search for truth.

As scientists – or as defenders of science – we must guard against the desire to dominate our interlocutors by ascribing to science a higher authority than it plausibly possesses. Many of us have experienced the frustration of seeing science ignored or distorted in arguments about climate change or vaccinations to name just two.

But we do science no favours by misrepresenting its claim to authority; instead we create a monster. A misplaced faith in science can and has been used as a political weapon to manipulate populations and impose ideologies.

Instead we need to explain science in terms that non-scientists can understand, so that factors that have influenced our judgements can influence theirs.

It is appropriate that non-scientists subordinate their judgements to those of experts, but this deference must be earned. The reputation of an individual scientist for integrity and quality of research is thus crucial in public discussions of science.

I believe science is special, and deserves the role of arbiter that society accords it. But its specialness does not derive from a unique mode of reasoning.

Rather it is the minutiae of science that make it special: the collection of lab protocols, recording practices, publication and peer review standards and many others. These have evolved over centuries under constant pressure to produce useful and reliable knowledge.

Thus, by a kind of natural selection, science has acquired a remarkable capacity to reveal truth. Science continues to evolve, so that what is special about science today might not be what will be special about it tomorrow.

So how much faith should you put in the conclusions of scientists? Judge for yourself!

Jonathan Keith, Associate Professor, School of Mathematical Sciences, Monash University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Filed under Reblogs

Monty Hall solution

Players initially have a 2/3 chance of picking a goat. Those who swap always end up with the opposite of their original choice, so those who swap have a 2/3 chance of winning the car, while players who stick have only a 1/3 chance. The solution rests on the premise that the host knows which door hides the car and always reveals a goat. If the player has selected the door hiding the car (a 1-in-3 chance), both remaining doors hide goats and the host may open either at random; in this case switching loses. If the player has instead selected a door hiding a goat (a 2-in-3 chance), the host’s choice is forced, as he can only reveal the remaining goat; in this case switching is certain to win.
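
A quick simulation sketch confirms these proportions:

```python
# Monty Hall sketch: simulate many games; the host always opens a goat door.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the player's pick nor the car.
        host = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != host)
        wins += (pick == car)
    return wins / trials

print(f"Stick:  {play(switch=False):.2f}")  # ~0.33
print(f"Switch: {play(switch=True):.2f}")   # ~0.67
```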


Filed under Puzzles

The Monty Hall Puzzle

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?


Filed under Puzzles

What is logic?

The word ‘logic’ is not easy to define, because it has slightly different meanings in various applications, ranging from philosophy to mathematics to computer science. In philosophy, logic determines the principles of correct reasoning. It is a systematic method of evaluating arguments and reasoning, aiming to distinguish good (valid and sound) reasoning from bad (invalid or unsound) reasoning.

The essential difference between informal logic and formal logic is that informal logic uses natural language, whereas formal logic (also known as symbolic logic) is more complex and uses mathematical symbols to overcome the frequent ambiguity or imprecision of natural language. Reason is the application of logic to actual premises, with a view to drawing valid or sound conclusions. Logic is the set of rules to be followed independently of any particular premises; in other words, it works with abstract premises designated by letters such as P and Q.

So what is an argument? In everyday life, we use the word ‘argument’ to mean a verbal dispute or disagreement (which is actually a clash between two or more arguments put forward by different people). This is not the way the word is usually used in philosophical logic, where an argument is a set of statements a person makes in the attempt to convince someone of something, or to present reasons for accepting a given conclusion. In this sense, an argument consists of statements or propositions, called its premises, from which a conclusion is claimed to follow (in the case of a deductive argument) or be inferred (in the case of an inductive argument). Deductive conclusions usually begin with a word like ‘therefore’, ‘thus’, ‘so’ or ‘it follows that’.

A good argument is one that has two virtues: good form and all true premises. Arguments can be deductive, inductive or abductive. A deductive argument with valid form and true premises is said to be sound. An inductive argument based on strong evidence is said to be cogent. The term ‘good argument’ covers all three of these types of arguments.

Deductive arguments

A valid argument is a deductive argument where the conclusion necessarily follows from the premises, because of the logical structure of the argument. That is, if the premises are true, then the conclusion must also be true. Conversely, an invalid argument is one where the conclusion does not logically follow from the premises. However, the validity or invalidity of arguments must be clearly distinguished from the truth or falsity of its premises. It is possible for the conclusion of a valid argument to be true, even though one or more of its premises are false. For example, consider the following argument:

Premise 1: Napoleon was German
Premise 2: All Germans are Europeans
Conclusion: Therefore, Napoleon was European

The conclusion that Napoleon was European is true, even though Premise 1 is false. This argument is valid because of its logical structure, not because its premises and conclusion are all true (which they are not). Even if the premises and conclusion were all true, it wouldn’t necessarily mean that the argument was valid. If an argument has true premises and its form is valid, then its conclusion must be true.

Deductive logic is essentially about consistency. The rules of logic are not arbitrary in the way that the rules of a game of chess are. They exist to avoid internal contradictions within an argument. For example, if we have an argument with the following premises:

Premise 1: Napoleon was either German or French
Premise 2: Napoleon was not German

The conclusion cannot logically be “Therefore, Napoleon was German” because that would directly contradict Premise 2. So the logical conclusion can only be: “Therefore, Napoleon was French”, not because we know that it happens to be true, but because it is the only possible conclusion if both the premises are true. This is admittedly a simple and self-evident example, but similar reasoning applies to more complex arguments where the rules of logic are not so self-evident. In summary, the rules of logic exist because breaking the rules would entail internal contradictions within the argument.
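
The Napoleon example above is an instance of the valid form known as disjunctive syllogism, and its validity can be checked mechanically: in every assignment of truth values in which both premises are true, the conclusion is true as well. Here is a minimal sketch:

```python
# Validity-check sketch for disjunctive syllogism:
#   Premise 1: G or F    Premise 2: not G    Conclusion: therefore F
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """Valid iff no truth assignment makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=num_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

premises = [
    lambda g, f: g or f,  # Napoleon was either German or French
    lambda g, f: not g,   # Napoleon was not German
]
conclusion = lambda g, f: f  # Therefore, Napoleon was French

print(is_valid(premises, conclusion, num_vars=2))  # True: the form is valid
```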

Inductive arguments

An inductive argument is one where the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a sound deductive argument is supposed to be certain, the conclusion of a cogent inductive argument is supposed to be probable, based upon the evidence given. Here’s a classic example of an inductive argument:

  1. Premise: Every time you’ve eaten peanuts, you’ve had an allergic reaction.
  2. Conclusion: You are likely allergic to peanuts.

In this example, the specific observations are instances of eating peanuts and having allergic reactions. From these observations, you generalize that you are probably allergic to peanuts. The conclusion is not certain, but if the premise is true (i.e., every time you’ve eaten peanuts, you’ve had an allergic reaction), then the conclusion is likely to be true as well.

Whilst an inductive argument based on strong evidence can be cogent, there is some dispute amongst philosophers as to the reliability of induction as a scientific method. For example, according to the problem of induction, no number of confirming observations can verify a universal generalisation such as ‘all swans are white’, yet a single observation of a black swan is logically sufficient to falsify it.

Abductive arguments

Abduction may be described as an “inference to the best explanation”, and whilst not as reliable as deduction or induction, it can still be a useful form of reasoning. For example, a typical abductive reasoning process used by doctors in diagnosis might be: “this set of symptoms could be caused by illnesses X, Y or Z. If I ask some more questions or conduct some tests I can rule out X and Y, so it must be Z.”

Incidentally, the doctor is the one who is doing the abduction here, not the patient. By accepting the doctor’s diagnosis, the patient is using inductive reasoning that the doctor has a sufficiently high probability of being right that it is rational to accept the diagnosis. This is actually an acceptable form of the Argument from Authority (only the deductive form is fallacious).

References:

Hodges, W. (1977) Logic – an introduction to elementary logic (2nd ed. 2001) Penguin, London.
Lemmon, E.J. (1987) Beginning Logic. Hackett Publishing Company, Indianapolis.

If you find the information on this blog useful, you might like to consider supporting us.



Filed under Essays and talks

Argument from authority

by Tim Harding B.Sc., B.A.

The Argument from Authority is often misunderstood to be a fallacy in all cases, when this is not necessarily so. The argument becomes a fallacy only when used deductively, or where there is insufficient inductive strength to support the conclusion of the argument.

The most general form of the deductive fallacy is:

Premise 1: Source A says that statement p is true.
Premise 2: Source A is authoritative.
Conclusion: Therefore, statement p is true.

Even when the source is authoritative, this argument is still deductively invalid because the premises can be true, and the conclusion false (i.e. an authoritative claim can turn out to be false).[1] This fallacy is known as ‘Appeal to Authority’.

The fallacy is compounded when the source is not an authority on the relevant subject matter. This is known as Argument from false or misleading authority.

Although reliable authorities are correct in judgments related to their area of expertise more often than laypersons, they can occasionally come to the wrong judgments through error, bias or dishonesty. Thus, the argument from authority is at best a probabilistic inductive argument rather than a deductive  argument for establishing facts with certainty. Nevertheless, the probability sometimes can be very high – enough to qualify as a convincing cogent argument. For example, astrophysicists tell us that black holes exist. The rest of us are in no position to either verify or refute this claim. It is rational to accept the claim as being true, unless and until the claim is shown to be false by future astrophysicists (the first of whom would probably win a Nobel Prize for doing so). An alternative explanation that astrophysicists are engaged in a worldwide conspiracy to deceive us all would be implausible and irrational.

“…if an overwhelming majority of experts say something is true, then any sensible non-expert should assume that they are probably right.” [2]

Thus there is no fallacy entailed in arguing that the advice of an expert in his or her field should be accepted as true, at least for the time being, unless and until it is effectively refuted. A fallacy only arises when it is claimed or implied that the expert is infallible and that therefore his or her advice must be true as a deductive argument, rather than as a matter of probability.  Criticisms of cogent arguments from authority[3] can actually be a rejection of expertise, which is a fallacy of its own.

The Argument from Authority is sometimes mistakenly confused with the citation of references, when done to provide published evidence in support of the point the advocate is trying to make. In these cases, the advocate is not just appealing to the authority of the author, but providing the source of evidence so that readers can check the evidence themselves if they wish. Such citations of evidence are not only acceptable reasoning, but are necessary to avoid plagiarism.

Expert opinion can also constitute evidence and is often accepted as such by the courts.  For example, if you describe your symptoms to your doctor and he or she provides an opinion that you have a certain illness, that opinion is evidence that you have that illness. It is not necessary for your doctor to cite references when giving you his or her expert opinion, let alone convince you with a cogent argument. In some cases, expert opinion can carry sufficient inductive strength on its own.


[1] If the premises can be true, but the conclusion can be false, then the argument is logically invalid.

[2] Lynas, Mark (29 April 2013) Time to call out the anti-GMO conspiracy theory.

[3] An inductive argument based on strong evidence is said to be cogent.


Filed under Logical fallacies