Monthly Archives: June 2013

Argument from authority

by Tim Harding B.Sc., B.A.

The Argument from Authority is often misunderstood to be a fallacy in all cases, when this is not necessarily so. The argument becomes a fallacy only when used deductively, or where there is insufficient inductive strength to support the conclusion of the argument.

The most general form of the deductive fallacy is:

Premise 1: Source A says that statement p is true.
Premise 2: Source A is authoritative.
Conclusion: Therefore, statement p is true.

Even when the source is authoritative, this argument is still deductively invalid because the premises can be true, and the conclusion false (i.e. an authoritative claim can turn out to be false).[1] This fallacy is known as ‘Appeal to Authority’.
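The invalidity of this form can even be checked mechanically. The short sketch below (my own propositional encoding, not part of the original argument) brute-forces every truth assignment and finds the one where both premises are true but the conclusion is false:

```python
from itertools import product

# Three independent propositions:
#   says_p : Source A says that statement p is true
#   auth   : Source A is authoritative
#   p      : statement p is true
# An argument form is deductively valid only if NO assignment makes
# every premise true while the conclusion is false.
counterexamples = [
    (says_p, auth, p)
    for says_p, auth, p in product([True, False], repeat=3)
    if says_p and auth and not p   # premises true, conclusion false
]

print(counterexamples)   # non-empty, so the form is deductively invalid
```

The single counterexample found (both premises true, conclusion false) is exactly the situation of an authoritative claim that turns out to be wrong.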

The fallacy is compounded when the source is not an authority on the relevant subject matter. This is known as the Argument from False or Misleading Authority.

Although reliable authorities are correct in judgments related to their area of expertise more often than laypersons, they can occasionally come to the wrong judgments through error, bias or dishonesty. Thus, the argument from authority is at best a probabilistic inductive argument rather than a deductive  argument for establishing facts with certainty. Nevertheless, the probability sometimes can be very high – enough to qualify as a convincing cogent argument. For example, astrophysicists tell us that black holes exist. The rest of us are in no position to either verify or refute this claim. It is rational to accept the claim as being true, unless and until the claim is shown to be false by future astrophysicists (the first of whom would probably win a Nobel Prize for doing so). An alternative explanation that astrophysicists are engaged in a worldwide conspiracy to deceive us all would be implausible and irrational.

“…if an overwhelming majority of experts say something is true, then any sensible non-expert should assume that they are probably right.” [2]

Thus there is no fallacy entailed in arguing that the advice of an expert in his or her field should be accepted as true, at least for the time being, unless and until it is effectively refuted. A fallacy only arises when it is claimed or implied that the expert is infallible and that therefore his or her advice must be true as a deductive argument, rather than as a matter of probability.  Criticisms of cogent arguments from authority[3] can actually be a rejection of expertise, which is a fallacy of its own.

The Argument from Authority is sometimes mistakenly confused with the citation of references, when done to provide published evidence in support of the point the advocate is trying to make. In these cases, the advocate is not just appealing to the authority of the author, but providing the source of evidence so that readers can check the evidence themselves if they wish. Such citations of evidence are not only acceptable reasoning, but are necessary to avoid plagiarism.

Expert opinion can also constitute evidence and is often accepted as such by the courts.  For example, if you describe your symptoms to your doctor and he or she provides an opinion that you have a certain illness, that opinion is evidence that you have that illness. It is not necessary for your doctor to cite references when giving you his or her expert opinion, let alone convince you with a cogent argument. In some cases, expert opinion can carry sufficient inductive strength on its own.


[1] If the premises can be true, but the conclusion can be false, then the argument is logically invalid.

[2] Lynas, Mark (29 April 2013) Time to call out the anti-GMO conspiracy theory.

[3] An inductive argument based on strong evidence is said to be cogent.


Filed under Logical fallacies

Three more fallacies of relevance

by Tim Harding

Fallacies are patterns of reasoning that are logically incorrect.  They apply to arguments rather than isolated statements or propositions.  Arguments are logically valid or invalid, whereas propositions are true or false. The fallacies of relevance, for example, clearly fail to provide adequate reason for believing the truth of their conclusions.  Although they are often used in attempts to persuade people by non-logical means, only the unwary, the predisposed, and the gullible are apt to be fooled by their illegitimate appeals.  Many of them were identified by medieval and renaissance logicians, some of whose Latin names for them have passed into common use.  It’s worthwhile to consider the structure, offer an example, and point out the invalidity of each of them in turn. [i]

I have previously talked here at the Mordi Skeptics about three fallacies of relevance:

  • Appeal to Popularity (argumentum ad populum)
  • Appeal to Authority (argumentum ad verecundiam)
  • Argument from Personal Abuse (argumentum ad hominem)

I would now like to briefly talk about three more fallacies of relevance:

Appeal to Pity (argumentum ad misericordiam)

An Appeal to Pity tries to win acceptance by pointing out the unfortunate consequences that will otherwise fall upon the speaker and others, for whom we would then feel sorry.

P1: I am a single parent, solely responsible for the financial support of my children.

P2: If you give me this traffic ticket, I will lose my licence and be unable to drive to work.

P3: If I cannot work, my children and I will become homeless and may starve to death.

C: Therefore, you should not give me this traffic ticket.

The conclusion may be false (that is, perhaps I should be given the ticket) even if the premises are all true, so the argument is fallacious.

Appeal to Force
(argumentum ad baculum)

Turning this on its head, in the Appeal to Force, someone in a position of power threatens to bring down unfortunate consequences upon anyone who dares to disagree with a proffered proposition.  Although it is rarely developed so explicitly, a fallacy of this type might propose:

P1: If you do not agree with the Government’s position, we will cut funding for your scientific research.

P2: The Government’s position is that cattle grazing in alpine national parks reduces bushfire risk.

C: Therefore, cattle grazing in alpine national parks reduces bushfire risk.

Again, it should be clear that even if all of the premises were true, the conclusion could nevertheless be false.  Since that is possible, arguments of this form are plainly invalid.  While this might be an effective way to get you to agree (or at least to pretend to agree) with the Government’s position,[ii] it offers no grounds for believing it to be true.

Guilt by association (a type of ad hominem argument)

Guilt by Association relies upon emotively charged language to arouse feelings and prejudices that may lead an audience to accept its conclusion:

P1: As all clear-thinking residents of our fine state have already realized, opposition to cattle grazing in alpine national parks is nothing but the dangerous deluded dingo of greenie anti-farming propaganda cleverly disguised in the harmless sheep’s clothing of science.

C: Therefore, banning cattle grazing in alpine national parks is bad public policy.

The problem here is that although the flowery language of the premise might arouse strong feelings in many members of its intended audience, the widespread occurrence of those feelings has nothing to do with the truth of the conclusion.


[i] Most of the information on these pages has come from http://www.philosophypages.com/lg/e06a.htm, although I have devised some of my own examples of more local relevance.

[ii] Of course, public servants are required to implement lawful Government policy whether they agree with it or not; but scientists are supposed to provide independent scientific advice.


Filed under Logical fallacies

Reasoning

Rationality may be defined as the quality of being consistent with or using reason, which is further defined as the mental ability to draw inferences or conclusions from premises (the ‘if – then’ connection). The application of reason is known as reasoning, the main categories of which are deductive and inductive reasoning. A deductive argument with valid form and true premises is said to be sound. An inductive argument based on strong evidence is said to be cogent. It is rational to accept the conclusions of arguments that are sound or cogent, unless and until they are effectively refuted.

A fallacy is an error of reasoning resulting in a misconception or false conclusion. A fallacious argument can be deductively invalid, or one that has insufficient inductive strength. A deductively invalid argument is one where the conclusion does not logically follow from the premises. That is, the conclusion can be false even if the premises are true. An example of an inductively weak argument is concluding that smoking does not cause cancer from the anecdotal evidence of a single healthy smoker.

By accident or design, fallacies may exploit emotional triggers in the listener (e.g. appeal to emotion), or take advantage of social relationships between people (e.g. argument from authority). By definition, a belief arising from a logical fallacy is contrary to reason and is therefore irrational, even though a small number of such beliefs might possibly be true by coincidence.


Filed under Uncategorized

DNA and GM foods

by Tim Harding B.Sc., B.A.

(An edited version of this essay was published in The Skeptic magazine, September 2014, Vol 34 No 3, under the title ‘The Good Oil’.  The essay is based on a talk presented to the Mordi Skeptics, Tuesday 5 April 2011; and later to the Sydney Skepticamp, 30th April 2011.)

In May 2014, a farmer accused of ‘contaminating’ his neighbour’s land with genetically modified canola won a highly publicised civil case in the Western Australian Supreme Court (Marsh v. Baxter, 2014).  Although the case was about a claim of conflicting land use rather than food safety, it fired up the long-running community debate about genetically modified foods in Australia.  It also exposed a lot of misinformation and misunderstanding about DNA and genetic modification.

This essay discusses the nature and structure of DNA, together with the history of its discovery. It makes the point that artificial selection has been occurring since the dawn of civilisation, and that the outcome of different methods of artificial selection is the same – modification of the genetic code by human intervention. Not only is there no evidence that genetically modified foods are unsafe to eat, but there is no mechanism by which they could be unsafe.

Brief history of DNA research

The rules of genetics had been largely understood since Gregor Mendel’s ‘wrinkled pea’ experiments in the 1860s, but the mechanisms of inheritance remained a mystery.  Charles Darwin knew in the 1850s that there must be such a mechanism (but his later speculations about it – called pangenesis – were wrong).[1]  The units of inheritance were called genes, but it was not understood where genes were located in the body or what they physically consisted of.

After the rediscovery of Mendel’s work in the 1890s, scientists tried to determine which molecules in the cell were responsible for inheritance.  In 1910, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies.  In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.  It was soon discovered that chromosomes consisted of DNA and proteins, but DNA was not identified as the gene carrier until 1944.

Watson and Crick’s breakthrough discovery of the chemical structure of DNA in 1953 finally revealed how genetic instructions are stored inside organisms and passed from generation to generation.[2]  In the following years, scientists tried to understand how DNA controls the process of protein production. It was discovered that the cell uses DNA as a template to create matching messenger RNA (a single-strand molecule with nucleotides, very similar to DNA). The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide and amino acid sequences is known as the genetic code.

DNA structure

The molecular basis for genes is deoxyribonucleic acid (DNA), a double-stranded molecule coiled into the shape of a double helix.  DNA is composed of twin backbones of sugars and phosphate groups joined by ester bonds.  These backbones hold together a chain of nucleotides, of which there are four types: adenine (A), cytosine (C), guanine (G), and thymine (T).  Genetic information in all living things exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain.[3]  Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G, using weak hydrogen bonds.  Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its anti-parallel partner strand.  This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by enzymes splitting the strands (like a zipper) and using each strand as a template for synthesis of a new partner strand.
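The pairing rule can be illustrated with a minimal sketch (the function name is mine, for illustration only): either strand determines its antiparallel partner.

```python
# Complementary base pairing: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def partner_strand(strand: str) -> str:
    """Return the antiparallel partner of a DNA strand.

    The partner runs in the opposite direction, so the sequence is
    read in reverse and each base is swapped for its pair.
    """
    return "".join(PAIR[base] for base in reversed(strand))

print(partner_strand("ATCG"))                  # CGAT
print(partner_strand(partner_strand("ATCG")))  # ATCG – the round trip recovers the original
```

This redundancy is why each strand ‘effectively contains all necessary information’, and it is what replication exploits: split the strands and use each as a template.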

Chemical structure of DNA (Source: Wikimedia Commons)


The sequence of these nucleotides A, C, G and T is a code, similar to the binary digital code used in computing.  When you consider that all the instructions for everything that computers can produce – text, calculations, music and images – are stored as a binary sequence of ones and zeros, it is not hard to conceive how the instructions for making and operating living organisms can be stored as a four-letter code.

Genes are arranged linearly along very long chains of DNA sequence, which comprise the chromosomes.  In bacteria, each cell usually contains a single circular chromosome, while eukaryotic organisms (including plants and animals) have their DNA arranged in multiple linear chromosomes.  These DNA strands are often extremely long; the largest human chromosome (No. 1), for example, is about 247 million base pairs in length. The full set of hereditary material in an organism (usually the combined DNA sequences of all 46 chromosomes in humans) is called the genome (approx. 3 billion base pairs in humans).
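A back-of-the-envelope calculation (my own arithmetic, using the figures above) gives a feel for these sizes: with four possible letters, each base position carries two bits of information.

```python
# Information content of the human genome, using the figures above.
genome_bases = 3_000_000_000        # approx. 3 billion base pairs
bits = genome_bases * 2             # 4 letters -> 2 bits per base
megabytes = bits / 8 / 1_000_000    # 8 bits per byte

print(f"{megabytes:.0f} MB")        # about 750 MB, uncompressed
```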

The genetic code is the set of rules by which information encoded in genetic material (DNA or mRNA sequences) is translated into proteins (amino acid sequences) by living cells.  The code defines a mapping between tri-nucleotide sequences, called codons, and amino acids. With some exceptions, a triplet codon in a nucleic acid sequence specifies a single amino acid.[4]
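The codon-to-amino-acid mapping can be sketched as a simple lookup table. The fragment below is illustrative only – it includes just a handful of the 64 codons of the standard genetic code (written here in DNA letters, with T in place of mRNA’s U):

```python
# A few entries from the standard genetic code (DNA alphabet).
CODON_TABLE = {
    "ATG": "Met",                # methionine – also the usual start codon
    "TGG": "Trp",                # tryptophan
    "GCT": "Ala", "GCC": "Ala",  # several codons can map to the same amino acid
    "TAA": None,                 # a stop codon ends translation
}

def translate(sequence: str) -> list:
    """Read a coding sequence three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(sequence) - 2, 3):
        amino_acid = CODON_TABLE.get(sequence[i:i + 3])
        if amino_acid is None:   # stop codon (or a codon missing from this toy table)
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGCTTGGTAA"))   # ['Met', 'Ala', 'Trp']
```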

Translation of genetic code into proteins (Source: Wikimedia Commons)


However, the human genome contains only ca. 23,000 protein-coding genes, far fewer than had been expected before its sequencing.  In fact, only about 1.5% of the genome codes for proteins, while the rest consists of non-coding RNA genes, regulatory sequences, introns, and noncoding DNA (once known as ‘junk DNA’). Genetic recombination during sexual reproduction involves the breaking and rejoining of two chromosomes (one from each parent) to produce two new rearranged chromosomes, thus providing genetic diversity and increasing the efficiency of natural selection.

Genetic modification

One of the biggest public misunderstandings is about the very term ‘genetic modification’.  Genes can be modified in two main ways: by natural selection and by artificial selection.

Artificial selection can occur in four main ways:

  • traditional plant and animal breeding (long-term);
  • mutagenesis (random exposure to chemical or radiological mutagens);
  • RNA interference (switching genes on or off);
  • genetic engineering (short-term) – the targeted insertion or deletion of genes in the laboratory (which cannot easily be achieved by other methods).

The end result of the different methods of artificial selection is the same – modification of the genetic code by human intervention.  All DNA, whether modified naturally or artificially, is biochemically and nutritionally the same.  The only difference is in the genetic code, that is, the sequence of the bases G, C, T and A.  In other words, DNA is DNA – there is no such thing as ‘natural DNA’ or ‘artificial DNA’.

These are all ways of artificially modifying genes, yet for some illogical reason plant and animal breeding is not usually referred to in the media as genetic modification – possibly because it began long before genetics was understood (c. 11,000 years ago).  However, to avoid any confusion, in this essay I will refer to foods genetically modified by genetic engineering as genetically engineered foods (GE foods).

As a result of artificial selection, all farmed foods we eat today have been genetically modified by humans via plant and animal breeding.  This includes all meats except for wild game and kangaroo; and most farmed fish such as salmon.

Similarly, all plants we eat (vegetables, fruits, nuts, herbs and spices) have been genetically modified by humans.  Many varieties bear little resemblance to their original wild forms.  A wheat grain is a genetically modified grass seed.  Can anybody think of a plant food that has not been modified by humans?  (The only ones any of us at the meetup could think of were bush tucker foods, which are rarely found in Australian shops or supermarkets.  Seaweed was later suggested at the Sydney Skepticamp.)

Whenever we eat and digest proteinaceous food, the DNA inside the food gets broken down into single nucleotides before absorption in the small intestine, destroying the genetic code anyway.  It is therefore logically impossible for any changes in the genetic code, whether artificial or natural, to make DNA unsafe to eat.

Not only is it logically impossible, but there is no empirical evidence that genetically modified foods are harmful.  The technology to produce genetically engineered (GE) plants is now over 30 years old, yet in all that time there has not been a single instance of anybody becoming ill, let alone dying, as a result of eating GE foods.

In a recent major review of the scientific literature covering the last 10 years of the world’s GE crop safety research, the reviewers conclude that ‘the scientific research conducted so far has not detected any significant hazard directly connected with the use of GE crops’.  The authors further believe that ‘genetic engineering and GE crops should be considered important options in the efforts towards sustainable agricultural production’ (Nicolia et al, 2013).

GE foods

GE foods can be produced by either cisgenesis (within the same species) or transgenesis (between different species).[5]  However, the point needs to be made that the human genome naturally contains genes resulting from billions of years of evolution – even genes from our fishy ancestors.  A substantial fraction of human genes seems to be shared among most known vertebrates.  For example, the published chimpanzee genome differs from the human genome by 1.23% in direct sequence comparisons.  We also share many genes with plants.

“The real question here is not whether there is a GMO tomato with a fish gene, but who cares? It’s not as if eating fish genes is inherently risky—people eat actual fish. Furthermore, by some estimates people share about 70 percent of their genes with fish. You have fish genes, and every plant you have ever eaten has fish genes; get over it.”[6]

GE foods were first put on the market in the early 1990s.  Typically, genetically modified foods are transgenic plant products: soybean, corn, canola, and cotton seed oil.  

GE genes may be present in whole foods, such as wheat, soybeans, maize and tomatoes.  The first commercially grown genetically modified whole food crop was a tomato (called FlavrSavr), which was modified to ripen without softening, in 1994.  These GE whole foods are not presently available in Australia.  GE food ingredients are, however, present in some Australian foods.  For example, soy flour in bread may have come from imported GE soybeans.[7]

In addition, various genetically engineered micro-organisms are routinely used as sources of enzymes for the manufacture of a variety of processed foods. These include alpha-amylase from bacteria, which converts starch to simple sugars, chymosin from bacteria or fungi that clots milk protein for cheese making, and pectinesterase from fungi which improves fruit juice clarity.


Genetic engineering can also be used to increase the amount of particular nutrients (like vitamins) in food crops. Research into this technique, sometimes called ‘nutritional enhancement’, is now at an advanced stage. For example, golden rice is a white rice crop that has had the vitamin A gene from a daffodil plant inserted. This changes the colour and the vitamin level for countries where vitamin A deficiency is prevalent. Researchers are especially looking at major health problems like iron deficiency. The removal of the proteins that cause allergies from nuts (such as peanuts and Brazil nuts) is also being researched.[8]

Animal products have also been developed, although as of July 2010 none were on the market.  However, human insulin has been produced using GE E. coli bacteria since 1978.  In 2006 a pig was controversially engineered to produce omega-3 fatty acids through the expression of a roundworm gene.  Researchers have also developed a genetically modified breed of pigs that are able to absorb plant phosphorus more efficiently; as a consequence, the phosphorus content of their manure is reduced by as much as 60%.

Once again, there is no evidence of any person being harmed by eating genetically engineered foods.  The reasons why genetically engineered whole foods are not yet available in Australia are political or emotional rather than scientific.

Benefits of GE foods

There is a need to produce inexpensive, safe and nutritious foods to help feed the world’s growing population. Genetic engineering may provide:

  • Sturdy plants able to withstand weather extremes (such as drought);
  • Better quality food crops;
  • Higher nutritional yields in crops;
  • Inexpensive and nutritious food, like carrots with more antioxidants;
  • Foods with a greater shelf life, like tomatoes that taste better and last longer;
  • Food with medicinal (nutraceutical) benefits, such as edible vaccines – for example, bananas with bacterial or rotavirus antigens;
  • Crops resistant to disease and insects, and produce requiring less chemical application, such as pesticide- and herbicide-resistant plants: for example, GE canola.[8]

Objections to GE foods

So why is there such significant opposition to GE foods from some vocal lobby groups? Critics have objected to GE foods on several grounds, including:

  • the appeal to nature fallacy (natural products are good and artificial products are bad);
  • alleged but unproven safety issues (there is no evidence of any adverse health effects, including allergies, in the 20 years since GE foods became available);
  • marketing concerns about ‘contamination’ of so-called organic food crops by GMOs (such as in the Marsh v. Baxter case);
  • ecological concerns about the spread of GMOs in the wild, and
  • economic or ideological concerns raised by the fact that these organisms are subject to intellectual property rights usually held by big businesses.

The only one of these objections that may have any scientific legitimacy is the ecological concern about the spread of GMOs in the wild.  However, the use of GE technology is highly regulated by Australian governments and any such ecological concerns are fully taken into account.

Current food regulations in Australia state that a GE food will only be approved for sale if it is safe and is as nutritious as its conventional counterparts.  Food regulatory authorities require that GE foods receive individual pre-market safety assessments prior to use in foods for human consumption.  The principle of ‘substantial equivalence’ is also used.  This means that an existing food is compared with its genetically modified counterpart to find any differences between the existing food and the new product.  An important point to note is that Australia has the most rigorous food safety testing regime in the world, and that GE foods are tested even more rigorously than non-GE foods. Because of this higher level of testing, GE foods are likely to be safer than non-GE foods.

Foods certified as organic or biodynamic should not contain any GE ingredients, according to voluntary organic food industry guidelines.

Here is a list of 114 peer-reviewed articles and meta-reviews, mostly published in moderate to high impact factor journals, that support the safety of GMO crops over a wide range of hypotheses.  The consensus position of the American Association for the Advancement of Science on GM foods is:

“Indeed, the science is quite clear: crop improvement by the modern molecular techniques of biotechnology is safe… The World Health Organization, the American Medical Association, the U.S. National Academy of Sciences, the British Royal Society, and every other respected organization that has examined the evidence has come to the same conclusion: consuming foods containing ingredients derived from GM crops is no riskier than consuming the same foods containing ingredients from crop plants modified by conventional plant improvement techniques.”

References

American Association for the Advancement of Science (2012). Statement by the AAAS Board of Directors on Labeling of Genetically Modified Foods. 20 October 2012.

Better Health Channel (2011) Fact Sheet – Genetically modified foods www.betterhealth.vic.gov.au Melbourne: State of Victoria.

Darwin, Charles (1868) The Variation of Animals and Plants under Domestication (1st ed.), London: John Murray.

Lynas, Mark (29 April 2013) Time to call out the anti-GMO conspiracy theory.  Mark Lynas speech hosted by the International Programs – College of Agriculture and Life Sciences (50th Anniversary Celebration) , and the Atkinson Center for a Sustainable Future, Cornell University.

Marsh v Baxter [2014] WASC 187 (28 May 2014).

Nicolia, A., Manzo, A., Veronesi, F. and Rosellini, D. (2013) ‘An overview of the last 10 years of genetically engineered crop safety research’. Critical Reviews in Biotechnology. Informa Healthcare USA Inc. ISSN: 0738-8552 (print) 1549-7801 (electronic).

Skeptical Raptor’s Blog. What does science say about GMO’s–they’re safe. Updated 19 November 2014.

Novella, Stephen (2014) ‘No Health Risks from GMOs’. The Science of Medicine, Volume 38.4, July/August 2014.

Watson J.D. and Crick F.H.C. (1953) A Structure for Deoxyribose Nucleic Acid. Nature 171 (4356): 737–738.

Other information is from Wikipedia and the author’s knowledge as a former biochemist.  (According to convention, anonymous Wikipedia pages, whilst thought to be mostly factually correct, are not citable as references).


[1] Darwin, 1868.

[2] Watson and Crick, 1953.

[3] Viruses are the only exception to this rule—sometimes viruses use the very similar molecule RNA instead of DNA as their genetic material.

[4] Not all genetic information is stored using the genetic code. All organisms’ DNA contains regulatory sequences, intergenic segments, and chromosomal structural areas that can contribute greatly to phenotype by controlling how the genes are expressed.  Those elements operate under sets of rules that are distinct from the codon-to-amino acid paradigm underlying the genetic code.

[5] For example, the gene from a fish that lives in very cold seas has been inserted into a strawberry, allowing the fruit to be frost-tolerant.  However, this has not as yet been done for currently available commercial food crops.

[6] Novella, 2014.

[7] Better Health Channel, 2011.

[8] Better Health Channel, 2011.



Filed under Essays and talks

What is rationality?

(Paper presented by Tim Harding at Mordi Skeptics meetup, 1 February 2011. An edited version was published in The Skeptic magazine, Vol. 36 No. 4, December 2016)

What do we skeptics mean when we say that a belief is irrational?  How do we define rationality and irrationality?  Are there any objective tests of an irrational belief?

First, some definitions.  Most dictionaries define rationality as the state or quality of being rational.  Not a lot of help.  So what does it mean to be rational? Once again, most dictionaries define rational as being consistent with or based on or using reason,[1] which is further defined as the mental ability to draw inferences or conclusions from assumptions or premises (the ‘if – then’ connection).  The application of reason is known as reasoning; the main categories of which are deductive and inductive reasoning.[2]

Reason is thought by rationalists to be more reliable in determining what is true; in contrast to reliance on other factors such as authority, tradition, instinct, intuition, emotion, mysticism, superstition, faith or arbitrary choice (e.g. flipping a coin).  For example, we rationally determine the balance in our cheque book (between bank statements) by adding up the credits and subtracting the debits and bank fees.  An irrational way of doing it would be to pick a number at random – not very reliable, and any correct answer would be a mere coincidence, rather than the product of reasoning.

The ancient Greeks thought that rationality distinguishes humans from other animals.  ‘Man is a rational animal’ as Aristotle said.[3]  However, this distinction is becoming blurred by recent research indicating that other primate species such as chimpanzees can show a limited use of reason and therefore a degree of rationality.

The word rational can be used in several different contexts: for example, rational behaviour (psychology), rational or optimal decisions (economics), a rational process (science), and rational belief (philosophy).  However, it is not the purpose of this paper to discuss all uses of rationality – only those relevant to our use, that is, skepticism.

I would suggest that the context most relevant to skepticism (which could be described as a form of applied philosophy) is that of rational belief, because we skeptics often criticise the beliefs of paranormalists, quacks, cults and pseudo-sciences on the grounds that they are irrational (which, of course, is the antonym of rational).[4]  However, the scientific context of a ‘rational process’ is also relevant to skepticism, and I will say more about this later.

In my view, the relevance of rational belief to skepticism is that we use it as a filter to determine what we should be skeptical about.  We skeptics are not necessarily skeptical of everything.  We believe what it is rational to believe, and we are skeptical of beliefs that are known to be or appear to be irrational.  That is why I think it is important for skeptics to clarify and understand the nature of rational belief.

Harvard philosophy professor Robert Nozick has proposed two criteria for rational belief:

  1. support by reasons that make the belief credible; and
  2. generation by a process that reliably produces true beliefs.[5]

Two thought experiments

I would now like to try a couple of little thought experiments.

Firstly, imagine if you will a primitive tribe in the remote mountains of New Guinea.  The chief of this tribe needs to predict whether or not it is going to rain tomorrow[6] so he can decide whether the men will go hunting or not.  So he consults the local witch doctor, who according to long tradition slaughters a chicken and examines the configuration of the dead chicken’s entrails.  Using this information, the witch doctor then predicts that it will not rain tomorrow.  Is this a rational belief?

In terms of Nozick’s criteria, we would probably say that this belief is irrational because it is neither supported by reasons that make the belief credible, nor is it generated by a process that reliably produces true beliefs.

But what if this local witch doctor’s predictions, using the chicken entrail process, have always been right?  In that case, it could be argued that the process meets Nozick’s criterion No. 2.  It could also be argued that because the New Guinea tribe have no school education, and believe that rain and the configuration of a chicken’s entrails are caused by the same spirit, the reasons for the witch doctor’s predictions are credible to them.  Does this alter our assessment of the rationality of this belief?  Perhaps it does.

What if exactly the same process is used by a hippie commune in Nimbin, where the hippies have had the benefit of a school education and therefore should be aware that there is no credible causal connection between the incidence of rain and the configuration of a chicken’s entrails?  Do these different circumstances alter our assessment of whether the belief is rational?  Perhaps they do again.

Secondly, until early December 2010, it was believed by the scientific community (and published in reputable peer-reviewed scientific journals) that the element arsenic is toxic to all life on Earth in even very small concentrations.[7]  However, NASA-supported researchers have discovered the first known microorganism on Earth able to thrive and reproduce using arsenic.  The microorganism, which lives in California’s Mono Lake, substitutes arsenic for phosphorus in some of its cellular components.[8]  Prior to this announcement by NASA, was it rational to believe that arsenic is toxic to all life on Earth in even very small concentrations?  In terms of Nozick’s criteria, the answer would be ‘yes’, even though we now know that belief was false.  Was it rational to hold this belief after the NASA announcement?  Given that the NASA scientific announcement is credible and was generated by reliable scientific processes, our answer would be ‘no’.

By these two thought experiments, I have tried to show how a rational process can lead to a belief which may be rational in certain contexts or circumstances and yet turn out to be false.  So truth is not necessarily an adequate test of a rational belief.  In other words, a rational belief is not necessarily true, and an irrational belief is not necessarily false.  On the other hand, a rational belief needs to be reasonable or credible in the circumstances; that is, a rational belief is one that is justified by reason.

Although an irrational belief is not necessarily false, we can say that because an irrational belief is unreliable and more likely to be false than a rational belief, we should therefore be more skeptical about beliefs that are known to be or appear to be irrational than about rational beliefs.

It is believed by some philosophers (notably A.C. Grayling) that a rational belief must be independent of emotions, personal feelings or any kind of instinct.  Any process of evaluation or analysis that may be called rational is expected to be objective, logical and ‘mechanical’.  If these minimum requirements are not satisfied – i.e. if a person has been influenced by personal emotions, feelings, instincts or culturally specific moral codes and norms – then the analysis may be termed irrational, due to the injection of subjective bias.

So let us now look at some other possible objective tests of irrational belief, including logical fallacies, emotional or faith-based rather than evidence-based beliefs, beliefs based on insufficient supporting evidence, beliefs derived from confirmation bias, beliefs incompatible with science and internally incoherent beliefs, and any others we would like to discuss at this meetup.

Logical fallacies

A logical fallacy is faulty reasoning in argumentation that results in a misconception.  A fallacious argument can be deductively invalid, or one that has insufficient inductive strength.  An example is the argument that smoking does not cause cancer, based on the anecdotal evidence of only one healthy smoker.

By accident or design, fallacies may exploit emotional triggers in the listener or interlocutor (e.g. appeal to emotion), or take advantage of social relationships between people (e.g. argument from authority).  By definition, a belief arising from a logical fallacy is contrary to reason and is therefore irrational.

Emotional, instinctive or faith-based rather than evidence-based beliefs

In western literature, reason is often opposed to emotions or instincts — desires, fears, hates, drives, or passions.  Even in everyday speech, westerners tend to say for example that their passions made them behave contrary to reason, or that their reason kept the passions under control, often expressed in colloquial terms as the dilemma between following ‘the head’ (reason) ‘or the heart’ (emotions).

Faith involves a stance toward some claim that is not, at least presently, demonstrable by reason.  Thus faith is a kind of attitude of trust or assent. As such, it is ordinarily understood to involve an act of will or a commitment on the part of the believer.  People do not usually have faith in something they do not want to believe in.  Religious faith involves a belief that makes some kind of either an implicit or explicit reference to a transcendent source.  The basis for a person’s faith usually is understood to come from the authority of revelation.[9]  Faith-based belief without evidence is considered to be a virtue by the religiously devout; but a ‘sin’ by rationalists.

Emotional, instinctive and faith-based beliefs are held on grounds other than evidence or reason, and according to the definitions given in the first part of this paper are irrational.  This is not to say that such beliefs are necessarily wrong, bad or undesirable – simply that they are not derived from reason.

Though theologies and religions typically do not claim to be irrational, there is often a perceived conflict or tension between faith and tradition on the one hand, and reason on the other, as potentially competing sources of wisdom and truth.  Defenders of traditions and faiths typically maintain that there is no real conflict with reason, because reason itself is not enough to explain such things as the origins of the universe, or right and wrong, and so reason can and should be complemented by other sources of knowledge.  The counter-claim is that there are actual conflicts between faith and reason (for instance, the trial of Galileo, creationism vs evolution, stem-cell research, etc.).

Some relatively recent philosophers, most notably the logical positivists, have denied that there is a domain of thought or human existence rightly governed by faith, asserting instead that all meaningful statements and ideas are accessible to thorough rational examination.[10]

Insufficient supporting evidence

Some beliefs are not necessarily based on emotion or faith, and are not entirely devoid of evidence, but there is insufficient evidence to justify the belief.  Beliefs in UFOs, alien abductions and conspiracy theories such as the so-called Moon Landings Hoax fall into this category.

Confirmation bias – cherry-picking the evidence

Confirmation bias is a tendency for people to favour information that confirms their preconceptions or hypotheses regardless of whether the information is true.  As a result, people gather evidence and recall information from memory selectively, and interpret it in a biased way.  The biases appear in particular for emotionally significant issues, for established beliefs and for conspiracy theories.

For example, there is some evidence that in a very small number of cases there are adverse reactions to some vaccines in some patients.  But this argument against vaccination overlooks the overwhelming benefits of vaccination in preventing and in some cases eradicating infectious diseases.  In other words, the anti-vaccination campaigners do not take into account evidence contrary to their fixed beliefs.  Thus the beliefs of anti-vaccination campaigners and some conspiracy theorists are based on faulty reasoning; and are therefore irrational.

Incompatibility with science

It has long been held that rationality requires rigorous rules for deciding whether a proposition should be believed.  Formal logic and mathematics provide the clearest examples of such rules.  Science has also been considered a model of rationality because it proceeds in accordance with scientific methods which provide the rules for gathering evidence and evaluating hypotheses on the basis of this evidence.[11]

One of the main purposes of scientific methods is to eliminate subjective biases and interfering factors in order to test hypotheses.  This is why scientists use techniques such as controls and double blind tests that we often hear about in skeptical discussions.

Where a belief is incompatible with science, either the belief must be false or the science must be wrong – they can’t both be right.  For example, homeopathy is incompatible with the science of chemistry; water-divining is incompatible with the science of physics and astrology is incompatible with the science of astronomy.  On this ground alone, pseudo-sciences like these are irrational.

Internally incoherent beliefs

Coherentism is a theory of epistemic justification.  It implies that for a belief to be justified it must belong to a coherent system of beliefs. For a system of beliefs to be coherent, the beliefs that make up that system must “cohere” with one another.  In other words, some of a person’s justified beliefs are justified because they derive their justification from other beliefs.  For example, take my belief that tomorrow is Wednesday.  That belief can be justified by two other beliefs: my belief that today is Tuesday and my belief that Tuesday is immediately followed by Wednesday.  But, if my belief that tomorrow is Wednesday derives its justification from these other beliefs, then my belief that tomorrow is Wednesday is justified only if these other beliefs are justified.[12]  If today is Monday, then my belief that tomorrow is Wednesday is incoherent and unjustified.
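As a small illustrative sketch of my own (not part of the original essay), the weekday example can be checked mechanically: a belief derived from other beliefs is only as justified as the beliefs it derives from.

```python
# A hypothetical sketch of the coherentist weekday example.
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def tomorrow(today):
    """Derive the belief 'tomorrow is X' from the belief 'today is Y'."""
    return DAYS[(DAYS.index(today) + 1) % 7]

# If the supporting belief 'today is Tuesday' is justified, the derived
# belief 'tomorrow is Wednesday' coheres with it:
print(tomorrow("Tuesday"))  # Wednesday

# But if today is actually Monday, the belief 'tomorrow is Wednesday'
# fails to cohere with the corrected supporting belief:
print(tomorrow("Monday"))   # Tuesday
```

The derivation rule itself (adding one day) is sound; the derived belief stands or falls with the belief it rests on, which is the coherentist point.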

For example, the claim of homeopathy that ‘like cures like’ is incoherent with the practice of diluting substances to the point where there is nothing but water in a homeopathic dose.  Homeopathy makes no sense, or in other words is internally incoherent and therefore irrational.  We can all probably think of other paranormal and pseudo-science beliefs that are internally incoherent and therefore irrational.

Summary

In summary, rationality is the state or quality of being rational, which means being consistent with, based on, or using reason.

Reason is thought by rationalists to be more reliable in determining what is true, in contrast to reliance on factors such as authority, tradition, instinct, intuition, emotion, mysticism, superstition, faith or arbitrary choice.

The word rational can be used in several different contexts, but the context most relevant to skepticism is that of rational belief, because we use it as a filter to determine what we should be skeptical about.  We skeptics are not skeptical of everything.  We believe what it is rational to believe, and we are skeptical of irrational beliefs.

Two criteria have been proposed by Nozick for a rational belief:

  1. support by reasons that make the belief credible; and
  2. generation by a process that reliably produces true beliefs.

A rational belief is not necessarily true, and an irrational belief is not necessarily false.  On the other hand, a rational belief needs to be reasonable or credible in the circumstances; that is, a rational belief is one that is justified by reason.  It needs to pass objective tests of irrationality.

Objective tests of irrational belief include logical fallacies, emotional or faith-based rather than evidence-based beliefs, beliefs based on insufficient supporting evidence, beliefs derived from confirmation bias, beliefs incompatible with science, internally incoherent beliefs and possibly other tests.

Although an irrational belief is not necessarily false, we can say that because an irrational belief is unreliable and more likely to be false than a rational belief, we should therefore be more skeptical about beliefs that are known to be or appear to be irrational than about rational beliefs.

References:

Fieser, J. and Dowden, B. eds (2011) Internet Encyclopedia of Philosophy <http://www.iep.utm.edu/>

Honderich, T. ed (2005) The Oxford Companion to Philosophy, 2nd edition. Oxford University Press, Oxford.

Nozick, R. (1993) The Nature of Rationality, Princeton University Press, Princeton.


[1] Meaning reason in the philosophical sense as defined here, rather than in the colloquial sense of a reason meaning any explanation for an action or event, whether or not the explanation is based on reason in the philosophical sense.

[2] Deductive vs inductive reasoning is a possible topic for a future meetup?

[3] Nozick, 1993 p.xi

[4] The term ‘non-rational’ means neither rational nor irrational, and applies to matters unrelated to truth or falsity such as taste or aesthetics.

[5] Nozick, 1993 p.xiv

[6] For the purpose of this thought experiment, we assume that it does not rain every day and there is no predictable pattern of rainfall in the area in question.

[7] Most chemicals can be toxic in sufficiently large concentrations.

[9] Fieser and Dowden, 2011.

[10] Fieser and Dowden, 2011.

[11] Honderich, 2005, p. 786.

[12] Fieser and Dowden, 2011.

If you find the information on this blog useful, you might like to make a donation.

Filed under Essays and talks

Argument from Popularity

by Tim Harding

The informal fallacy known as argumentum ad populum means ‘argument from popularity’ or ‘appeal to the people’.  This fallacy is essentially the same as ad numerum, appeal to the gallery, appeal to the masses, common practice, past practice, traditional knowledge, peer pressure, conventional wisdom, the bandwagon fallacy and, lastly, truth by consensus, of which I shall say more later.

The Argument from Popularity fallacy is committed when an advocate asserts that, because the great majority of people agree with his or her position on an issue, he or she must be right.[1]  In other words, if you suggest too strongly that someone’s claim or argument is correct simply because it’s what most people believe, then you’ve committed the fallacy of appeal to the people.  Similarly, if you suggest too strongly that someone’s claim or argument is mistaken simply because it’s not what most people believe, then you’ve also committed the fallacy.

Agreement with popular opinion is not necessarily a reliable sign of truth, and deviation from popular opinion is not necessarily a reliable sign of error, but if you assume it is and do so with enthusiasm, then you’re guilty of committing this fallacy.  The ‘too strongly’ mentioned above is important in the description of the fallacy, because what almost everyone believes is, for that reason, often likely to be true, all things considered.  However, the fallacy occurs when this degree of support is used as justification for the truth of the belief.[2]

It often happens that a true proposition is believed to be true by most people, but this is not the reason it is true.  In other words, correlation does not imply causation, and this confusion is the source of the fallacy, in my view.  For example, nearly every sane person believes that the proposition 1+1=2 is true, but that is not why it is true.  We can try doing empirical experiments by counting objects, and although this exercise is highly convincing, it is still only inductive reasoning rather than proof.  Put simply, the proposition 1+1=2 is true because it has been mathematically proven to be true.  But my purpose here is not to convince you that 1+1=2.  My real point is that the proportion of people who believe that 1+1=2 is true is irrelevant to the truth or falsity of this proposition.
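As an aside, the claim that 1+1=2 rests on proof rather than on counting can be illustrated in a proof assistant.  The following is a sketch of my own in Lean (not part of the original essay), where the equation holds by reflexivity because both sides compute to the same natural number:

```lean
-- 1 + 1 and 2 denote the same natural number by computation,
-- so the proof is simply reflexivity.
example : 1 + 1 = 2 := rfl
```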

Let us now consider a belief where its truth is less obvious.  Before the work of Copernicus and Galileo in the 16th and 17th centuries, most people (including the Roman Catholic Church) believed that the Sun revolved around the Earth, rather than vice versa as we now know through science.  So the popular belief in that case was false.

This fallacy is also common in marketing e.g. “Brand X vacuum cleaners are the country’s most popular brand; so buy Brand X vacuum cleaners”.  How often have we heard a salesperson try to argue that because a certain product is very popular this year, we should buy it?  Not because it is a good quality product representing value for money, but simply because it is popular?  Weren’t those ‘power balance wrist bands’ also popular before they were exposed as a sham by the ACCC?[3]

For another example, a politician might say ‘Nine out of ten of my constituents oppose the bill, therefore it is bad legislation.’  Now, this might be a political reason for voting against the bill, but it is not a valid argument that the bill is bad legislation.  To validly argue that the bill is bad legislation, the politician should adduce rational arguments against the bill on its merits or lack thereof, rather than merely claim that the bill is politically unpopular.

In philosophy, truth by consensus is the process of taking statements to be true simply because people generally agree upon them.  Philosopher Nigel Warburton argues that the truth by consensus process is not a reliable way of discovering truth.  That there is general agreement upon something does not make it actually true.  There are several reasons for this.

One reason Warburton discusses is that people are prone to wishful thinking.  People can believe an assertion and espouse it as truth in the face of overwhelming evidence and facts to the contrary, simply because they wish that things were so.  Another is that people are gullible, and easily misled.

Another unreliable method of determining truth is by determining the majority opinion of a popular vote.  This is unreliable because on many questions the majority of people are ill-informed.  Warburton gives astrology as an example of this.  He states that while it may be the case that the majority of the people of the world believe that people’s destinies are wholly determined by astrological mechanisms, given that most of that majority have only sketchy and superficial knowledge of the stars in the first place, their views cannot be held to be a significant factor in determining the truth of astrology.  The fact that something ‘is generally agreed’ or that ‘most people believe’ something should be viewed critically, asking the question why that factor is considered to matter at all in an argument over truth.  He states that the simple fact that a majority believes something to be true is unsatisfactory justification for believing it to be true.[4]

In contrast, rational arguments that the claims of astrology are false include firstly, because they are incompatible with science; secondly, because there is no credible causal mechanism by which they could possibly be true; thirdly, because there is no empirical evidence that they are true despite objective testing; and fourthly, because the star signs used by astrologers are all out of kilter with the times of the year and have been so for the last two or three thousand years.

Another example is the claims of so-called ‘alternative medicines’ where judging by their high sales figures relative to prescription medicines, it is quite possible that a majority of the population believe these claims to be true.  Without going into details here, we skeptics have good reasons for believing that many of these claims are false.

Warburton makes a distinction between the fallacy of truth by consensus and the process of democracy in decision making.  Descriptive statements of the way things are, are either true or false – and verifiable true statements are called facts.  Normative statements deal with the way things ought to be, and are neither true nor false.  In a political context, statements of the way things ought to be are known as policies.  Political policies may be described as good or bad, but not true or false.  Democracy is preferable to other political processes not because it results in truth, but because it provides for majority rule, equal participation by multiple special-interest groups, and the avoidance of tyranny.

In summary, the Argument from Popularity fallacy confuses correlation with causality; and thus popularity with truth.  Just because most people believe that a statement is true, it does not logically follow that the statement is in fact true.  With the exception of the demonstrably false claims of astrology and so-called ‘alternative medicines’, popular statements are often more likely to be true than false (‘great minds think alike’); but they are not necessarily true and can sometimes be false.  They are certainly not true merely because they are popular.  This fallacy is purely concerned with the logical validity of arguments and the justification for the truth of propositions.  The identification of this fallacy is not an argument against democracy or whether popular political policies should or should not be pursued.

References:

Clark J. and Clark T., (2005) Humbug! The skeptic’s field guide to spotting fallacies in thinking Nifty Books, Capalaba.


[1] Clark and Clark, 2005.

[2] Fieser and Dowden, 2011.

[4] Warburton, 2000.


Filed under Logical fallacies

Two twin fallacies

by Tim Harding

Seasoned skeptics may be familiar with two well-known logical fallacies:

  1. The ‘Argument from Personal Abuse’ or the ad hominem argument (playing the man instead of the ball); and
  2. The deductive form of the ‘Argument from Authority’ or ‘Appeal to Authority’.  (The inductive form is not necessarily a fallacy.)

When you think about it, these fallacies make the same error of logic – they both draw conclusions from the character or motives of the arguer rather than the premises and form of the argument.

In informal logic, these are known as fallacies of defective induction, where it is argued that a statement is true or false because the statement is made by a person or source that is commonly regarded as authoritative or not authoritative.  The most general structure of this argument is:

   Premise 1: Source A says that statement p is true.
   Premise 2: Source A is authoritative.
   Conclusion: Therefore, statement p is true.

Conversely:

  Premise 1: Source B says that statement p is true.
  Premise 2: Source B is a ‘bloody idiot’.
  Conclusion: Therefore, statement p is false.

We skeptics are often skeptical of conspiracy theories, such as the so-called Moon Landings Hoax.  Conspiracy theories like these are often a special case of the ad hominem argument, for example:

   Premise 1: NASA claims to have landed men on the Moon;
   Premise 2: Governments can’t be trusted;
   Premise 3: NASA is a government agency;
   Conclusion: Therefore, NASA’s claim is false.

These arguments are fallacious because the truth or falsity of the claim is not necessarily related to the attributes or motives of the claimant, and because the premises can be true, and the conclusion false (an authoritative claim can turn out to be false).  If the premises can be true, but the conclusion can be false, then the argument is logically invalid. (A logically valid argument is one where if the premises are true, then the conclusion must be true by virtue of the argument’s logical structure).
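This invalidity test can be sketched by brute force (an illustration of my own, not from the post): treat ‘Source A says p’, ‘Source A is authoritative’ and p itself as independent propositions, and search for a truth assignment that makes every premise true while the conclusion is false.

```python
from itertools import product

def is_valid(premises, conclusion, n_atoms):
    """An argument form is deductively valid iff no truth assignment
    makes every premise true while the conclusion is false."""
    for atoms in product([False, True], repeat=n_atoms):
        if all(prem(*atoms) for prem in premises) and not conclusion(*atoms):
            return False  # counterexample found
    return True

# Atoms: s = "Source A says p", auth = "Source A is authoritative",
# p = the statement itself.
appeal_to_authority = is_valid(
    premises=[lambda s, auth, p: s, lambda s, auth, p: auth],
    conclusion=lambda s, auth, p: p,
    n_atoms=3,
)
print(appeal_to_authority)  # False: s and auth can be true while p is false

# By contrast, modus ponens (p, p implies q, therefore q) survives the same test:
modus_ponens = is_valid(
    premises=[lambda p, q: p, lambda p, q: (not p) or q],
    conclusion=lambda p, q: q,
    n_atoms=2,
)
print(modus_ponens)  # True
```

The checker simply automates the definition of validity given above: it looks for the “premises true, conclusion false” case, and finds one for the appeal to authority.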

An exception can be made for an ad hominem argument if the attack goes to the credibility of the arguer. For instance, the argument may depend on its presenter’s claim that he’s an expert. (That is, the ad hominem argument is undermining a legitimate Argument From Authority). Trial judges allow this category of refutation in appropriate cases.


Filed under Logical fallacies

Faulty generalisation

 by Tim Harding

The tabloid media often commit a common fallacy known as Faulty Generalisation.  Other terms for this fallacy include false generalisation, hasty generalisation, over-generalisation, unscientific conclusion and even superstition.  The fallacy occurs when a general rule is derived from a particular case or anecdote.


For simplicity, this fallacy may be divided into two sub-fallacies – false generalisation and over-generalisation.

In a false generalisation, the premises of an argument are weakly related to its conclusion and do not sufficiently justify it.  For example, a person might argue: “I don’t believe that smoking causes cancer, because my uncle Bert smoked like a chimney and yet he lived until aged 93”.  Conclusions are drawn about an entire population from too small a sample of the population – in this case a sample size of one.  Contrast this with the enormous sample size of tens of thousands of smokers (plus control samples) that was used in the scientific epidemiological studies that conclusively established the causal link between smoking and cancer.  So in this case, the person committing this fallacy is giving more weight to a personal anecdote than to the findings of science.

The extreme feminist slogan ‘All men are rapists’ is clearly a false generalisation.  Other claims such as ‘All men are responsible for the attitudes that lead to rape’ are a little more subtle, but are still false generalisations. Rape is a crime, like murder and bank robbing, and statistics show that only a small percentage of the population are criminals. Just because a person happens to be born of one gender, it does not make that person responsible for criminal attitudes, let alone crimes committed by other persons of the same gender.

In an over-generalisation, conclusions are drawn from an apparent trend to the entire population.  For example, if there are a couple of tragic road crashes on a weekend in which several people are killed, a senior police officer might say at a press conference something along the lines that drivers are becoming more careless.  A sample of one weekend’s road crashes is no evidence of any such trend – in fact, the long term trend is that the annual road toll is decreasing.

In technical logic terms, these are fallacies of defective induction, where the argument typically takes the following form:

   Premise: The proportion Q of the sample has attribute A.

   Conclusion: Therefore, the proportion Q of the population has attribute A.

Statistical methods are used to calculate the necessary sample size before conclusions can validly be drawn about a population.  For example, a random sample in excess of 1000 people is used in opinion polling; and even then there is a stated error margin in the order of plus or minus two per cent.
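The arithmetic behind those polling figures can be sketched with the standard worst-case formula for a proportion at 95% confidence (a worked example of my own; the post does not give the formula): margin ≈ 1.96·√(p(1−p)/n), which is largest at p = 0.5.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion
    estimated from a random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample(margin, p=0.5, z=1.96):
    """Sample size needed to achieve a given margin of error."""
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

print(round(margin_of_error(1000), 3))  # roughly 0.03 for n = 1000
print(required_sample(0.02))            # about 2401 respondents for a 2% margin
```

On these assumptions a sample of about 1,000 gives a margin of roughly ±3%, and tightening that to ±2% requires a sample of around 2,400, which is why pollsters quote error margins of this order.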


Filed under Logical fallacies

Begging the question

by Tim Harding

Setting aside for a moment whatever personal views we might have about the morality of abortion, consider the following argument:

   Premise 1: Murder is morally wrong;

   Conclusion: Therefore, abortion is morally wrong.

Is this argument logically valid?  Probably not, but let’s analyse the form of the argument to make sure:

    Premise 1: A is B;

    Conclusion: Therefore, C is B.

An argument of this form is logically invalid, i.e. the conclusion can be false even when the premises are true.  Or in other words, the conclusion does not necessarily follow from the premises.

OK, how about this argument:

   Premise 1: Abortion is murder;

   Premise 2: Murder is morally wrong;

   Conclusion: Therefore, abortion is morally wrong.

Is this argument valid?  The form of this argument is:

   Premise 1: A is B;

   Premise 2: B is C;

   Conclusion: Therefore, A is C.

In this second case, the conclusion necessarily follows from the premises.  That is, if the premises are true (which they may or may not be), then the conclusion must be true by virtue of the logical structure of the argument.  So this form of argument is logically valid.  However, if one or more of the premises is false, then the conclusion may also be false (even though the argument is valid).  This example also illustrates the important difference between validity and truth.
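One way to see why this second form is valid, and the first is not, is to model ‘A is B’ as class inclusion (A ⊆ B) and brute-force every choice of classes over a small universe, looking for a counterexample.  This is a sketch of my own, not from the original post:

```python
from itertools import combinations

def subsets(universe):
    """All subsets of a small finite universe."""
    items = list(universe)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

U = subsets({0, 1, 2})

# Valid form: A is B; B is C; therefore A is C.
# No counterexample exists, by transitivity of subset inclusion.
second_form = all(a <= c for a in U for b in U for c in U
                  if a <= b and b <= c)
print(second_form)  # True

# Invalid first form: A is B; therefore C is B.
# Counterexamples exist (e.g. C not contained in B at all).
first_form = all(c <= b for a in U for b in U for c in U if a <= b)
print(first_form)   # False
```

The exhaustive search confirms the analysis above: the second form has no ‘premises true, conclusion false’ case, while the first form does.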

The first argument was missing a premise, which when included, turned the argument from an invalid one to a valid one.  This is an instance of the formal fallacy known as the Fallacy of the Unstated Major Premise or ‘Begging the Question’.

The Latin name for it is petitio principii, meaning ‘a request for the beginning or premise’.  In other words, this fallacy is committed when one makes an argument assuming a premise that is not explicitly stated.  It is not to be confused with the meaning of ‘raising the question’, which is sometimes mistakenly referred to as ‘begging the question’ in the popular media.


Filed under Logical fallacies

Rationality and truth

by Tim Harding

Rationality is the state or quality of being rational, which means being consistent with, based on or using reason.  Reason is thought by rationalists and skeptics to be more reliable in determining what is true, in contrast to reliance on factors such as emotion, intuition, instinct, authority, tradition, mysticism, superstition, faith or arbitrary choice.

Harvard philosophy professor Robert Nozick has proposed two criteria for rational belief:

  1. support by reasons that make the belief credible (e.g. scientific evidence); and
  2. generation by a process that reliably produces true beliefs (e.g. the scientific method).[1]

For instance, until early December 2010, science told us that the element arsenic is toxic to all life on Earth, in even very small concentrations.  But then NASA announced that scientists had discovered a microorganism in California’s Mono Lake able to thrive and reproduce using arsenic instead of phosphorus in its biochemistry.[2] In terms of Nozick’s criteria, it was rational until December 2010 to believe that arsenic is toxic to all life on Earth, even though we now know that the belief was false.  Was it rational to hold this belief after the NASA announcement?  Using the same criteria, our answer would be ‘no’.

A statement is true when it represents how things are; true statements are ones that correctly describe reality; true statements correspond to the way the world really is. But as we have seen, a rational belief is not necessarily true.  Conversely, an irrational belief is not necessarily false.  For example, a prediction made by a psychic can turn out to be true by coincidence.  On the other hand, a rational belief needs to be reasonable or credible in the circumstances; that is, a rational belief is one that is justified by reason.

What we can say is that because an irrational belief is unreliable and more likely to be false than a rational belief, we should be more skeptical about beliefs that are known to be or appear to be irrational than about rational beliefs.

References

[1] Nozick, R. (1993) The Nature of Rationality, Princeton University Press, Princeton.

[2] http://www.nasa.gov/topics/universe/features/astrobiology_toxic_chemical.html


Filed under Essays and talks