Tag Archives: reasoning

What exactly is the scientific method and why do so many people get it wrong?

The Conversation

Peter Ellerton, The University of Queensland

Claims that “the science isn’t settled” with regard to climate change are symptomatic of a large body of ignorance about how science works.

So what is the scientific method, and why do so many people, sometimes including those trained in science, get it so wrong?

The first thing to understand is that there is no one method in science, no one way of doing things. This is intimately connected with how we reason in general.

Science and reasoning

Humans have two primary modes of reasoning: deduction and induction. When we reason deductively, we tease out the implications of information already available to us.

For example, if I tell you that Will is between the ages of Cate and Abby, and that Abby is older than Cate, you can deduce that Will must be older than Cate.

That answer was embedded in the problem; you just had to untangle it from what you already knew. This is how Sudoku puzzles work. Deduction is also the reasoning we use in mathematics.
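The age deduction above can be checked mechanically by enumerating every arrangement consistent with the stated facts (a sketch in Python; a valid deduction is one that holds in all consistent arrangements):

```python
from itertools import permutations

# Facts: Will's age lies between Cate's and Abby's, and Abby is older than Cate.
# A deductive conclusion must hold in EVERY arrangement consistent with the facts.
consistent = []
for order in permutations(["Will", "Cate", "Abby"]):
    age = {name: rank for rank, name in enumerate(order)}  # youngest to oldest
    will_between = min(age["Cate"], age["Abby"]) < age["Will"] < max(age["Cate"], age["Abby"])
    if will_between and age["Abby"] > age["Cate"]:
        consistent.append(age)

print(len(consistent))                                 # 1: only one arrangement fits
print(all(a["Will"] > a["Cate"] for a in consistent))  # True: Will must be older than Cate
```

The conclusion was "embedded in the problem" in exactly this sense: of the six possible age orderings, only one survives the given facts, and Will is older than Cate in it.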

Inductive reasoning goes beyond the information contained in what we already know and can extend our knowledge into new areas. We induce using generalisations and analogies.

Generalisations include observing regularities in nature and imagining they are everywhere uniform – this is, in part, how we create the so-called laws of nature.

Generalisations also create classes of things, such as “mammals” or “electrons”. We also generalise to define aspects of human behaviour, including psychological tendencies and economic trends.

Analogies make claims of similarities between two things, and extend this to make new knowledge.

For example, if I find a fossilised skull of an extinct animal that has sharp teeth, I might wonder what it ate. I look for animals alive today that have sharp teeth and notice they are carnivores.

Reasoning by analogy, I conclude that the animal was also a carnivore.

Using induction and inference to the best explanation consistent with the evidence, science teaches us more about the world than we could simply deduce.

Saber tooth cat skull: just look at the fangs. Flickr/Badlands National Park

Science and uncertainty

Most of our theories or models are inductive analogies with the world, or parts of it.

If inputs to my particular theory produce outputs that match those of the real world, I consider it a good analogy, and therefore a good theory. If they don’t match, then I must reject it, or refine or redesign the theory to make it more analogous.

If I get many results of the same kind over time and space, I might generalise to a conclusion. But no amount of success can prove me right. Each confirming instance only increases my confidence in my idea. As Albert Einstein famously said:

No amount of experimentation can ever prove me right; a single experiment can prove me wrong.

Einstein’s general and special theories of relativity (which are models and therefore analogies of how he thought the universe works) have been supported by experimental evidence many times under many conditions.

We have great confidence in the theories as good descriptions of reality. But they cannot be proved correct, because proof is a creature that belongs to deduction.

The hypothetico-deductive method

Science also works deductively through the hypothetico-deductive method.

It goes like this. I have a hypothesis or model that predicts that X will occur under certain experimental conditions. Experimentally, X does not occur under those conditions. I can deduce, therefore, that the theory is flawed (assuming, of course, we trust the experimental conditions that produced not-X).

Under these conditions, I have proved that my hypothesis or model is incorrect (or at least incomplete). I reasoned deductively to do so.

But if X does occur, that does not mean I am correct; it just means that the experiment did not show my idea to be false. I now have increased confidence that I am correct, but I can’t be sure.

If one day experimental evidence beyond doubt were to go against Einstein’s predictions, we could deductively prove, through the hypothetico-deductive method, that his theories are incorrect or incomplete. But no number of confirming instances can prove he is right.
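The asymmetry between refutation and confirmation can be sketched in a few lines of code (a toy illustration; the ‘all swans are white’ hypothesis is a stock example, not from the article):

```python
# A toy illustration of the hypothetico-deductive asymmetry: one counterexample
# refutes a hypothesis deductively, while confirmations only add confidence.
def assess(hypothesis, observations):
    """Return 'refuted' on any counterexample, else only 'not yet refuted'."""
    for obs in observations:
        if not hypothesis(obs):
            return "refuted"              # deduction: one failure disproves the claim
    return "not yet refuted"              # induction: no run of successes proves it

def all_swans_white(swan):
    return swan == "white"

print(assess(all_swans_white, ["white"] * 1000))              # not yet refuted
print(assess(all_swans_white, ["white"] * 1000 + ["black"]))  # refuted
```

Note that the function can never return "proved": a thousand white swans leave the hypothesis merely unrefuted, while a single black swan settles the matter deductively.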

That an idea can be tested by experiment, that there can be experimental outcomes (in principle) that show the idea is incorrect, is what makes it a scientific one, at least according to the philosopher of science Karl Popper.

As an example of an untestable, and hence unscientific position, take that held by Australian climate denialist and One Nation Senator Malcolm Roberts. Roberts maintains there is no empirical evidence of human-induced climate change.

When presented with authoritative evidence during a recent episode of the ABC’s Q&A television debating show, he claimed that the evidence was corrupted.

Professor Brian Cox explains climate science to senator Malcolm Roberts.

Yet his claim that human-induced climate change is not occurring cannot be put to the test as he would not accept any data showing him wrong. He is therefore not acting scientifically. He is indulging in pseudoscience.

Settled does not mean proved

One of the great errors in the public understanding of science is to equate settled with proved. While Einstein’s theories are “settled”, they are not proved. But to plan for them not to work would be utter folly.

As the philosopher John Dewey pointed out in his book Logic: The Theory of Inquiry:

In scientific inquiry, the criterion of what is taken to be settled, or to be knowledge, is [of the science] being so settled that it is available as a resource in further inquiry; not being settled in such a way as not to be subject to revision in further inquiry.

Those who demand the science be “settled” before we take action are seeking deductive certainty where we are working inductively. And there are other sources of confusion.

One is that simple statements about cause and effect are rare since nature is complex. For example, a theory might predict that X will cause Y, but that Y will be mitigated by the presence of Z and not occur at all if Q is above a critical level. To reduce this to the simple statement “X causes Y” is naive.

Another is that even though some broad ideas may be settled, the details remain a source of lively debate. For example, that evolution has occurred is certainly settled by any rational account. But some details of how natural selection operates are still being fleshed out.

To confuse the details of natural selection with the fact of evolution is like quibbling over the dates and exact temperatures produced by climate modelling and research when it is very clear that the planet is warming in general.

When our theories are successful at predicting outcomes, and form a web of higher level theories that are themselves successful, we have a strong case for grounding our actions in them.

The mark of intelligence is to progress in an uncertain world and the science of climate change, of human health and of the ecology of our planet has given us orders of magnitude more confidence than we need to act with certitude.

Demanding deductive certainty before committing to action does not make us strong, it paralyses us.

Peter Ellerton, Lecturer in Critical Thinking, The University of Queensland

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.
 

Leave a comment

Filed under Reblogs

Isaac Asimov on evidence


Filed under Quotations

Thomas Paine on useless arguments


Filed under Quotations

No, you’re not entitled to your opinion

The Conversation

By Patrick Stokes, Deakin University

Every year, I try to do at least two things with my students at least once. First, I make a point of addressing them as “philosophers” – a bit cheesy, but hopefully it encourages active learning.

Secondly, I say something like this: “I’m sure you’ve heard the expression ‘everyone is entitled to their opinion.’ Perhaps you’ve even said it yourself, maybe to head off an argument or bring one to a close. Well, as soon as you walk into this room, it’s no longer true. You are not entitled to your opinion. You are only entitled to what you can argue for.”

A bit harsh? Perhaps, but philosophy teachers owe it to our students to teach them how to construct and defend an argument – and to recognize when a belief has become indefensible.

The problem with “I’m entitled to my opinion” is that, all too often, it’s used to shelter beliefs that should have been abandoned. It becomes shorthand for “I can say or think whatever I like” – and by extension, continuing to argue is somehow disrespectful. And this attitude feeds, I suggest, into the false equivalence between experts and non-experts that is an increasingly pernicious feature of our public discourse.

Firstly, what’s an opinion?

Plato distinguished between opinion or common belief (doxa) and certain knowledge, and that’s still a workable distinction today: unlike “1+1=2” or “there are no square circles,” an opinion has a degree of subjectivity and uncertainty to it. But “opinion” ranges from tastes or preferences, through views about questions that concern most people such as prudence or politics, to views grounded in technical expertise, such as legal or scientific opinions.

You can’t really argue about the first kind of opinion. I’d be silly to insist that you’re wrong to think strawberry ice cream is better than chocolate. The problem is that sometimes we implicitly seem to take opinions of the second and even the third sort to be unarguable in the way questions of taste are. Perhaps that’s one reason (no doubt there are others) why enthusiastic amateurs think they’re entitled to disagree with climate scientists and immunologists and have their views “respected.”

Meryl Dorey is the leader of the Australian Vaccination Network, which despite the name is vehemently anti-vaccine. Ms. Dorey has no medical qualifications, but argues that if Bob Brown is allowed to comment on nuclear power despite not being a scientist, she should be allowed to comment on vaccines. But no-one assumes Dr. Brown is an authority on the physics of nuclear fission; his job is to comment on the policy responses to the science, not the science itself.

So what does it mean to be “entitled” to an opinion?

If “Everyone’s entitled to their opinion” just means no-one has the right to stop people thinking and saying whatever they want, then the statement is true, but fairly trivial. No one can stop you saying that vaccines cause autism, no matter how many times that claim has been disproven.

But if ‘entitled to an opinion’ means ‘entitled to have your views treated as serious candidates for the truth’ then it’s pretty clearly false. And this too is a distinction that tends to get blurred.

On Monday, the ABC’s Mediawatch program took WIN-TV Wollongong to task for running a story on a measles outbreak which included comment from – you guessed it – Meryl Dorey. In a response to a viewer complaint, WIN said that the story was “accurate, fair and balanced and presented the views of the medical practitioners and of the choice groups.” But this implies an equal right to be heard on a matter in which only one of the two parties has the relevant expertise. Again, if this was about policy responses to science, this would be reasonable. But the so-called “debate” here is about the science itself, and the “choice groups” simply don’t have a claim on air time if that’s where the disagreement is supposed to lie.[1]

Mediawatch host Jonathan Holmes was considerably more blunt: “there’s evidence, and there’s bulldust,” and it’s no part of a reporter’s job to give bulldust equal time with serious expertise.

The response from anti-vaccination voices was predictable. On the Mediawatch site, Ms. Dorey accused the ABC of “openly calling for censorship of a scientific debate.” This response confuses not having your views taken seriously with not being allowed to hold or express those views at all – or to borrow a phrase from Andrew Brown, it “confuses losing an argument with losing the right to argue.” Again, two senses of “entitlement” to an opinion are being conflated here.

So next time you hear someone declare they’re entitled to their opinion, ask them why they think that. Chances are, if nothing else, you’ll end up having a more enjoyable conversation that way.

Read more from Patrick Stokes: The ethics of bravery

Patrick Stokes does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.

Reblogger’s note: 

[1] This is a fallacy known as false balance.

2 Comments

Filed under Reblogs

The Red Herring Fallacy

The idiom ‘red herring’ is used to refer to something that misleads or distracts from the relevant or important issue.  The expression is mainly used to assert that an argument is not relevant to the issue being discussed.

A red herring fallacy is an error in logic in which a proposition is, or is intended to be, misleading so as to support irrelevant or false inferences. It covers any argument in which a spurious point substitutes for a real one, or in which the subject of the discussion is implicitly replaced.  In this way, a red herring is as much a debating tactic as it is a logical fallacy.  It is a fallacy of distraction, committed when an arguer attempts to divert attention from the original issue by introducing another topic.  Such arguments have the following form:

Topic A is under discussion.

Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).

Topic A is abandoned.

This sort of reasoning is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim.

For instance, ‘I’m entitled to my opinion’ or ‘I have a right to my opinion’ is a common declaration in rhetoric or debate that can be made at some point in a discussion. Whether one has a particular entitlement or right is irrelevant to whether one’s opinion is true or false. To assert the existence of the right is a failure to assert any justification for the opinion.

As an informal fallacy, the red herring falls into a broad class of relevance fallacies. Unlike the strawman fallacy, which is premised on a distortion of the other party’s position, the red herring is a seemingly plausible, though ultimately irrelevant, diversionary tactic.  According to the Oxford English Dictionary, a red herring may be intentional or unintentional – it does not necessarily mean a conscious intent to mislead.

Source: Wikimedia Commons

Conventional wisdom has long supposed the origin of the idiom ‘red herring’ to be the use of a kipper (a strong-smelling smoked fish) to train hounds to follow a scent, or to divert them from the correct route when hunting. However, modern linguistic research suggests that the term was probably invented in 1807 by the English polemicist William Cobbett, referring to one occasion on which he had supposedly used a kipper to divert hounds from chasing a hare, and that it was never an actual practice of hunters.  The phrase was later borrowed to provide a formal name for the logical fallacy and associated literary device.

Although Cobbett most famously mentioned it, he was not the first to consider the use of red herring for scenting hounds; an earlier reference occurs in the pamphlet ‘Nashe’s Lenten Stuffe’, published in 1599 by the Elizabethan writer Thomas Nashe, in which he says ‘Next, to draw on hounds to a scent, to a red herring skin there is nothing comparable’.

If you find the information on this blog useful, you might like to consider supporting us.

Make a Donation Button


Filed under Logical fallacies

The Gricean Maxims

by Tim Harding

There are certain social conventions and assumptions that are normally made by people engaged in meaningful conversations.   Vocabulary and the rules of grammar combine with knowledge of the situational context to fill in what’s missing and resolve ambiguities.  For example, when we ask at the dinner table whether somebody can pass the salt, we are not literally enquiring as to their physical ability to lift and move the salt container.[1][2]

Listeners and speakers need to cooperate with each other and to mutually accept one another to be understood in a particular way.   In sociolinguistics, this is known as the  Cooperative Principle.  As phrased by British philosopher Paul Grice, who introduced it, the principle states,

‘Make your contribution such as it is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.’[3]

Though phrased as a prescriptive command, the principle is intended as a description of how people normally behave in conversation, to ensure that what they say in a conversation furthers the purpose of that conversation.  The principle describes the assumptions listeners normally make about the way cooperative speakers will talk.

Thus the cooperative principle works both ways: speakers (generally) observe the cooperative principle, and listeners (generally) assume that speakers are observing it.  This allows for the possibility of implicatures, which are meanings that are not explicitly conveyed in what is said, but that can nonetheless be inferred.  For example, if Alice points out that Bill is not present, and Carol replies that Bill has a cold, then there is an implicature that the cold is the reason, or at least a possible reason, for Bill’s absence.  This is because Carol’s comment is not cooperative — does not contribute to the conversation — unless her point is that Bill’s cold is or might be the reason for his absence (see the Maxim of Relevance below).  If Bill’s cold had nothing to do with his absence, then Carol’s comment would be irrelevant, misleading and thus uncooperative to the conversation.


The cooperative principle can be divided into four maxims, called the Gricean Maxims, describing specific rational subordinate principles observed by people who adhere to the overarching cooperative principle.  Grice proposed four conversational maxims that arise from the pragmatics of natural language.  The Gricean Maxims are a way to explain the link between utterances and what is understood from them.

Maxim of Quality

  1. Do not say what you believe to be false.
  2. Do not say that for which you lack adequate evidence.

Maxim of Quantity

  1. Make your contribution as informative as is required (for the current purposes of the exchange).
  2. Do not make your contribution more informative than is required.

Maxim of Relation

  1. Be relevant.

Maxim of Manner

  1. Avoid obscurity of expression.
  2. Avoid ambiguity.
  3. Be brief (avoid unnecessary prolixity).
  4. Be orderly.

Without cooperation, human interaction would be far more difficult and counterproductive. The Cooperative Principle and the Gricean Maxims therefore apply not only to conversation but to verbal interactions in general. For example, it would not make sense to reply to a question about the weather with an answer about groceries, because that would violate the Maxim of Relevance. Likewise, responding to a simple yes/no question with a long monologue would violate the Maxim of Quantity.

However, it is possible to flout a maxim, intentionally or unconsciously, and thereby convey a meaning different from what is literally said. Speakers often manipulate such flouting to produce a pragmatic effect, as with sarcasm or irony, or to convey a meaning through what is left unsaid in the situational context. For example, suppose a student named Luisa Casati has asked her tutor Jeremy Hirst to write a letter of recommendation.  The letter reads as follows:

‘Dear Colleague,

Ms. Luisa Casati has asked me to write a letter on her behalf. Let me say that Ms. Casati is unfailingly polite, is neatly dressed at all times, and is always on time for her classes.

Sincerely yours,

Jeremy Hirst’

Jeremy has violated the Maxim of Quantity by providing insufficient information as to Luisa’s suitability for further study or employment.  He has also violated the Maxim of Relevance by discussing positive personal qualities that are not centrally relevant to her abilities as a student or employee.

Jeremy might have deliberately violated these two maxims in an attempt to be truthful whilst not hurting Luisa’s feelings (so as not to violate the maxims of quality or manner).  He may thus be conveying a subtle negative message to the reader by the nature of what he has left out of the text rather than what he has included.

Assuming that Jeremy’s letter is rational and purposeful, in my view it does not disprove the maxims in question.  Indeed, by deliberately violating these maxims, the letter may well be conveying a subtle negative meaning that could not be conveyed if the maxims did not exist.  That is, if the Gricean Maxims did not exist, and the letter were read literally and simply, it might convey only a positive message that Jeremy does not really intend.

Speakers who deliberately flout the maxims usually intend for their listener to understand their underlying implicature. Conversationalists can assume that when speakers intentionally flout a maxim, they still do so with the aim of expressing some thought.  Thus, the Gricean Maxims serve a purpose both when they are followed and when they are flouted.

References and notes

[1] Fromkin, V., Rodman, R., Hyams, N., Collins, P., and Amberber, M. (2009) An Introduction to Language (6th edition). South Melbourne: Cengage Learning. pp. 196–7.

[2] Indeed, it is thought that one of the main limitations to artificial intelligence is that machines are likely to interpret language too literally, unless they have been programmed with all the knowledge of situational context that humans accumulate over a lifetime.

[3] Grice, Paul (1975). ‘Logic and conversation’. In Cole, P. & Morgan, J. (eds.), Syntax and Semantics, Vol. 3: Speech Acts. New York: Academic Press. pp. 41–58.


Filed under Essays and talks

Boy/girl hair solution

They both lied.

The child with the black hair is the girl, and the child with the white hair is the boy.

(If only one had lied, they would both be boys or both be girls.)
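The solution can be checked by brute force, assuming the standard form of the puzzle (an assumption, since the puzzle statement is not reproduced here): the black-haired child claimed ‘I am a boy’, the white-haired child claimed ‘I am a girl’, and the pair is known to be one boy and one girl:

```python
from itertools import product

# Brute force over the possibilities, assuming the standard form of the puzzle
# (an assumption, since the puzzle statement is not reproduced here): the
# black-haired child claimed "I am a boy", the white-haired child claimed
# "I am a girl", and the pair is known to be one boy and one girl.
solutions = []
for black, white in product(["boy", "girl"], repeat=2):
    if {black, white} != {"boy", "girl"}:
        continue                         # must be one boy and one girl
    black_lied = black != "boy"          # lied about the claim "I am a boy"
    white_lied = white != "girl"         # lied about the claim "I am a girl"
    solutions.append((black, white, black_lied, white_lied))

# Of the two one-of-each assignments, only one has both children lying.
both_lied = [(b, w) for b, w, bl, wl in solutions if bl and wl]
print(both_lied)   # [('girl', 'boy')]: black-haired girl, white-haired boy
```

The only other one-of-each assignment has both children telling the truth, which is why the constraint that at least one lied forces both to have lied.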


Filed under Puzzles

The Paradox of Thrift

The paradox of thrift (or paradox of saving) is a paradox of economics generally attributed to John Maynard Keynes, although it had been stated as early as 1714 in The Fable of the Bees, and similar sentiments date to antiquity.

Keynes argued that consumer spending contributes to the collective good, because one person’s spending is another person’s income.  Thus, when individuals save too much instead of spending, they can cause collective harm because businesses do not earn as much and have to lay off employees who are then unable to save.  The paradox is that total savings may fall even when individual savings attempt to rise.  In this way, individual savings rather than spending can worsen a recession, and therefore be collectively harmful to the economy.

Consider the following example:

Table: a worked example in which one consumer saves an extra $100.

In the above example, one consumer increased his savings by $100, but this caused no net increase in total savings.  The increased saving reduced income for other economic participants, forcing them to cut their own saving. In the end, no new saving was generated, while $200 of income was lost.
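The arithmetic can be reproduced with a simple round-by-round sketch. The marginal propensity to save of 0.5 below is an assumption chosen to match the $100 saved / $200 lost figures; the original table’s exact entries are not reproduced here:

```python
# Round-by-round sketch of the paradox of thrift, assuming each person saves
# half of any change in income (marginal propensity to save = 0.5, chosen to
# match the figures in the example). One consumer cuts spending by $100.
mps = 0.5
initial_extra_saving = 100.0

income_lost = 0.0
saving_offset = 0.0            # saving that others forgo as their incomes fall
spending_cut = initial_extra_saving
while spending_cut > 0.01:     # iterate until knock-on effects are negligible
    income_lost += spending_cut           # one person's spending is another's income
    saving_offset += spending_cut * mps   # the person who lost income saves less...
    spending_cut *= 1 - mps               # ...and spends less, hitting the next person

print(round(income_lost))                           # 200: total income lost
print(round(initial_extra_saving - saving_offset))  # 0: no net new saving
```

The geometric series of knock-on spending cuts is what makes the paradox work: the $100 of new saving is exactly cancelled by the saving others are forced to give up, while twice that amount of income disappears.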

This paradox is related to the fallacy of composition, which falsely concludes that what is true of the parts must be true of the whole.  It also resembles a prisoner’s dilemma, because saving is beneficial to each individual but deleterious to the general population.

The paradox of thrift is a central component of Keynesian economics, and has formed part of mainstream economics since the late 1940s, though it is disputed on a number of grounds by non-Keynesian economists such as Friedrich Hayek.  One of the main arguments against the paradox of thrift is that when people increase savings in a bank, the bank has more money to lend, which will generally decrease interest rates and thus spur lending and spending.


Filed under Paradoxes

Buridan’s Ass

Buridan’s Ass is the name given to an apparent paradox related to the free will paradox, although there is some debate amongst philosophers as to whether it actually is a paradox (see below).

The paradox is named after the French priest and philosopher Jean Buridan (c.1300-1358CE), who studied under William of Ockham.  It refers to a hypothetical situation where a donkey finds itself exactly halfway between two equally big and delicious bales of hay.  There is no way of distinguishing between these two bales – they appear to be identical.  Because the donkey lacks a reason or cause to choose one over the other, it cannot decide which one to eat, and so starves to death. This tale is usually taken as demonstrating that there is no free will.

The corollary to this argument is that if the donkey eats one of the bales of hay, then the donkey is making a choice.  If the donkey is making a choice, then it must have free will, because there is no causal mechanism to make it choose one bale over another. And if donkeys have free will, then so must humans.


Deliberations of Congress (Source: Wikimedia Commons)

The paradox actually predates Buridan – it dates to antiquity, being found in Aristotle’s On the Heavens.[2] Aristotle, in ridiculing the Sophist idea that the Earth is stationary simply because it is circular and any forces on it must be equal in all directions, says that this is as ridiculous as saying that:

…a man, being just as hungry as thirsty, and placed in between food and drink, must necessarily remain where he is and starve to death.  — Aristotle, On the Heavens, (c.350 BCE)

The Persian Islamic scholar and philosopher Al-Ghazali (c.1058–1111 CE) discusses the application of this paradox to human decision-making, asking whether it is possible to make a choice between equally good courses without grounds for preference.  He takes the attitude that free will can break the stalemate.

Suppose two similar dates in front of a man, who has a strong desire for them but who is unable to take them both. Surely he will take one of them, through a quality in him, the nature of which is to differentiate between two similar things. — Abu Hamid al-Ghazali, The Incoherence of the Philosophers (c.1100CE)

Professor Hauskeller of Exeter University takes a scientifically sceptical view of this paradox, using the donkey scenario:

If we could find a donkey which was dumb enough to starve between two piles of hay, we would have evidence against free will, at least as far as donkeys are concerned (or at least that particular donkey). But that’s not very likely. No matter how artfully we arrange the situation, a donkey will not hesitate very long, if at all, and will soon choose one of the piles of hay. He doesn’t care which, and he certainly won’t starve. However, even if we conducted thousands of experiments like this, and no donkey ever starved, we would still not have proved the existence of free will, because the reason no donkey ever starves in front of two equally attractive piles of hay may simply be that those piles aren’t really equally attractive. Perhaps in real life there aren’t any situations where the weighted reasons for a choice are equal.[1]

So Hauskeller’s suggested solution to the paradox is that the piles of hay are not equal in practice – the donkey detects a slight difference which causes it to choose one pile over the other. This solution is not very convincing when one considers the hypothetical possibility of the two piles of hay being exactly equal in appearance. So it seems that we still have a problem here.

Some proponents of hard determinism have acknowledged the difficulty the scenario creates for determinism, but have denied that it illustrates a true paradox, since a deterministic donkey could recognize that both choices are equally good and arbitrarily (randomly) pick one instead of starving. For example, there are deterministic machines that can generate random numbers, although there is some dispute as to whether such numbers are truly random.
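The deterministic tie-breaking idea can be sketched as follows (a toy illustration; a seeded pseudo-random generator is fully determined by its seed, yet still breaks the tie):

```python
import random

def choose_bale(bales, seed):
    """Pick one of several equally attractive options deterministically.

    The choice is arbitrary (no bale is favoured by any property of the
    bales themselves) yet fully determined by the seed, so a machine
    with no free will still avoids Buridan's stalemate."""
    scores = {bale: 1.0 for bale in bales}   # equally attractive, by assumption
    best = max(scores.values())
    tied = [b for b, s in scores.items() if s == best]
    rng = random.Random(seed)                # deterministic pseudo-randomness
    return rng.choice(tied)

# The donkey always eats: some bale is chosen, starvation never occurs.
print(choose_bale(["left bale", "right bale"], seed=42))
```

Running this with the same seed always yields the same choice, which is precisely the hard determinist’s point: arbitrary selection does not require freedom, only a mechanism for breaking ties.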

References:

[1] Hauskeller, M. (2010) Why Buridan’s Ass Doesn’t Starve Philosophy Now, London. http://philosophynow.org/issues/81/Why_Buridans_Ass_Doesnt_Starve

[2] Rescher, N. (2005). Cosmos and Logos: Studies in Greek Philosophy. Ontos Verlag. pp. 93–99.


6 Comments

Filed under Paradoxes

Introduction

Welcome to Tim Harding’s blog of writings and talks about logic, rationality, philosophy and skepticism. There are also reblogs of some of Tim’s favourite posts by other writers, plus some of his favourite quotations and videos. This blog has a Facebook connection at The Logical Place.

There are over 2,300 posts here about all sorts of topics – please have a good look around before leaving.

If you are looking for an article about Skepticism, Science and Scientism published in The Skeptic magazine titled ‘A Step Too Far?’, it is available here.

If you are looking for an article about the Birth of Experimental Science published in The Skeptic magazine titled ‘Out of the Dark’, it is available here.

If you are looking for an article about the Dark Ages published in The Skeptic magazine titled ‘In the Dark’, it is available here.

If you are looking for an article about Traditional Chinese Medicine vs. endangered species published in The Skeptic magazine titled ‘Bad Medicine’, it is available here.

If you are looking for an article about the rejection of expertise published in The Skeptic magazine titled ‘Who needs to Know?’, it is available here.

If you are looking for an article about Charles Darwin published in The Skeptic magazine titled ‘Darwin’s Missing Link’, it is available here.

If you are looking for an article about the Astronomical Renaissance published in The Skeptic magazine titled ‘Rebirth of the Universe’, it is available here.

If you are looking for an article about DNA and GM foods published in The Skeptic magazine titled ‘The Good Oil’, it is available here.

If you are looking for an article about animal welfare published in The Skeptic magazine titled ‘Creature Features’, it is available here.

If you would like to submit a comment about anything written here, please read our comments policy.

Follow me on Academia.edu

Copyright notice: © All rights reserved. Except for personal use or as permitted under the Australian Copyright Act, no part of this website may be reproduced, stored in a retrieval system, communicated or transmitted in any form or by any means without prior written permission (except as an authorised reblog). All inquiries should be made to the copyright owner, Tim Harding at tim.harding@yandoo.com, or as attributed on individual blog posts.


3 Comments

Filed under Uncategorized