Tag Archives: consciousness

Did Descartes think that animals have feelings?

by Tim Harding

It is a common misconception that Descartes held the view that because animals cannot think, they have no feelings and do not suffer pain.  In 1952, this view was described by the Scottish philosopher and psychologist Norman Kemp Smith as a ‘monstrous thesis’ (Cottingham 1978: 554-556).  In this essay, I intend to examine two questions – firstly, whether Descartes actually held this view and secondly, whether this view is entailed by his other views about animal minds.  My answer is essentially that whilst the textual evidence is somewhat unclear on this specific point, it is unlikely that Descartes held this view or that it is entailed by his other related views.


René Descartes (1596–1650)

Part of the problem in discussing these questions is a lack of clarity amongst Descartes’ objectors (and even Descartes himself) in the meanings of key terms such as ‘consciousness’, ‘self-consciousness’, ‘thought’, ‘awareness’, ‘feelings’ and ‘sensations’.  In an attempt to clarify the issues, Cottingham (1978: 551) helpfully suggests that the views attributed to Descartes be broken down into a number of distinct propositions:

(1)  Animals are machines.

(2)  Animals are automata.

(3)  Animals do not think.

(4)  Animals have no language.

(5)  Animals have no self-consciousness.

(6)  Animals have no consciousness.

(7)  Animals are totally without feeling.

Cottingham (1978: 552) argues that whilst Descartes advocated propositions (1) to (6), there is no evidence that he supported Proposition (7).  Nor is Proposition (7) entailed by the earlier propositions (Cottingham 1978: 554-556).  I will return to Proposition (7) later, after I have discussed the definitions of some key terms and the earlier propositions.

Proposition (1) is not asserted by Descartes in this explicit form; but Cottingham (1978: 552) argues that this is what Descartes means in Part V of his Discourse on Method, where he says that the body may be regarded ‘like a machine…’.  It is important to note that for Descartes, the human body is a machine in the same sense as an animal body.  This view is part of Descartes’ general scientific ‘mechanism’, where all animal behaviour is explainable in terms of physiological laws (Cottingham 1978: 552).

The definition of ‘automaton’ in Proposition (2) is significant, as it has led to some confusion in the descriptions of Descartes’ views.  Cottingham (1978: 553) argues that the primary Webster dictionary definition of ‘automaton’ is ‘a machine that is relatively self-operating’ (which is the Ancient Greek meaning of the ‘auto’ prefix).  It does not entail the absence or incapability of feeling, as some of Descartes’ critics have alleged (Cottingham 1978: 553).  What Descartes is saying is that the complex sequence of movements of machines, such as the moving statues found at the time in some of the royal fountains, could all be explained in terms of internal mechanisms such as cogs, levers and the like.  Descartes’ point here is that the mere complexity of animal movements is no more a bar to explanation of their behaviour than is the case with the movements of these fountain statues (Cottingham 1978: 553).

Regarding Proposition (3), a crucial and central difference between animals and human beings for Descartes is that animals do not think.  In a letter to the English philosopher Henry More dated 5 February 1649, Descartes says that ‘there is no prejudice to which we are all more accustomed from our earliest years than the belief that the dumb animals think’.  He also says that they do not have a mind; they lack reason; and they do not have a rational soul (Cottingham 1978: 554).  Descartes defined ‘thought’ in his Second Replies to the Meditations as follows: ‘Thought is a word that covers everything that exists in us in such a way that we are immediately conscious of it. Thus all the operations of will, intellect, imagination, and of the senses are thoughts’ (Radner and Radner, 1989: 22).  Descartes’ inclusion of the senses in this definition is ambiguous, as I will discuss later.

For Descartes, Proposition (3) is entailed by Proposition (4) claiming the absence of language in animals.  In a letter to the Marquess of Newcastle dated 23 November 1646, Descartes makes the point that the utterances of animals are never what the modern linguist Chomsky calls ‘stimulus free’ – they are always geared to and elicited by external factors (Cottingham 1978: 555; Radner and Radner, 1989: 41).  Descartes explains in his letter that the words of parrots do not count as language because they are not ‘relevant’ to the particular situation.  By contrast, even the ravings of insane persons are ‘relevant to particular topics’ though they do not ‘follow reason’ (Radner and Radner, 1989: 45).  This brings us to what is known as Descartes’ ‘language test’ – the ability to put words together in different ways that are appropriate to a wide variety of situations (Radner and Radner, 1989: 41).

In an attempt to overcome certain objections and counterexamples, Descartes later modifies his language test to claim that animals never communicate anything pertaining to ‘pure thought’, by which he means thought unaccompanied by any corporeal process or function of the body (Radner and Radner, 1989: 48).  This modification is what is known as Descartes’ ‘action test’, which has been stated by Radner and Radner (1989: 50) as:

‘In order to determine whether a creature of type A is acting through reason, you compare its performance with that of creatures that do act through reason.  If A’s performance falls short of B’s, where B is a creature that acts through reason, then A does not act through reason but only from the disposition of its organs.  The B always stands for human beings because they are the only beings known for sure to have reason.  Only in the human case do we have direct access to the reasoning process.’

As for Propositions (5) and (6), whilst Descartes provides an explicit definition of ‘thought’ he does not offer one of ‘consciousness’, let alone ‘self-consciousness’ (Radner and Radner, 1989: 22-25).  Yet he inextricably links thought to consciousness in the Fourth Replies when he says ‘we cannot have any thought of which we are not aware at the very moment when it is in us’.  This implies that for Descartes, consciousness is not the act of thinking, but our awareness of our acts of thinking (Radner and Radner, 1989: 22-25).  This raises some complex issues regarding an infinite regress of thoughts (Radner and Radner, 1989: 22-25); but I need not discuss those issues for my current purposes.  Radner and Radner (1989: 30) suggest that self-consciousness is not necessarily the same thing as consciousness: it is the awareness of self, that it is one’s own self that is having conscious thoughts.

With respect to Proposition (7), Cottingham (1978: 556-557) argues that Descartes did not commit himself to the view that animals do not have feelings or sensations.  He quotes from Descartes’ 1649 letter to More, where he says that the sounds made by livestock and companion animals are not genuine language, but are ways of ‘communicating to us…their natural impulses of anger, fear, hunger and so on’.  In the same letter, Descartes writes: ‘I should like to stress that I am talking of thought, not of…sensation; for…I deny sensation to no animal, in so far as it depends on a bodily organ.’  Cottingham also quotes from Descartes’ 1646 letter to Newcastle, where he wrote: ‘If you teach a magpie to say good-day to its mistress when it sees her coming, all you can possibly have done is to make the emitting of this word the expression of one of its feelings.’  In other words, Descartes denies in these letters that animals think, but not that they feel (Cottingham 1978: 557).

Notwithstanding the apparent vindication of Descartes in the text of these letters, Cottingham (1978: 557) next argues that Proposition (7) is consistent with Descartes’ dualism.  Since an animal has no mind or soul, it follows that it must belong wholly in the extended, divisible world of corporeal substances.  Cottingham (1978: 557) thinks that this must be the authentic Cartesian position, presumably because of the central importance of dualism to Cartesian metaphysics.  On the other hand, I would argue that a lack of Cartesian thought does not entail a lack of feeling or sensation, as I discuss under Proposition (3) below.

The next question to consider is whether any of Propositions (1) to (6) are true; and if so, whether Proposition (7) is entailed by any of these earlier propositions that are true.

With respect to Proposition (1) I would argue that if the human body is a machine and humans have feelings, then it does not follow from this proposition alone that because animals are machines, they do not have feelings.  Similarly, even if Proposition (2) is true, it does not follow from the definition of automaton that animals do not have feelings either (Cottingham 1978: 553).

Proposition (3) is probably the area of greatest contention.  Radner and Radner (1989: 13) cite empirical evidence as far back as Aristotle indicating at least the possibility of thought by animals.  Aristotle cites the nest-building behaviour of swallows, where they mix mud and chaff.  If they run short of mud, they douse themselves with water and roll in the dust.  He also reports that a mother nightingale has been observed to give singing lessons to her young (Radner and Radner, 1989: 13).  More recently, there is a video on YouTube of a mother Labrador teaching her puppy how to go down stairs.[1]  There is another video of a crow solving a complex puzzle that most human children would have difficulty with.[2]  Whilst nest-building and song-teaching are arguably instinctive bird behaviours, dogs teaching puppies about stairs and crows solving complex puzzles are less likely to be instinctive.  They indicate the possibility of animals planning things in their minds.

Cottingham argues that even if Proposition (3) is true, it does not follow that Descartes is committed to the position that animals do not have feelings.  This is because Descartes separates feelings and sensations from thinking – for example, a level of feeling or sensation that falls short of reflective awareness (Cottingham 1978: 555-556).  Radner and Radner suggest that the word ‘sensation’ is ambiguous for Descartes.  On the one hand, it could refer to the corporeal process of the transmission of nerve impulses to the brain; yet on the other hand it can also refer to the mental awareness that is associated with that corporeal process (Radner and Radner 1989: 22).

Another area of contention is in relation to Proposition (4).  Gassendi objected that Descartes was being unfair to animals in judging ‘language’ in only human terms.  He suggested that animals could have languages of their own that we do not understand (Radner and Radner 1989: 45).  I would add that human sign language illustrates that language need not be exclusively vocal.  Radner and Radner suggest that the natural cries and gestures of animals can be appropriate to the situation and can communicate useful information to other animals.  For example, a Thomson’s gazelle, seeing a predator lurking in the distance, assumes an alert posture and gives a short snort.  The other gazelles within hearing distance immediately stop grazing and look in the same direction.  The message is not just ‘I’m scared’ but it conveys a warning to look up and over in this direction (Radner and Radner 1989: 45).

Thomson’s gazelles

Radner and Radner (1989: 102-103) argue that neither the language test nor the action test leads to the conclusion that animals lack consciousness.  Either animals pass the language test or it is not a test of thought in the Cartesian sense.  The Radners argue that even if we were to grant that the action test shows that animals fail to act through reason, it still does not establish that they lack all modes of Cartesian thought (Radner and Radner 1989: 103).  I would also argue that Descartes’ modification of the language test to an ‘action test’ results in a proposition similar to Proposition (3) about thinking, which I have already discussed.

In conclusion, I have tried to clarify the various propositions and key terms involved in the allegation that Descartes believed that animals do not have feelings or sensations.  I have supported Cottingham’s view that the relevant texts by Descartes do not substantiate this allegation.  I have also supported Cottingham’s view that Propositions (1) to (6) do not entail Proposition (7), including by the use of some recent empirical evidence.  However, I do not support Cottingham’s view that Descartes’ dualism is inconsistent with his views about animal minds.


Cottingham, J. (1978) ‘A Brute to the Brutes? Descartes’ Treatment of Animals’, Philosophy 53, pp. 551-559.

Radner, D. and Radner, M. (1989) Animal Consciousness. Buffalo: Prometheus Books.

[1] https://www.youtube.com/watch?v=Ht5dFBMgOGs

[2] https://www.youtube.com/watch?v=uNHPh8TEAXM


Filed under Essays and talks

What makes us conscious?

The Conversation

Matthew Davidson, Monash University

Do you think that the machine you are reading this story on, right now, has a feeling of “what it is like” to be in its state?

What about a pet dog? Does it have a sense of what it’s like to be in its state? It may pine for attention, and appear to have a unique subjective experience, but what separates the two cases?

These are by no means simple questions. How and why particular circumstances may give rise to our experience of consciousness remain some of the most puzzling questions of our time.

Newborn babies, brain-damaged patients, complicated machines and animals may display signs of consciousness. However, the extent or nature of their experience remains a hotbed of intellectual enquiry.

Being able to quantify consciousness would go a long way toward answering some of these problems. From a clinical perspective, any theory that might serve this purpose also needs to be able to account for why certain areas of the brain appear critical to consciousness, and why the damage or removal of other regions appears to have relatively little impact.

One such theory has been gaining support in the scientific community. It’s called Integrated Information Theory (IIT), and was proposed in 2008 by Giulio Tononi, a US-based neuroscientist.

It also has one rather surprising implication: consciousness can, in principle, be found anywhere where there is the right kind of information processing going on, whether that’s in a brain or a computer.

Information and consciousness

The theory says that a physical system can give rise to consciousness if two physical postulates are met.

The first is that the physical system must be very rich in information.

If a system is conscious of an enormous number of things, like every frame in a film, and each frame is clearly distinct from the others, then we’d say its conscious experience is highly differentiated.

Both your brain and your hard drive are capable of containing such highly differentiated information. But one is conscious and the other is not.

So what is the difference between your hard drive and your brain? For one, the human brain is also highly integrated. There are many billions of cross-links between individual inputs, far exceeding the connectivity of any (current) computer.

This brings us to the second postulate, which is that for consciousness to emerge, the physical system must also be highly integrated.

Whatever information you are conscious of is wholly and completely presented to your mind. For, try as you might, you are unable to segregate the frames of a film into a series of static images. Nor can you completely isolate the information you receive from each of your senses.

The implication is that integration is a measure of what differentiates our brains from other highly complex systems.

Integrated information and the brain

By borrowing from the language of mathematics, IIT attempts to generate a single number as a measure of this integrated information, known as phi (Φ, pronounced “fi”).

Something with a low phi, such as a hard drive, won’t be conscious, whereas something with a high enough phi, like a mammalian brain, will be.
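The intuition behind an integration measure can be sketched with a toy calculation. To be clear, this is not Tononi’s actual phi, which requires the full IIT formalism of partitions and effective information; it is only a crude illustrative proxy I am assuming for this sketch, using the mutual information between two halves of a tiny binary system. When the halves are independent (like files sitting separately on a disk) the measure is zero; when they are tightly coupled, it is high.

```python
from math import log2

def mutual_information(joint):
    """Mutual information I(X;Y) in bits, from a joint distribution
    given as a dict mapping (x, y) pairs to probabilities."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p   # marginal distribution of X
        py[y] = py.get(y, 0.0) + p   # marginal distribution of Y
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two independent fair bits: knowing one half tells you nothing
# about the other -- differentiated but not integrated.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

# Two perfectly coupled bits: each half fully constrains the other.
integrated = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(independent))  # 0.0 bits: no integration
print(mutual_information(integrated))   # 1.0 bit: fully integrated
```

On this toy measure the hard drive analogy comes out as the first case: lots of distinct states, but the parts don’t constrain one another, so the integration score is zero.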

What makes phi interesting is that a number of its predictions can be empirically tested: if consciousness corresponds to the amount of integrated information in a system, then measures that approximate phi should differ during altered states of consciousness.

Recently, a team of researchers developed an instrument capable of measuring a quantity related to integrated information in the human brain, and tested this idea.

They used electromagnetic pulses to stimulate the brain, and were able to distinguish awake and anaesthetised brains from the complexity of the resulting neural activity.

The same measure was even capable of discriminating between brain-injured patients in vegetative compared to minimally conscious states. It also increased when patients went from dreamless to dream-filled stages of sleep.

IIT also predicts why the cerebellum, an area at the rear of the human brain, seems to contribute only minimally to consciousness. This is despite it containing roughly four times more neurons than the cerebral cortex, which appears to be the seat of consciousness.

The cerebellum has a comparatively simple crystalline arrangement of neurons. So IIT would suggest this area is information rich, or highly differentiated, but it fails IIT’s second requirement of integration.

Although there’s a lot more work to be done, some striking implications remain for this theory of consciousness.

If consciousness is indeed an emergent feature of a highly integrated network, as IIT suggests, then probably all complex systems – certainly all creatures with brains – have some minimal form of consciousness.

By extension, if consciousness is defined by the amount of integrated information in a system, then we may also need to move away from any form of human exceptionalism that says consciousness is exclusive to us.

Matthew Davidson, PhD Candidate in Neuroscience of Consciousness, Monash University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.



Filed under Reblogs

Dennett on body vs mind

Prof. Daniel Dennett (born March 28, 1942) is an American philosopher, writer, and cognitive scientist whose research centers on the philosophy of mind, philosophy of science and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science.


Filed under Quotations

Dennett on Consciousness and Free Will


Filed under Videos

Searle on consciousness

Philosopher Prof. John Searle lays out the case for studying human consciousness — and systematically shoots down some of the common objections to taking it seriously. As we learn more about the brain processes that cause awareness, accepting that consciousness is a biological phenomenon is an important first step. And no, he says, consciousness is not a massive computer simulation. (Filmed at TEDxCERN.)


December 19, 2014 · 7:29 am