
Consequentialism versus Justice

by Tim Harding

There are several objections to consequentialism as a basis for morality.  Some of these objections are of considerable scholarly interest to philosophers; but I think the most powerful objection is that adherence to consequentialism can in some cases result in unacceptable injustice.  My thesis is that justice is an important factor that needs to be taken into account in ethical theories.  I also intend to argue that the best response to this objection, which is to attempt to treat justice as an intrinsically valuable consequence of actions, is currently unworkable.

Consequentialism is a family of ethical theories holding that the morality of an act should be judged solely by its consequences.  An act is required just because it produces the best overall results (Shafer-Landau 2012: 119).  Two of the key words here, in my view, are ‘solely’ and ‘overall’.  Consequentialism takes into account only the overall effects of an act on the population as a whole; specific effects on justice or on the rights of individuals or minorities are not considered.

The most prominent version of consequentialism is act utilitarianism, where well-being is the only thing that is intrinsically valuable (Shafer-Landau 2012: 120).  The principle of utility states that ‘an action is morally required just because it does more to improve overall well-being than any other action you could have done in the circumstances’ (Shafer-Landau 2012: 120).  Whilst utilitarianism is not the only form of consequentialism, for my current purposes I will regard an objection to utilitarianism as an objection to consequentialism.

In its broadest sense, justice may be defined as fairness: a proper balance between competing claims or interests (Rawls 1971: 10-11).  Russ Shafer-Landau (2012: 145) says that to do justice is to respect rights, which is arguably similar in meaning to properly balancing competing claims or interests.

In stark contrast to act utilitarianism, John Rawls has described justice as ‘the first virtue of social institutions’ (Rawls 1971: 3).  He argues that:

Each person possesses an inviolability founded on justice that even the welfare of society as a whole cannot override.  For this reason justice denies that the loss of freedom for some is made right by a greater good shared by others.  It does not allow that the sacrifices imposed on a few are outweighed by the larger sum of advantages enjoyed by many (Rawls 1971: 3-4).

Indeed, according to Rawls (1971: 4), justice is uncompromising: an injustice is tolerable only when it is necessary to avoid an even greater injustice.

The conflict between consequentialism and justice can be illustrated by some thought experiments, starting with the well-known trolley problem, the modern version of which was first described by Philippa Foot (1967: 8) as follows:

Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man’s life for the lives of five.

Foot (1967: 8) reasonably asks why we should say that the driver should steer for the less occupied track, while most of us would be appalled at the idea that an innocent man should be framed and executed.  Yet in both cases, the act is done for utilitarian reasons, to maximise overall well-being.  The interests or rights of the individual who is to be killed are rated no higher than anyone else’s in this scenario.  The dead individual is counted merely as a pro-rata contribution to the overall aggregated well-being.  Five lives are worth more than one life, regardless of the circumstances.  The end justifies the means.

This problem has been adapted and analysed in some detail by Judith Jarvis Thomson (1985: 1395-1415).  Thomson argues that whilst most people would say that it is morally permissible to steer the trolley away from the five men on one track towards the one man on the other track, they would not regard killing one person to save five as permissible in other cases with similar consequences.  For instance, Thomson (1985: 1396) asks us to consider another case that she calls the ‘transplant case’:

This time you are to imagine yourself to be a surgeon, a truly great surgeon. Among other things you do, you transplant organs, and you are such a great surgeon that the organs you transplant always take. At the moment you have five patients who need organs. Two need one lung each, two need a kidney each, and the fifth needs a heart. If they do not get those organs today, they will all die; if you find organs for them today, you can transplant the organs and they will all live. But where to find the lungs, the kidneys, and the heart? The time is almost up when a report is brought to you that a young man who has just come into your clinic for his yearly check-up has exactly the right blood-type, and is in excellent health. Lo, you have a possible donor. All you need do is cut him up and distribute his parts among the five who need them. You ask, but he says, “Sorry. I deeply sympathize, but no.” Would it be morally permissible for you to operate anyway?

Thomson (1985: 1396) asks why the trolley driver may turn his trolley and kill the man on the other track, while the surgeon may not kill the young man and remove his organs.  In both cases, one person will die if the agent acts, but five will live who would otherwise die – a net saving of four lives.  As the consequences are the same in each case, utilitarianism would allow both acts to take place.  The difference in moral permissibility between these two cases with similar outcomes indicates a serious problem with utilitarianism as an ethical theory.

Russ Shafer-Landau (2012: 144-146) identifies this problem as injustice, meaning the violation of rights, such as the right of the healthy young man in the transplant case not to be murdered for his organs.  On the other hand, I would argue that turning the trolley is not an injustice because, unlike the healthy young man in the transplant case, the track workers on both tracks have implicitly consented to take the normally small risk of being hit by a runaway trolley. (There is also a difference between these two cases in respect for autonomy – the implied consent to risk in the case of the trolley track workers versus the explicit refusal of consent in the transplant case. However, that is an issue for another essay).

Shafer-Landau (2012: 144) in fact argues that injustice is perhaps the greatest problem for utilitarianism.  He says that ‘moral theories should not permit, much less require, that we act unjustly.  Therefore, there is something deeply wrong about utilitarianism’ (Shafer-Landau 2012: 145).

Shafer-Landau (2012: 145) strengthens the case against utilitarianism with some real historical examples rather than just thought experiments.  He cites wartime cases of vicarious punishment where innocent people are deliberately targeted as a way to deter the guilty; and exemplary punishment where random prisoners are shot to deter resistance or escapes.  Such punishments are now treated as violations of human rights and war crimes; but in earlier wars such punishments could have been justified according to utilitarianism.

Shafer-Landau (2012: 146-148) goes on to identify some potential solutions to the problem of injustice.  To assist in analysing and evaluating these potential solutions, Shafer-Landau formally states his Argument from Injustice as follows:

  1. The correct moral theory will never require us to commit serious injustices.
  2. Utilitarianism sometimes requires us to commit serious injustices.
  3. Therefore utilitarianism is not the correct moral theory.

One of his potential solutions is to say that justice must sometimes be sacrificed for the sake of overall well-being.  I do not think that this would solve the problem at all.  Unacceptable injustices would still occur, and I support Rawls’s abovementioned view that justice is uncompromising – an injustice is tolerable only when it is necessary to avoid an even greater injustice.  Another potential solution is to deny Premise 2 above, that is, to deny that utilitarianism ever requires us to commit injustice.  On this view, circumstances will always conveniently arrange themselves so that maximising overall well-being produces a just outcome.  Shafer-Landau (2012: 148) regards this solution as overly optimistic, and I agree.

I think the best of these potential solutions is to attempt to build justice into the calculation of intrinsic value, alongside overall well-being.  On this approach, an action should aim to maximise justice in addition to well-being.  Shafer-Landau (2012: 147) argues that a very minor injustice can sometimes justifiably be traded off in favour of an overwhelming increase in well-being.  However, giving roughly equal weight to both well-being and justice is problematic due to the lack of any principle for deciding between these two values where they conflict.  Also, as Rawls (1971: 4) has argued, justice is uncompromising – there can be no such thing as ‘half-justice’. For these reasons, I support Shafer-Landau’s view that this solution is currently unworkable.

In this essay, I have endeavoured to show how consequentialism can sometimes result in injustice, by reference to some notable philosophical thought experiments, as well as to some historical wartime cases.  I have cited the work of John Rawls to argue that justice is an important factor that needs to be taken into account in ethical theories.  I have considered some potential solutions to the problem of injustice, and argued against what I think is the best solution to this problem.  For these reasons, I conclude that the injustice objection to consequentialism should be upheld.


Foot, Philippa. 1967. ‘The Problem of Abortion and the Doctrine of the Double Effect’. Oxford Review 5: 5-15.

Rawls, John. 1971. A Theory of Justice. Cambridge: Harvard University Press.

Shafer-Landau, Russ. 2012. The Fundamentals of Ethics, 2nd edition. Oxford: Oxford University Press.

Thomson, Judith Jarvis. 1985. ‘The Trolley Problem’. The Yale Law Journal 94(6): 1395-1415.




We don’t need no (moral) education? Five things you should learn about ethics

The Conversation

Patrick Stokes, Deakin University

The human animal takes a remarkably long time to reach maturity. And we cram a lot of learning into that time, as well we should: the list of things we need to know by the time we hit adulthood in order to thrive – personally, economically, socially, politically – is enormous.

But what about ethical thriving? Do we need to be taught moral philosophy alongside the three Rs?

Ethics has now been introduced into New South Wales primary schools as an alternative to religious instruction, but the idea of moral philosophy as a core part of compulsory education seems unlikely to get much traction any time soon. To many ears, the phrase “moral education” has a whiff of something distastefully Victorian (the era, not the state). It suggests indoctrination into an unquestioned set of norms and principles – and in the world we find ourselves in now, there is no such set we can all agree on.

Besides, in an already crowded curriculum, do we really have time for moral philosophy? After all, most people manage to lead pretty decent lives without knowing their Sidgwick from their Scanlon or being able to spot a rule utilitarian from 50 yards.

But intractable moral problems don’t go away just because we no longer agree how to deal with them. And as recent discussions on this site help to illustrate, new problems are always arising that, one way or another, we have to deal with. As individuals and as participants in the public space, we simply can’t get out of having to think about issues of right and wrong.

Yet spend time hanging around the comments section of any news story with an ethical dimension to it (and that’s most of them), and it quickly becomes apparent that most people just aren’t familiar with the methods and frameworks of ethical reasoning that have been developed over the last two and a half thousand years. We have the tools, but we’re not equipping people with them.

So, what sort of things should we be teaching if we wanted to foster “ethical literacy”? What would count as a decent grounding in moral philosophy for the average citizen of contemporary, pluralistic societies?

What follows is in no way meant to be definitive. It’s not based on any sort of serious empirical data about people’s familiarity with ethical issues. It’s just a tentative stab (wait, can you stab tentatively?) at a list of things people should ideally know about ethics, based on what I see in the classroom and online that they often don’t.

1. Ethics and morality are (basically) the same thing

Many people bristle at the word “morality” but are quite comfortable using the term “ethical”, and insist there’s some crucial difference between the two. For instance, some people say ethics are about external, socially imposed norms, while morality is about individual conscience. Others say ethics is concrete and practical while morality is more abstract, or is somehow linked to religion.

Out on the value theory front lines, however, there’s no clear agreed distinction, and most philosophers use the two terms more or less interchangeably. And let’s face it: if even professional philosophers refuse to make a distinction, there probably isn’t one there to be made.

2. Morality isn’t (necessarily) subjective

Every philosophy teacher probably knows the dismay of reading a decent ethics essay, only to then be told in the final paragraph that, “Of course, morality is subjective so there is no real answer”. So what have the last three pages been about then?

There seems to be a widespread assumption that the very fact that people disagree about right and wrong means there is no real fact of the matter, just individual preferences. We use the expression “value judgment” in a way that implies such judgments are fundamentally subjective.

Sure, ethical subjectivism is a perfectly respectable position with a long pedigree. But it’s not the only game in town, and it doesn’t win by default simply because we haven’t settled all moral problems. Nor does ethics lose its grip on us even if we take ourselves to be living in a universe devoid of intrinsic moral value. We can’t simply stop caring about how we should act; even subjectivists don’t suddenly turn into monsters.

3. “You shouldn’t impose your morality on others” is itself a moral position

You hear this all the time, but you can probably spot the fallacy here pretty quickly: that “shouldn’t” there is itself a moral “shouldn’t” (rather than a prudential or social “shouldn’t,” like “you shouldn’t tease bears” or “you shouldn’t swear at the Queen”). Telling other people it’s morally wrong to tell other people what’s morally wrong looks obviously flawed – so why do otherwise bright, thoughtful people still do it?

Possibly because what the speaker is assuming here is that “morality” is a domain of personal beliefs (“morals”) which we can set aside while continuing to discuss issues of how we should treat each other. In effect, the speaker is imposing one particular moral framework – liberalism – without realising it.

4. “Natural” doesn’t necessarily mean “right”

This is an easy trap to fall into. Something’s being “natural” (if it even is) doesn’t tell us that it’s actually good. Selfishness might turn out to be natural, for instance, but that doesn’t mean it’s right to be selfish.

This gets a bit more complicated when you factor in ethical naturalism or Natural Law theory, because philosophers are awful people and really don’t want to make things easy for you.

5. The big three: Consequentialism, Deontology, Virtue Ethics

There are several different ethical frameworks that moral philosophers use, but some familiarity with the three main ones – consequentialism (what’s right and wrong depends upon consequences); deontology (actions are right or wrong in themselves); and virtue ethics (act in accordance with the virtues characteristic of a good person) – is incredibly useful.

Why? Because they each manage to focus our attention on different, morally relevant features of a situation, features that we might otherwise miss.

So, that’s my tentative stab (still sounds wrong!). Do let me know in the comments what you’d add or take out.

This is part of a series on public morality in 21st century Australia. We’ll be publishing regular articles on morality on The Conversation in the coming weeks.

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.



Free your mind – but are there ideas we shouldn’t contemplate?

The Conversation
By Matthew Beard, University of Notre Dame Australia

You’re a free thinker – congratulations – but does that mean you can, and should, approach everything with an open mind? Let me try to convince you you shouldn’t.

I do not want to argue with him: he shows a corrupt mind.

So remarked Elizabeth Anscombe (1919-2001), a giant of 20th century moral philosophy. She was referring to the kind of person who is open to being convinced of something that is intrinsically unjust, such as a court judicially punishing an innocent man.

This seems to be the antithesis of what a moral philosopher ought to do. Her judgement seems to display a dogmatic close-mindedness to the free thinking that philosophy typifies, and an intolerant disposition toward different ideas. To dismiss the reflective, well-considered thinking of another person – even if it leads to uncomfortable conclusions – is the stuff of ideology, not philosophy.

Or so it seems.

Philosophy is, as any undergraduate student has been told, a love of wisdom and a quest for truth. Philosophers are good at recognising the complexity of truth, and accepting that there is merit in a wide range of different positions. They are also good at explaining why common assumptions are oftentimes problematic, and are therefore masters of qualifying terms:

I agree, but; Yes, insofar as; I think that’s true, on the condition that …

Philosophy begins, as Aristotle remarked, with curiosity and wonder.

Socrates, the pedagogical role model of Western philosophy, saw himself as a gadfly whose constant questioning of unreflective beliefs stung the vacuous horse of the Athenian political system.

Immanuel Kant similarly described the work of David Hume as having awakened him from his “dogmatic slumber”. Once awakened, a mind is hungry for and open to a close and authentic engagement with the truth – this hunger, like the taste for Pringles, is hard to stop once it has begun.

Is everything open to questioning? Are certain things so patently unethical that even being open to believing in them if one hears a persuasive enough argument is demonstrative of a deficient character?


Everyone believes, to borrow an example from Quentin Tarantino’s revenge film Kill Bill, that sexually abusing a person who has fallen into a coma from which she is not expected to wake is wrong.

What, however, are we to think of the person who argues that “at the moment I think that those practices are immoral, but I’m open to being convinced otherwise”? Is this a virtuous commitment to truth – or a cold and de-personalised detachment from morality?

Reasonable disagreement on complex issues such as commercial surrogacy, the extent of the right to privacy, or same-sex marriage isn’t demonstrative of anything other than the importance of the goods at stake and the wonderful capacity of human beings to form their own opinion.

In such cases tolerance, open-mindedness and respectful debate are virtues of utmost importance. But as Patrick Stokes has argued already in this series, just because we haven’t settled every moral question doesn’t mean that truth is completely subjective. Just because people disagree on some matters doesn’t mean that they do, or even should, disagree on all of them.

One of the mistakes people often make about moral philosophy is to think that once one becomes a philosopher, one must discover the truth by oneself. Melbourne philosopher Raimond Gaita describes the consensus view of the true philosopher as someone so strongly committed to truth that he or she should “follow the argument wherever it leads”.

Gaita is rightly critical of this position, but the belief nevertheless prevails: to shirk hard truths is not becoming of a philosopher. It betrays truth, bowing to popular opinion and deferring to assumption in a way that undermines the very practice of philosophy.

Except that philosophy is itself a moral activity.

Philosophy isn’t (primarily) a profession, nor is it a tool of argument. Philosophy is a way of living and being in the world, and the philosopher is, like every other person, shaping him or herself through reflection, questioning, and analysis.

Should I ever allow myself to become a person who believes that the rape of a comatose person, or any other person, is justified, or – to cite a recent controversy – that “after-birth abortion” (also known as infanticide) might be a justifiable practice?

Of this thinking, Gaita remarks that, “were my commitment to philosophy to tempt me to such nihilism, I would give up philosophy, fearful of what I was becoming.”

I think Gaita is right, but it is important here to distinguish between discussion of a belief and the belief itself. In a Western society, no discussion should be taboo.

The Festival of Dangerous Ideas (FODI) faced a host of criticism for arranging a talk entitled “Honour Killings are Morally Justified”. I think it would be wrong to host such a talk with the hope or belief that people might be persuaded of its truth – but I don’t think hosting a talk on honour killings, with the intention of understanding how the practice is justified by some, of hearing why it takes place, could ever be condemned as immoral.

As I argued at the time, FODI organisers were wrong to title the talk as they did, but they weren’t wrong to want such a talk to occur.

We can discuss which beliefs it is simply wrong to be open to persuasion about; in some ways, that might be a matter of private determination, but we ought to agree that such beliefs exist.

Indeed, any truth, once we recognise it to be true, ought to be clung to. “Test everything. Hold fast to that which is good,” wrote St Paul to the Thessalonians. Be willing to listen, but recognise that what one is willing to be convinced of, or what one is willing to be persuaded from, is itself a moral choice.

“No man wishes to possess the whole world,” Aristotle wrote, “if he must first become somebody else.”

Part of what defines a person, a society, and humanity, must be what we refuse, absolutely, to allow ourselves to become – not only as actors, but as thinkers too.

This is part of a series on public morality in 21st century Australia. We’ll be publishing regular articles on morality on The Conversation in the coming weeks.

Matthew Beard does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This article was originally published on The Conversation. (Republished with permission). Read the original article.

