
Sometimes giving a person a choice is an act of terrible cruelty

by Lisa Tessman

It is not always good to have the opportunity to make a choice. When we must decide to take one action rather than another, we also, ordinarily, become at least partly responsible for what we choose to do. Usually this is appropriate; it’s what makes us the kinds of creatures who can be expected to abide by moral norms.

Sometimes, making a choice works well. For instance, imagine that while leaving the supermarket parking lot you accidentally back into another car, visibly denting it. No one else is around, nor do you think there are any surveillance cameras. You face a choice: you could drive away, fairly confident that no one will ever find out that you damaged someone’s property, or you could leave a note on the dented car’s windshield, explaining what happened and giving contact information, so that you can compensate the car’s owner.

Obviously, the right thing to do is to leave a note. If you don’t do this, you’ve committed a wrongdoing that you could have avoided just by making a different choice. Even though you might not like having to take responsibility – and paying up – it’s good to be in the position of being able to do the right thing.

Yet sometimes, having a choice means deciding to commit one bad act or another. Imagine being a doctor or nurse caught in the following fictionalised version of real events at a hospital in New Orleans in the aftermath of Hurricane Katrina in 2005. Due to a tremendous level of flooding after the hurricane, the hospital must be evacuated. The medical staff have been ordered to get everyone out by the end of the day, but not all patients can be removed. As time runs out, it becomes clear that you have a choice, but it’s a choice between two horrifying options: euthanise the remaining patients without consent (because many of them are in a condition that renders them unable to give it) or abandon them to suffer a slow, painful and terrifying death alone. Even if you’re anguished at the thought of making either choice, you might be confident that one action – let’s say administering a lethal dose of drugs – is better than the other. Nevertheless, you might have the sense that no matter which action you perform, you’ll be violating a moral requirement.

Are there situations, perhaps including this one, in which all the things that you could do are things that would be morally wrong for you to do? If the answer is yes, then there are some situations in which moral failure is unavoidable. In the case of the flooded hospital, what you morally should do is something impossible: you should both avoid killing patients without consent and avoid leaving them to suffer a painful death. You’re required to do the impossible.

To say this is to go against something that many moral philosophers believe. That’s because many moral philosophers have adopted a principle – attributed to the 18th-century German philosopher Immanuel Kant – that for an act to be morally obligatory, it must also be possible: so the impossible cannot be morally required. This principle is typically expressed by moral philosophers with the phrase: ‘Ought implies can.’ In other words, you can only be obligated to do something if you’re also able to do it.

This line of thought is certainly appealing. First of all, it might seem unfair to be obligated to do something that you were unable to do. Second, if morality is supposed to serve as a guide to help us decide what to do in any given situation, and we can’t actually do the impossible, it might seem that talking about impossible moral requirements is pointless. But if you’ve had the experience of being required to do the impossible, it might be appealing to push back: ought does not imply can. Acknowledging this could help make sense of your experience, even if it doesn’t also guide you in decisions about what to do.

We can’t blame other people for having committed an unavoidable moral wrongdoing as long as they chose the best of the possible options; we only appropriately blame people when they could have chosen to do something better than what they did do. However, when we ourselves are in situations in which we perform the best action we can – but it’s still something that we’d clearly be morally forbidden from ever choosing if we had a better option – we’re likely to hold ourselves responsible. Our intuitive moral judgments may still tell us, if we choose to perform an action that’s normally unthinkable, ‘I must not do this!’ Afterwards, we may judge ourselves to have failed morally.

I don’t think we should necessarily dismiss these judgments – rather, we must hold them up to the light. If we do so, and they hold up, then we should take them to indicate that we really can be required to do the impossible. But this has a troubling implication: if some situations lead to unavoidable moral wrongdoing, then we, as a society, should be careful not to put people in such situations. Giving people a choice might sound like it’s always a good thing to do, but giving a choice between two forms of moral failure is cruel.

Sometimes, it’s pure bad luck that puts someone in the position of having to choose between wrongdoings. However, much of the time, choice doesn’t take place in contexts that are shaped entirely accidentally. It takes place in social contexts. Social structures, policies, or institutions can produce outcomes that favour some groups of people over others in part by shaping what kinds of choices people get to – or have to – face. Members of some social groups might face mostly bad choices, in the sense that their choices are between alternatives, all of which are disadvantageous to them. But there’s another sense in which the choices might be bad: these might be choices between alternatives, all of which make them fail in their responsibilities to others.

The American Health Care Act, which was considered in the United States House and Senate, would have created moral dilemmas by offering people without high incomes – especially if they were also women, or old or sick – a range of bad options. It would have forced some parents to make choices between two equally unthinkable options, such as the ‘choice’ to sacrifice one child’s health care for another’s. This sort of forced choice would be similar in kind to the choice that the SS officer in Sophie’s Choice (1982) offered, when he told Sophie: ‘You may keep one of your children.’ The distinctive type of cruelty – making moral failure inevitable for someone – is the same.

No one can rightly be blamed for failing to care adequately for their family if it wasn’t possible for them to do so. But they may still take themselves to be required to do the impossible, and then judge themselves to have failed at this task. No one should be forced into this position. Not all situations that present these sorts of choices can be prevented – there’s always the possibility of bad luck – but at least we shouldn’t knowingly bring them about.

Moral Failure: On the Impossible Demands of Morality by Lisa Tessman is out now through Oxford University Press.

This article was originally published at Aeon and has been republished under Creative Commons.


Filed under Reblogs

The trolley dilemma: would you kill one person to save five?

The Conversation

Laura D’Olimpio, University of Notre Dame Australia

Imagine you are standing beside some tram tracks. In the distance, you spot a runaway trolley hurtling down the tracks towards five workers who cannot hear it coming. Even if they do spot it, they won’t be able to move out of the way in time.

As this disaster looms, you glance down and see a lever connected to the tracks. You realise that if you pull the lever, the tram will be diverted down a second set of tracks away from the five unsuspecting workers.

However, down this side track is one lone worker, just as oblivious as his colleagues.

So, would you pull the lever, leading to one death but saving five?

This is the crux of the classic thought experiment known as the trolley dilemma, developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.

The trolley dilemma allows us to think through the consequences of an action and consider whether its moral value is determined solely by its outcome.

The trolley dilemma has since proven itself to be a remarkably flexible tool for probing our moral intuitions, and has been adapted to apply to various other scenarios, such as war, torture, drones, abortion and euthanasia.

Variations

Now consider the second variation of this dilemma.

Imagine you are standing on a footbridge above the tram tracks. You can see the runaway trolley hurtling towards the five unsuspecting workers, but there’s no lever to divert it.

However, there is a large man standing next to you on the footbridge. You’re confident that his bulk would stop the tram in its tracks.

So, would you push the man on to the tracks, sacrificing him in order to stop the tram and thereby saving five others?

The outcome of this scenario is identical to the one with the lever diverting the trolley onto another track: one person dies; five people live. The interesting thing is that, while most people would throw the lever, very few would approve of pushing the fat man off the footbridge.

Thomson and other philosophers have given us other variations on the trolley dilemma that are also scarily entertaining. Some don’t even include trolleys.

Imagine you are a doctor and you have five patients who all need transplants in order to live. Two each require one lung, another two each require a kidney and the fifth needs a heart.

In the next ward is another individual recovering from a broken leg. But other than their knitting bones, they’re perfectly healthy. So, would you kill the healthy patient and harvest their organs to save five others?

Again, the consequences are the same as the first dilemma, but most people would utterly reject the notion of killing the healthy patient.

Are our intuitions simply inconsistent, or are there factors other than consequences at play?

Actions, intentions and consequences

If all the dilemmas above have the same consequence, yet most people would only be willing to throw the lever, but not push the fat man or kill the healthy patient, does that mean our moral intuitions are not always reliable, logical or consistent?

Perhaps there’s another factor beyond the consequences that influences our moral intuitions?

Foot argued that there’s a distinction between killing and letting die. The former is active while the latter is passive.

In the first trolley dilemma, the person who pulls the lever is saving the life of the five workers and letting the one person die. After all, pulling the lever does not inflict direct harm on the person on the side track.

But in the footbridge scenario, pushing the fat man over the side is an intentional act of killing.

This is sometimes described as the principle of double effect, which states that it’s permissible to indirectly cause harm (as a side or “double” effect) if the action promotes an even greater good. However, it’s not permissible to directly cause harm, even in the pursuit of a greater good.

Thomson offered a different perspective. She argued that moral theories that judge the permissibility of an action based on its consequences alone, such as consequentialism or utilitarianism, cannot explain why some actions that cause killings are permissible while others are not.

If we consider that everyone has equal rights, then we would be doing something wrong in sacrificing one even if our intention was to save five.

Neuroscientists have investigated which parts of the brain are activated when people consider the first two variations of the trolley dilemma.

They noted that the first version activates our logical, rational mind: if we decide to pull the lever, it is because we intend to save a larger number of lives.

However, when we consider pushing the bystander, our emotional reasoning becomes involved and we therefore feel differently about killing one in order to save five.

Are our emotions in this instance leading us to the correct action? Should we avoid sacrificing one, even if it is to save five?

Real world dilemmas

The trolley dilemma and its variations demonstrate that most people approve of some actions that cause harm, yet other actions with the same outcome are not considered permissible.

Not everyone answers the dilemmas in the same way, and even when people agree, they may vary in their justification of the action they defend.

These thought experiments have been used to stimulate discussion about the difference between killing versus letting die, and have even appeared, in one form or another, in popular culture, such as the film Eye In The Sky.

In Eye in the Sky, military and political leaders have to decide whether it’s permissible to harm or kill one innocent person in order to potentially save many lives.

Laura D’Olimpio, Senior Lecturer in Philosophy, University of Notre Dame Australia

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.



False dilemma

A false dilemma, or false dichotomy, is a logical fallacy that involves presenting two opposing views, options or outcomes in such a way that they seem to be the only possibilities: that is, if one is true, the other must be false, or, more typically, if you do not accept one then the other must be accepted. The reality in most cases is that there are many in-between or other alternative options, not just two mutually exclusive ones.

The logical form of this fallacy is as follows:

Premise 1: Either Claim X is true or Claim Y is true (when claims X and Y could both be false).

Premise 2: Claim Y is false.

Conclusion: Therefore Claim X is true.

This line of reasoning is fallacious because if both claims could be false, then it cannot be inferred that one is true because the other is false. This is made clear by the following example:

Either 1+1=4 or 1+1=12. It is not the case that 1+1=4. Therefore 1+1=12.
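The fallacy can also be checked mechanically. The following Python sketch (the helper name `counterexamples` is ours, introduced for illustration) enumerates the truth table for two claims X and Y. If the exhaustive disjunction ‘X or Y’ is granted as a premise, the inference to X is valid; drop that unwarranted premise, as the false dilemma implicitly does, and the inference fails in exactly the case where both claims are false.

```python
from itertools import product

def counterexamples(premises, conclusion):
    """Rows of the truth table where every premise holds but the conclusion fails."""
    return [(x, y) for x, y in product([True, False], repeat=2)
            if all(p(x, y) for p in premises) and not conclusion(x, y)]

# With the disjunction granted, 'X or Y' and 'not Y' do entail X:
print(counterexamples([lambda x, y: x or y, lambda x, y: not y],
                      lambda x, y: x))   # [] -- no counterexample, valid

# Without it, 'not Y, therefore X' fails when both claims are false:
print(counterexamples([lambda x, y: not y],
                      lambda x, y: x))   # [(False, False)]
```

The single counterexample row, both claims false, is precisely the possibility that the false dilemma conceals.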

This fallacy should not be confused with the Law of Excluded Middle, where ‘true’ or ‘false’ are actually the only possible alternatives for a proposition.

It is worth noting that it is not a false dilemma to present two options out of many if no conclusion is drawn based on their exclusivity. For example ‘you can have tea or coffee’ is not a false dilemma. A fallacious form would require it be presented as an argument such as ‘you don’t want tea, therefore you must want coffee.’

For example, if somebody was to appear to demonstrate psychic abilities, one would commit the fallacy of false dilemma if one were to reason as follows: either she’s a fraud or she is truly psychic, and she’s not a fraud; so, she must be truly psychic.  There is at least one other possible explanation for her claim of psychic abilities: she genuinely thinks she’s psychic but she’s not.



Filed under Logical fallacies