
Automation can leave us complacent, and that can have dangerous consequences

The Conversation

David Lyell

The recent fatal accident involving a Tesla car driving itself using the car’s Autopilot feature has raised questions about whether this technology is ready for consumer use.

But more importantly, it highlights the need to reconsider the relationship between human behaviour and technology. Self-driving cars change the way we drive, and we need to scrutinise the impact of this change on safety.

Tesla’s Autopilot does not make the car truly autonomous and self-driving. Rather, it automates driving functions, such as steering, speed, braking and hazard avoidance. This is an important distinction. The Autopilot provides supplemental assistance to, but is not a replacement for, the driver.

In a statement following the accident, Tesla reiterated that Autopilot is still in beta. The statement emphasised that drivers must maintain responsibility for the vehicle and be prepared to take over manual control at any time.

Tesla says Autopilot improves safety, helps to avoid hazards and reduces driver workload. But with reduced workload, the question is whether the driver allocates freed-up cognitive resources to maintain supervisory control over Autopilot.

Automation bias

There is evidence to suggest that humans have trouble recognising when automation has failed and manual intervention is required. Research shows we are poor supervisors of trusted automation, with a tendency towards over-reliance.

This tendency is known as automation bias: when people use automation such as autopilot, they may delegate full responsibility to it rather than remain vigilant. This reduces our workload, but it also reduces our ability to recognise when automation has failed and manual control needs to be taken back.

Automation bias can occur any time automation is over-relied on and gets it wrong. One way this happens is when automation is not set up properly.

An incorrectly set GPS navigation will lead you astray. This happened to one driver who followed an incorrectly set GPS across several European countries.

More tragically, Korean Airlines flight 007 was shot down when it strayed into Soviet airspace in 1983, killing all 269 on board. Unknown to the pilots, the aircraft deviated from its intended course due to an incorrectly set autopilot.

Autocorrect is not always correct

Automation will work exactly as programmed. Relying on a spell checker to catch typing errors will not reveal wrong words that are spelt correctly, for example, mistyping “from” as “form”.

Likewise, automation isn’t aware of our intentions and will sometimes act contrary to them. This frequently occurs with predictive text and autocorrect on mobile devices. Here, over-reliance can result in miscommunication, with some hilarious consequences, as documented on the website Damn You Autocorrect.

Sometimes automation will encounter circumstances that it can’t handle, as could have occurred in the Tesla crash.

GPS navigation has led drivers down a dead-end road when a highway was rerouted but the maps were not updated.

Over-reliance on automation can exacerbate problems by reducing situational awareness. This is especially dangerous as it limits our ability to take back manual control when things go wrong.

The captain of China Airlines flight 006 left autopilot engaged while attending to an engine failure. The loss of power from one engine caused the plane to start banking to one side.

Unknown to the pilots, the autopilot was compensating by steering as far as it could in the opposite direction. It was doing exactly what it had been programmed to do, keeping the plane as level as possible.

But this masked the extent of the problem. In an attempt to level the plane, the captain disengaged the autopilot. The result was a complete loss of control: the plane rolled sharply and entered a steep descent. Fortunately, the pilots were able to regain control, but only after falling 30,000 feet.

Humans vs automation

When automation gets it right, it can improve performance. But research findings show that when automation gets it wrong, performance is worse than if there had been no automation at all.

And tasks we find difficult are also often difficult for automation.

In medicine, computers can help radiologists detect cancers in screening mammograms by placing prompts over suspicious features. These systems are very sensitive, identifying the majority of cancers.

But in cases where the system missed cancers, human readers using computer-aided detection missed more of them than readers with no automated assistance. Researchers noted that cancers that were difficult for humans to detect were also difficult for computers to detect.

Technology developers need to consider more than their automation technologies. They need to understand how automation changes human behaviour. While automation is generally highly reliable, it has the potential to fail.

Automation developers try to combat this risk by placing humans in a supervisory role with final authority. But automation bias research shows that relying on humans as a backup to automation is fraught with danger and a task for which they are poorly suited.

Developers and regulators must not only assess the automation technology itself, but also the way in which humans interact with it, especially in situations when automation fails. And as users of automation, we must remain ever vigilant, ready to take back control at the first sign of trouble.

David Lyell, PhD Candidate in Health Informatics

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.
 


The trolley dilemma: would you kill one person to save five?

The Conversation

Laura D’Olimpio, University of Notre Dame Australia

Imagine you are standing beside some tram tracks. In the distance, you spot a runaway trolley hurtling down the tracks towards five workers who cannot hear it coming. Even if they do spot it, they won’t be able to move out of the way in time.

As this disaster looms, you glance down and see a lever connected to the tracks. You realise that if you pull the lever, the tram will be diverted down a second set of tracks away from the five unsuspecting workers.

However, down this side track is one lone worker, just as oblivious as his colleagues.

So, would you pull the lever, leading to one death but saving five?

This is the crux of the classic thought experiment known as the trolley dilemma, developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.

The trolley dilemma allows us to think through the consequences of an action and consider whether its moral value is determined solely by its outcome.

The trolley dilemma has since proven itself to be a remarkably flexible tool for probing our moral intuitions, and has been adapted to apply to various other scenarios, such as war, torture, drones, abortion and euthanasia.

Variations

Now consider the second variation of this dilemma.

Imagine you are standing on a footbridge above the tram tracks. You can see the runaway trolley hurtling towards the five unsuspecting workers, but there’s no lever to divert it.

However, there is a large man standing next to you on the footbridge. You’re confident that his bulk would stop the tram in its tracks.

So, would you push the man on to the tracks, sacrificing him in order to stop the tram and thereby saving five others?

The outcome of this scenario is identical to the one with the lever diverting the trolley onto another track: one person dies; five people live. The interesting thing is that, while most people would throw the lever, very few would approve of pushing the fat man off the footbridge.

Thomson and other philosophers have given us other variations on the trolley dilemma that are also scarily entertaining. Some don’t even include trolleys.

Imagine you are a doctor and you have five patients who all need transplants in order to live. Two each require one lung, another two each require a kidney and the fifth needs a heart.

In the next ward is another individual recovering from a broken leg. But other than their knitting bones, they’re perfectly healthy. So, would you kill the healthy patient and harvest their organs to save five others?

Again, the consequences are the same as the first dilemma, but most people would utterly reject the notion of killing the healthy patient.

Are we inconsistent, or are there factors other than consequences at play?

Actions, intentions and consequences

If all the dilemmas above have the same consequence, yet most people would only be willing to throw the lever, but not push the fat man or kill the healthy patient, does that mean our moral intuitions are not always reliable, logical or consistent?

Perhaps there’s another factor beyond the consequences that influences our moral intuitions?

Foot argued that there’s a distinction between killing and letting die. The former is active while the latter is passive.

In the first trolley dilemma, the person who pulls the lever is saving the life of the five workers and letting the one person die. After all, pulling the lever does not inflict direct harm on the person on the side track.

But in the footbridge scenario, pushing the fat man over the side is an intentional act of killing.

This is sometimes described as the principle of double effect, which states that it’s permissible to indirectly cause harm (as a side or “double” effect) if the action promotes an even greater good. However, it’s not permissible to directly cause harm, even in the pursuit of a greater good.

Thomson offered a different perspective. She argued that moral theories that judge the permissibility of an action based on its consequences alone, such as consequentialism or utilitarianism, cannot explain why some actions that cause killing are permissible while others are not.

If we consider that everyone has equal rights, then we would be doing something wrong in sacrificing one even if our intention was to save five.

Research done by neuroscientists has investigated which parts of the brain were activated when people considered the first two variations of the trolley dilemma.

They noted that the first version activates our logical, rational mind, and thus, if we decided to pull the lever, it was because we intended to save a larger number of lives.

However, when we consider pushing the bystander, our emotional reasoning becomes involved and we therefore feel differently about killing one in order to save five.

Are our emotions in this instance leading us to the correct action? Should we avoid sacrificing one, even if it is to save five?

Real world dilemmas

The trolley dilemma and its variations demonstrate that most people approve of some actions that cause harm, yet other actions with the same outcome are not considered permissible.

Not everyone answers the dilemmas in the same way, and even when people agree, they may vary in their justification of the action they defend.

These thought experiments have been used to stimulate discussion about the difference between killing versus letting die, and have even appeared, in one form or another, in popular culture, such as the film Eye In The Sky.

In Eye in the Sky, military and political leaders have to decide whether it’s permissible to harm or kill one innocent person in order to potentially save many lives.

Laura D’Olimpio, Senior Lecturer in Philosophy, University of Notre Dame Australia

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.

 
