Should Your Self-Driving Car Kill You To Save Others?

Bioethicists at the University of Alabama at Birmingham have begun examining how self-driving cars should be programmed for inevitable crash situations, and they have come up with worrying ethical dilemmas. In an unavoidable crash, will your car be programmed to sacrifice you in order to save a greater number of strangers in nearby vehicles? And who should be responsible for making such a call?

Google's self-driving car: how will an autonomous vehicle like this one respond to ethical considerations? Image by Steve Jurvetson.

Self-driving car technology is advancing at a remarkably rapid rate. What seemed like science fiction just a decade or two ago is now fast approaching reality. Google's autonomous cars are a regular sight on California's highways, Volvo has proposed a commercial self-driving car in Sweden by 2017, and self-driving Teslas could hit the road within the year. While the public worries about the safety of these vehicles, and scientists, engineers, and programmers work to optimize their performance, few are considering the ethical dilemmas that arise when responsibility for human lives is handed to a non-sentient machine.

The dilemma is a modern retelling of the famous Trolley Problem. In one variation, imagine you are the operator of a railway switch and you see a train car hurtling toward five children playing on the tracks. You can flip the switch to divert the train onto a separate track, thereby saving the five children, but you notice that your own child has stumbled onto that alternate track. What do you do? And what would a computer do, when the decision is based purely on logic, unclouded by emotion or compassion?

In the case of self-driving cars, the question comes up in crash situations. Even though autonomous cars will be safer drivers than humans, circumstances will still arise in which a collision is inevitable: something on the road causes a tire to blow, rockfall litters the roadway ahead, or a pedestrian or cyclist makes a sudden, unpredictable move. Unlike a human driver, the computer controlling the car will have the processing speed to evaluate all the options. Should it plow you directly into a concrete barrier in order to save a greater number of the others on the road?

Ethicists classify the two viewpoints on this situation as utilitarianism and deontology. Utilitarianism aims for the greatest number of happy people: the car kills you to save a larger number of others, just as you flip the switch and kill your child in order to save the children of five other families. Deontology holds that certain actions are categorically wrong and can never be justified: flipping the switch to kill one child is an unacceptable act of killing, whereas standing by as the train hits the five children is passive; by the same token, programming a car to choose the death of one person is programming it to be an active killer.
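To make the contrast concrete, here is a minimal, purely illustrative Python sketch. The maneuver names, fatality estimates, and the "active harm" flag are invented for this example; no manufacturer has published such decision logic. It simply shows how a utilitarian rule and a deontological constraint can pick opposite maneuvers from the same set of options.

```python
# Hypothetical sketch: a utilitarian rule vs. a deontological constraint
# choosing between crash maneuvers. All numbers and labels are invented.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_deaths: int        # estimated fatalities if this maneuver is chosen
    requires_active_harm: bool  # does the car deliberately redirect harm onto someone?

def utilitarian_choice(options):
    # Minimize total expected deaths, occupant included.
    return min(options, key=lambda m: m.expected_deaths)

def deontological_choice(options):
    # Rule out any maneuver that actively redirects harm; among what remains,
    # still prefer fewer deaths. If nothing is permissible, fall back to all options.
    permissible = [m for m in options if not m.requires_active_harm]
    return min(permissible or options, key=lambda m: m.expected_deaths)

options = [
    Maneuver("stay the course, hit the group ahead", expected_deaths=5, requires_active_harm=False),
    Maneuver("swerve into barrier, sacrifice occupant", expected_deaths=1, requires_active_harm=True),
]

print(utilitarian_choice(options).name)    # swerve into barrier, sacrifice occupant
print(deontological_choice(options).name)  # stay the course, hit the group ahead
```

The point of the sketch is that the disagreement is not a programming problem but a values problem: both rules are trivial to encode, and they give different answers.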

So what did the University of Alabama ethicists conclude? They didn't. These questions are incredibly difficult to resolve, and the most important thing at this point is to make sure the discussion is happening. In the rush to bring new technologies to market, ethical considerations are often left on the back burner, even when human lives may be at stake. So what do you think?

Via the University of Alabama at Birmingham and Science Daily.