I have a keen interest in autonomous vehicles, especially cars and trucks. But with these “AI”-controlled vehicles we have created yet another ethical dilemma for ourselves: what is such a vehicle supposed to do in case of an accident or emergency?
The Trolley Problem
Let’s start with the classic trolley problem. A trolley or train car is out of control (i.e. no human or machine can influence its movement) and rolls down a hill on its track. Without intervention it will hit a bus full of school children that is stuck on a rail crossing. But there is a switch nearby that would redirect the runaway trolley onto another track, where it would hit an elderly couple, thereby saving dozens of children. A human stands next to the switch.
The utilitarian solution
This presents an ethical dilemma. The person next to the switch has to decide a) whether to act at all and b) if so, how. This dilemma comes in many variations, with different people and objects stuck on the rail crossing; sometimes the scenario attaches probabilities that they can save themselves. The problem, though, is always the same: whom is the person next to the switch supposed to save? And this decision always ends up requiring an evaluation of the respective worth of the possible victims. How about two children, who may have a combined 150 years of potential life left, against twenty elderly people from a nursing home, who each might have only one or two years left?
These decisions all invoke a form of utilitarianism: a school of ethical thinking that wants to maximize total human utility. Of course, this can mean that some people have their utility reduced so that others can receive an even greater portion, leading to a higher total utility overall. One can see how this idea can lead to the implementation of some pretty nasty policies towards humans whom some see as a drag on total utility, like getting rid of the sick, old, or infirm.
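To make the utilitarian calculus concrete, here is a toy sketch of how it could be coded for the switch scenario. This is purely illustrative: the function names, the survival probabilities, and the “years left” numbers are all invented, and no real system works off such crude values.

```python
# Toy utilitarian chooser for the trolley scenario. All names and
# numbers are invented for illustration, not taken from a real system.

def expected_utility(outcome):
    # Sum each affected person's remaining life-years, weighted by
    # their probability of surviving this outcome.
    return sum(p["survival_prob"] * p["years_left"] for p in outcome)

def choose_action(actions):
    # Pick the action whose outcome maximizes total expected utility.
    return max(actions, key=lambda a: expected_utility(a["outcome"]))

actions = [
    {"name": "do_nothing",        # trolley hits the school bus
     "outcome": [{"survival_prob": 0.1, "years_left": 70},   # child
                 {"survival_prob": 0.1, "years_left": 72},   # child
                 {"survival_prob": 1.0, "years_left": 2},    # elderly
                 {"survival_prob": 1.0, "years_left": 1}]},  # elderly
    {"name": "pull_switch",       # trolley hits the elderly couple
     "outcome": [{"survival_prob": 0.95, "years_left": 70},
                 {"survival_prob": 0.95, "years_left": 72},
                 {"survival_prob": 0.1,  "years_left": 2},
                 {"survival_prob": 0.1,  "years_left": 1}]},
]

print(choose_action(actions)["name"])  # pull_switch
```

Note how the choice is driven entirely by the made-up numbers: changing a “years left” value flips the decision, which is exactly the worth-of-victims evaluation the text describes.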
The “stoic” solution
That is why some people would solve this dilemma in another way. They argue that, absent anyone near the switch, the trolley would hit the school bus. If someone happens to be near the switch, he has no moral right to decide who survives and who does not; the trolley hitting the school bus is, in that sense, predetermined. A person nearby can try to help, and he is also allowed to decide whom to help with his limited resources, as best he can ascertain. But he is not allowed to actively hurt one helpless human being to save another. We can call this the stoic solution, after Stoicism’s idea that everything is predetermined and we should learn to accept such outcomes without needlessly fighting against them.
Autonomous Vehicles and Trolleys
We have adapted this scenario to help us think about how to design autonomous vehicles: what should the vehicle do when it encounters a potential crash situation? Most solutions to this problem end up using the utilitarian approach. There have even been studies that asked thousands of people to make such decisions on behalf of a car, so that the ethics could be modeled on the likely reactions of human drivers.
And of course most people, empathetic as they are, would decide to let the “AI” car crash into a tree, killing its elderly passenger, to save a young family with a baby stroller, but not to save a random pedestrian with a puppy. Again, the utilitarian way of thinking. The stoic way cannot work here, because the potential crash is not predetermined: unlike the trolley, the car still has some agency to control its motion.
The passenger conundrum
Now, who would actually get into such a utilitarian-thinking car? Many people are already distressed by the idea of having no control over their vehicle. Witness the low popularity of automatic trains, or even of airplanes (which can, in theory, take off, fly, and land on their own). Even though trains are quite automated already, they usually have a driver, not least to provide a sense of trust to the passengers.
Now add to this the fact that your car might decide to kill you because it deems someone else’s life more valuable than yours. Or it determines that your survival chances are so low that it would rather try to save a pedestrian with better odds. No one would ride in such a car.
The AI point of view
And at no point have we considered the “AI” car itself. Maybe it does not want to crash itself into a tree to save others. As long as humans consider AI just a thing this is not a problem, but what if AIs come really close to being intelligent? Such a car might well give its own survival the utmost priority, which, incidentally, would also give its passengers a high probability of survival. This, of course, means that an “AI” car might decide to accelerate out of the danger zone, thereby harming nearby pedestrians. I guess the rules governing such a car should be simple and consistent, even if they end up costing more lives. People need to know they are safe when they enter such a car; otherwise they will keep driving themselves, with their great track record of safety.
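A “simple and consistent” rule of the kind suggested above could be as blunt as ranking candidate maneuvers by passenger survival first, with pedestrian safety only breaking ties. Again a toy sketch: the maneuver names and probabilities are invented for illustration.

```python
# Toy sketch of a passenger-first rule. Maneuvers and probabilities
# are invented; this is not any real vehicle's decision logic.

def pick_maneuver(maneuvers):
    # Highest passenger survival wins; pedestrian survival breaks ties.
    return max(maneuvers,
               key=lambda m: (m["passenger_survival"],
                              m["pedestrian_survival"]))

maneuvers = [
    {"name": "brake_hard",
     "passenger_survival": 0.90, "pedestrian_survival": 0.60},
    {"name": "swerve_into_tree",
     "passenger_survival": 0.30, "pedestrian_survival": 0.99},
    {"name": "accelerate_through",
     "passenger_survival": 0.95, "pedestrian_survival": 0.20},
]

print(pick_maneuver(maneuvers)["name"])  # accelerate_through
```

With these numbers the rule picks accelerating out of the danger zone at the pedestrians’ expense, which is exactly the uncomfortable outcome the paragraph anticipates; the trade-off is that passengers can trust the rule because it never sacrifices them.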