By: Caroline Wechsler
Nearly anyone who has taken an intro philosophy class, and indeed most people with a passing acquaintance with the subject, will recognize the beginnings of the infamous trolley problem: you are the driver of a speeding trolley, and ahead of you on the track are five people. You try to stop the trolley by pushing the brakes, but they do not work – the situation looks hopeless. However, there is a branch off to the right of the track where only one unsuspecting person stands. Turning onto this branch would spare the five people, but would mean actively choosing to kill one. Should you turn the trolley?
Until very recently, this problem was a classic but laughably implausible thought experiment. A new technology, however, may lend a version of it far greater urgency: self-driving cars.
The trolley problem, first conceived by the philosopher Philippa Foot in 1967, presents an interesting conundrum because it forces an individual to make a choice that will inevitably result in the death of at least one person (1). A utilitarian, concerned with producing the greatest good for the greatest number, typically advocates turning the trolley: one death is better than five. Other approaches hold that turning the trolley makes you morally responsible for a death in a way that inaction does not, and therefore that you should not turn it. A deontologist, for instance, operating under the premise that killing is always wrong, would advocate leaving the trolley on its path. Popular opinion tends to side with turning the trolley, suggesting that most people reason, at least in this case, along broadly utilitarian lines.
Self-driving cars turn the trolley problem from a thought experiment into a frighteningly real scenario. Imagine you are driving and five individuals walk out into the road in front of you; you can either hit them or swerve and hit a cyclist, killing him or her. Which is the right choice? Admittedly, this is the sort of split-second decision that human drivers make all the time; programming a car to follow a fixed decision-making pathway, however, means subscribing definitively to one ideology or another, as the sketch below suggests.
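To see how concrete that commitment is, consider a toy sketch – purely illustrative, with invented maneuver names and casualty estimates, and not drawn from any real vehicle's software – of how two competing policies might be hard-coded:

```python
# Purely hypothetical sketch: the names, numbers, and functions below are
# invented for illustration and do not come from any real autonomous-vehicle
# codebase.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    bystander_risk: float   # estimated expected casualties outside the car
    passenger_risk: float   # estimated expected casualties inside the car

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    # Minimize total expected casualties, regardless of who they are.
    return min(options, key=lambda m: m.bystander_risk + m.passenger_risk)

def passenger_first_choice(options: list[Maneuver]) -> Maneuver:
    # Protect the passenger first; only then minimize harm to others.
    return min(options, key=lambda m: (m.passenger_risk, m.bystander_risk))

# The mountain-road scenario from below, with made-up risk estimates:
options = [
    Maneuver("hold course into the bus", bystander_risk=5.0, passenger_risk=0.5),
    Maneuver("swerve off the cliff", bystander_risk=0.0, passenger_risk=1.0),
]

print(utilitarian_choice(options).name)      # "swerve off the cliff"
print(passenger_first_choice(options).name)  # "hold course into the bus"
```

Whichever function ships in the car, the manufacturer – not a driver in the moment – has already answered the ethical question.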
A further and more troubling complication is whether a self-driving car should prioritize its passengers over other individuals. For instance, say you are riding along a curvy mountain road, and around the bend comes a school bus full of children, heading directly for you. Should the autonomous car swerve to avoid the bus and save the many children, even though doing so will send you, its passenger, off the cliff?
This case complicates the clear utilitarian decision-making of the original trolley problem. Surveys show that more than 75% of respondents agree that sacrificing the passenger is the morally correct choice (2). They approve of self-driving cars programmed to act in a utilitarian manner – but the vast majority of study participants said they would not themselves want to ride in such vehicles (2). Attaining the greater good is a less straightforward choice when one is not guaranteed to be among the beneficiaries. Not only do individuals wish to protect themselves, but many are also wary of handing moral decisions over to a machine. Washington Post columnist Matt McFarland sums up the situation this way: “Humans are freaking out about the trolley program because we’re terrified of the idea of machines killing us. But if we were totally rational, we’d realize 1 in 1 million people getting killed by a machine beats 1 in 100,000 getting killed by a human” (3).
This puts manufacturers in a tricky spot: protect the passengers who own the car, or promote the greater good?
Survey results suggest that for cars to be marketable, and therefore financially viable, they should be programmed against what is better for the common good in these instances. It’s possible, though, that the general safety benefits of transitioning to self-driving cars would outweigh these harms, as McFarland points out (3) – but those benefits go unrealized unless people are actually persuaded to purchase the cars. Another option for car manufacturers would be to program each car according to its owner’s wishes. But this introduces a whole new set of legal and moral issues: if an owner knowingly chooses an algorithm that may result in the traffic deaths of others, is she responsible for those deaths even though she is not in control of the vehicle? Such questions have no clear answers in law, or indeed in philosophical debate. So how do we program cars to act in these ambiguous cases?
Self-driving cars are far from perfect in their current forms: they have difficulty navigating congested areas, for example (3). In fact, one persistent problem with self-driving cars is that they follow the law all the time, to a fault. For instance, they always obey the speed limit, which seems obviously correct until one tries to merge onto a highway and cannot exceed the limit to do so safely (4). A study by the University of Michigan Transportation Research Institute found that self-driving cars currently have accident rates twice as high as those of conventional cars, generally because aggressive or inattentive human drivers, unaccustomed to vehicles that follow the law so precisely, hit them from behind (5). But giving cars some element of “judgment” to break the law and drive more like humans is a proposition fraught with further practical, legal, and ethical stumbling blocks.
Some experts, like Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Laboratory, believe this problem could be solved by developing perception and planning technology so accurate that the cars would avoid hitting anyone at all, sidestepping the trolley problem altogether (3). Yet this solution seems a long way off – such technology is likely many years in the making. And the vision would most likely require the vast majority of cars on the road to be similarly self-driving, so that their programming could function in harmony. Addressing widespread fears about issues like the trolley problem would have to come first.
Still, many experts argue that self-driving cars will eventually make driving substantially safer, despite the additional risks they may pose now. While these cars may not always respond to a traffic conundrum in the way we think is correct, it is worth remembering that human drivers frequently fail to respond in the way we would want as well. Industry leaders hope that improving technology will bring more of these cars onto the roads and make them much safer, and perhaps one day self-driving cars will dominate the market. But for today, this may be one scenario in which technology is pulling ahead of our readiness to instruct it. As Harvard psychologist Joshua Greene writes, “before we can put our values into machines, we have to figure out how to make our values clear and consistent” (6).
Caroline Wechsler ‘19 is a sophomore in Currier.
WORKS CITED
[1] Thomson, Judith Jarvis. “The Trolley Problem.” The Yale Law Journal, 94(6): May 1985.
[2] Bonnefon, Jean-Francois, Shariff, Azim, and Rahwan, Iyad. “The social dilemma of autonomous vehicles.” Science, 352(6293): June 2016.
[3] Achenbach, Joel. “Driverless cars are colliding with the creepy trolley problem.” The Washington Post. 29 December 2015. Web.
[4] Naughton, Keith. “Humans are slamming into driverless cars and exposing a key flaw.” Bloomberg BusinessWeek. 17 December 2015. Web.
[5] Schoettle, Brandon, and Sivak, Michael. “A Preliminary Analysis of Real-World Crashes Involving Self-Driving Vehicles.” University of Michigan Transportation Research Institute. October 2015.
[6] Markoff, John. “Should Your Driverless Car Hit a Pedestrian to Save Your Life?” The New York Times. 23 June 2016. Web.