
Would You Buy A Car That’s Programmed To Kill You?

The ethics of algorithms can seem like an abstract philosophical question, but in a few years, it's going to be a very real issue, with life and death implications.


You are far more likely to die in a car crash than in a plane crash, but people still fear flying more than driving. Why? Partly, psychologists say, because we blow risks out of proportion when we don't feel in control of our fate. In a plane, our life is in the pilot's hands.

Self-driving cars will inevitably give people a similar feeling, even if they are much safer than today’s vehicles. Computers are expected to be vastly better drivers than humans, but that requires people to turn over the wheel—knowing they won’t have control over the computer’s split-second decisions if there is an accident.

One question facing autonomous car developers like Google, Uber, and General Motors is how to program vehicles to behave in life-or-death moments, especially in collisions with other vehicles, cyclists, or pedestrians.

Should the car swerve to avoid hitting a large group of pedestrians, potentially injuring or killing the car’s own occupants, doing a rapid-fire utilitarian calculus on how to save the most lives? Or should it protect the passengers inside the vehicle at all costs, as a father might do with his children sitting in the back?

With human brains, these are split-second decisions that can't really be planned in advance. But for the coders who program self-driving cars, ethical decisions must be baked into the underlying algorithm ahead of time and reduced to brutal logic.
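To make that "brutal logic" concrete, here is a minimal, purely hypothetical sketch of how a utilitarian versus a self-protective policy could be expressed in code. Everything in it is illustrative: the Outcome class, the choose_maneuver function, the occupant_weight parameter, and the numbers are assumptions for the sake of the example, not anything drawn from a real autonomous-vehicle system.

```python
# Hypothetical sketch only: reducing the ethical tradeoff to a weighted cost.
# No real autonomous-vehicle software is represented here.
from dataclasses import dataclass


@dataclass
class Outcome:
    maneuver: str            # e.g. "stay_course" or "swerve"
    occupant_deaths: int     # expected deaths inside the car
    pedestrian_deaths: int   # expected deaths outside the car


def choose_maneuver(outcomes, occupant_weight=1.0):
    """Pick the maneuver with the lowest weighted expected death toll.

    occupant_weight = 1.0  -> purely utilitarian (all lives count equally).
    occupant_weight >> 1.0 -> self-protective (occupants protected at all costs).
    """
    def cost(o):
        return occupant_weight * o.occupant_deaths + o.pedestrian_deaths
    return min(outcomes, key=cost)


if __name__ == "__main__":
    dilemma = [
        Outcome("stay_course", occupant_deaths=0, pedestrian_deaths=3),
        Outcome("swerve", occupant_deaths=1, pedestrian_deaths=0),
    ]
    print(choose_maneuver(dilemma).maneuver)                      # swerve (utilitarian)
    print(choose_maneuver(dilemma, occupant_weight=10).maneuver)  # stay_course (self-protective)
```

The point of the sketch is that the entire moral debate collapses into a single weighting choice that someone has to set in advance, which is exactly what makes the question so uncomfortable.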

Writing in the journal Science today, three researchers reported on six online surveys that show deep inconsistencies in how people view the ethics of this issue.

"Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today," the researchers, Iyad Rawan of MIT, Jean-François Bonnefon of the Toulouse School of Economics, and Azim Shariff of the University of Oregon, write.

"As we are about to endow millions of vehicles with autonomy, a serious consideration of algorithmic morality has never been more urgent."

As the authors note, the same question has come up with other artificial intelligence technologies, including drones and weapons that increasingly operate without a human making the firing decision. But autonomous vehicles are the topic that seems most salient to the general public right now, and the public is genuinely torn about the "right" answer.

Taken together, the surveys showed that most people say they support a "utilitarian" self-driving car algorithm programmed to minimize casualties, even if that means sacrificing the car's own occupants. But when asked what they would buy for themselves, people became more selfish, especially when imagining their family in the car. They want other people to buy "utilitarian" cars but would themselves prefer cars with "self-protective" software. (You can see examples of the survey questions here.)

This is the classic "free rider" problem that surfaces in debates about vaccines or environmental pollution, exactly the kind of situation in which the government usually steps in with regulation.

But the surveys also show that government regulation of this kind could backfire and slow the adoption of driverless cars. Most people were uncomfortable with the government mandating "utilitarian" autonomous vehicles. Worse, no matter how unlikely the scenario, they said they wouldn't want to purchase a car that could be required by government mandate to sacrifice them (think of how opponents of Obama's health care reform stoked fears of government "death panels," and you can imagine the backlash). Driverless cars are going to be safer than human drivers anyway, so, ironically, a utilitarian regulator might conclude that more lives will be saved by not slowing down their adoption.

The surveys, of course, were built around thought experiments. In the real world, programming decisions for self-driving cars will be far less black and white, since it will be very hard for a coder to predict in advance which sequence of events would lead to one person's death versus another's, or perhaps only a minor injury. Rahwan tells Co.Exist the goal of the surveys was to create hypothetical scenarios that strip away uncertainty in order to get at people's underlying moral reasoning, much like the famous "trolley problem" used in ethical philosophy and moral psychology.

Still, these intense debates over the ethics of autonomous vehicles could be premature at best and, at worst, a distraction.

"The ethical questions that are raised are sort [of] 'jumping the gun' since the technology is still evolving. I am yet to meet a person who had to choose between protecting oneself or running into another," says Raj Rajkumar, co-director of the General Motors Collaborative Research Laboratory at Carnegie Mellon University. "We ought to be more focused a lot more on bigger issues like hackers getting into self-driving cars and taking control," he says.

Another issue is that we assume carmakers will be transparent about the algorithms they are using. They have little incentive to be, and, as the Volkswagen scandal over software that cheated diesel emissions tests showed, these programming decisions may be easy to hide.

"Manufacturers of utilitarian cars will be criticized for their willingness to kill their own passengers. Manufacturers of cars that privilege their own passengers will be criticized for devaluing the lives of others and their willingness to cause additional deaths," writes J.D. Greene, a psychologist at Harvard University, in a commentary on the surveys.

The survey researchers have created a website, called Moral Machine, where they will continue to crowdsource how the public feels about these moral decisions. Visitors can go and answer questions about what they would want their car to do in different life-or-death dilemma scenarios, or even design their own scenarios. You can check that out here.

