What's Really Happening When Self-Driving Cars Turn The Wheel Back To Humans

Are driverless cars really safer if they hand over the controls in dangerous situations?

Google, Tesla, Nissan, Mercedes, and several other carmakers have just reported the number of "disengagements" their driverless test cars logged over the past year. Disengagements occur either when the human driver takes control from the car, or when the car detects a problem and asks the driver to take over. The reports are required by California law from companies actively testing driverless vehicles, and the numbers cover September 2014 to November 2015.

Google, the biggest tester of autonomous cars in California, reported 341 disengagements. In total, the seven companies reported 2,894.

At first glance, this seems terrifying. After all, we’ve been comforted by Google’s figures that its cars haven’t caused a single accident in millions of miles over several years. And yet now it looks like the cars are regularly failing, unable to make a simple trip without panicking and handing the wheel back to the human in charge. What’s going on?

First, disengagements aren’t quite the disaster some make them out to be. Consumer Watchdog's John Simpson issued a panicked press release in response to the news:


How can Google propose a car with no steering wheel, brakes or driver when its own tests show that over 15 months the robot technology failed and handed control to the driver 272 times and a test driver felt compelled to intervene 69 times?

He even accused Google of using "our public roads as its private laboratory," although one wonders how autonomous cars will improve if they can’t be tested on real roads.

Google’s own report details the real criteria for a disengagement, and—surprise—it doesn’t mean that the car was about to run over a child who had just leapt into the road ahead.

The official rules state that a disengagement must be reported "when a failure of the autonomous technology is detected," or, "when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate control of the vehicle."

"Cars in autonomous operation win right now because they have caused no collisions," the AAA's Manager of Technical Services Mike Calkins told Co.Exist. "In a sense, current autonomous driving technology is already 'better' than human drivers because it recognizes when it could get in over its head and asks for help—people are more likely to just run into something."

Google’s report says that disengagements are almost encouraged, as a learning tool. "Our objective is not to minimize disengagements; rather it is to gather as much data as possible to enable us to improve our self-driving system." This refers to disengagements requested by the car itself, which it signals by flashing and sounding an alarm. These, says Google, are usually technical problems—communication failures between primary and backup systems, or anomalies in sensor readings. A look at Google’s figures shows that this type of disengagement is becoming rarer: the cars went from one every 785 miles of autonomous driving in the fourth quarter of 2014 to one every 5,318 miles by the end of 2015.
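
To make that metric concrete, here is a minimal sketch of how a miles-per-disengagement figure is calculated. The mileage and disengagement counts below are hypothetical placeholders, not numbers from Google's filing; only the metric itself comes from the report.

```python
# A rough sketch of the miles-per-disengagement metric described above.
# The quarterly mileage and disengagement counts are hypothetical placeholders,
# NOT figures from Google's filing; only the metric itself comes from the report.

quarterly_data = {
    "Q4 2014": {"autonomous_miles": 50_000, "tech_disengagements": 64},   # hypothetical
    "Q4 2015": {"autonomous_miles": 100_000, "tech_disengagements": 19},  # hypothetical
}

for quarter, d in quarterly_data.items():
    rate = d["autonomous_miles"] / d["tech_disengagements"]
    print(f"{quarter}: one technology-initiated disengagement every {rate:,.0f} miles")
```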

The other kind of disengagement is when the driver wrests control back from the car. This is the scarier kind: what could make the driver do such a thing? Google reports that most of these are cautious moves, taken when a bike is behaving erratically near the car, for example, or when the car is getting a bit jerky in traffic.

Any time this happens, the events leading up to the incident are studied to see if the car would in fact have hit something. Of the 69 driver-initiated disengagements, 13 would have caused "contact." Of these 13, 10 would have been the fault of the car and three the fault of another driver.

These events are anomalies, and Google says that "because the simulated contact events [possible crashes] are so few in number, they do not lend themselves well to trend analysis, but we are generally driving more between such events."

What does this mean for the real-world use of autonomous vehicles? We should clearly hold them to higher standards than we do human-controlled cars, if only because our standards for human drivers are so low. We’re terrible drivers.

"When will we have a car that you simply tell where to go and it takes you there automatically?" said Calkins. "It depends on who you ask, and there are many different opinions, but Google’s report is unlikely to change any of those estimates; it simply describes a normal and expected part of the development process. It’s also probably too premature to tell if these projects are 'on track.' If they have the same number (or ratio to VMT) of disengagements in three or five years, that might suggest a potential problem."

I asked the AAA for any figures that may be comparable to Google’s miles-per-disengagement measure. "Of course, there aren’t real data we can use to compare this experience with that of self-driving cars," AAA’s Brian Martin told me, but he provided figures for death and injury on U.S. roads in 2013.

If we take Google’s figures, we have 13 disengagements that would have resulted in a crash if the driver hadn’t intervened. They were spread over 424,331 miles of autonomous driving, so that’s one (avoided) accident every 32,641 miles.

Let’s compare that to the figures for people: We have 77 injuries per 100 million vehicle miles traveled, or one injury per 1,298,701 miles.
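
For anyone who wants to check the arithmetic, here is a back-of-the-envelope sketch that reproduces both rates from the figures quoted in this article; it is a rough comparison, not an official methodology.

```python
# Back-of-the-envelope rates, using only figures quoted in this article.

# Google: 13 simulated-contact disengagements over 424,331 miles of autonomous driving.
google_miles = 424_331
simulated_contacts = 13
print(f"One avoided contact every {google_miles / simulated_contacts:,.0f} miles")
# -> One avoided contact every 32,641 miles

# Human drivers: 77 reported injuries per 100 million vehicle miles traveled (2013).
injuries_per_100m_vmt = 77
print(f"One reported injury every {100_000_000 / injuries_per_100m_vmt:,.0f} miles")
# -> One reported injury every 1,298,701 miles
```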

Google comes off surprisingly well in this. Remember, the Google cars never crashed, and the AAA figures are not just for crashes, but for crashes in which people got hurt. Further, 100% of autonomous disengagements are supposed to be recorded, whereas who knows how many regular crashes go unreported? One estimate from the U.S. Department of Transportation puts the figure for unreported crashes at 60% (property damage only) and 24% (injuries).

Let’s apply the DOT estimate to the official figures. We’ll use the numbers for crashes involving injuries, as that’s all we have from the AAA. Adding 24% to the reported rate gives roughly 95 injuries per 100 million miles, or one injury per 1,047,340 miles, putting Google at roughly half as good. And remember again, the autonomous cars haven’t yet caused a single crash.
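
That adjusted figure appears to come from inflating the reported injury rate by the DOT's 24% estimate for unreported injury crashes; the sketch below reproduces the arithmetic under that assumption.

```python
# Reproducing the adjustment above: inflate the reported injury rate by the
# DOT's 24% estimate for unreported injury crashes (an assumed reading of the
# article's arithmetic, but one that matches the published figure).

reported_injuries_per_100m_vmt = 77
unreported_share = 0.24  # DOT estimate for unreported injury crashes

adjusted_rate = reported_injuries_per_100m_vmt * (1 + unreported_share)  # ~95.5 per 100 million miles
print(f"One injury every {100_000_000 / adjusted_rate:,.0f} miles")
# -> One injury every 1,047,340 miles
```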

Still, these figures are more of a fascinating guide than a rigorous comparison. "Crash rates, which society generally looks to as an indication of driver safety, aren’t as reliable a measuring stick as they might seem at first glance," writes Google’s self-driving car director Chris Urmson. "This is especially true of the most common types of collisions: the minor fender benders that happen all the time on city streets."

The problem faced by autonomous cars is that we’re scared of them. Being killed or injured by a robocar is much more frightening than the possibility of being hit by a commuter distracted by their cellphone, even though we’re likely to end up safer in a driverless car.

"On our test track, we run tests that are designed to give us extra practice with rare or wacky situations. And our powerful simulator generates thousands of virtual testing scenarios for us," writes Urmson. That is, engineers are actively poking at their machines to force errors so they can be studied. That’s a lot better than human drivers who are left to their bad habits once they get a license.

Which brings us back to the scaremongering comments of Consumer Watchdog's John Simpson. Google may, as Simpson says, use "our public roads as its private laboratory," but as Google’s Urmson points out, "this stands in contrast to the hazy variability we accept in experienced human drivers—never mind the 16-year-olds we send onto the streets to learn amidst the rest of us."
