
Research & Ideas


Why self-driving cars shouldn’t be too autonomous

Two researchers explain why designers should focus on developing systems that make it easy and natural for passengers to take control in an emergency.


The more autonomy in the car, the less attentive the driver | Unsplash/chuttersnap

Not so long ago, science fiction imagined passengers relaxing while automated cars whisked them, carefree, to their destinations.

Now, as tech and industrial giants like Google, Toyota, Ford, Tesla and Uber pour billions into self-driving car technology, this once-distant future seems right around the corner.

But maybe not so fast. The self-driving cars of 2020 have a limited ability to sense, interpret and anticipate the whirlwind environment of the road. Compared with human drivers, they struggle to detect, classify and act on information gleaned from their surroundings, and to react accurately to unexpected hazards such as a large obstacle in the road or a child darting out between parked cars.

Designers, engineers and regulators must operate under the assumption that autonomous technology is not, and may never be, perfect, and that some interaction between system and driver will always be necessary. This raises an important question: How should an autonomous vehicle's collision avoidance system be designed? Given that the technology cannot be perfect, should it be biased toward a higher likelihood of false alarms, issuing alerts when no threat exists? Or should it err instead on the side of missing a true signal, occasionally failing to warn the driver when a real threat is present?
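This design question is the classic signal-detection trade-off: a single alert threshold determines both the false-alarm rate and the miss rate, and lowering one raises the other. The papers do not specify a detector model, so the following is purely an illustrative sketch with invented numbers: obstacles and empty road produce overlapping noisy confidence scores, and the threshold decides when to cry wolf.

```python
import random

def simulate(threshold, trials=10000):
    """Toy obstacle detector: an obstacle is present on half of the
    trials, and the sensor returns a noisy confidence score. An alert
    fires whenever the score exceeds the threshold. Returns the
    false-alarm rate and miss rate over all trials."""
    false_alarms = misses = 0
    for _ in range(trials):
        obstacle = random.random() < 0.5
        # Obstacles tend to score higher, but the distributions overlap,
        # so no threshold can be perfect (illustrative parameters).
        score = random.gauss(1.0 if obstacle else 0.0, 0.5)
        alert = score > threshold
        if alert and not obstacle:
            false_alarms += 1
        if obstacle and not alert:
            misses += 1
    return false_alarms / trials, misses / trials

# A low threshold cries wolf; a high threshold stays silent when it shouldn't.
for t in (0.2, 0.5, 0.8):
    fa, miss = simulate(t)
    print(f"threshold={t}: false-alarm rate={fa:.3f}, miss rate={miss:.3f}")
```

Running the loop shows the tension directly: no choice of threshold drives both error rates to zero, which is why the bias must be a deliberate design decision rather than an afterthought.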

As researchers spanning the Computer Science, Mechanical Engineering, and Civil and Environmental Engineering departments at Stanford University, we investigated these questions in two recent studies, “The Car That Cried Wolf” and “Is Too Much Caution Counterproductive?” In short, we found that neither designers nor drivers should rely too heavily on AI. The more autonomy built into the system, the less attentive drivers tend to be, and the less likely they are to respond effectively should some potentially fatal emergency require them to override the system and take control.

We assessed how ordinary drivers would interact with and respond to autonomous systems under various conditions in a safe yet realistic controlled environment: the Stanford Driving Simulator, located in the state-of-the-art Volkswagen Automotive Innovation Lab (VAIL) on the Stanford campus. The driving simulator, which builds on the work of the late Professor Clifford Nass, consists of a full Toyota Avalon with modifications that simulate on-road driving movement through haptic feedback in the steering wheel, seat and pedals, built so drivers can physically experience the rumble of the virtual road. A giant 270-degree display wall that projects a virtual driving environment surrounds the front of the car, and a rear projection enables the rear-view mirror to work. Speakers simulate road noise and provide audio alerts to study participants. Two video cameras installed inside the vehicle’s cabin monitor and record drivers’ behavior during the study.

Programming the driving simulator enabled us to analyze how study participants behaved when presented with an obstacle in the road. We started our experiments with two hypotheses: that increased automation would result in less driver vigilance; and that over- or under-sensitivity of the system would increase drivers’ vigilance.

In a series of experiments involving 96 drivers randomly selected and spanning demographics — young and old, experienced and inexperienced drivers, students and working professionals — we found that the more drivers felt like they had to pay attention, the likelier they were to respond rather than freeze in an emergency.

As self-driving vehicles become more autonomous, drivers may become more sophisticated in understanding the biases of the system, and exercise greater or lesser vigilance depending on whether the system errs on the side of under-reacting or over-reacting to potential threats. This suggests that as cars become more independent, researchers will have to pay closer attention to how system bias shapes driver behavior, and take driver preferences into account as they calibrate a system’s sensitivity to conditions on the road.
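One way to picture such calibration is a feedback loop that nudges the alert threshold toward a driver's preferred balance of errors. The mechanism below is not from the papers; the class name, step size and bounds are all invented for illustration: each dismissed needless alert raises the threshold slightly, and each unwarned intervention lowers it.

```python
class AdaptiveAlertThreshold:
    """Illustrative sketch only: nudge an alert threshold toward a
    driver's preferred trade-off between false alarms and misses.
    All parameter values are arbitrary, not drawn from the studies."""

    def __init__(self, threshold=0.5, step=0.05, lo=0.1, hi=0.9):
        self.threshold = threshold  # current alert cutoff
        self.step = step            # adjustment per observed error
        self.lo, self.hi = lo, hi   # clamp so the system never goes extreme

    def record_false_alarm(self):
        # Driver dismissed a needless alert: demand more evidence next time.
        self.threshold = min(self.hi, self.threshold + self.step)

    def record_miss(self):
        # Driver had to intervene with no warning: alert more eagerly.
        self.threshold = max(self.lo, self.threshold - self.step)

# Usage: three dismissed alerts push the threshold up from 0.5 toward 0.65.
calib = AdaptiveAlertThreshold()
for _ in range(3):
    calib.record_false_alarm()
print(round(calib.threshold, 2))  # prints 0.65
```

The clamping bounds reflect the article's core finding: even a personalized system should never become so quiet, or so noisy, that the driver stops paying attention altogether.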

While we and other researchers continue to test these initial findings, we’re pursuing a number of related avenues of inquiry. For example, can we personalize a vehicle’s automated system to particular drivers based on their simulated and real-world driving behavior over time, similar to how AI and deep learning algorithms provide consumers with personalized news and shopping recommendations on the internet? Another critical area of study will be how to design automated systems that allow human operators to override smoothly in an emergency. The long-term goal is to eliminate fatal accidents and other undesirable driving outcomes as humans, machines and AI work together on the roads of the future.

Ernestine Fu, a postdoctoral scholar in the Stanford lab of Professor Martin Fischer, studies the transformative effects of technology on humanity. She also teaches CEE 326: Autonomous Vehicles Studio and CEE 214 / MED 214: Frontier Technology: Understanding and Preparing for Technology in the Next Economy. She and David Hyde, now a postdoc at UCLA, were part of the Stanford research team responsible for the two papers upon which this article is based, “The Car That Cried Wolf” and “Is Too Much Caution Counterproductive?”