
Marco Pavone: Autonomous cars will have to manage unpredictable road behavior

A researcher explains how we can program them to make the decisions necessary to avoid a collision.

On a busy freeway, how does a self-driving car make the right decision? | iStock/Art Wager

It’s difficult enough to design robots that can operate independently in space or explore asteroids and distant planets.

It can be even more daunting to develop control systems for self-driving cars on terra firma, where failures have immediate and dire human costs and consequences, says Marco Pavone, an associate professor of aeronautics and astronautics.

Pavone, who has designed autonomous systems for space exploration in collaboration with NASA, is now using artificial intelligence to develop planning and decision-making algorithms that help autonomous cars make better, safer life-or-death decisions on the road. He was recently invited to speak about autonomy at the Robotics: Science and Systems conference, which brought the world’s leading thinkers on the topic to Freiburg, Germany. We met with him afterward to distill some of his insights.

Why is it harder to control an autonomous car on Earth than a robot in outer space?

In space, robots face many challenges: dust, rocks, debris, extreme environmental conditions and so forth. But for the most part those things behave the same way every time you encounter them. People aren’t so predictable. We have free will, and not everyone makes the same decision every time, so you can’t predict with absolute certainty what the human driver in the next lane will do. The challenge is to make sure that the control system for an autonomous car ensures safety regardless of human unpredictability. For example, I ride a motorcycle. Because a collision on a bike is a very serious thing, I’m extra careful not to put myself in a situation where I’m boxed in and, if a driver is distracted, a collision becomes unavoidable. We’re trying to program autonomous cars with a level of situational awareness that makes collision avoidance a priority.

How did you make it work?

Decision making for human-robot interactions is typically based on probability; that is, it essentially relies on educated guesses. And guesses can sometimes be wrong, which is not good in terms of safety. My lab specializes in a type of algorithm that detects when a guess is wrong and takes over in a more defensive way: in those situations, we give up on guesses and instead characterize the worst-case scenarios, giving the autonomous car the ability to avoid a collision regardless of what the human in the next lane does. It’s a lot like what any of us do when we drive. We make a lot of guesses about how the humans around us are going to behave, and if one of those guesses turns out wrong, we make an emergency maneuver that assumes the worst possible behavior from the other driver. This reasoning happens several times a second, continually updating the autonomous car’s possible responses as a scenario plays out.
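The idea of trusting an educated guess until it fails, then falling back to a worst-case assumption, can be sketched in a toy one-dimensional car-following scenario. Everything below is illustrative: the function names, the constant-speed prediction, the braking limits and thresholds are all invented for this sketch, not the lab’s actual algorithms.

```python
# Hypothetical sketch of a prediction-with-fallback control loop for a car
# following a lead vehicle in one dimension. All constants are illustrative.

def predict_gap(lead_speed, ego_speed, gap, dt):
    """Educated guess: assume the lead car simply holds its current speed."""
    return gap + (lead_speed - ego_speed) * dt

def worst_case_gap(lead_speed, ego_speed, gap, dt, max_brake=8.0):
    """Pessimistic bound: assume the lead car brakes as hard as physics allows."""
    worst_lead = max(0.0, lead_speed - max_brake * dt)
    return gap + (worst_lead - ego_speed) * dt

def choose_ego_accel(lead_speed, ego_speed, gap, dt=0.1, safe_gap=5.0,
                     trust_threshold=1.0, last_prediction=None,
                     observed_gap=None):
    """Plan against the probabilistic guess while it tracks reality;
    when the guess has proven wrong, plan against the worst case instead."""
    guess_wrong = (last_prediction is not None and observed_gap is not None
                   and abs(last_prediction - observed_gap) > trust_threshold)
    horizon = worst_case_gap if guess_wrong else predict_gap
    future_gap = horizon(lead_speed, ego_speed, gap, dt)
    if future_gap < safe_gap:
        return -6.0          # emergency braking maneuver
    return 0.0               # hold current speed
```

In this sketch, a situation that looks safe under the constant-speed guess can still trigger braking once the prediction error exceeds the trust threshold, because the controller then plans against the hardest braking the lead car could physically apply, echoing the "assume the worst possible behavior" fallback described above.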

How do you test such a system?

We have deployed our algorithms on a self-driving car furnished by the Center for Automotive Research at Stanford (CARS), on a test track. We haven’t yet done a true road test on public streets, but that day is coming. Our goal in these early tests has been to abuse the algorithm, to see how our system responds to the craziest driving possible. Rather than risk a real car with a human driver, we used an inexpensive remote-controlled car as our challenge vehicle and told its operator to drive in a completely unpredictable manner. We want to see how our algorithms work in the worst cases, to prepare the autonomous car for anything the real world might bring.
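In simulation, this kind of adversarial stress testing can be imitated with a Monte Carlo loop: a simulated challenge vehicle takes random actions while the controller under test tries to keep the gap open. The dynamics, the simple threshold controller, and all numbers below are assumptions made for the sketch, not the team’s test setup.

```python
# Illustrative Monte Carlo stress test: a simulated "challenge vehicle" ahead
# of the ego car accelerates and brakes at random, and we count how often a
# candidate controller lets the gap close to zero. All dynamics are invented.
import random

def brake_if_close(lead_speed, ego_speed, gap, dt, safe_gap=10.0):
    """A deliberately naive controller to stress-test: brake hard only
    when the gap is already small."""
    return -8.0 if gap < safe_gap else 0.0

def stress_test(controller, trials=200, steps=100, dt=0.1, seed=0):
    rng = random.Random(seed)           # fixed seed: reproducible runs
    collisions = 0
    for _ in range(trials):
        gap, ego_speed, lead_speed = 30.0, 25.0, 25.0
        for _ in range(steps):
            # The adversary acts unpredictably: random accel in [-8, 4] m/s^2.
            lead_speed = max(0.0, lead_speed + rng.uniform(-8.0, 4.0) * dt)
            ego_speed = max(0.0, ego_speed
                            + controller(lead_speed, ego_speed, gap, dt) * dt)
            gap += (lead_speed - ego_speed) * dt
            if gap <= 0.0:
                collisions += 1
                break
    return collisions
```

Running `stress_test(brake_if_close)` probes the controller against many randomized worst-case-leaning episodes, the simulated analogue of telling a remote-control operator to drive as erratically as possible.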

What’s your biggest challenge as an engineer?

Finding the sweet spot between safety and efficiency. It’s not difficult to design an autonomous car that is extremely safe. You just tell it to drive very conservatively at a low speed, yielding to everyone and everything on the street. But people wouldn’t use such a car on a daily basis. It would take too long to get places. On the other hand, it’s also relatively easy to design an autonomous car that is fast and efficient, and maneuvers swiftly and smoothly in traffic, but then safety becomes a concern. Our goal is to design autonomous cars that surpass human performance by achieving the optimal tradeoff between safety and efficiency, as outlined in a recent paper written by members of our team.
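One way to picture that tradeoff is as an optimization over a weighted cost: faster driving shortens the trip but lengthens the stopping distance, and a safety weight shifts the balance. The cost model below is a toy invented for illustration (the paper’s actual formulation is not reproduced here); stopping distance stands in as a crude risk proxy.

```python
# Toy illustration of the safety-efficiency tradeoff: score candidate cruising
# speeds by travel-time cost plus a weighted collision-risk cost, then pick the
# minimizer. The cost terms and weights are invented for this sketch.

def trip_time_cost(speed, trip_length=1000.0):
    """Efficiency term: seconds needed to cover the trip at this speed."""
    return trip_length / speed

def risk_cost(speed, max_brake=8.0):
    """Safety term: stopping distance (v^2 / 2a) as a crude risk proxy."""
    return speed ** 2 / (2 * max_brake)

def best_speed(weight_safety=2.0, candidates=range(5, 41)):
    """Speed (m/s) minimizing the weighted sum of the two costs."""
    return min(candidates,
               key=lambda v: trip_time_cost(v) + weight_safety * risk_cost(v))
```

Raising `weight_safety` pushes the chosen speed down, and lowering it pushes the speed up: the extremes correspond to the overly conservative car and the fast-but-risky car described above, with the sweet spot in between.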

Other Stanford co-authors of the paper are J. Christian Gerdes, a professor of mechanical engineering, director of the Center for Automotive Research at Stanford, director of the Revs Program and a senior fellow at the Precourt Institute for Energy; graduate students Karen Leung and John Talbot; and former graduate students Ed Schmerling and Mengxuan Zhang. Mo Chen of Simon Fraser University also contributed to the paper. This work was funded in part by the Toyota Research Institute.