I want to put robots in every home in America. And not just in homes – in the skies, on our roads, and even on other planets.
My fascination with aeronautics and astronautics began with drones. When I started my PhD in 2015, organizations like Amazon, NASA, and the Federal Aviation Administration were just beginning to seriously explore mass-scale drone delivery, like having packages delivered onto our roofs by drones. The possibilities felt almost like science fiction. Fast-forward 10 years, and we’re now thinking about flying cars. This is truly an exciting time to be working in robotics and autonomy.
Today, I explore these questions as an assistant professor of aeronautics and astronautics at Stanford and as the founder and director of the Safe and Intelligent Autonomy Lab. I launched the lab in 2021 at the University of Southern California and later brought it with me to Stanford. Our mission is clear: Enable autonomous systems to harness the power of modern machine learning while maintaining rigorous safety guarantees. We want robots that are not only intelligent, but provably safe – systems that will not harm themselves, humans, or the world around them.
Safety sits at the core of our work because the next generation of robots will not operate in isolation – they will interact with humans and other robots. Autonomous vehicles share roads with human drivers. Factory robots collaborate with workers. Assistive robots may one day help us cook dinner or care for loved ones. At the end of the day, robots are machines, and when machines interact with humans and make mistakes, the consequences can be severe. Unlike passive AI systems, autonomous robots act in the physical world – and that physicality raises the stakes. If we truly want robots in every home, we must solve the safety question rigorously.
Our research is inherently interdisciplinary. We work broadly at the intersection of machine learning, robotics, and control theory. Our team includes nine PhD students, several master’s and undergraduate researchers, and collaborators with backgrounds in computer science, aero/astro, electrical engineering, and even mathematics. Together, we’ve developed a suite of tools to analyze, evaluate, and verify the safety of AI-enabled autonomous systems. We stress-test our systems in simulators using physics models and conduct real-world testing in controlled environments. For example, we collaborate with several autonomous vehicle companies on deploying our algorithms into real car stacks.
At its heart, my work translates messy real-world robotics problems into mathematical language. Mathematics becomes the abstraction layer between the physical robot and the algorithms that govern its behavior. That is why my favorite course to teach at Stanford is Principles of Safety-Critical Autonomy. In that class, we take informal notions of safety – verbal descriptions of risk, uncertainty, and responsibility – and convert them into analyzable mathematical formulations. The students come from remarkably diverse backgrounds: Some have engineered warehouse autonomy systems, others have worked on spacecraft docking with NASA. What unites them is a shared commitment to safety, even though “safety” means something different in each domain. Exploring those differences – and designing principled solutions – is one of the most rewarding parts of my job.
Stanford is a remarkable place to pursue this vision. For example, the Stanford Robotics Center, housed in the basement of the Packard building, brings together roboticists across disciplines. Every visit reveals something new: surgical robots, humanoid systems, soft robotics, even ingestible robotic devices capable of targeted drug delivery. The density of ideas and ambition is extraordinary.
I also believe in experiencing the technology I study. When Waymo launched in Los Angeles in 2022, I rode in one of the early vehicles myself. At the time, I had just completed my PhD and was working as a research scientist at Waymo before starting my faculty career. With a safety researcher’s mindset, I entered that first ride prepared to critique every detail, but honestly, it blew me away. Recently, when my father visited, I took him to the Golden Gate Bridge and asked if he wanted to try an autonomous ride. Waymo drove us through some of the most complex streets in San Francisco to Ghirardelli Square. Afterward, he turned to me and said he was certain a human operator must have been remotely controlling the car. When I explained that it was fully autonomous – sensing, planning, and acting in real time – he was impressed. And shocked! Moments like that remind me why this work matters. Autonomous systems are no longer theoretical. They are here. The challenge now is ensuring that as they become ubiquitous – in our homes, cities, skies, and beyond – they are systems we can trust. That is what drives me every day.