
Russ Altman and Fei-Fei Li: What is the future of artificial intelligence?

Two of the technology's leading experts discuss some of its most promising social applications and the role flexible policies will play in its successful implementation.
"We’re at a historical moment for AI." | iStock/iLexx

The future of artificial intelligence is now. After years of steady progress in making computers “smarter,” AI prototypes are being incorporated into hundreds of day-to-day applications, such as self-driving cars, intelligent smartphone assistants and a range of tools in academia, government and industry. As the technology improves, it will be applied in ever more high-impact economic, social, political and cultural areas.

On the eve of a Stanford and White House Office of Science and Technology Policy panel on the subject, we spoke to two leading experts: Russ Altman, professor of bioengineering, of genetics, of medicine and, by courtesy, of computer science, and the faculty director of the One Hundred Year Study on Artificial Intelligence; and Fei-Fei Li, associate professor of computer science and, by courtesy, of psychology, and director of the Stanford Artificial Intelligence Lab. Excerpts:

It seems society is on the cusp of reaping many benefits from AI-assisted technologies. What are some of the secondary implications that we need to prepare for?

Altman: My work is in medical and biological information. DNA sequencing has outstripped Moore’s Law for the last few years, and so biology is providing data that is far beyond the ability of humans to fully analyze. This is a great opportunity for AI to help advance our understanding of health and disease. The ability to integrate data about patients from three streams – genomics, personal monitoring and electronic health records – to more accurately diagnose them and choose treatments is extremely exciting. But we need to make sure that these capabilities are distributed in a just way so that it does not just benefit a small fraction of society. Now is a good time to think about the anticipated evolution of these capabilities and how they might impact economic, social, political and cultural activities. There have been some naysayers and so there is also a sense that the public should get a balanced view of the benefits and costs of AI technology.

Li: We’re at a historical moment for AI. After 60 years of AI being a largely academic research discipline, AI algorithms have become useful and powerful enough to tackle real-world problems, from text analysis to image understanding to decision making. These advances are already being implemented and will increasingly impact many industries. Self-driving cars are an example in transportation. Precision medicine is an example in health care. Because of the large technological advances AI can make, it is an important time to consider the socioeconomic consequences of these changes. Although self-driving cars can save lives, they also change the landscape of jobs involved in driving. Machine-learning-driven medical diagnosis can be more efficient, effective and cheaper, but it also raises questions regarding liability, ethics and the roles of doctors. With all these changes on the horizon, it is exactly the right time to think about AI not only from a technology point of view, but also from economic, social, legal and ethical points of view.

Do you think there will be a one-size-fits-all policy for AI, or will we need to have flexibility to tailor policy to different applications?

Altman: I think that AI applications will exist within different established industries, and each industry will have to assess its preparedness and the degree to which current operating assumptions and procedures are ready to accept AI technology. In some settings, there is a mature set of operating procedures which should be able to incorporate AI using mostly existing frameworks – such as regulation of diagnostic tests, new drugs and devices by the FDA. But in other settings, the rapid advance of AI may be disruptive and require rapid response from the industry to reset operating assumptions.

Li: AI is a large field; it is a heterogeneous collection of many subfields which are huge in their own right. AI technologies that impact transportation cannot be regarded in the same way as AI technologies that help children to learn better. So, I don’t think there is a one-size-fits-all policy for AI, just like there is no one-size-fits-all policy for electrical engineering or energy.

In areas where AI policies and standards are still in flux, what are some of the challenges to getting everyone on the same page?

Li: One big challenge is the lack of understanding of AI in our society at large, including industrial leaders, lawmakers, public officers and the general public. Because of this lack of understanding of the scientific method and progress of AI, the mood tends to swing from overly optimistic to overly pessimistic. The media have an important role to play in bridging the communication gap between AI researchers/academics and the general public. It is also important for leaders of our society to learn more about AI and become informed of its concepts and implications. There is neither the Terminator nor Baymax coming next door soon.

The Stanford-White House Office of Science and Technology Policy panel will be live streamed at https://www.facebook.com/stanford.engineering and live tweeted by @StanfordEng using the hashtag #FutureofAI. For more details, visit https://events.stanford.edu/events/610/61015/