Stanford’s former provost John Etchemendy, a philosopher by training, discusses how the logical and ethical skills of his academic discipline are needed to shape the future of artificial intelligence.
In his own words, he wants to maximize the good AI can deliver, while minimizing the bad outcomes that could result as this powerful technology evolves.
To achieve that goal, Etchemendy helped found and now co-directs the Stanford Institute for Human-Centered Artificial Intelligence (HAI), an interdisciplinary team of experts from across Stanford. They aim to influence and guide the future of AI by augmenting Stanford’s leadership in AI, computer science, engineering and robotics with expertise from medicine, law, social sciences and the humanities.
Etchemendy says his goal is to ensure that AI reaches its full potential to enhance human capabilities and enrich human lives. He discussed his vision for human-centered artificial intelligence on the latest episode of Stanford Engineering’s The Future of Everything podcast with bioengineer and host Russ Altman, which was recorded in early March.
Russ Altman: Thank you. It’s great to see everybody live in the auditorium today. Today on “The Future of Everything,” it’s the future of human-centered artificial intelligence.
We all know that AI has exploded. We have face recognition routinely being used on our cameras and on our phones. We have assistants like Siri and Alexa speaking to us, giving us advice, listening to us. We have self-driving cars that we’re increasingly seeing on the roads, satellite analysis of the Earth, looking for areas of drought, areas where there may be illegal activities, areas of poverty. We have robotics, in addition to the cars. We have things around our homes, and we have social networks that can tell us who our friends are, or at least who our friends should be. All of these are useful, and they’re all working their way into our everyday life.
Some people are resisting more than others, but it’s all starting to be part of life. There are other uses of AI, too. The facial recognition makes us worry about a surveillance state. The speech assistants are listening, even, perhaps, when we don’t want them to listen. The robotic devices and the cars — we worry about what happens if they do something we don’t want them to do, or somebody gets hurt. The satellites, we worry about privacy, and not only the satellites, but the little flying things — the drones. Some have warned of dire outcomes: that this is going to lead to the “Terminator” state — well, we’ve already had the “Terminator” state — but that this is going to lead to an AI rebellion and control of humanity. These are reasonable people, and you can decide if those are reasonable arguments; but in the other ways that I mentioned, there are still legitimate concerns. As a result, there’s emerged this idea of human-centered AI, and that’s what we’re going to be talking about today.
John Etchemendy is a professor of philosophy at Stanford. He’s the former provost at Stanford University, and most relevant for today, he’s the co-director of the Stanford Institute for Human-Centered AI, and I’d like to invite John up right now. Please join me in welcoming him.
John Etchemendy: Thank you, Russ.
Russ Altman: Thank you, so John, it’s easy to say that AI should be human-centered, but can you help us break down what is AI, and then what would human-centered AI be to you?
John Etchemendy: Okay, what is AI? I could give a very general definition of AI, but I won’t. What I’m going to do is say something about current-day AI, because that is what most people are talking about, what most people are thinking about. Current-day AI is really characterized by deep learning, neural net architectures and so forth. It is what is responsible for some of the amazing applications that you’ve mentioned, and that is basically what we currently mean by AI. Now, it’s much narrower than what the term could mean more generally—
Russ Altman: This is the revolution in machine learning —
John Etchemendy: Exactly.
Russ Altman: — that in the last five to 10 years has really made remarkable strides.
John Etchemendy: Remarkable strides. Now, I want to emphasize something about current-day AI, and that is that it is simply a way of programming computers to get them to perform functions that were infeasible using traditional programming techniques. That’s all it is. That’s a lot because it allows us to do things like facial recognition, which we could not do well using traditional techniques, but it’s nothing more than that, currently.
Russ Altman: That’s the big advance, and so tell me about human-centered AI.
John Etchemendy: Human-centered AI — at the Institute for Human-Centered AI, we mean, really, three things when we use the human-centered terminology, and these three things correspond to the three focuses of our institute. One focus is on the technology itself. We have a technology — the one I just referred to, you know, deep learning, neural nets — and it is amazing in the sense that it allows us to do some really quite remarkable things with computers. But it is very, very limited: it can be applied only to very narrowly specified domains, it requires large amounts of data to train, and the results it produces — the algorithms — are in many ways brittle. That is, they break down in ways that are unexpected. They are very non-humanlike.
Russ Altman: Right, so the performance might remind you of a human, but the underlying wiring is not how our brains do it.
John Etchemendy: Right, it’ll perform like a human for the first 1,000 cases, and then it does something bizarre, right? It fails to recognize something because the image has been altered in some very trivial way, so it is not human-like in that respect — in all of those respects. Human intelligence is amazing in that it can learn these kinds of recognition tasks with several orders of magnitude less data than neural nets require when you train them, and it tends to be much less brittle than the algorithms we’re producing; and there are a number of other ways in which it’s very, very different. So the first sense of human-centered is really the search for human-like intelligence: What is the next breakthrough that we need in artificial intelligence to move the technology to the next generation, to improve on that technology, to make it more human-like?
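The brittleness described here — a confident prediction flipped by a change too small to notice — can be sketched with a toy linear classifier. Everything below (the dimension, the random weights, the perturbation rule) is an invented illustration of the general mechanism, not code from any real vision system:

```python
import numpy as np

# Minimal sketch of why high-dimensional decisions are brittle
# (the mechanism behind so-called adversarial examples).
rng = np.random.default_rng(0)
dim = 1000                         # e.g. a small image flattened to a vector
w = rng.normal(size=dim)           # "learned" weights of a linear classifier
x = rng.normal(size=dim)           # an input the classifier sees

score = w @ x                      # sign of the score = predicted class

# Choose a per-coordinate step just big enough to cross the boundary.
# Because the tiny steps accumulate across all 1000 coordinates,
# eps stays small relative to the input's own values.
eps = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - eps * np.sign(score) * np.sign(w)

flipped = np.sign(w @ x) != np.sign(w @ x_adv)
print(flipped)  # the decision flips even though each coordinate barely moved
```

Each coordinate moves by at most `eps`, yet the thousand tiny nudges accumulate into a large change in the score — which is why a model can fail on an input that looks unchanged to a person.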
Russ Altman: So this sounds, to me, like you might even be thinking about having conversations with neuroscientists.
John Etchemendy: Exactly, and we believe — we’re making a bet, and that bet is that we will find the inspiration for the next generation by looking at our growing understanding of human intelligence, from the neurosciences, from the cognitive sciences, cognitive psychology, possibly from philosophy, for that matter, so we think —
Russ Altman: Professor of philosophy.
John Etchemendy: Right. So human-centered in that sense. We’re looking to push the technology forward, and we’re looking in a particular direction. That is, toward the human, the understanding of the human —
Russ Altman: So this human-centeredness is centered on trying to be a little bit more like biological human reasoning systems.
John Etchemendy: Exactly.
Russ Altman: Okay.
John Etchemendy: Now, that’s one sense. Another sense, the second sense, and the second area of focus of our institute, is on the impact of AI. We believe that AI, the current technology, as limited as it is, is going to affect everything we do. It’s going to have impacts on every piece of our lives. It already has, whether you know it or not, and we need to understand what the impact is going to be. We need to look at its impact on humans, on society, on cities, on the workforce, the economy, international security, all of these ways in which it’s going to have an impact, we need to understand —
Russ Altman: So this is distinctly non-technical. I’m sorry, that’s not fair. Distinctly non-computer science.
John Etchemendy: It’s not particularly computer science, although I believe that there’s an important interaction there between the computer scientists and other scientists in the, for example, social sciences, so we need to understand the impact, and we need to anticipate that impact, and figure out how to deal with it to maximize the good and minimize the bad.
Russ Altman: This is “The Future of Everything.” I’m Russ Altman. I’m speaking with John Etchemendy. We are learning about the three pillars of human-centered AI, and I believe we’ve just covered two of them. To review, human- and biology-inspired methods for new technologies, and number two, making sure we understand the broad impacts on humanity of AI technologies.
John Etchemendy: Right, and the third is on the application of the technology. We, as an institute, focus on ways in which the current technology, and hopefully future technology as well, can be used not to replace humans, not to take, for example, a human and produce a robot that does that human’s job, but rather to enhance humans, to extend what humans can do, to enrich human lives; so we’re prototyping many, many applications, in the medical sphere, this medical school, or in the Earth sciences, and so forth and so on, to understand ways in which we can apply this technology to improve human lives, to enhance what humans can do, to build tools to allow humans to do things that they could not do before.
Russ Altman: So let me ask you about that. Let’s just pause for a moment, because one of the big negative headlines about AI is the loss of jobs. Do you believe that the loss-of-jobs argument is a red herring, or that they’re thinking about it the wrong way? Because you said we want to “augment.” Should we tell the drivers, “We’re not going to replace you; we’re going to augment you,” or is that just not being honest? I’m trying to understand that.
John Etchemendy: Not being completely honest. What’s going to happen, I believe, is that it will have a huge impact on the economy. It will have a huge impact on jobs. It will be slower than people are currently anticipating. It’s not going to be overnight. Drivers are not going to be replaced overnight. On the other hand, there will be disruptions.
Now, in every technological revolution — and what we are experiencing is a technological revolution — the number of jobs has increased, so it’s not that there are fewer jobs. It’s not that people are, all of a sudden, unemployed because we’ve been replaced by robots. What happens is that there are new jobs that were not there before, and they tend to be more jobs, partly because the technological revolution creates an increase in productivity, which makes the whole society wealthier, and we do more — we provide more services and more goods. Now, that doesn’t mean there won’t be disruptions. There are going to be disruptions. That is to say, there will be job categories that end up being replaced, and there will be people in those categories who perhaps can’t make the transition — maybe they’re like me: too old to be retooled, right?
Russ Altman: I’m with you.
John Etchemendy: And that’s something that society has to worry about and think about. I do not, for a minute, believe that we will end up with no jobs and lots of robots doing things for us, although, when you think about that, what is that? That’s infinite productivity, right?
Russ Altman: Thank you very much. That does give a very coherent understanding of how the human part of human-centered AI really does, I would say, focus the agenda — the research and policy agenda — in a sensible way. Building off your last comment, that there will be some people whose jobs are no longer relevant, and that we’re going to have a duty to them for, perhaps, retraining: that raises the issue, can AI itself be of assistance in education? Are educational applications of AI part of the HAI mission?
John Etchemendy: Yeah, so —
Russ Altman: And I know, by the way, that early in your career you were involved in creating software for education. I don’t know if that is related to how things have played out, but —
John Etchemendy: So let me, actually, put that answer on hold. Let me say something about the replacement, because I really believe that what we’re going to see, more and more, is that jobs will become better. Which is not to say that there won’t be displacement, but jobs will become better. Think about the long-haul truck driver. I suspect that, within five or 10 years, there won’t be long-haul truck drivers driving across the country, but —
Russ Altman: At great risk to their health, which I think is documented.
John Etchemendy: Yeah, yeah, but I think we will, for a much longer period of time, need the short haul, the last-mile drivers that will then move the goods from the highway, the restricted highway, and move them into wherever they’re being delivered. Now, think about that. That actually makes the truck driver’s job better. You don’t have to leave home for five days at a time. You can sleep in your own bed at night, and your job is really moving, driving locally, so I think there will be a lot of enhancement of jobs because the tedium, the tedious part will be replaced. That’s the first part that’s going to be replaced.
Now, as far as retraining goes, computer-assisted education has been a disappointment. I’ve been involved in it for I don’t know how many years — 35 years, probably — and it has been, in general, a disappointment. It’s been a disappointment partly because the people who have the most difficulty with education, the ones you would most want to deliver the education to, often require the motivation that they get from an individual person —
Russ Altman: A tutor.
John Etchemendy: A tutor, whatever —
Russ Altman: Somebody who cares.
John Etchemendy: Right, and so you find that the people who are very good at taking a course online tend to be the ones that are, you know, they’ve already got a bachelor’s degree or a master’s degree. They want to pick up a skill. They can sit down, and spend the time, and focus. Now, can we improve on that?
What education really is, the highest-quality education requires high-bandwidth two-way communication. It requires not just somebody presenting information. It requires the ability to listen to where the student is, what the student is understanding, what the student is not understanding, and taking that in, and then modulating what you’re providing back, and that’s the way the best kind of one-on-one instruction works, or small-class instruction. Can we —
Russ Altman: Let me just say that this is “The Future of Everything.” I’m Russ Altman. I’m speaking with John Etchemendy, who is in mid-sentence, describing to me the opportunities for retraining.
John Etchemendy: Can we, Russ, use AI to allow that kind of two-way, high-bandwidth interaction? Now, what’s going to be required is very much human-centered AI, in the sense that you’re going to have to have systems that recognize when a student is not following, that recognize when a student has gotten bored, that recognize when the student fell asleep, right?
Russ Altman: Very human cues, yes.
John Etchemendy: Very human kind of cues, exactly, and so, with the new techniques, we have the possibility of succeeding where we have failed so often before, and of providing computer-delivered education that really works for a larger group of people.
Russ Altman: And it occurs to me that the idea of this tutor who is extremely tuned into the realities of being a human, that technology for a teacher will also be good for another pending crisis in our society, which is care of the elderly, very similar. Are these, in your mind, priority research areas, or are they so hard that they’re probably not the first things for us to approach in a research program for HAI?
John Etchemendy: They’re probably very hard in the sense that if you take, say, elder care, for example: if you want to completely replace the humans providing the care to the elderly — people like us, soon — that’s going to take an awfully long time. That’s a hugely difficult problem. But there are things that can be done to enhance what human caregivers can do without replacing them — things like providing devices that can observe when the elder has fallen out of bed, or has fallen and hurt him- or herself. There are ways to use this technology that are going to allow us to provide things like elder care much more broadly: fewer people providing elder care to larger numbers of people, and guess what? With those of us in the baby boom generation aging, that’s what we’re going to need.
Russ Altman: This is “The Future of Everything.” I’m Russ Altman. More with John Etchemendy about human-centered AI and the prospects for the future next on SiriusXM.
Welcome back to “The Future of Everything.” I’m Russ Altman. I’m speaking with John Etchemendy about human-centered AI. We were just speaking about retraining and the challenges of deep human interaction, both in education and in elder care. I want to turn now to a concern a lot of people have that AI won’t be human-centered: the gap in access, and the justice issues around who gets to use AI to augment their life and who is left out. This is a pressing question because so many technologies benefit a small group initially, and it takes a long time for others to benefit. Do we see this happening with AI, and how do you think about the issues of access to AI technologies, equity, fairness?
John Etchemendy: Yeah, so let me say a couple of— There’s a lot that I could cover under that heading. One is, first of all, with equity and access, to a certain extent we’re seeing AI at every possible level, and do we have broad access to AI? Yes, you know, everybody who has a smartphone has access to certain AI tools, and that’s just given to you, and that’s not —
Russ Altman: And I might add, every Google search that you ever do —
John Etchemendy: Every Google search.
Russ Altman: — is assisted by AI, and every purchase decision, whether you like it or not, is nudged by AI.
John Etchemendy: Now, as to the question of whether there will be AI tools that are incredibly expensive — no doubt there will be. There will be, for example, cars that use AI that are incredibly expensive. But remember, this is computer technology, and one good thing about computer technology is that it is replicable — the incremental cost of a new instance of, say, Siri is very, very small — and that means we can provide an awful lot of this technology broadly to people.
Now, there’s another sense in which I worry about, so there is some worry, but I don’t lose sleep at night about that. There’s another sense in which you could talk about access, and that is access to the technology, to developing the technology, and there, I think, one of the things that Fei-Fei Li, my co-director, has been very active in is the attempt to teach students, particularly students of color, women students, how to use AI, and how to be producers of AI, and I think that is extremely important, and it’s extremely important because, as we’ve seen over and over again, the more diversity you have in a particular area, the more you’re going to see innovative applications that you would not see otherwise.
Russ Altman: So there’s a broadening activity.
John Etchemendy: It’s a broadening activity, and I think that that’s something we need to work on, and AI4ALL — the organization that Fei-Fei was one of the founders of, which now operates at many, many different sites across the country — is devoted to that. It teaches high-school kids who might not otherwise have access to the kind of curriculum they need to learn how to use AI.
Russ Altman: This is “The Future of Everything.” I’m Russ Altman. I’m speaking with John Etchemendy about AI, and John, you have this institute, and I wonder how is the business world responding to these challenges? For example, are they happy that an academic institution like Stanford is mounting this effort? Do they feel like they have it all covered, they know what they’re doing? I guess there are some companies that have AI as a core part of their mission. Others that are flirting with it, maybe they’re AI-curious. What is your sense of industry’s take on AI, and maybe even particularly these HAI approaches that you’re advocating?
John Etchemendy: I think, first of all, there are different industries, and different companies within the different industries, and they all find themselves at different points in the AI spectrum, and so many of the technology companies in the valley are working fast and furious on AI, and they are advancing the technology. They do not have the kind of breadth that a university has, so none of them is capable of bringing together the kinds of disciplines that we bring together in our institute, so —
Russ Altman: That seems like it would be particularly true for that impact assessment part, the broad impacts part. They might not have sociologists, cognitive scientists, neuroscientists.
John Etchemendy: Right. I mean, that’s not just the impact part, but also pushing the technology forward. They don’t have the depth of neuroscience that we have here. They don’t have the depth of cognitive psychology that we have here. They don’t have the medical school, the business school, the law school, the education school — all of these disciplines that work together on AI. Let me say one thing about the way we conceive of AI at HAI: we view the discipline of AI as no longer a narrow computer science discipline. The discipline of AI has now broadened across the entire disciplinary breadth of the university. Every discipline touches upon AI in one way or another.
Russ Altman: So let me go back to my question. I still want to find out, are they excited about this? Do they think that you are solving problems that they have? Or are they like, “Yeah, good luck with that. We know what we’re doing with AI, and we don’t really, you know — we’ll read about what you do in the future”?
John Etchemendy: Some of them probably think that, yeah, and others don’t — so that was on the technology side. What I didn’t answer is this: there are other ways in which the reaction of industry has been extremely positive. They’re concerned about society’s reaction to the development of technology — the technology companies, and AI technology in particular — and they are very concerned that the public, and not just the public but legislators, judges, journalists, and so forth, understand the technology. One of the things we try to do at HAI is provide external educational programs to educate, for example, congresspeople, or people from the EU, or judges, or journalists, about what AI is. What is the current state of AI? What’s the hype? What’s the reality? And that is something that, pretty unanimously, the industry has been very positive about, because, you know, they’re fine if they are criticized on an intelligent, knowledgeable basis; the problem is that so much of the criticism is based on a lack of knowledge of the technology.
Russ Altman: So I just want to highlight that because you just listed a whole bunch of very influential groups. I think you said judges, congressional staff, so that’s very exciting, and that implies that your efforts can start to, if not write policy, certainly influence policy through education. Can you give us a sense: What is on the curriculum for congressional staffers about AI? I’m guessing you’re not teaching them how to build a deep network and do face recognition, so what do they need to know, and how do you figure that out?
John Etchemendy: Well, we actually think that it is helpful to know, at a certain level — not to build a deep network, but at a certain level to understand: What is this technology? How does it work? And what are its limitations? I think that’s important.
Think about the discussion of facial recognition technology that’s out there, right? I mean, there’s been a lot of discussion lately, and a number of people have said, “Well, facial recognition technology ought to be banned, period, because it’s bad. I mean, look, it’s biased — there are these cases where it doesn’t recognize certain subgroups very well — and by the way, look at what’s being done to the Uyghurs in China, and it being used as a surveillance technology. It’s bad in all of these various different ways. It should be banned.”
The problem with that is that it’s assessing the technology at the wrong level. At that level, a technology certainly should not be banned. It has good applications and bad applications, and the assessment has to be done at the application level — you would come to different conclusions depending on what the application is. So, for example — my family is Basque, a very small subgroup in the United States, right? — suppose you had a facial recognition system that was very bad at recognizing Basques. It couldn’t recognize me, couldn’t identify me. Now, that might be horrible if it was a system that, say, Stanford implemented, and the only way to get into your office was for it to recognize you, and it just didn’t work, right? That would be horrible for me.
On the other hand, if the government is using it for surveillance, I’m happy if I’m part of one of these subgroups where it doesn’t work, right? It depends entirely on the application, so what we’re trying to push forward is a knowledgeable understanding of the technology, and an understanding that what we need are criteria for assessing accuracy, and so forth, and perhaps application-specific standards: How well should it work for this application or for that application?
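The application-specific standards described here can be made concrete with a small, entirely hypothetical audit. The subgroup names, error counts, and thresholds below are invented for illustration — real criteria would come from a standards body, not from this sketch:

```python
# Hypothetical audit: compare a face-matcher's per-subgroup
# false-negative rate against application-specific thresholds.

# Per subgroup: (number of true matches seen, number the system missed)
counts = {"group_a": (200, 10), "group_b": (200, 60)}

# False-negative rate = misses / true matches
fnr = {g: missed / total for g, (total, missed) in counts.items()}

# The acceptable miss rate depends on the application, not the
# technology: unlocking an office door might tolerate a 10% miss
# rate, while a security screening context might demand 1%.
thresholds = {"door_unlock": 0.10, "screening": 0.01}

for app, limit in thresholds.items():
    for g, rate in sorted(fnr.items()):
        verdict = "acceptable" if rate <= limit else "unacceptable"
        print(f"{app}: {g} FNR={rate:.2f} -> {verdict}")
```

With these made-up numbers, group_a’s 5% miss rate passes the door-unlock threshold but fails the screening one — the same measured accuracy yields different verdicts depending on the application, which is exactly the point being made above.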
Russ Altman: And that’s the direction that I’m sure we’ll be going. I wanna thank you for listening to “The Future of Everything.” I’m Russ Altman. If you missed any of this episode, listen anytime on demand with the SiriusXM app.