
The future of the innovation economy

Three experts in artificial intelligence and economics explore how AI could transform creativity, jobs, education, and public policy in the rapidly evolving “innovation economy.”
From left to right, Susan Athey, Neale Mahoney, Fei-Fei Li, and Russ Altman sitting together on stage for a live recording of The Future of Everything podcast.
AI is enabling tremendous innovation, but we need to focus on its impact on humans to have the best outcomes. | Herschell Taghap

In a special Future of Everything podcast episode recorded live before a studio audience in New York, host Russ Altman talks to three authorities on the innovation economy. His guests – Fei-Fei Li, professor of computer science and co-director of the Stanford Institute for Human-Centered AI (HAI); Susan Athey, professor and authority on the economics of technology; and Neale Mahoney, Trione Director of the Stanford Institute for Economic Policy Research – bring their distinct-but-complementary perspectives to a discussion on how artificial intelligence is reshaping our economy.

Athey emphasizes that both AI broadly and AI-based coding tools specifically are general-purpose technologies, like electricity or the personal computer, whose impact may be felt quickly in certain sectors but much more slowly in aggregate. She describes how solving one bottleneck to implementation often reveals others – whether in digitization, adoption costs, or the need to restructure work and organizations. Mahoney draws on economic history to argue that we are in a “veil of ignorance” moment with regard to societal impacts. We cannot know whose jobs will be disrupted, he says, but we can invest in safety nets now to ease the transition. Li cautions against assuming AI will replace people. Instead, she speaks of AI as a “horizontal technology” that could supercharge human creativity – but only if it is properly rooted in science, not science fiction.

Collectively, the panel calls on policymakers, educators, researchers, and entrepreneurs to steer AI toward what they call “human-centered goals” – protecting workers, growing opportunities, and supercharging education and medicine – to deliver broad and shared prosperity. It’s the future of the innovation economy on this episode of Stanford Engineering’s The Future of Everything podcast.


Transcript

[00:00:00] Russ Altman: This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. Today we're taping the show live at the Glass House in New York City in front of a live audience. If you enjoy The Future of Everything, please hit follow in whatever app you're listening to; this will guarantee that you never miss an episode. Today, Fei-Fei Li, Neale Mahoney, and Susan Athey will tell us that AI is enabling tremendous creativity in new products and services, but we need to focus on its impact on humans, through appropriate policies and safety nets, in order to have the best outcomes. It's the future of the innovation economy. Also, we are continuing, uh, a new segment in the show called the Future in a Minute: I will ask my guests some rapid-fire questions and they will provide rapid-fire answers. Uh, and we'll do this at the end, uh, after the end of our conversation. And before we get started, please remember to follow the show on whatever app you're listening to, because you never want to miss the future of anything.

[00:01:09] Okay. We're getting started about innovation economy, uh, and it's useful to define what it means. Of course, in preparation for the show, I did what we would all do, I went to a large language model. I went to three, ChatGPT, Claude, and Perplexity, just 'cause I have those URLs. Perplexity was my favorite. It said, the innovation economy refers to an economic system where growth is primarily driven by the generation, application, and commercialization of new ideas, products, and technologies, rather than the more traditional heavy reliance on physical assets and manual labor. And then they go on to say that applications, examples might include ride sharing, house sharing, social media, and many others.

[00:01:51] Now, although the innovation economy is not entirely about artificial intelligence, or AI, AI is a very big part of it, and a part where our panel is expert. Questions that arise for AI in the innovation economy are: what will be the impact on jobs, on the quality of life, on the overall economy, and how do we navigate a future in a way that protects all humans but enables unbelievably beneficial innovations? Today, my three guests will help us understand the opportunities and challenges. They are all professors at Stanford University. Fei-Fei Li is a professor of computer science, Neale Mahoney is a professor of economics, and Susan Athey is a professor of business. This is the innovation economy dream team.

[00:02:37] Susan, let's get started. You have written that AI is a general-purpose technology, and so its impacts are likely to be broad and very difficult to predict. But you have looked in your work at other general-purpose technologies that emerged in the past. What did we learn when those things emerged?

[00:02:56] Susan Athey: So, looking back at everything from electricity and industrialization to the personal computer, which a lot of us have lived through, um, they can have really profound impacts locally and yet not show up in big changes in GDP growth for quite some time. And there are many reasons for this, but a big lesson from the past is that when you unlock one bottleneck, you find others. And to fully make use of a technology, you may need to restructure industries, but certainly restructure production, in order to fully optimize for it.

[00:03:39] Russ Altman: I take it that these bottlenecks can come up in all different places and in all different aspects of the economy and the business?

[00:03:47] Susan Athey: That's right. So, of course, many economists talk about AI in general as a general-purpose technology. One that is less discussed, but I think profound, is that software development is a general-purpose technology. In general, digitization is something that restructures industries and helps small enterprises scale. It increases the span of control. It allows firms to grow without having to directly supervise people. That kind of general-purpose technology, though, has not been fully adopted. Across the world, many small businesses are less digitized, and it's generally been something with a lot of scale economies, especially with data and machine learning.

[00:04:33] Um, and so the challenges have been that there's a lot of fixed costs to get going. If you're running a restaurant off of paper, or maybe WhatsApp or WeChat, um, there's another leap. You have to be big enough to justify those investments. Um, and so one of the things you might ask today is, alright, what would happen if we removed the bottleneck that software engineering was? If you talk to a bank in Wisconsin, or even, you know, in Brazil or Argentina, and they ask, well, are we doing enough with machine learning? You would find out that, well, they have a lot of trouble hiring people. They have a lot of trouble being able to adopt.

[00:05:14] And five, ten years ago I might have recommended that, really, you may not get that much value quickly from making big investments, because it takes so long to get going. Um, so there were many bottlenecks around just getting data organized in the past. One of the things, um, when you look out at companies, they're using clunky software, if they're even using it at all, because it's so hard to change that you change rarely and you put a lot of effort into it when you do. So, the bottlenecks have been around adoption, figuring out what you want, given that you're gonna be stuck with it, and just the big cost. So now we have to ask: if software development is actually very cheap now, what stops us?

[00:05:57] If we could already build today, say, a nursing assistant that could be very effective, especially in a country where nurses aren't well trained and there's not a lot of education, why do we think that it's not actually gonna be fully adopted tomorrow? And you can think of many reasons, actually, that it won't be fully adopted tomorrow. You have to decide what to do. You have to get the nurses on board. You would have to train them. It is hard to adopt something new, and so you wouldn't just pick the first thing up. And so I don't predict that next year there will be nursing assistants around the world.

[00:06:29] Russ Altman: Great, thank you. So, on this issue of, uh, bottlenecks and jobs, Neale, you have taken a long view in some of your work on the impact of AI on jobs. Um, how do you think about either predicting or measuring the impact of AI on the labor market in some of the examples that Susan just said?

[00:06:46] Neale Mahoney: Yeah, so it's a great question. Um, you know, I sort of joke that the two questions I get the most as an economist in Silicon Valley are: what jobs is AI gonna disrupt, and what do we do about it? I tell 'em, I have no clue what jobs are gonna be disrupted, but I know exactly what to do about it.

[00:07:04] Russ Altman: Boom.

[00:07:04] Neale Mahoney: Um, so boom. What should we do about it? Uh, I think we're facing sort of a veil of ignorance moment. So, you know, connecting to, uh, the humanities, I think many of you remember Rawls' thought experiment of the veil of ignorance. If we can step back from our current lives and think about, before we have our endowments or skills, uh, what do we want in society? What protections and safeguards do we want in society? And I think that's where we are with AI. We know that some of us, uh, may lose a job or may have human capital which is less valuable on the labor market. We don't know who it's gonna be. And I think what Rawls tells us is now is the right time to invest in a social safety net. I'll give you one example and then I'll shut up. Um, right, we live in a country where if you lose your job, you likely lose your health insurance. Uh, you know, maybe I'm wearing my political hat. I think that's crazy now. Uh, it is going to be hugely crazy when five, ten percent of us lose our primary occupation, maybe because of AI. So, thinking about how we're in a veil of ignorance moment, and how we can invest in a social safety net to protect us against the disruption I think we will face, is a hugely important endeavor.

[00:08:29] Russ Altman: Thank you. And I want to come back to safety nets a little bit later. Fei-Fei, you're a technologist who helped shepherd in the AI revolution. You're working currently in the area of visual intelligence and spatial intelligence, far beyond what current things like ChatGPT, uh, and all the others do. But these are also areas where living organisms on Earth, like humans, have been excellent. We've evolved to be excellent at space. We've evolved to be excellent at seeing things. How do you think the kinds of systems you and others are building will interact with the humans who are using them? Or how do you hope that will happen?

[00:09:04] Fei-Fei Li: Yeah, thanks for the question. So, uh, first of all, I do agree right now we're at the, uh, dawn of an AI economy, and I think whether it's the large language models or the next chapters, which are the spatial intelligence models, the, the embodied AI models, we're gonna see more and more innovation in, uh, in this technology, going beyond what we're seeing now. One thing that puzzles me, um, also living in Silicon Valley, also getting the same questions that Neale does, is that when people talk about AI, they go straight to the word replacement, replacing humans. And we gotta be a little careful, because AI will change jobs. AI will change different tasks within a complex job.

[00:09:57] For example, being a nurse in a nursing home. There are hundreds of daily tasks a nurse, uh, does. AI will help some, but will not, uh, replace the job wholesale. What is really important is to recognize that instead of replacing, AI really augments. And that's what I really believe, is that this is a horizontal technology that can superpower humans, superpower our workflow, augment so many capabilities that humans are good at. We might be short of labor, or we might not be so good at a particular part of that job, uh, capability, but with AI, uh, it can help us. I wanna just take one example, 'cause I'm in the area of, uh, visual spatial intelligence.

[00:10:50] We work with creators, you know, visual creators, storytellers, movie, uh, makers, filmmakers, game developers. All of these creators are facing this, uh, new era of AI tools. And it's so incredible for me to meet and work with them because they see AI as a tool to augment them. They see AI as a way to supercharge their creativity and their productivity. So, I'm not trying to say that there's not a double-edged sword. We should talk about that. But I do think it's so important. We recognize that this technology can do a lot more to supercharge, superpower people, putting people in the center instead of go straight to the word replacement. 

[00:11:41] Russ Altman: Thank you.

[00:11:41] Neale Mahoney: Can I, can I jump in on this?

[00:11:42] Russ Altman: Yes.

[00:11:46] Neale Mahoney: One, I agree. And two, I think a useful framework is, you know, if you think about the questions in innovation policy over the 20th century or early part of the 21st century, they're about how do we maximize innovation given budgetary constraints, human, um, capital constraints. Uh, but moving forward, I think the most interesting, richest questions will be less about just sort of how do we, uh, put our foot on the gas, but how do we shape innovation to be complementary to our skills? And that's, that's a STEM question, but it's also a humanities question. Because it requires tapping into what are the activities that give us meaning and purpose. And so, you know, the Stanford campus, where we have that intersection of the humanities and STEM, I think is a great place to be working on that hugely important question.

[00:12:41] Fei-Fei Li: It's the human-centered AI question.

[00:12:43] Neale Mahoney: Yeah. Somebody named their institute very well. 

[00:12:49] Susan Athey: And yeah, and I think, you know, Fei-Fei and I worked together, um, and Russ, with, uh, the founding of the Stanford Institute for Human-Centered AI, and Jon Levin as well. Um, and one of our theses was exactly, you know, developing this idea of what the role of a university is in all of this. And of course, commercial interests are often purely about profit, which means lowering cost. But a lot of these innovations are fixed-cost investments. So, if you invent a nursing assistant, you know it can be applied and adapted in many places. And so, if we do some of the innovation in the university for human-augmenting technology, then that takes care of some of the fixed costs, and then entrepreneurs can take it across the finish line and solve the last-mile adoption problem.

[00:13:42] And that can happen not just by us building something and throwing it over the transom; one of the great things about language-based AI is that many people can actually participate in it, even if they don't have a big computer science background. Um, and so we can see the participation and the value add coming, you know, from across the world, and figure out how to help it augment humans in those settings. And I just wanted to connect something that came up across all of us, which is that, um, government policy also can play a role in this. And so, when people ask me, like, what are all the people gonna do? Are they just gonna sit on the beach and have drones drop them daiquiris?

[00:14:24] Neale Mahoney: I hope so. 

[00:14:26] Susan Athey: I mean, I have many questions about how we get to the beach and why the drones are bringing us daiquiris, but if we take a slightly nearer-term view, um, there are many activities that scale with the size of populations and where we are sort of under-investing today. More childcare, more nursing, better doctors, better elder care. You know, all of those are things that humans could be productively employed at, at large scale, and AI can help humans transition into those jobs. And governments can procure those things. Governments have a big role in investing in all of those sectors. So, rather than imagining, like, just a bunch of people not doing anything, if we get ourselves organized, we create the products, we create the government policy, then in principle, we can help people through the transition while making all of us better off.

[00:15:18] Russ Altman: So, Susan, you spent two years in government, I happen to know, and I know that you are an advisor and a research associate, uh, Neale. Tell me, with your real-world understanding of the government today, how's that gonna go?

[00:15:33] Susan Athey: You know, it's, it's really hard, because at this moment, right, the last six years or so have thrown some curveballs at us. And it's not easy to figure out the very best thing to do in the face of big changes and big disruptions. Um, it's really hard. And so, you know, we need to work together on solving those problems. And so, you know, it is scary to think about our government getting less functional at exactly the moment when government leadership, for research, for, for universities, could be more essential, um, than ever.

[00:16:15] Russ Altman: Thanks.

[00:16:15] Fei-Fei Li: I, I just wanna add something. You know, um, I firmly believe that a hundred years from now, when historians write the chapters of the 21st century, especially the dawn of AI, a collective success of humanity, or of this country, would be that this era of AI, uh, launched a revolution in education. Now that AI, even language models, has proven it can pass the standardized tests, by and large, to passing or even excellent grades, we should rethink spending more than twelve years of human capital educating young humans just to evaluate them at the level of what AI can do today. Human education should be completely rethought, because AI showed us that it's not about, you know, memorization of, uh, knowledge and evaluation of that memorization. So, if there's anything government can do, to me, it is investment in K-12 education as well as higher education, 'cause this is the moment that we can really revolutionize the most important thing on earth, which is human capital.

[00:17:42] Russ Altman: Neale.

[00:17:43] Neale Mahoney: Uh, I wanna connect these threads and run with it. Uh, there's a great fact from the economist David Autor, uh, who's documented that, you know, over seventy years, something like seventy percent of the occupations we have in the economy sort of emerged. That is, if you look back seventy years ago, uh, seventy percent of jobs didn't exist. So, you know, if AI diffuses in a way which I think is geographically spread out, and not too fast, then we will adapt. We hopefully will come up with new ways to educate people to be complementary to that AI, and AI will adapt in ways which are complementary to humans.

[00:18:23] Uh, but history also teaches us that when things are concentrated and rapid, so if you think about the hollowing out of factory towns, uh, due to China's accession to the WTO and automation, those impacts can be, uh, devastating, and we need policy to come in and provide a safety net. So, I don't know which timeline we're on, we're probably some combination of the two, but I think thinking about those extremes, and sort of shaping the technology and education, is useful for triangulating in an uncertain world.

[00:18:58] Russ Altman: Thanks. And, and now, Fei-Fei, I want you to put your hat on as somebody who has a startup and is trying to make a go of it. We were talking about safety nets, we're talking about governmental policies. I'm sure part of you and your, uh, cohort gets worried that there will be premature regulation, premature policies, that actually take away your ability to do the things you wanna do. How does that conversation go, and how do you think about it?

[00:19:24] Fei-Fei Li: Uh, great question. First of all, I'm still partially, uh, involved in Stanford, so I'm not a hundred percent on leave, just to make it very clear. And, uh, it's also, um, one of the most fascinating conversations that has been happening in the state of California as well as, uh, uh, federally about the, the tension between AI regulation and uh, and AI, um, innovation. And, uh, you know, I wanna start with, I'm a parent and when your child is about the age of, I don't know, six or seven, one of the most important lessons you need to teach them is to turn on the stove and cook an egg. And that's to, you know, use fire. And, uh, it's a pretty dangerous thing, right?

[00:20:12] But you all have to bite the bullet and teach your kid to, um, use fire. And there are many other things we have to teach our kids. Uh, the reason I'm using this example is that technology is always a double-edged sword. Since the dawn of human civilization, it's in our DNA: we're compelled to innovate so we can live and work better, but we also sometimes use that to inadvertently hurt ourselves, or sometimes intentionally hurt each other. No matter what, that tension between the drive for innovation and the need for establishing norms and, uh, guardrails is always gonna be there. So as an entrepreneur, as an innovator, I think it's very important that we arrive at a healthy balance.

[00:21:06] And, uh, with Stanford HAI, we have been actually advocating a policy framework for AI which is very simple. First, science, not science fiction. You know, to do good regulatory or government policy, we should use data, use measurement, like what Susan has been doing, uh, in government, um, to guide our regulatory framework, instead of those far-fetched science-fiction extinction doomsaying. Second, be pragmatic, not ideological, right? For example, um, in AI we have so many, uh, regulatory frameworks. You're involved with healthcare, the FDA, and, and we should just maximize the, the interaction and partnership with these pragmatic frameworks instead of going ideological. Last but not least, um, in this audience, I always believed: invest in our public sector, invest in our innovation engine. Like Condi said, there's no plan B. Uh, government, uh, policy in AI should include investment in our country's, uh, innovation engines, including the, uh, universities and, uh, public sector.

[00:22:22] Russ Altman: Thank you. And, and I, I actually want to go to a slightly different topic, but Susan, uh, you, you were in the Justice Department for a couple years, and in all the examples of innovation economy successes that I gave in my intro, social media, ride sharing, home sharing, the winners have tended to be either monopolies or near monopolies. Um, is this a feature of the innovation economy? Uh, how should we think about it? Are we okay with it, or is it not a necessary feature?

[00:22:49] Susan Athey: Well, I think that we often have concentration because of the scale economies, but I do think we have examples where, um, even a little bit of competition is a lot better than none. And we have seen a lot of concerns when one firm can put a tax on the whole economy. I mean, there's probably some people here in the credit card industry, but that's sort of an easy example, because, you know, there's a few basis points coming off of, you know, every transaction, right?

[00:23:21] Russ Altman: I've heard it referred to as a toll booth.

[00:23:24] Susan Athey: Yes. Um, and so in the end, you know, when those tolls get too high, that is also bad for innovation. So, I think you can think about it this way: we need, um, firms to be able to get returns from their innovation in order to want to do the innovation, but if one firm slows down all the innovation around it, that can be problematic as well. So, I have a few comments about competition in AI. One is that there was a bit of a push to hold back open models, um, open-source models, but although people were scared of them, those things can pull down the prices for everybody, and that means that every single business in the country that is using large language models can get them at a lower price if there's a free alternative that's pretty good. And so that can be very impactful. That conversation changed once DeepSeek came out, but still, we also need to worry about, um, other kinds of market power.

[00:24:22] There's a lot of potential for bottlenecks. When I think especially about smaller countries, the countries that aren't going to themselves be generating the profit from the AI stack, there's a huge risk: if those countries are buying AI at sort of a high price and then automating a lot of their labor, they might have real wages fall, because if they're paying a lot for the AI services, their goods prices don't fall. So, stuff people buy stays expensive while wages fall. So that's gonna be a really bad situation for countries and economies. And so, if this is something that, like, every small and big business is going to buy, the price of that thing is very important. The quality's important, the price is important too. So, at the moment it seems like we're doing reasonably well at that, but keeping our eye on the ball, to make sure that this technology is actually gonna allow everyone to innovate on top of it, will be crucial.

[00:25:22] Russ Altman: Uh, thanks. And, and Neale, on that note, you've been looking at the innovation economy, uh, and the economy more generally, and you've expressed wonder about whether America and Americans are in something of an economic funk. And I can't help but think of that after Susan's comments. What do you mean by this economic funk? And do you see a way out?

[00:25:42] Neale Mahoney: Um, so if you look at data, uh, there was a piece in the Wall Street Journal, now nine days ago, that documented belief in the American dream had gone from seventy percent of the population to twenty-five percent of the population over a generation. Uh, optimism about our economy cratered after COVID, and it hasn't recovered. Uh, what's going on, I think we're still figuring it out. Uh, probably some combination of three things. Uh, one is there are, I think, real risks in the economy, uh, tariffs, people concerned about, uh, an uptick in unemployment. Uh, two, it has to be true that social media is skewing our vision of, uh, what is a good and meaningful life. Uh, I quip that, like, on Instagram you see more selfies from the Ritz than the Motel 6, and that gives people, uh, an unbalanced view of what is normal and what is successful. Uh, but I think there's a lot we don't know. Uh, but look, these, uh, these conversations, political leaders, innovators, uh, I think are all really important in helping this country get its mojo back.

[00:27:05] Russ Altman: Thank you. Well, I think we're gonna have to leave it there. I know we want more, but we only have twenty-six minutes. Uh, this discussion has been fantastic. But before we finish up, as promised, I want to move to our new segment called the Future in a Minute. I will ask you some quick questions, and I'm praying that you will give me some quick answers. So, we're gonna, we're gonna start with Fei-Fei, and I'm gonna do both questions for each of you. Uh, what is one thing that gives you the most hope about the future?

[00:27:33] Fei-Fei Li: Well, it's unequivocally humanity. There's nothing artificial about artificial intelligence. And I wanna paraphrase Dr. King: the arc of history is long, but it bends towards benevolence. And I believe the hope of AI is in humanity.

[00:27:55] Russ Altman: If you were starting over again and you needed to get your degree or training in some other discipline, what would it be? 

[00:28:01] Fei-Fei Li: Okay, I gotta use thirty seconds. Six weeks ago, a Stanford sophomore interviewed me and as usual as a professor, I'm like, what is your major? And the student said, no, none of these majors are good at Stanford. I'm gonna have my own major. So, I'll create my own major, which you are allowed to do. And I asked him, what are you gonna create? He said, I'm just gonna use AI to maximize money making for me. I was like, damn, why didn't I have that idea? So, okay. My real answer would be if I were to start again, I would do a combination of physics, computer science, and art.

[00:28:46] Russ Altman: Thank you. Neale? 

[00:28:48] Fei-Fei Li: My own major though. 

[00:28:49] Russ Altman: Neale, what is one thing that gives you the most hope about the future? 

[00:28:52] Neale Mahoney: Uh, I was gonna say students, incredible students. Uh, but this morning I was chasing around my kids, uh, an eleven- and a seven-year-old, at the American Museum of Natural History, and seeing dozens, hundreds of little kids excited about STEM, about history, about dinosaurs. Uh, we face headwinds as innovators, but if we reflect on the fact that inside we're seven-year-old boys and girls that like dinosaurs, we're gonna be all right, so.

[00:29:23] Russ Altman: If you were starting over again and you needed to get your degree or training in a different discipline, what would it be?

[00:29:28] Neale Mahoney: Uh, robotics. I care about the, uh, overlap between STEM and science and the real world. That's why I study economics, but I think robotics is another great field for that intersection. 

[00:29:41] Fei-Fei Li: Neale, you can come to my lab. 

[00:29:42] Neale Mahoney: I would love to. 

[00:29:46] Russ Altman: Susan, what is one thing that gives you the most hope about the future? 

[00:29:49] Susan Athey: So, I, I do think that AI is more accessible and has more potential to be helpful across countries and across the income distribution than the previous rounds of technology, like machine learning. I think that it can allow small businesses, people who don't code, who don't build Excel spreadsheets, and don't buy enterprise software to run a business through a chat application and natural language, and to grow and scale and get more efficient, and that that can help people rise up, um, out of poverty and, and it can help poor countries grow.

[00:30:23] Russ Altman: If you were starting over again and you needed to get your degree or training in a different discipline, what would it be? 

[00:30:28] Susan Athey: So, I've had to do this twice already. Um, I trained myself in machine learning and AI technical work, and then I also had to learn law. But, um, if I wanted to add one more thing to the mix: I think going forward, anybody's gonna be able to build a great product. If you have a good idea, you're gonna be able to make it a reality in the digital space. So, product management.

[00:30:49] Russ Altman: Thanks to Susan Athey, Neale Mahoney, and Fei-Fei Li. That was the future of the innovation economy. Thank you for tuning into this episode, and thanks to our live audience for supporting us and the show today. With nearly 300 episodes in our back catalog, you have instant access to hours, if not days, of interesting discussions on The Future of Everything. If you're enjoying the show, or if it's helped you in any way, not the highest bar, please consider rating and reviewing it. We love to get a 5.0, but only if we deserve it. You can connect with me on many social media, uh, apps, including LinkedIn, Threads, Bluesky, and Mastodon @RBAltman or @RussBAltman, where I share about every episode. You can also follow Stanford Engineering on social media @StanfordSchoolOfEngineering, or my favorite, @StanfordENG. Cut.
