
The future of AI and democracy

A legal and policy expert looks at the future of artificial intelligence and contemplates its long-term implications for democracy.
How does AI erode trust in the truth? | iStock/PashaIgnatov

Two-time guest Nate Persily is a professor of law and policy who studies the intersection of artificial intelligence and democracy. 

AI is creeping into democracy, he says, and 2024 saw its share of deepfakes and synthetic media, but with surprisingly little impact. His bigger concern is the opposite – politicians claiming the truth to be fake. It breeds distrust and, for democracy, that’s more pernicious, Persily tells host Russ Altman on this episode of Stanford Engineering’s The Future of Everything podcast.


Transcript

[00:00:00] Nate Persily: AI accentuates the abilities of all good and bad actors in the system to achieve all the same goals they've always had. And that's true for democracy and elections, just as it is for the economy or other aspects of social interaction. And so, in sort of every little nook and cranny of our democracy, AI is going to peek its head.

[00:00:17] Russ Altman: This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you're enjoying the show, or if it's helped you in any way, please consider sharing it with friends, colleagues, and family. Personal recommendations are the best way to spread word about the podcast. 

[00:00:37] Today, Nate Persily from Stanford University will tell us that AI has implications for democracy itself because of its effect on communications, deepfakes in elections and trust in the truth. It's the future of AI and democracy. 

[00:00:55] Before we get started, a reminder to please tell your friends, family, and colleagues about the show. It's one of the best ways to help us grow the podcast.

[00:01:09] When you think about AI, you might not be thinking about its impact on democracy. But you know, AI is starting to infiltrate itself, good or bad, into all aspects of life. In healthcare, in finance, in government, in communications, in journalism, and in the law. And so it shouldn't be a surprise that the uses of AI, and even the very existence of AI, have implications for our democratic processes. For example, deepfakes. People are worried that we're going to see in future elections lots of fake, synthesized media with people saying things that they didn't really say or believe, or people doing things that they really didn't do or contemplate doing, and that this will affect the electorate. 

[00:01:52] Indeed, in the global elections across many countries in the last couple of years, there have been many, many examples of deepfakes. The good news, we're going to learn, is that they don't seem to have radically affected most elections. People can figure out what they're looking at for the most part. But there's something more insidious, which is that this may make it harder to know when you're dealing with the truth and when you're dealing with falsehoods. And more importantly, it allows people with bad motivations to say, no, no, no, no, I didn't say that, that's a deepfake. That's AI. I didn't do that. That's AI. This is a way for liars to reap benefits from the uncertainty posed by the existence of AI itself. 

[00:02:37] Well, Nate Persily is a professor of law at Stanford University and an expert on elections, democracy, and the interactions of those with technologies like AI and social media.

[00:02:49] Nate, recently you've been focusing on AI and democracy. So I think my first question is, what does AI have to do with democracy? 

[00:02:56] Nate Persily: Well, AI is relevant to all social phenomena, and so democracy can't escape its impact. And so, uh, you know, AI accentuates the abilities of all good and bad actors in the system to achieve all the same goals they've always had. And that's true for democracy and elections, just as it is for the economy or other aspects of social interaction. And so in sort of every little nook and cranny of our democracy, AI is going to peek its head. 

[00:03:22] Russ Altman: Gotcha. Okay, so, one of the things that everybody was worried about, um, we just completed some elections, was deepfakes. And that deepfakes would lead people to believe that certain people said something or didn't say something. Uh, what actually happened at the last election? Were deepfakes a big factor? 

[00:03:38] Nate Persily: So there were millions of examples of synthetic media or deepfakes. That's true in the US and around the world. Let's remember that this was not just the US election year, but it was kind of the world cup of elections, where we had over four billion people who were voting. And so we saw lots and lots of deepfakes and synthetic media, but we didn't see many that had any impact. And there's sort of a lesson here, which is that, you know, if a deepfake occurs in the forest and there's no one there to view it, there isn't really much of an impact.

[00:04:05] And so, uh, you know, it wasn't that people were deceived by, uh, say synthetic imagery, uh, in order to, you know, believe something that would shift their vote. The phenomenon we saw that was more ubiquitous is politicians disclaiming true stuff as being fake. And that is in some ways, I think the deeper problem with synthetic media. It's not that people end up being persuaded by false stuff. It's that they, it ends up infecting sort of their belief in true stuff, and that from a democracy standpoint is more pernicious. 

[00:04:37] Russ Altman: Yeah. So I've read you, you've written about this recently and it's fascinating stuff and one of my first impulses, and you know, I'm an optimist and I have to sometimes keep this under control, was, well, is there a positive side here that people are now naturally skeptical about everything and skepticism is not always a bad thing.

[00:04:56] Again, however, when you're skeptical, you need to be able to act to kind of clarify your skepticism. So what, is there a good dynamic underneath here about, um, nobody trusts anything, and therefore people are taking a pause and perhaps thinking more deeply about what they really should and shouldn't believe based on what they can and can't see? Or is that just Russ, way off the edge of optimism?

[00:05:18] Nate Persily: Well, I'm worried about deep-seated skepticism. I think that most information that we confront in our daily lives is not false. And so the more that we end up being skeptical of that information, the harder it is to make decisions, political and otherwise. And so, um, it's not just that we become skeptical about sort of politically relevant stuff or stuff related to campaigns. It means that, you know, it facilitates lying, if people are naturally skeptical of everything that's happening out there. And more importantly, it means that we retreat into our own separate echo chambers of trusted sources. And so we don't have a shared sense of authority among Democrats or Republicans or among different groups. And I think that is a challenge for democracy. 

[00:06:06] Russ Altman: Gotcha. Okay, good. So, and I buy that, the skepticism can be pernicious and actually it starts to eat away at things that were much more fundamentally kind of, uh, evaluatable as true and false.

[00:06:20] So I'm amazed we've been in this conversation for almost five minutes and I haven't said social media, but I've now said it. This, your thinking is intricately related to social media. And you think about both parallels and intersections between AI and social media. Uh, and then their relate, their mutual relationship to democracy. So help me unpack the social media aspect of this. 

[00:06:43] Nate Persily: So AI sort of turbocharges social media and all the benefits and pathologies of it. So, um, you know, AI is a tool and is a tool that can be used, as I said before, for good or ill. And so whether it's, uh, making more sophisticated bots or trying to propagate fake imagery or trying to, or trying to combat that.

[00:07:04] So the AI tools that the platforms have are absolutely important in trying to take down bad actors. So AI is infused in all of our social media experiences. One thing I will add from a kind of democracy perspective is that one of the challenges that the Internet in general and social media in particular, I think, pose is that because we're interacting with computers, um, it facilitates anonymous communication, so that, you know, anonymous communicators, anonymous speakers now have a megaphone that they never had in the pre-Internet world. Now, if you take my First Amendment class here at Stanford, you learn that anonymous speech is actually constitutionally protected. The Federalist Papers were written by, uh, Publius, after all. 

[00:07:47] Russ Altman: Yes, yes. 

[00:07:47] Nate Persily: You've seen the play. Uh, you know, and so, uh, anonymous speech is protected. But one of the things the Internet does is make it more and more difficult for us to figure out whether we're talking with a person or we're talking with a machine. And the rise of AI facilitates that kind of disjuncture, which, you know, might be critical for deliberative democracy, for us to essentially weigh and evaluate, uh, truth and people's content based on who they are, what they say, and what they believe. Uh, whereas now you could have bots and, uh, other sorts of AI agents that are propagating a lot of the information in the social media universe. In fact, this is something that is now on offer by Facebook. Facebook has been saying that what they're going to do is, essentially, put AI content into your feeds, uh, to see if you like it. 

[00:08:40] Russ Altman: Okay. So, um, yeah, so now that you've established very clearly that, uh, social media is the perfect, uh, platform to deliver AI, uh, information, and I put that in quotes. That leads to the next issue that I know you've been thinking about, which is the concentration, or the potential concentration, of power, uh, and AI capabilities in a few big tech companies. And what this means for the, uh, the world and for the United States in particular, when it comes to, uh, the power that might be concentrated in a few platforms. So can you elaborate, what are the issues there and what should we do about this? 

[00:09:22] Nate Persily: So, there are a lot of similarities between the social media universe and the AI universe and some of the same players like Facebook and Google are in both, or Meta and Google. Um, but I think there are some critical differences as well and so I want to highlight those. One of the challenges with, say, social media monopolies, is that these are speech marketplaces. And so the power that a lot of these companies may have, uh, is over the, you know, speech and interaction that's politically relevant or otherwise. And so, um, we would worry if, you know, one company monopolized the social media universe, because then their speech rules could govern so much of what people see and hear.

[00:09:59] Now, what's interesting over the last five plus years is that the social media universe has actually become more fractured, lots of new platforms. And so there's reason to think that, you know, some of the concerns that might have been there earlier are not as pronounced now. Um, so then the question is, well, what about the AI companies? Is sort of AI similar to search, where we're going to have one Google effectively that is, um, going to be dominating the marketplace? They certainly are behaving like it is, because they're building more and more powerful models on the thought that whoever achieves, like, AGI or some powerful model will be the one that wins.

[00:10:34] Russ Altman: And if I may just interrupt you, many of the search engines are now offering you at the top of the page when you do a search, yes, they'll give you the websites that were hit. But they'll also say, would you like me to give you an AI summary of what I just found? And so you can tell that this is becoming part of their business model.

[00:10:50] Nate Persily: That's right. And so, yeah, so the whole idea of searches is changing. Uh, and, but will ChatGPT beat Claude and beat the others, you know, and will there be one chat bot? Is that the future of, um, of AI? And there's reason to think that the kind of ecosystem with AI is a little bit different than social media, principally because of the rise of open models. So, um, Facebook's Meta's Llama model, right, is in some ways the most sort of advanced, but there are many others around the world. And in fact, there are variants on, there's over sixty thousand open model variations that are now on Hugging Face that people can have access to. 

[00:11:31] And that's important because an open model is one, and it's not, some people call it open source, I don't want to get into the nitty gritty on that. But the point is these are models with open weights that you can have access to, and that you as a user can fine-tune to your own purposes, and that allows for greater competition. Uh, and in some ways addresses digital divide problems in places around the world where they don't have the billions of dollars that you need in order to build these giant models, like OpenAI's model or Anthropic's model or Google's. 
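[Editor's note: To make "open weights you can fine-tune to your own purposes" concrete, here is a minimal sketch, assuming the Hugging Face transformers, peft, and datasets libraries. The checkpoint name and the local corpus file are hypothetical placeholders, not a recommendation of any particular model or recipe.]

```python
# Minimal sketch: adapting an open-weight model to a local text corpus with LoRA.
# The checkpoint name and "local_corpus.txt" are hypothetical placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"  # any open-weight causal LM with a suitable license
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token  # some models ship no pad token

# Attach small low-rank adapters so only a fraction of the weights are trained.
model = get_peft_model(AutoModelForCausalLM.from_pretrained(base),
                       LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Tokenize a plain-text corpus, e.g. documents in an under-served language.
data = load_dataset("text", data_files={"train": "local_corpus.txt"})
train = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

[The point of the low-rank adapter in this sketch is that the full base weights stay frozen, which is part of what makes this kind of local adaptation feasible outside the largest labs.]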

[00:12:03] Russ Altman: Yeah. So how does the rise, and I am very aware of the open models, we use them in our research. And as a side note, when I speak to my colleagues who come from non-English speaking countries, they say that the quality of the large language model output is precipitously, uh, lower than what we're getting in English, just in terms of the fluidity of the answers and even the sensibility of the answers. So this idea of the digital divide is very real. And I know that those, uh, researchers and others, uh, in other, like, less well served countries are very excited about the idea of taking these models and tuning them for their own purposes. There's also cultural things embedded in these models that they don't like, like, that's not part of our culture.

[00:12:45] Uh, and so that gives a lot of hope for, um, for modifying and having appropriate local models. But then there's this downside risk, and people are especially worried that open models in the hands of bad actors could create a lot of problems. And so what's your take on that? 

[00:13:02] Nate Persily: That is true and a genuine fear. So in the first year or so of the ChatGPT revolution, perhaps the most identifiable social harm from AI was the explosion in child pornography that came as a result of open image models. And we will never live in a world without an infinite amount of virtual child pornography. And that is a direct result of the fact that these models were widely available. Now, you know, legitimate arguments can be had about whether, you know, that cost is exceeded by other kinds of benefits of openness? 

[00:13:39] Uh, but you're right, which is that, if these models are extremely powerful, they can get in the hands of, you know, adversarial governments, uh, terrorist actors, criminals, all that kind of stuff. And in some ways, this is the biggest question I think, from the standpoint of AI regulation and policy, which is, are we going to join arms and jump over the cliff related to open models? Uh, because to talk about this as if it's like the social media universe, where we've got a few actors and you're just going to regulate them, is, I think, mistaken. You have to just decide, well, are we going to have a more competitive open ecosystem with the possible downsides that seem to me to be almost baked into the idea of openness? 

[00:14:20] Russ Altman: Now the open models, uh, just if we can get a little technical for a minute. They come at a cost, which is that it's still incredibly expensive to tune them. Like you were talking about tuning, and my lab works in this area. And there are open models that we can't touch because we don't have enough computers at an academic lab. I am at Stanford University. You might think we would have the resources, period. We do not. Um, so we have to take smaller models. We have to do a lot of tricks to get them to do what we want them to do. 

[00:14:49] So there is still this issue of access to compute, um, in addition to access to the models. And, uh, that seems to be practically a big barrier to this open competitive world that is pretty attractive the way you've just described it. Um, what about that? And I know it seems very mundane, like we're talking about computers and that Russ can't get the computers he wants. But I'm sure there are countries that can't get the computers that they want. 

[00:15:16] Nate Persily: Well, that's true, but you're going to end up having other kinds of open models that will be smaller and less compute intensive, and the sort of energy and other costs for inference are going to be lower. And so I think there's going to be a diversity of models that are out there. And so you're right that, you know, if you want to develop your own, um, sophisticated open model off of Llama, it might be more compute intensive. But there's so much innovation that's happening in that space. I can't remember exactly what Meta has said, but Llama has been downloaded like a billion times or something, it's some huge number. 

[00:15:51] Russ Altman: Yeah, yeah.

[00:15:51] Nate Persily: And so we should expect all kinds of models, some which are, you know, computationally intensive, but a lot, as we've seen recently, uh, that are less expensive. 

[00:16:00] Russ Altman: Yeah, that's exactly the path we've taken. We're interested in doing research on drugs, and we don't need a large language model that can tell me about the history of America or who won the Oscars or who won the Super Bowl. So it turns out you can do this thing called distillation to make a much smaller model that's good at one thing. And that's actually very attractive for a number of reasons, because it means your model won't try to, like a human, it won't try to be an expert at something it's not an expert at. And we all know people who try to do that. And it would be nice not to have AIs that do that. 
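[Editor's note: To make the distillation idea concrete: the usual recipe is to have a small "student" model learn to match the output distribution of a large "teacher" on domain-specific data. Below is a minimal sketch of one training step, assuming PyTorch and two Hugging Face-style causal language models; the temperature and the function name are illustrative assumptions, not a specific published recipe.]

```python
# Minimal sketch of one knowledge-distillation step: the student is trained to
# match the teacher's temperature-softened output distribution on in-domain text.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, batch, optimizer, temperature=2.0):
    """batch is a dict of input_ids / attention_mask tensors fed to both models."""
    with torch.no_grad():                      # the large teacher is frozen
        teacher_logits = teacher(**batch).logits
    student_logits = student(**batch).logits   # the small student is trained

    # KL divergence between softened distributions; the T^2 factor is the
    # standard scaling that keeps gradients comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

[Because the loss only asks the student to imitate the teacher on the narrow domain you care about, the student can be far smaller and cheaper to run, which is exactly the access-to-compute point raised above.]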

[00:16:31] This is The Future of Everything with Russ Altman. More with Nate Persily next.

[00:16:50] Welcome back to The Future of Everything. I'm Russ Altman, and I'm speaking with Nate Persily of Stanford University. 

[00:16:55] In the last segment, we talked about some of the basics about how AI is impacting democratic processes and how there are worrisome abilities of AI to interfere with our understanding of the truth and our communications between and among each other.

[00:17:10] In this segment, we're going to talk about regulation. How the heck can you regulate AI? Why would you want to, and what is the status of regulatory efforts across the world? 

[00:17:20] I want to talk about regulation, Nate. People have talked about regulating AI, but a lot of people haven't seen any meat on those bones. So what is this, I know you've thought about it, you've written about it. What would regulation of AI even look like and what are the chances of actually getting it to happen? 

[00:17:37] Nate Persily: So there is some meat on the bones, uh, just right now it's European meat. Uh, so the Europeans have been, uh, the most active in, uh, pursuing this. And so the European AI Act, as is true with tech regulation generally, is the first mover that may have a pretty important impact on the entire ecosystem. Now, one of the interesting things about the European AI Act, and this is a lesson I think for other regulators, right, is that when it was being developed, uh, and they spent years working on it, it did not include generative AI. 

[00:18:07] Russ Altman: Right.

[00:18:08] Nate Persily: Uh, and so they were about to pass this law and then suddenly the ChatGPT, uh, revolution happens and they sort of have to go back to the drawing board in order to rewrite the law to deal with these large language models and generative AI. Originally it was about things like, uh, you know, facial recognition, discrimination, some other things like that. And so I think there's a lesson there about how difficult it is to regulate fast-moving technologies in general, but AI in particular, because of sort of how ubiquitous it is going to be, you know, in our lives and all that. 

[00:18:41] Russ Altman: Yes, yes. 

[00:18:42] Nate Persily: And so there are a lot of features to the AI Act, and we in the Cyber Policy Center have just published an enormous volume called Regulating Under Uncertainty, uh, which is all about this. It was a four hundred and fifty page canvass of all the regulations out there. And the long and the short of it is that they make a decision about sort of regulating certain sectors, like the application of AI in cars and in jobs and in criminal justice, and some of them are high risk applications where different types of rules apply. 

[00:19:11] Some of them are seen as low risk where disclosure would apply. And then regulating the technology itself. Certain rules for these large language model developers or what they call general purpose AI tools. And so, uh, I think everybody has sort of been scrambling to figure out like how do you identify what are the risky innovations and how do you do it in a timely way. 

[00:19:32] Russ Altman: Yeah.

[00:19:32] Nate Persily: Before they get deployed and then the sort of horses out of the barn. And so we had some, you know, attempts here in California that then were vetoed by the governor. Uh, there's going to be, I think, another round of attempted legislation there. But around the world, you're seeing, um, uh, the development of these AI safety institutes to try to, uh, come up with norms about the development of these new AI models. 

[00:19:54] Russ Altman: So let me ask about that. And let me go right to a point that I think is right underneath the surface here, which is there's also an element of competition. I think a lot of us believe, or a lot of people believe, that there's a, there's an AI competition, not only between the companies, but between nations. And that there's going to be an AI upper hand that somebody may have, and therefore others might have the lower hand. 

[00:20:15] And in fact, in particular, there's concern about China's capabilities. Um, and I know that some of the dynamics about the worries about the European actions is that they're taking themselves out of this competitive game. I don't know if that's fair, but you may have looked at this, and how does competition interact? Well, like, and the idea that, well, in order to win, we have to let all things happen, because then we'll figure out what the good stuff is, and then, um, that, that's the way we make progress. And if we put gates and guidelines and rules, we're going to shoot ourselves in the foot with respect to competition. So how do you see that playing out? 

[00:20:51] Nate Persily: I think that is a fair criticism, both of Europe and of initiatives in the West to regulate. Um, at the same time, there are potential dangers with this technology. So to sit back and just let the technologists have free rein, right, is inviting real risk. 

[00:21:07] Russ Altman: Yeah.

[00:21:07] Nate Persily: And so that's why I emphasize what's happened with some of these open models at the front end, which is that, all right, you know, they, we now live in this world, uh, where synthetic imagery is going to be a permanent part of the landscape. 

[00:21:18] And so there are sort of macro regulations and micro regulations. So there are things like, for example, regulating the use of AI in political advertisements, which seems like low hanging fruit. Other kinds of disclosure regulations to prevent people from being deceived through, say, voice, uh, cloning technology. And so all of those kinds of regulations, I think, are sensible.

[00:21:41] In addition, regulation is inevitable. And I think people need to understand that, which is that there are certain questions that simply need to be answered by law. So for example, the copyright questions with these AI models. To what extent can you train these large language models on copyrighted data?

[00:21:57] Russ Altman: Right. 

[00:21:57] Nate Persily: Whether it's Congress or the courts that are going to be answering that question, uh, you're going to have law that, uh, that applies there. Similarly with defamation, what happens when, you know, ChatGPT says Nate Persily robbed the bank and that was false? Well, we need somebody to come up with the rules on that. So we need rules of the road. The question is, what do we need, sort of, beyond that, that might retard AI development? And I think part of the critical question here is whether governments even have the capacity to regulate AI. Um, you do not have inside government the level of expertise that's necessary on the enforcement side to implement any of the most aggressive and innovative sort of regulations that you might want. 

[00:22:38] And so I think that the future is one in which we sort of regulate AI companies in the same way that we regulate the financial industry. Where you have sort of outside private auditors who are paid for by the firms, but are regulated to try to prevent conflicts of interest. We know those are always problems. Uh, so that then you have some third party that ensures that the AI companies are not checking their own homework when they make promises about how their models are going to behave.

[00:23:06] But I think the command and control model, which is kind of the European model, is so dependent on a level of expertise that does not exist in government today, that it's going to be very difficult for them to pull off. And you are right about the AI race, which is that, look, if it is a competition for the model, right? The more powerful model, if you have an adversary, let alone an authoritarian government that ends up being the one that wins the AI race, that is bad for democracies as well. 

[00:23:35] Russ Altman: Right. And that's easy to understand. Oh, I just wanted to ask quickly about the global South. Uh, they are often left behind technologically and yet are a huge population of the Earth. And they're beginning to organize and have a voice in all of this. Do they get on your radar at all when you're thinking about regulatory approaches and like the growth of AI globally?

[00:23:56] Nate Persily: They do. And I think that, um, there are sort of several important actors here. Obviously, India, whenever you talk about technology, is going to be, I think, a pretty important player in the AI space. Um, there are concerns that you see about the digital divide and whether they're going to be left out. That's one of the reasons. But one of the selling points, as we discussed, about these open models is that they do attempt to correct some of that. 

[00:24:19] What was fascinating to me is I did some traveling last year, uh, in my role as director of the Cyber Policy Center at Stanford. I remember going to Japan, which admittedly is not the global South, but since we were talking about the West. Um, and meeting with the digital minister there, and I was talking about AI risks, and he sort of stopped me midway, and he says, just tell me, what's the killer app, right? What is it that AI is going to be able to do for us? They were thinking about it in a totally different way. Same with the South Koreans, less on the risk side, but like, how can it solve certain problems? 

[00:24:50] What was fascinating to me in Japan and in Korea is that people were thinking about this in connection with the low birth rate, right? They're like, we don't have enough people. So having robots, having AI, um, make our population more productive is for them a high priority. 

[00:25:05] Russ Altman: Uh, yes. And then of course, I think the answer is healthcare, and, you know, this is where I work. And I think in AI, the upsides on healthcare look really good. Okay. So you're a law professor. In the last couple of minutes, what impact is AI having on the practice and the teaching of law? Is this revolutionizing it? Can I fire my lawyer and just develop my paperwork by having ChatGPT develop my contracts? Or, uh, is the report of that happening a little premature?

[00:25:33] Nate Persily: Well, I guess I have to say, in my role as a lawyer, do so at your own risk, right? You can do that, but it's, uh, it's risky. Most famously, about two years ago, we saw a lawyer who used ChatGPT in order to write a brief, and then was surprised that it made up certain cases, and then he was disciplined by the courts. And the Chief Justice, uh, John Roberts, has also already, um, uh, issued guidance about, you know, the sparing and cautious use of AI because of the likelihood of hallucinations. My colleague here at the law school at Stanford, Dan Ho, has, uh, a series of papers sort of taking down the use of AI in legal research because of the risks of it hallucinating and coming up with false contentions.

[00:26:18] So we are not there yet. But nevertheless, as was true with the types of tools you were talking about before, AI will be useful for certain tasks that lawyers do. I mean, as is true with any writing, um, doing first drafts of briefs and doing, um, other kinds of legal research where you have much more supervision, uh, you're going to see associates that are going to be using that.

[00:26:43] But where the real punch I think will come is in mass adjudication. So there are backlogs in the US and elsewhere of, you know, say like social security benefits, veteran benefits, other kinds of, uh, huge backlogs of cases. Where an AI opinion on so many of those, uh, those types of conflicts and claims will be very important, uh, to work through a lot of that backlog. Now, that doesn't mean you take humans out of the loop. You always have to have the opportunity for appeal to a human to validate whether the decision was right or not. But we're in the worst of all worlds right now, which is that, yeah, you have human review of these decisions, but no one's actually doing the review. And so they're just sitting there, uh, without the claims being resolved. 

[00:27:25] Russ Altman: Yeah, and I do know that humans are better at reacting to, uh, advice than, uh, coming up with it. So even a system that was only okay, it would focus the attention of the human decider to say, okay, the AI thought this. I get the idea. Let me see if I believe that based on a perhaps more cursory review of the key points in the document. Um, are law students, uh, getting this message from you and not using ChatGPT in their legal writing? 

[00:27:54] Nate Persily: Well, I tell them they can actually use it on their papers with me. Because I think we as professors just have to recognize that they're going to use it anyway. And so better that they use it responsibly. The rule in my classes where I, uh, assign papers is that you can use AI, but if there's one hallucination, you fail. And so it's sort of the nuclear option, uh, to discipline them.

[00:28:16] Russ Altman: Wow. There you go. 

[00:28:18] Nate Persily: And I, you know, so far so good. I've seen some studies that show that, um, AI does better or worse in certain subjects. And the one area where it's really, really bad is election law. And so, uh, when I teach my election law class, I've put the exam through ChatGPT, and so far it hasn't written an A exam. 

[00:28:39] Russ Altman: Well, that's good, it's good to know that, at least right now, your profession and your specialty are not at risk. Although, too, if you write too much, they'll be able to train it to be much better and we'll have a little AI Nate. 

[00:28:51] Nate Persily: That's right. Well, I look forward to that, uh, you know, cut down on my workload as well. 

[00:28:55] Russ Altman: Thanks to Nate Persily. That was the future of AI and democracy. 

[00:29:00] Thanks for tuning into this episode. You know, we have more than 250 episodes in our archives. And so you can listen to a wide variety of conversations about the future of anything. If you're enjoying the show, please rate and review it. We like to get five point oh. Do what you think is best. It'll help us spread the word and it'll help people find out about the show who might be interested. You can find me on a lot of social media, like Bluesky, Mastodon, Threads, @RBAltman or @RussBAltman, and you can also find me on LinkedIn, Russ Altman, where I announce all of the new episodes, and you can also follow Stanford Engineering @StanfordENG.