
Jeremy Weinstein: Technology in the public interest

In a computer science classroom, an interdisciplinary team of professors is teaching budding technologists to consider the societal impacts of their work.


How do we prepare engineers, data scientists, computer scientists, etc. to think critically about what they’re making? | Illustration by Kevin Craft

Political scientist Jeremy Weinstein has worked at both the White House and the United Nations.

In both jobs, he encountered the ethical and policy concerns that new technologies can present to policymakers. As one example, he points to the fierce debate between Apple and national security experts over end-to-end encryption and the challenges investigators faced in accessing data on the iPhones of the perpetrators of a terrorist attack in San Bernardino in 2015.

He wants universities, like Stanford, to educate a new breed of engineer that he refers to as a “civic-minded technologist.” These engineers would consider ways in which technological advances could serve the public good, while also thinking critically about the impacts of new technologies on society.

In this spirit, Weinstein and two Stanford colleagues, Rob Reich and Mehran Sahami, have begun teaching a new course on the ethics and policy of technology to a large number of undergraduate CS majors. He says it’s critical that these nascent technologists learn from the start to think about the larger implications of their work — even before they write a single line of code. This is because code itself is not value neutral, and technologists must be able to recognize what values are being encoded in the programs they write as well as the competing values that might be traded off. This kind of preparation, he says, will help us as a society to more effectively realize the benefits and minimize the potential harms inherent in new technologies.

In his own research, Weinstein is applying his unique perspective to challenges of global poverty and human migration, where, he says, advances in artificial intelligence and machine learning are changing our understanding of two of society’s fundamental problems.

Join host Russ Altman and political scientist Jeremy Weinstein for an in-depth look at the ethical and political implications of technology. You can listen to The Future of Everything on Sirius XM Insight Channel 121, iTunes, Google Play, SoundCloud, Spotify, Stitcher, or via Stanford Engineering Magazine.

Full Transcript

Russ Altman: Today on The Future Of Everything, the future of technology for the public interest. When we think of technologists, we often think of driven scientists and engineers with a passion for innovation, trying to invent new technologies that will impact the world. And when we think about information technology here in Silicon Valley, we have the great companies: Hewlett Packard, Apple, Google, Yahoo, Nvidia and many others. Increasingly, we also have biotechnology.

The goals of these technologists are of course varied, just like everything else, but they include disrupting old ways of doing things and bringing in new ways. Changing the world. There are financial incentives, and there are personal incentives based on the passion and the vision of the technologists. They’re typically, though not always, trained in technical fields: math, statistics, physics, computer science, biology, chemistry, the engineering disciplines, whatever they may be.

They may not be trained, however, to think about the implications of their new technologies on the world in a policy sense or in an ethical sense. They might not think about the ways in which their technologies might be used that they didn’t anticipate. Some engage actively with these challenges once it becomes obvious to them that the challenges exist.

And others feel that that’s not my job, I’m gonna let the professionals worry about how to manage the secondary and tertiary ripple effects of my technology.

But these questions are particularly pertinent today. You can think of many things on the front page of the newspaper and on TV every day, especially with the rapid advances in AI and machine learning and their rapid deployment across wide areas. I can tell you it’s also happening in biotechnology, with CRISPR technologies where we can now alter humans and other organisms at will.

Well, Jeremy Weinstein is a professor of political science and a senior fellow at the Freeman Spogli Institute for International Studies and the Stanford Institute for Economic Policy Research. He has worked actively on issues where policy and technology intersect. Part of this interest stems from his work in government with the National Security Council and the US Mission to the United Nations. He has helped form networks of universities interested in public interest technology, and separately has helped create centers focusing on global poverty and development with similar aims.

Jeremy, you have talked about the increasing need for civic-minded technologists. What does a civic-minded technologist look like? And what I mean is: is this someone who is a technologist but is drawn to the policy implications of technology, perhaps their own technology? Or do you think these can be technologists who remain in the fray of technology development but also have, on top of that fray, an understanding of the policy, ethical and civic implications of their work?

Jeremy Weinstein: I think that’s a great question to start off the conversation and I’d say that it’s both plus one more category that I wanna describe to you.

So one part of the civic-minded-technologist equation is a set of technologists who are developing competencies as engineers, as data scientists, as bioengineers, but who are really interested substantively in using the advances of technology to achieve public aims. Aims related to social problems that are shared in society. And there’s the question of how we prepare people to use their skills as technologists in those domains, balancing not only the technical competencies that they develop but also the domain knowledge that they need: about social domains, about institutions, about communities, about human behavior. We need to find a way to pair those things. And that’s one part of civic-minded technologists.

But a second part of the civic-minded technologist is that we have a lot of technologists who don’t think about the public interest as central to the technology development that they’re doing. And I think that’s entirely fine.

Russ Altman: I know some of them.

Jeremy Weinstein: I can imagine. But I do think that as we educate a set of engineers for the future, a set of data scientists, a set of computer scientists, we should aspire as a university that 100% of the people that we train are in a position to think critically about the impacts of the choices that they make.

The design choices for technology, the questions of how technology interacts with human behavior and even to begin to get their heads around how policies might play a role in mitigating some of the potential harmful effects of technology.

If you’ll give me one more second, the missing piece that I think is critical to public interest technology is the technology-minded policymaker. We need to put ourselves in a position where those who are actually responsible for making the choices about how we govern technology know something about technology. And we’re very far away from being in that position now.

Russ Altman: So, on that note: I had the pleasure of looking at your CV, and in the distant past you were, and are, an expert in civil wars, political violence, the political economy of development, political change. What was it that brought this urgent need for the civic-minded technologist to your attention? What set of experiences?

Jeremy Weinstein: Well you could say I have a short attention span but I think really what happened for me is two things in particular.

The first was during the second term of the Obama administration, when I was serving as the deputy US Ambassador to the United Nations. Because the ambassador to the United Nations is a member of the cabinet, that gave me a seat not only in the most pressing foreign policy debates of our time, but also in a whole number of situations where technology was playing some important role. It revealed to me just the enormous chasm that existed between those people who had responsibility for particular policy domains, in this case foreign policy, and those who actually knew something about technology. Perhaps the best example of this was the debate that we had about encryption. The question in the aftermath of the San Bernardino terrorist attacks about whether —

Russ Altman: This is breaking into the Apple iPhone.

Jeremy Weinstein: Whether the government should be in a position to break into the Apple iPhone in order to figure out whether the attackers in San Bernardino were connected to others in society in the area who might also be in a position to undertake or commit violence.

The national security folks around the table said of course the government should be in a position to break into the iPhone. That in some sense the President of the United States’ most important job is to protect the safety and security of Americans.

The set of technologists that we had in government, many of whom sat in the Office of Science and Technology Policy or the National Economic Council, said, what are you talking about? Privacy is what we prize. End-to-end encryption is the pathway to the future. We don’t wanna create a backdoor into our technologies that others could exploit just so we can protect citizens in the United States.

I think that was a fundamentally sort of revealing debate about the entirely different world views that existed on the part of policymakers who were responsible for international relations and foreign policy and our technologists.

We saw the same thing play out in cyber warfare when the attack was made against Sony. I remember the room when we came in for the conversation where we had to decide how we were going to respond to the Sony cyber attack. You had a set of decision-makers around the table, none of whom knew anything about technology. Then around the back of the room, you had a set of people who really understood cyber warfare, cyber defense and offensive operations, and they’d never before been at the policy-making table. And they didn’t get to speak.

So you saw this gap, and this chasm.

What it meant is when I came back to Stanford I said at this moment of tremendous societal change in relationship to technology, what am I uniquely positioned to do sitting at this university. It’s to march myself over from the social science side of campus to start teaching technologists to think seriously about these issues.

Russ Altman: So I wanna talk about that in a minute.

This is The Future Of Everything. I’m Russ Altman. I’m speaking with Jeremy Weinstein about government, technology and civic clash.

Now those two examples are great because I see them as having two different problems. In the iPhone case it sounds like everybody was perfectly clear about their opinion, but where they were coming from in terms of their disciplinary training gave them very different opinions about the right way to go. Whereas in the second case, it sounds like there might have been a vocabulary problem: trying to get the policymakers to understand the technical nuance, if it is even easily understandable, required to make appropriate and subtly nuanced decisions. So is that a fair characterization? I’d call those different types of culture clash.

Jeremy Weinstein: So, one is developing a shared language and a shared understanding about what technologies represent. Of course you’re familiar with the testimony that Mark Zuckerberg gave after Cambridge Analytica, in front of distinguished senators who asked him, well, what is Facebook’s business model if you give away this service for free? That just reflects some basic ignorance on the part of our policymakers about what these platforms are. About what these technologies represent and how they operate.

We need to build a language for talking in both directions. One that positions technologists to engage policymakers, not to fear them, not to entirely work around them, but to recognize that, ultimately, technology’s impact on society is going to be a function not only of the technology but of how society chooses to govern it.

Technology cannot avoid the conversation with policymakers. Policymakers who are elected by the citizenry of the United States, and of other countries, will have to make these choices: whether end-to-end encryption is the way of the future, whether decisions in the criminal justice system should be made via algorithms, whether we treat internet platforms as platforms that are not responsible for content or as publishers that are fundamentally liable for the information they make available to citizens. Those are going to be decisions that policymakers have to make. And you can’t work around it.

Russ Altman: So you’re back on campus. We have talented young people, we have people with policy aspirations, we have people with technology aspirations. You have an educational challenge here, which is: is it harder to teach a policy person technology, or a technology person policy? How do you even think about that?

One of the reasons Stanford students are great is that they are laser focused on whatever it is that they’re trying to learn. So how do you distract a young technologist, appropriately, to get them at least conversant in these other kinds of cultural concepts without torpedoing their technical passion and technical focus?

Jeremy Weinstein: So the first thing you do is you enlist incredible allies. And so the first ally was a political philosopher on campus Rob Reich, who was thinking about these issues from the perspective of ethics and technology. I knew that we needed to balance the social science perspective and policy perspective that I could bring to the table with a philosopher.

But we also knew that there probably wasn’t a single CS student that would sign up for a course with Professor Weinstein and Professor Reich, because they had never heard of Professor Weinstein and Professor Reich. And we weren’t famous technologists who had six startup companies and had rolled out technology products that were likely to transform the world.

So we had to enlist our most important ally. And that was Mehran Sahami. He’s a superhero on campus, he teaches almost all of our undergraduates Intro to CS, he gives them their first introduction —

Russ Altman: Probably one of the most recognizable names on campus.

Jeremy Weinstein: And so our students all had their initial exposure to computer science through Mehran. And so for Mehran to stand in front of the room and say to students: you have drunk the Kool-Aid, you are developing these competencies and thinking about the role that you can play as a computer scientist in the world, and I’m telling you that you need to stop, you need to take a step back, and you need to think. You need to think about questions that don’t have a right answer, about trade-offs that are embedded in the design of algorithms that you’re developing, or technologies that you’re creating.

That was exactly the kind of elixir that was required to turn a set of CS students on to what’s really a hard task: to make decisions, to express their normative commitments and values, to debate these issues that don’t have a right answer. There’s no exercise here where you write down a set of code and we correct it and say, well, this could be a little bit more efficient, and obviously you see how you’re missing something in the mathematical formula here. When we ask the question of whether —

Russ Altman: This could be stressful for them.

Jeremy Weinstein: It’s really stressful for them.

Russ Altman: They chose their major for a reason.

Jeremy Weinstein: Exactly, they have avoided these things. But when you ask the question of whether Facebook should be broken up, whether it has gotten too powerful, whether we should replace our public sphere with a digital public sphere that’s controlled in the private sector in ways that are not visible and accountable to citizens, there is no right answer. There are only the answers that we can come up with as a society, when we enumerate what it is that we value.

But you also asked me what it is like to teach policy and social science to technologists, and you’d have to ask Mehran that question, because I could only sit there and watch them, and he’s not a guest today. But of course when we taught this first course, which just finished this winter quarter, we had about 275 students. Nearly three-quarters of them were CS majors; over a hundred of them were seniors in CS, none of whom had ever grappled with these issues anywhere in their CS curriculum up to this point.

We attracted a relatively small number of social scientists and humanities folks from around campus, in part because we made basic competence in computer science a prerequisite, and because we mixed into the core of the course a set of technical assignments.

For example, having students audit an algorithm for bias. If we’re going to be using algorithms to make decisions in the criminal justice system, replacing judges with algorithms, then we ought to be able to know whether those algorithms are making fair decisions. So we put them in a position where they’re auditing algorithms.
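The kind of audit exercise Weinstein describes can be sketched in a few lines. This is an illustrative toy, not the course’s actual assignment: given a classifier’s predictions, the true outcomes, and a group label for each case, it compares false positive rates across groups, one common fairness metric.

```python
def false_positive_rate(preds, labels):
    """Fraction of true negatives (label 0) that the model flagged as positive."""
    negatives = [(p, y) for p, y in zip(preds, labels) if y == 0]
    if not negatives:
        return 0.0
    return sum(p for p, _ in negatives) / len(negatives)

def audit_for_bias(preds, labels, groups, tolerance=0.1):
    """Return per-group false positive rates and whether the largest
    gap between groups exceeds the tolerance."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = false_positive_rate([preds[i] for i in idx],
                                       [labels[i] for i in idx])
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > tolerance

# Toy data: group B's non-reoffenders are flagged far more often than group A's.
preds  = [1, 0, 1, 0, 1, 1, 1, 0]
labels = [1, 0, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates, biased = audit_for_bias(preds, labels, groups)
```

On this toy data the audit flags a disparity: the false positive rate is one in three for group A but two in three for group B.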

We also put them in a position where they observe how the decisions made in ranking algorithms, for example on our internet platforms, create filter bubbles and echo chambers that have magnified effects on society: what information people hear, what diversity of sources they encounter.

So we mix technical assignments with philosophy papers and policy memos.

The social scientists and humanities folks in the room loved the philosophy paper. They loved the policy memo. The technical assignments scared them.

Whereas our CS students came across a policy memo or a philosophy paper and asked: what is a philosophy paper? What is a policy memo? What is the structure of argumentation in a policy memo? And so really it was communication across two different cultures, and it required this integrated teaching approach of faculty from different disciplines.

I think our challenge, in part, in the public debate is that we have tech enthusiasts who emerge from technology and say that technology is gonna revolutionize everything. Then we have tech critics who write polemics, either because they’re dystopian technologists or because they’re non-technologists who just see all of the potential trauma that’s going to be inflicted on society. But the challenge for all of us is not to take either of those polar views but to make a set of choices that are gonna help us maximize the benefits and mitigate the harms.

Russ Altman: I take it as super good news, and I didn’t know this, that in the class there were, I think you said, around a hundred social science and philosophy type students who had fulfilled the prerequisites for the class in terms of computer programming and yet whose hearts were squarely in sociology and philosophy. Because that means there is an upcoming generation of people who are going to be comfortable with both of these areas.

Jeremy Weinstein: I mean, I think it’s almost impossible to be a student at this university, living in Silicon Valley, reading the newspaper, or, since people don’t read the newspaper, picking up your headlines on Twitter, and not realize that we are in a different moment for technology. We may have a set of students who choose a more narrow education, where they position themselves as a pure engineer, a pure programmer. But I’d say that many of our students recognize that the future direction of technology is gonna be one that puts technology in interaction with society in a serious way. And that if you’re not negotiating that interaction, generating a user base for your technology, thinking hard about how government or other institutions are likely to respond to your technology, you’re not gonna effectively achieve any of your goals.

Russ Altman: This is The Future Of Everything. I’m Russ Altman. I’m speaking with Jeremy Weinstein about technology and society. I do wanna move to your Center on Global Poverty and Development because I don’t know if it’s a totally separate thing or if, for you, it’s all part of the same deal. So can you tell us about the goals of that center, and what are the priorities currently?

Jeremy Weinstein: So I’m a part of a lot of different things on campus. One is the Center on Global Poverty and Development because, as you said, historically my work, where I started as a political scientist, was as a scholar of African political economy, motivated by understanding the challenges of poverty and inequality and violence in developing countries. And that took me in the early part of my career to working on issues of political violence and political change, looking at ethnic politics and understanding how ethnic identities shaped the policy-making process and the developmental trajectory of countries.

More recently since I came back to Stanford, I’ve created with a set of colleagues, a lab that’s focused on immigration policy. Because one of the central issues that I grappled with as deputy UN Ambassador was the migration crisis of 2015 and 2016. And of course we’re still living with the challenges not only of that period of Syrian out migration but also the challenges that we have at the southern border. It’s among the most contentious policy issues that we confront.

Now as a social scientist, it’s often funny for me to hear people say this new field of data science is emerging and maybe social scientists should connect with data science because social scientists have been doing data science for a long time. Statistics is actually central to most of the work that we do. And so some of the advances that I see in data science are really powerful for helping us think about development questions. And let me just give you two short examples.

So on the development side, funded by the Center on Global Poverty and Development, we’ve been thinking very hard about how you measure difficult-to-measure things in places where it’s hugely expensive to undertake measurement activities. Things like poverty and its changes over time in Africa, including in rural villages that aren’t even easy for their states to access, or the quality of infrastructure in those environments. How do we get a read on these really important underlying trends without investing extraordinary amounts of money in what we do now, which is months-long household surveys that visit people and gather this information?

Well, advances in machine learning make it possible for us to use training datasets that tell us what the level of poverty is in particular villages, or the quality of infrastructure, and to learn from the relationship between that data and the data that we can get from satellite imagery, the underlying mathematics of the satellite imagery, using machine learning algorithms to generate highly accurate predictions about any given village: whether they have electricity, the quality of their water and sewage system, the level of income of the households in that village.

Russ Altman: So an image analysis of satellite images when appropriately trained can give you all of this.

Jeremy Weinstein: So we’re building on the connection that a machine learning algorithm, a convolutional neural network, can learn between the underlying training data and this visual imagery data, because we have imagery for everywhere, for many years, for many points in time. And we can’t necessarily pick out by eye whether a house has power or not.

In fact, the approaches that we’re piloting are more effective at figuring out who has power than satellite imagery that simply measures nightlights, because of course there are lots of places that simply aren’t bright enough to be captured in satellite imagery. But when you introduce the training data that comes from the raw household surveys that we have, you can generate these very powerful predictions.
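The shape of the pipeline described here can be illustrated with a toy stand-in. This is not the research team’s actual model, which trains a convolutional neural network on satellite imagery; it substitutes a simple linear fit on made-up image-derived features to show the idea: fit on villages with household-survey ground truth, then predict for villages that were never surveyed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features extracted from imagery for 100 surveyed villages
# (e.g. roof reflectance, road density, nighttime brightness).
X_surveyed = rng.normal(size=(100, 3))
true_weights = np.array([0.8, -0.3, 1.5])
# Simulated survey-measured wealth: a linear signal plus noise.
wealth = X_surveyed @ true_weights + rng.normal(scale=0.1, size=100)

# Learn the mapping from image features to survey outcomes...
w, *_ = np.linalg.lstsq(X_surveyed, wealth, rcond=None)

# ...then predict wealth for unsurveyed villages from imagery alone.
X_unsurveyed = rng.normal(size=(5, 3))
predictions = X_unsurveyed @ w
```

The point of the sketch is the division of data: expensive survey measurements supply labels for a subset of places, and the fitted model extends those measurements to everywhere imagery is available.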

The second example I was gonna give you was on migration. Which is a real issue of passion for me. And here, this is a field where there are a lot of strong views, a lot of strong normative commitments but not always a lot of evidence brought to the table to guide our decision-making.

So one of the issues I brought back with me from government was the question of, when refugees arrive in a country, how do we position them to succeed economically? How do we maximize the likelihood of their success? One way you do that is by investing a lot in them: facilitating their acquisition of language, providing job training, allowing them to re-credential. Those things are very expensive and difficult to scale. The more refugees you bring in, the more you need to spend on those kinds of investments.

But one of the things that we discovered bringing data science tools to the table in partnership with the US government, the Swiss government and others, was that there’s a huge effect on the likelihood that refugees integrate in their host community that’s a function of where you send them and how the characteristics of where you send them interact with their own personal characteristics.

Russ Altman: And when you say where, do you mean which country or which town?

Jeremy Weinstein: We know it would definitely be true at the country level, but we can discover this even within countries: which town in the United States, which city, which city or province in Switzerland. And when you take that information into account and build it into a machine learning algorithm that can basically extract from someone’s profile and then generate a set of probabilities with respect to integration outcomes, like the level of employment —

Russ Altman: So these are things like their education level, their skill set, which country —

Jeremy Weinstein: The language, et cetera. Exactly, and then the conditions of the locality that they’re in. You can then optimally match people to places, rather than the arbitrary way in which we make these decisions now. And increase the expected employment rate by 50% to 70%.
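The matching step Weinstein describes can be sketched with toy numbers (illustrative only, not the study’s data or solver): given predicted employment probabilities for each refugee in each location, choose the assignment that maximizes total expected employment, rather than allocating arbitrarily.

```python
from itertools import permutations

# prob[i][j] = predicted employment probability of refugee i in location j,
# produced by a model from the person's profile and the locality's conditions.
prob = [
    [0.30, 0.60, 0.20],   # refugee 0
    [0.50, 0.10, 0.40],   # refugee 1
    [0.20, 0.30, 0.70],   # refugee 2
]

def best_assignment(prob):
    """Brute-force the one-refugee-per-location matching with the highest
    total expected employment. Fine for toy sizes; a real system would use
    a scalable assignment solver."""
    n = len(prob)
    best_total, best_perm = -1.0, None
    for perm in permutations(range(n)):
        total = sum(prob[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

assignment, expected = best_assignment(prob)
```

Here the optimal match sends refugee 0 to location 1, refugee 1 to location 0, and refugee 2 to location 2, for a total expected employment of 1.8, versus 1.1 under a naive ordering, which is the kind of gain the interview describes at scale.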

Russ Altman: So you might even call this precision migration.

Jeremy Weinstein: Precision migration. And these are people, keep in mind, who don’t have local ties, who are arriving in the United States or in a European country, and they just wanna succeed. They’re fleeing a country at war, they’re being given permanent prospects of resettlement, they just want to do well. And you wanna put them in a position to do well. What I just described to you is a scientific approach to allocating people to places that is costless.

Russ Altman: So is this still a hypothesis or do we have some initial returns about the success of precisely saying, I think you’re gonna do well in this city, you go to this city. Presumably they do it with families or clans for social coherence. But do we have data? Is it going to work?

Jeremy Weinstein: We do. So this is why it’s really important to bridge computer science and data science with social science. Because in a backtest, we can demonstrate these enormous returns: if you mine massive amounts of historical data and look at what kinds of gains you could have achieved for people had you allocated them to other places, you see these enormous returns. And that’s the kind of exercise that we put our computer science and data science students through all the time, a sort of predictive exercise. The question is what happens when you roll out a technology like this in the world. Why might things be different? How are political institutions gonna interact? Are people gonna even go to the places where you’ve sent them? Are they gonna stay there once they arrive to realize these longer-term returns?

Russ Altman: It also seems to have a little bit of an Uber and Lyft thing, where, by affecting the traffic, you have to update your model because you’ve changed the conditions in that city. You just sent tens of thousands of people —

Jeremy Weinstein: We’ve designed the algorithm in such a way that it’s constantly updating on the basis of new outcome data that comes in so that you don’t send so many Syrian doctors to a place that has tons of doctors. You then reallocate them to other places in the country.

But in order to really test this, you don’t just need to do the backward-looking work, you need to do the prospective work. And the way you do the prospective work is the way a social scientist does: working in partnership with the government of Switzerland and the government of the United States, where you actually begin to randomly allocate people into a treatment group, where they’re allocated on the basis of this algorithm, or a control group, where they’re allocated in the arbitrary way that’s —

Russ Altman: We do that a lot in medicine.

Jeremy Weinstein: Exactly, and so like a doctor. Like clinical trials —

Russ Altman: That’s a randomized controlled trial.

Jeremy Weinstein: That’s the way we think about testing social policies and social programs and interventions that we mount in the world and that’s what we’re now doing with the refugee matching algorithm in four different countries.
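The randomized rollout described here follows the standard trial design: each arriving case is randomly assigned to the algorithmic-placement arm or the status-quo arm, so the two groups are comparable and differences in outcomes can be attributed to the algorithm. A minimal sketch, with illustrative arm names:

```python
import random

def randomize(case_ids, seed=42):
    """Assign each case to the treatment arm (algorithmic placement)
    or the control arm (status-quo allocation) with equal probability.
    A fixed seed makes the assignment reproducible and auditable."""
    rng = random.Random(seed)
    arms = {}
    for case in case_ids:
        arms[case] = "algorithm" if rng.random() < 0.5 else "status_quo"
    return arms

arms = randomize(range(1000))
treated = sum(1 for a in arms.values() if a == "algorithm")
```

With 1,000 cases, each arm ends up with roughly half the sample; the evaluation then compares employment outcomes between arms, just as a clinical trial compares treatment and placebo.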

Russ Altman: Fantastic. So as we finish up, we have about a minute and a half left. I’ve looked at your previous work and it included things on corruption, global development, counter-terrorism, nuclear non-proliferation and cyber threats.

Jeremy Weinstein: All fun issues.

Russ Altman: So when you take your current focus on technology and civics, and I don’t know if this is a fair question, but which of these do you consider the most pressing set of problems where this bringing-together needs to happen? I’m sure it’s all of the above, but where do you place your bets?

Jeremy Weinstein: Russ, how do I even answer that question? Because I really think the moment that we’re at right now is one where the potential promise of technology is almost unlimited, but the harms associated with technological change can no longer be ignored. And so we need to think about that in every domain that you describe. We need to think about the vulnerability that comes from our movement to a digital economy and a digital information system. We need to think about what offensive cyber operations look like and what kinds of escalatory dynamics they generate. We need to think about the shift in the direction of autonomous systems, the distributional consequences of that in our own societies, and what our obligations are, as citizens and as the governments that we choose, to people who see the need for their jobs obviated by the advance of technology. I mean, the list is so long.

The challenge is that if we educate people in silos, if we don’t educate people to think in a clear-headed way, not just about the cool new technology they’ve created but about the consequences it may generate in society, we are gonna be in the really challenging place that I experienced in the White House Situation Room, where the technologists had one view and the policymakers had another view, and they had never before discussed them.

Russ Altman: Thank you for listening to The Future Of Everything. I’m Russ Altman. If you missed any of this episode, listen anytime on demand with the Sirius XM app.