
The future of inequality

With the rise of extreme inequality, one researcher is optimistic that promising reforms can be identified via new quasi-experimental methods, including those relying on AI.
[Image: star vs. circle on a balance scale] All inequality is not equal. | Shutterstock/leolintang

Sociologist David Grusky argues that all the usual debilitating debates about inequality can be sidestepped if we focus on the worst forms – those rooted in cronyism, racism, and nepotism – that everyone can agree are nothing more than a pernicious transfer of income or wealth from the powerless to the powerful.

To fight this “worst form” of inequality, Grusky shows how powerful interventions can be identified with new quasi-experimental methods, including those that use naturally occurring or AI-generated doppelgangers instead of very expensive randomized controlled trials. “We’re leaving a lot of talent on the table. And the cost is profound,” Grusky tells host Russ Altman about the price of inequality on this episode of Stanford Engineering’s The Future of Everything podcast.

Listen on your favorite podcast platform.

Transcript

[00:00:00] Russ Altman: This is Stanford Engineering's The Future of Everything, and I'm your host Russ Altman. I thought it would be good to revisit the original intent of this show. In 2017 when we started, we wanted to create a forum to dive into and discuss the motivations and the research that my colleagues do across the campus in science, technology, engineering, medicine, and other topics. Stanford University and all universities, for the most part, have a long history of doing important work that impacts the world, and it's a joy to share with you how this work is motivated by humans who are working hard to create a better future for everybody. In that spirit, I hope you will walk away from every episode with a deeper understanding of the work that's in progress here, and that you'll share it with your friends, family, neighbors, coworkers as well. 

[00:00:49] David Grusky: We have a profound and deep commitment in this country to equal opportunity. It's a normative commitment. We think that's how the world should be. It shouldn't be the case that the birth lottery determines, or not determines, but affects your fate, and also, we're leaving talent on the table and our GNP's being driven down because it matters so much. So, there's lots of reasons why we shouldn't say, oh, that's just the way it is. We should double down, do our work, build a social infrastructure. That's, that's what we deserve.

[00:01:22] Russ Altman: This is Stanford Engineering's The Future of Everything podcast, and I'm your host, Russ Altman. If you're enjoying the show, or if it's helped you in any way, please consider rating and reviewing it on the platform that you're listening to right now. We like to get a 5.0 if we deserve it. Your input is extremely valuable and will help others discover the show and learn about The Future of Everything. Today, Dave Grusky will tell us that all inequality is not equal. Wrap your head around that: some inequality is justifiable, whereas other inequality is quite bad. He'll tell us about it. It's the future of inequality.

[00:01:57] Before we get started, another reminder to rate and review the show so that others can discover it and enjoy it. So, inequality is bad, right? All people are created equal. We should all be the same. Well, in the eyes of the law, that's reasonable. But economically, it's not always like that. If you are more productive at work, you show up, you work hard, and you have a colleague who doesn't show up, doesn't work hard, your paycheck might be bigger. That might be an inequality of paycheck. That's perfectly reasonable. On the other hand, bad inequality, where you're taking advantage of your power to take money that you really haven't earned in any way, we've all seen situations like that, that might be truly bad and insidious for society. Well, Dave Grusky is a professor of sociology and economics at Stanford University and an expert on inequality. He studies it, he thinks about it, and he wants to reduce it.

[00:02:57] Dave, I'd like to start out by just asking you, how did you decide to focus on inequality in your research? 

[00:03:03] David Grusky: Well, I have a lot of stories about that and it's hard to know which ones really are on the mark, but let me go with what I think might be true. I mean, that's a tough question actually, right? Uh, you know, I think a lot of the problems that we're facing as a country, and as the world at large, are due to the vastly unequal distribution of resources. Some people are getting shafted and that's gonna have some downside consequences that are pretty bad. I think we're seeing that in a very stark way right now, and I think if we want to address problems at their source, we would think long and hard about vast, extreme, stark inequality as that critical source.

[00:03:44] Russ Altman: Great. Okay. So, um, I know that one of the, one of the things that you're able to do, um, is define inequality for us. And, and, and in particular, I know that you've written about two types of inequality and that it's very fundamentally important that people understand these. So, I'd like to take some time to understand what they are and how we should think about them. 

[00:04:05] David Grusky: I, you know, if you speak very colloquially and you know, we can get deeper about this 'cause there are normative judgments behind this colloquial distinction that I'm about to make, uh, but if you speak colloquially, uh, I think it's important just to think about good inequality and bad inequality. So, so what do I mean by good inequality? That's just the compensation, the extra compensation that goes to workers who are really productive. So, you know, if you have a doctor who's great at saving lives, or a contractor who builds lots of great houses, uh, inventors who are responsible for amazing inventions, those are creating product that people want and are willing to shell out money for. And most people think it's fine if they're compensated in accord with that amazing work they're doing. 

[00:04:48] Russ Altman: And just to, just to make sure I understand that. Does that extend all the way to the like, uh, CEO inventor, uh, who has created a whole new company that has, uh, billions and billions of dollars of capitalization? We count that as a good inequality. 'Cause they did, they had the idea, they executed and now there's an inequality or a big inequality in their net worth versus other people's. But are we calling that pretty good? 

[00:05:14] David Grusky: It depends. So that gets you right to the question of the distinction between good and bad. It may be good, it may not be all good. It may be partly good. So, let's think about that. So, what's bad inequality? That's when compensation goes to workers or firms or inventors in excess of the value of their product. And usually that means they're using power to leverage that excessive compensation. So, let's go back to that CEO. Uh, let's say the CEO did an amazing thing, but now they've packed the board with their cronies, the board of directors. And their cronies, they're getting a lot of benefits from being on the board. And in return they're going to accede to a very excessive compensation package, in excess of the value of the product that that CEO is actually delivering to the firm. So, in that case, part of the inequality is good, uh, because it reflects the extraordinary product of the CEO, but part of it is a reflection of the CEO's power that they're using to extract resources in excess of the value of their product.

[00:06:26] Russ Altman: Okay. So, we have, uh, if I can summarize very briefly, you just said it very clearly. The good, acceptable inequalities are because of value creation differences across different people and different organizations. The bad is taking advantage of, um, it sounds like it's taking advantage of power, mostly, to extract money that maybe doesn't reflect value. And I know you've talked about this in the context of the GNP, the Gross National Product. Um, uh, good inequality contributes to the GNP?

[00:07:01] David Grusky: Absolutely, absolutely. We all benefit because we have a more powerful, stronger economy that has all sorts of downstream benefits. Uh, good inequality is all about building a GNP that we, that's large, and that we, we collectively benefit from. Bad inequality, it's just a transfer of money from the weak to the powerful. No benefit to anyone but the powerful. 

[00:07:22] Russ Altman: Right. Now, another thing that you've written about that's very intriguing is that you said, essentially nobody should be in favor of bad inequality, except perhaps those few who are benefiting. And I guess my question is, is it really a few, or do we see this so widespread across all spheres of life that it actually adds up to a ton of people who are benefiting from bad inequality?

[00:07:49] David Grusky: It's more the latter. Uh, there's a lot of people who are benefiting from bad inequality. Uh, we gave this one contrived example of the CEO, but there are many examples. Uh, there could be, for example, a company town in which there's just one buyer of labor. They have extraordinary leverage by virtue of that, right? You have to sell your labor to that company or no one. So they exploit that leverage and get excessive returns. They can drive those wages down to the rock bottom, right? Uh, or it could be occupational associations that are handing out accreditation, and only people who have that accreditation can actually practice that occupation. They can restrict the number of accreditations that are handed out, make that body of occupational workers scarce, and drive up wages. So, there's lots and lots of this type of bad inequality out there. There's another type, if I could just go a little bit further.

[00:08:45] Russ Altman: Absolutely, absolutely.

[00:08:46] David Grusky: There's another type that's a bit more subtle, uh, but I think it's super important, and that's a type of bad inequality that's based on the birth lottery. The winners of the birth lottery are getting unfairly compensated in some cases. So, what do I mean by the winners of the birth lottery? These are people, you know, we know all about how it works. You know, you're born, the stork takes you to your new house, right? And the stork's flying, you know, and it could drop you into a family that's super rich and into a neighborhood that's super, uh, full of amenities of various sorts. Or into a family that's low income, into a neighborhood that's not as attractive on these amenities. So the winners of that birth lottery are the ones who get into the rich family and a great neighborhood, right? What does that do?

[00:09:37] Well, that means that you're gonna have probably great training, you know, good primary school, good secondary school, you have social networks that are wonderful. Uh, if your parents are rich, let's say they went to Stanford and they did really well, uh, now you're gonna have a legacy advantage that might help you get into Stanford yourself. Let's say you do, you, you know, you go to these great schools, you do wonderfully, you have wonderful parents who are well off and can, can make huge investments in you. Uh, you get into Stanford, you major in CS, and, and you, and you do really well. You're, you're great. Uh, uh, you make a lot of money. You might say, well, they're making what, what they deserve. Well, here's, here's the rub. They've had a lot of privilege all along the way.

[00:10:20] And so folks who were born into low-income families in less advantageous neighborhoods didn't really have the same advantages and didn't have the opportunity to get into, say, Stanford. They may be great if they had gotten into Stanford. That's the counterfactual. If they had gotten into Stanford, they would've been even better in CS than the, than the rich kid who did and made even more money. Uh, but they've been locked out of the competition and that's kind of talent left on the table, and our GNP is driven down because of that, right? So that's bad inequality too. It looks good 'cause they've got, you know, great credentials. It looks good. But it's not good. It's not all good, right?

[00:10:58] Russ Altman: And that is a really tough one. I mean, if we can pause for a moment on that one, because the folks who were lucky, I imagine, and in fact you and I might be staring at two such people right now on the video screen, those people might think, yes, maybe I was lucky, but I also had to work very hard. There's many ways I could have failed, and I didn't. And so, um, what are you gonna do? So, the question is, do we feel so confident in that next step that we're actually gonna try, are there proposals to actually try to even that field, and say, we need to help the folks who were born in the less privileged situations, we need to give them an equal chance? Versus, you know, the world has luck and not luck, and we're just gonna let that dice roll and we're gonna deal with it. How do you, as a sociologist and economist, think about that? That's a very difficult conversation, and I think we've all seen two people have that conversation, and it's an extremely tough one.

[00:11:56] David Grusky: Yeah, I think, you know, we have a profound and deep commitment in this country to equal opportunity. It's a normative commitment. We think that's how the world should be. It shouldn't be the case that the birth lottery determines, or not determines, but affects your fate, and also, we're leaving talent on the table, and our GNP's being driven down because it matters so much. So, there's lots of reasons why we shouldn't say, oh, that's just the way it is. We should double down, do our work, build a social infrastructure. That's, that's what we deserve. So, yeah, I don't think we should let it go by. And I should say it's partly because it's really big, this kind of illicit advantage. So just to give you one statistic.

[00:12:32] Russ Altman: Ah, give me the magnitude of the advantage. Oh, good. Tell me about that. Yes. 

[00:12:36] David Grusky: Yeah. So, if you looked at the probability of attending an Ivy League college, kind of a broad definition of Ivy League, that includes say Stanford, Ivy League plus, uh, you're 77 times more likely as a child in the 1% than a child in the bottom 20% to get into an Ivy League college. So, it's a huge advantage. And moreover, it gets worse. The low-income child who has exactly the same SAT score as a high-income child, they're also much less likely to get into, into, into a place like Stanford. So, there's huge advantage in play here. And so, we're leaving a lot of talent on the table, and the cost is profound, so that's why we ought to get worried about this.

[00:13:15] Russ Altman: Great, great. So, so you, you've, we've been talking mostly about financial opportunity, but there's also status, social class. Tell me how those factor into inequality. Do we mostly have to worry about money and economics, or is status and social class in some ways different or require perhaps different levers to be adjusted?

[00:13:36] David Grusky: Yeah, that is a great question. And actually, there's a line of research going on at the Center on Poverty and Inequality, for which I'm the faculty director, that takes on exactly this question. I think it's a bit of an open question. I worry a lot, and we're examining whether or not this worry is on the mark, I worry a lot that people make trade-offs. So, for example, the kid who's born in a very rich environment might wanna be a professional dancer, and let's say their parents can afford to send 'em off to Juilliard and get trained up. Uh, now let's say they succeed, they're not gonna get paid much, right? Uh, and it will look like, oh my gosh, it's the American dream. They're born into rich circumstances, just weren't any good, and they didn't get much money.

[00:14:18] But that would be a misunderstanding, right? They're just trading off the amenity of being a professional dancer, which they value greatly, for having a lot of money. And so, we need to take both types of mobility into account, uh, sorry, both types of amenities have to be taken into account, both the amenity of having money and the amenity of having a job that means a lot to you. Uh, and we have to look at both of those together in order to understand how much illicit advantages play in this world. So, we're taking that on now at the Center on Poverty and Inequality. I think the answer is not yet clear on that very important question. 

[00:14:52] Russ Altman: You talk sometimes about good jobs and bad jobs, uh, and I wonder if you can kind of, uh, describe, in the context of this kind of conversation, um, what would be a good job. And, you know, people are thinking about this 'cause now there's this other thing, which is AI. And so, a good job and a bad job has changed, because now people are looking at the degree to which they're protected from an AI revolution that might take their job. But I think even before that, you were thinking about the different effects that jobs can have on economic and social opportunity.

[00:15:27] David Grusky: Yeah, so up to now, when I was talking about inequality, I was mainly focusing on economic inequality. And that's what most people instinctively think is the most fundamental type of inequality. It's the type of inequality that's been taking off in the US and many countries throughout the world, the type of inequality that has massive implications for one's life chances. And so, it's a really profound and important form of inequality. And that's why I've been focusing on that and talking about good and bad inequality in economic terms. But that's not to discount these other sorts of amenities about which people care, but they're perhaps somewhat less fundamental; often status and prestige flow out of having money. There are other ways to get it, no doubt about it. We're involved in a profession that maybe has status and prestige in excess of the compensation we get. Uh, but nonetheless, I would say that the core, the fundamental inequality is economic.

[00:16:20] Russ Altman: I wanna start the conversation about what we do about all this. 'Cause I know that in addition to studying what is, I think that you've put a lot of thought into what we can do to try to narrow some of these gaps. And so, let's start that conversation. Um, how do you approach it as an academic? It's such a multifaceted problem. It requires you to get many people on board: governments, non-governmental organizations, certainly, uh, industry. How do you think about getting people moving in a direction that you think might be profitable, and I use that word with some irony, for the country?

[00:16:55] David Grusky: Yeah, yeah, yeah. Um, well, let's think a bit about what we do now, because of all those problems that you've mentioned. We do remediation in a very blunt way, and it doesn't distinguish between good and bad inequality. What do we do? We mainly do progressive taxation, right? So, what we're trying to say is, well, you know, the market will do what it does. We'll get paychecks of vastly different sizes. Some of it's because of bad inequality, some of it's because of good inequality. And now we'll just look at the sizes of those paychecks, and we're gonna do some progressive taxation. Those who earn a lot will be taxed at a bit higher rate than those who earn less. A very blunt instrument, right? Super blunt. Why is it problematically blunt? Because it doesn't distinguish between good and bad inequality. The person who has those high earnings might be garnering them simply because they're making a lot of great product that people want. Or they could be that CEO who's packed the board with cronies and is getting excessively compensated. Or the member of an occupation that's held the number of people who get the accreditation down to a really low number, created artificial scarcity, and they're getting paid more than they deserve. So, there's lots of people who are getting taxed at that high bracket who are productive, and some who aren't. It's a blunt instrument. And then people say, well, you know, I don't know that I wanna do this, 'cause it's kinda like a bad cancer treatment. A cancer treatment that kills the good cells as well as the bad cells. All the work in cancer treatment is about targeting the cells that are cancerous, right?

[00:18:40] Russ Altman: Yes. I'm also guessing that you can have people who mix those two kinds of inequality to their own benefit. So, imagine if you are actually doing some things that are good inequality, you have increased productivity, you're doing a bunch of things that are worthy of, um, you know, of merit. And then at the same time, you're also doing some of the bad inequality stuff. That makes the dissection of what your tax rate should be, in this example that you've just given, incredibly difficult, doesn't it?

[00:19:11] David Grusky: Yeah. Yeah. So, you don't wanna make it kind of an individual problem. We wanna fix our institutions so they're generating less bad inequality, right? So that's why taxation, well, you know, we don't have taxation that's very progressive at the end of the day, because we can't agree on it. 'Cause it's a very blunt instrument. It's doing some good work, it's doing some bad work. It's all mixed up in a horrible mess. Uh, and so we end up with taxation that's not very progressive. You know, like the bottom 50% of the income distribution pays 25% of their income in taxes when the dust settles, all different types of taxes. The middle class and upper middle class pay a shade more, between 25 and 33%. And then the top 400 families, they pay 23%, right? So, it's pretty flat. Uh, we're not really progressively taxing. We're not really getting much work done. There's a lot of reasons why, but in part it's 'cause we don't think it's a good instrument. It's blunt. So we gotta figure out how to do it right and well, and really target the bad inequality, like a good cancer treatment. That is our job as social scientists, I would say.

[00:20:16] Russ Altman: This is The Future of Everything with Russ Altman. More with Dave Grusky next. Welcome back to The Future of Everything. I'm Russ Altman and I'm talking with Dave Grusky from Stanford University. In the last segment, Dave told us, somewhat surprisingly, that there's good inequality and bad inequality. The bad inequality, however, is really bad and forms divisions in our society and gets people frustrated and treated very unfairly. How can we address the bad inequality? That's gonna be the topic of the next segment.

[00:21:01] So Dave, I guess the obvious question is how do we go about targeting bad inequality so that we can basically lift all boats, increase the GNP, but also have people feel, feel like the world is more fair in, um, both compensation and in general life.

[00:21:19] David Grusky: Yeah. Yeah. That is the million-dollar question. Uh, so, you know, just slapping taxes on those who make a lot doesn't get that targeted work done. It's really hard to get the targeted work done, because we have to go deep into the institutions that generate our paychecks and figure out what it is in those institutions that's generating these vastly unequal paychecks, and in particular, vastly unequal paychecks that are due to the illicit use of power and leverage, right? That's tough institutional work, but it's, you know, it's the bread and butter of the social sciences; we should be able to get it done. Now, how do we do it? The main distinction that's made is that we're gonna do predistributional work rather than redistributional work. Predistributional work means that we're looking at the institutions that generate pay. And it's really great if you can get pay more equal, 'cause people think that's what they earn. You know, when the long arm of the state reaches in and takes your money, that feels illicit.

[00:22:16] Russ Altman: People do not like redistribution. It rankles.

[00:22:21] David Grusky: So, we need to get those paychecks more equal. So let me give you an example of what would be a good predistributional reform. Very simple. Uh, and there's many others, but let's just say that there's an employer who's discriminating against a, a given group. They don't like this group. It's not that the group is less productive, they just don't like 'em. They may think they're less productive, but they're not. Uh, and they don't like 'em. They're, they're prejudiced. Uh, now this, this so-called taste for discrimination against a particular group or set of groups means they're gonna overpay for the labor of the preferred group. Everyone wants that preferred group. They all, they all have the same discriminatory prejudice against another group. And so, they're gonna, they're gonna all gang up and kind of bid up the prices of the preferred group.

[00:23:05] And that's deeply inefficient. We're overpaying for labor. You know, you'd be better off going for anyone who can get the job done, not just your preferred group, right? So that's a type of illicit advantage that is gonna harm the total GNP, right? So, what do you do? Well, in this case, it's kind of simple. It's a contrived example. You could enact or enforce, you know, anti-discrimination law, right? Uh, that's pretty straightforward. Um, but there's lots of other types of predistributional reforms that you might wanna think about. For example, there's a lot of work now on neighborhood-level reforms, as we talked about, this more subtle type of bad inequality that arises because the stork drops people into a really high-amenity neighborhood.

[00:23:53] So we can do a lot of work to try to equalize neighborhood conditions. Uh, eliminating exclusionary zoning that leads to having, you know, really rich houses all in one neighborhood and really small, low-cost houses in another neighborhood. If we get rid of exclusionary zoning, we can integrate neighborhoods and make it possible for there to be more mixing. Or you could do school-level reforms. You could equalize school spending across primary and secondary schools. There's lots of things you could do, but it's all about going deep into the institutions that generate this illicit inequality and rooting out the problem.

[00:24:31] Russ Altman: Now, uh, those, uh, some of those things have, have been tried and like for example, we do have anti-discrimination rules. We have some zoning rules. Um, they, I, I, I'm gathering, you would argue that they haven't worked, or they haven't been strong enough yet. Um, and one of the reasons I think it's obvious, as you pointed out in the first segment, that the people who benefit from bad inequality often have power. And so that's when you start to do these things, that's when they get the most aggressive at defending their turf. Um, have you thought about how to get them to back down a little bit? Are they going to back down in the face of reason or is that not likely? You're a sociologist, you know how people react. 

[00:25:12] David Grusky: Yeah. So, I think there's two things that need to get done. First off, we have a boatload of programs that are supposed to take on this problem, but we haven't yet done the evaluation research that shows which of these work and which of them don't. And it's hard to combat the folks who are pressing their interests if you do not have rock-solid, science-based evaluations that can say, this is what we need to do, it's been shown to have massive payoff, and it's just a matter of overcoming entrenched interests. If you don't have that science at your back, uh, it's gonna be harder to take them on.

[00:25:54] Russ Altman: Great. So, tell me, and I know this is the cause for hope for you, so tell me why you're hopeful. What has happened, and what is happening, that makes you think that we will be able to gather this data? I love that phrase you used, rock-solid data, that even the most skeptical, entrenched interest would say, yeah, I'm losing money because of this policy I have.

[00:26:15] David Grusky: Yeah, so the key thing is we need to evaluate what works and what doesn't work. That means we need to do causal inference. And often the social sciences have been seen as handicapped, because the gold standard for causal inference is a randomized controlled trial, right? You allocate people randomly to the treatment group and the control group, and then you see how they fare. And the difference tells you whether or not the treatment is working. In the social sciences you often can't do that, 'cause it's very expensive, or it's unethical, or all sorts of other reasons. And so, we've been stymied. But there's now the rise of quasi-experimental methods that take into account natural randomization and allow us to carry out a natural experiment. They've been immensely powerful. We're getting better and better at the job of exploiting these quasi-experiments and then being able to sort out, you know, what works and what doesn't work. I can give you a few examples.

[00:27:12] Russ Altman: I would love to hear some examples. 

[00:27:14] David Grusky: Okay. Uh, kind of cutting-edge quasi-experimental methods. Uh, they may or may not bear out, but it'll just give you a sense of the excitement in the field. And I'm calling these the doppelganger version of quasi-experiments. And so the idea here is that you wanna find, for every person, their double, their doppelganger. And there are basically two ways you can find a doppelganger. If you have a huge data set, you're likely to find someone that's very similar to me somewhere in that data set. You need a lot of data. I'm a rare and special person. Uh, but there's gonna be someone that's basically my doppelganger. You need a big data set to do it, and you just find that person. So, if I'm in the treatment group and it's not randomly assigned, that's the problem, right? You need to find someone that's like me anyway.

[00:27:59] So we mimic random assignment when you've got a big dataset. You can do that, and work by Raj Chetty and others has shown that when you have a super big administrative dataset, you can find that doppelganger. They've actually taken known RCTs where they did randomly allocate people to the treatment and to a control group. They know what happened, but then instead they built a pseudo control group, where they just looked for someone who's similar to the person who was in the treatment group. They just had a huge dataset. They find someone who's just like that person, and they have a pseudo control group, and you know what? It nailed the RCT result. It just nailed it.

[00:28:39] Russ Altman: Let me make sure I understand, though. It has to be my twin, except for some important intervention that by chance I got and they didn't, so that we're twins except for the thing that you're studying. So it might be, did he get a chance to go to a fancy college? In all other ways, we look just the same. But then there was something important that differed between us, and that allows you to study the impact of that thing where normally, and this is what you were saying, it would be unethical. You say, okay, you're gonna go to this fancy college and you are not. That's not gonna fly. That's unethical. But it might have happened by chance in society, and if you have a big enough database, you can find those pairs, matched except for the one or two things that you then can study. I just wanna make sure I get that right.

[00:29:26] David Grusky: Exactly. Now, there are other types of quasi-experiments that can be deployed here. For example, you could say that the last person who didn't get admitted to that school was just one cut below the admission threshold. Well, they're not gonna be any different. That's a trivial difference, right? So we'll take a look at those people. There are lots of tricks like this, but the doppelganger trick is one that's been explored, and it seems to be very promising.
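[A sketch for readers: the cutoff trick mentioned here is known in the methods literature as regression discontinuity. Below is a toy simulation with invented numbers, not any real admissions data: people just above and just below an admission cutoff are nearly identical, so comparing their outcomes isolates the effect of admission.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic applicants with a test score; admission is decided by a cutoff.
scores = rng.uniform(0, 100, size=100_000)
cutoff = 70.0
admitted = scores >= cutoff

# Outcomes rise smoothly with the score, plus a jump of 5.0 for admission.
true_jump = 5.0
outcome = 0.2 * scores + true_jump * admitted + rng.normal(scale=1.0, size=scores.size)

# Regression-discontinuity idea: within a narrow band around the cutoff,
# applicants differ only trivially, so the mean difference in outcomes
# between the two sides estimates the causal jump from admission.
band = 1.0
just_above = outcome[admitted & (scores < cutoff + band)].mean()
just_below = outcome[~admitted & (scores >= cutoff - band)].mean()
estimate = just_above - just_below
```

A narrower band trades a smaller smooth-trend bias for fewer observations; real analyses fit local regressions on both sides rather than raw means, but the comparison of near-identical neighbors is the same.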

[00:29:49] Russ Altman: What else? You said you had a couple of examples. I wanna hear more. These are good.

[00:29:53] David Grusky: Yeah, so there's a second type of doppelganger trick. Now, this is getting even more speculative, but I wanna share with you my enthusiasm and excitement for what's happening in the social sciences right now and how we're at the brink of major headway. So another way to go with respect to doppelgangers, if you don't have access to a big data set and you can't find that identical person save for the treatment: if you can't find them, you make 'em. This is called silicon sampling, and this is how it works. You administer a long-form qualitative interview with someone. In fact, the Center on Poverty and Inequality is working on the American Voices Project, which is one type of these long-form extended conversations. You ask people, tell me the story of your life.

[00:30:45] You talk about their family, their religion, their politics, everything. It's a really intimate, cathartic experience. Now, we haven't done it with the American Voices Project, because we don't have a consent agreement for that, but you could take that kind of conversation, feed it to a large language model, and say, be this person. So the model reads this extraordinarily long conversation that conveys what this person is all about, and then you tell the LLM, be this person. So now you could say there are two of these people. One will be assigned to the treatment. Let's say you have a little nudge treatment, like you wanna make people less discriminatory, and you think you can do that by talking to them about how this group that they're prejudiced against actually, you know, is very productive.

[00:31:31] Russ Altman: Now, is the intervention gonna happen on the LLM-based model or on the real person? Or does it not matter, or do you do both?

[00:31:41] David Grusky: Well, the simplest, cheapest way is your treatment group: you convey the treatment to one of the clones, right? You tell the large language model, try to be this person, and you convey the treatment that way. But you could have another clone that didn't get the treatment. Identical person, right? So, a perfect matching. You don't read the treatment to them, where you say, hey, you know, this group you're discriminating against is super productive. They don't get the treatment. And then you see to what extent the behavior differs, for example, if you give 'em a CV, a resume, you know, from two members.
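[A sketch for readers: the two-clone design described here can be shown as prompt construction. Everything below is hypothetical, including the function names and message format; it assumes a generic chat-style API where a system message sets the persona, and shows only how the treatment and control arms would differ by exactly one message.]

```python
def make_clone_prompt(transcript: str) -> str:
    """System prompt asking the model to answer as the interviewee."""
    return (
        "Below is a long-form interview transcript. Answer all later "
        "questions as this person would, in their own voice.\n\n" + transcript
    )

def build_arms(transcript: str, treatment_text: str, question: str):
    """Return (treatment_messages, control_messages) for two identical
    clones of the same person; only the treatment clone sees the nudge."""
    persona = make_clone_prompt(transcript)
    treatment = [
        {"role": "system", "content": persona},
        {"role": "user", "content": treatment_text},  # e.g., a counter-stereotype nudge
        {"role": "user", "content": question},
    ]
    control = [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},  # identical clone, no nudge
    ]
    return treatment, control
```

Because both arms are seeded with the same transcript, the "matching" is perfect by construction; the only difference between the clones is the treatment message itself.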

[00:32:20] Russ Altman: So, this is fascinating. We're running out of time, but I just want to get this final point, because it's so fascinating. From what you've seen, can an LLM, can a chatbot, when it has seen a long bit of text about my life, do a pretty good job at then making choices and decisions that are similar to the choices and decisions that I might make in my life?

[00:32:43] David Grusky: Yeah, so I should say our colleagues at Stanford, like Michael Bernstein, Diyi Yang, and Park and Willer, and lots of folks here, are taking on that question and making extraordinary headway on it. There's early evidence suggesting that if you have a long-form qualitative transcript, not just survey data, if you give the LLM really high-quality, deep information about a person and then ask it questions after it ingests that transcript, its answers will be very similar to the answers of the actual person who generated that transcript. This result is coming out of Bernstein's lab, but there are other folks involved in this too. And it suggests there may be a very low-cost way of getting high-quality evidence.

[00:33:31] Russ Altman: And then, of course, just to complete the circle, with good data from these kinds of experiments, we might generate the rock-solid evidence that gets institutions and people to move, to create a more equal society.

[00:33:44] David Grusky: I couldn't have said it better myself. That's absolutely right. And it's so important. It makes me so excited, because we have an opportunity now the likes of which we've never seen before.

[00:33:53] Russ Altman: Thanks to Dave Grusky. That was the future of inequality. Thank you for listening. Don't forget, we have lots of back episodes in our catalog, and you can spend hours listening to The Future of Everything. Also, please remember to tell friends, family, and colleagues about the show, and remember to follow it on whatever app you're listening to. You can connect with me on social media @RBAltman, or at RussBAltman on Threads, Mastodon, and Bluesky. You can also follow Stanford Engineering @StanfordSchoolOfEngineering or @StanfordENG.