
The future of AI at work

An expert in organizational behavior looks at the growing impact of artificial intelligence in the workplace and finds a nuanced dynamic that’s still playing out.
When it comes to using AI at work, it's about more than improving productivity; it's about job enrichment. | Olga777 / Shutterstock

Arvind Karunakaran studies the intersections of work, AI, and organizational behavior. 

He says AI can enhance speed and productivity in the short run, yet degrade skills over time. But it is in organizational power dynamics where AI has had its most marked impact, he says, telling host Russ Altman about situations in law firms where AI has fostered tension between paralegals and junior attorneys. It’s AI and the modern workplace on this episode of Stanford Engineering’s The Future of Everything.


Transcript

[00:00:00] Arvind Karunakaran: When we think about AI and its impact in the workplace, like there's a lot of focus on productivity in terms of, could we do a task quicker? A lot of measures are oriented towards that, right? The same type of task, we do it faster, quickly, more efficiently. But what we know from prior history of technology-based automation, going all the way back to industrial automation, is that in that process, people also lose skills, right?

[00:00:27] They're over dependent on the technology or they don't learn new skills in the process. And over time, that has like an interesting sort of impact for the workers, for the organization too.

[00:00:44] Russ Altman: This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you're enjoying the podcast, please consider sharing it with your friends, relatives, and neighbors. Personal recommendations are the best way to spread news about the podcast and help our audience grow. 

[00:00:59] Today, Arvind Karunakaran from Stanford University will help us understand how AI is affecting the workplace. He'll tell us that there's a difference between skills and productivity. And while AI in the short term may make you more productive, you need to make sure that you also learn skills for the long haul. It's the future of AI at work. Before we get started, a reminder that if you're enjoying the podcast, please tell your friends and family about it. Spread the word about The Future of Everything.

[00:01:38] AI has hit the world like a ton of bricks, and many of us are trying to figure out how to use it best in our professional lives. This is tricky because it can affect our skill levels, it can affect how people view our productivity, and it may shift power dynamics at work in terms of who can do what. There are issues of accountability: even if the AI generates the answer, do you take responsibility for any errors the AI may have generated? 

[00:02:06] Not to mention that there's the issue of trust. Can I trust that my employer or my employees are using AI in ways that are responsible and that advance the mission of the organization in responsible ways?

[00:02:18] Well, Arvind Karunakaran from Stanford University is a professor of management science and engineering and an expert on how work, technology, and organizational dynamics interact. He's going to tell us about how AI is being tried out in many different settings. And some of the initial lessons we're learning and a few hints about how to implement AI in your workplace to make it most effective and least problematic.

[00:02:47] So Arvind, you study the impact of AI on work, among many other things. And one of the things that you've argued is that we need to be careful about the difference between productivity gains and skill gains in the workplace. That sounds interesting to me. Can you expand on that? 

[00:03:03] Arvind Karunakaran: Sure. Thank you, Russ. Thanks again for inviting me to the show. Uh, so this is building on my own work, but also others like research in this area. For instance, Matthew Beane, he has a new book out called The Skill Code where he makes like similar arguments. The basic idea here is that when we think about AI and its impact in the workplace, like there's a lot of focus on productivity in terms of, could we do a task quicker? A lot of measures are oriented towards that, right? The same type of task, could we do it faster, quickly, more efficiently?

[00:03:36] Uh, but what we know from prior history of technology-based automation, going all the way back to industrial automation, is that in that process, people also lose skills, uh, right? They're over dependent on the technology or they don't learn new skills in the process. And over time that has like an interesting sort of impact for the workers, for the organization too.

[00:03:59] One, there might be a short-term productivity bump. But if there is not much skill gain, or sometimes even skill loss experienced by the workers, either due to their overdependence on technology or because they're not learning new skills, that has an impact on their own career, their own job. But also on the organization, too, because imagine the next generation of workers that is going to come. Who's going to train them? There are a lot of other subtle facets of knowledge involved. 

[00:04:26] So that's why I was trying to argue that we have to disentangle the two, productivity from skills, and think about whether people are learning new skills, or at least not losing existing skills they've acquired over a long time.

[00:04:39] Russ Altman: So it's fascinating to me that this is a well-known thing that you've seen historically before with all the other technological advances and that makes perfect sense. 

[00:04:48] So let me ask, um, presumably there's a new set of skills that this next generation of workers has to adopt in order to compete most effectively.

[00:05:01] Arvind Karunakaran: Yeah. 

[00:05:01] Russ Altman: In the use of the AI tools, are we starting to see what that skillset looks like? Or is it still way too early? 

[00:05:07] Arvind Karunakaran: Yeah, I think it is still a little bit too early. Uh, a couple of things again. If you look at technologies that created significant economic impact, it's not because we were able to do the same things faster or more efficiently. It's because we were able to do brand new things, brand new tasks, activities, and sometimes even new jobs. Think about the emergence of computers and of programming as a new type of activity. Like that, we see some evidence that yes, prompting is a new skill; trying to combine different sources of knowledge, augmenting AI with human expertise, is a new skill.

[00:05:43] But it is still quite early in the process on how we should go about doing this. Companies that I've studied are still struggling. They're asking people to just watch videos on how to use ChatGPT or any of the AI technologies. That's not a good way to train people on learning new skills.

[00:06:02] Russ Altman: Yes. 

[00:06:02] Arvind Karunakaran: So it's still quite early. We are seeing some evidence of new skills, but how do we go about inculcating them and formalizing them into the organizational workflow? That's still a long road ahead, essentially. 

[00:06:17] Russ Altman: Okay. So that's great. So now, stepping back a little bit to bigger ideas. You know, when I review your work, which is what I always do before I have a guest on, I'm struck that you have done a lot of work on authority and power dynamics between employer and employee, and how this impacts the workplace.

[00:06:36] And I'm sure everybody wants their workplace to be optimized around these things. It has to do with trust. There are a lot of issues, and all of these words come up in your work. 

[00:06:43] Arvind Karunakaran: Yes. 

[00:06:44] Russ Altman: What is your assessment of how AI is playing into these dynamics? 

[00:06:49] Arvind Karunakaran: Absolutely. I think a lot of those old theories, we see them play out even more powerfully. Uh, to give a very quick example from one research project that we are doing: we are looking at a corporate law firm trying to implement Gen AI, a vertical-specific language model for the legal industry, targeted toward the paralegals. Their job is to do a lot of contracts and NDAs for their clients.

[00:07:17] Russ Altman: Right. 

[00:07:17] Arvind Karunakaran: So, this makes that job much easier, faster. And the idea was that if they're able to do that faster, perhaps the paralegals could do more complex tasks, right? They could help out in legal research, in client meetings, and so forth. That was the broader objective. 

[00:07:34] Russ Altman: Yes. 

[00:07:34] Arvind Karunakaran: Uh, but what we found was, of course, some of the paralegals were able to learn the tool, this language model, and use it effectively. But the minute they tried to do more complex work, such as doing legal research or being part of the client meetings, the junior attorneys in the firm, think of them as your second-year or third-year associates, 

[00:07:57] Russ Altman: Just out of law school. 

[00:07:58] Arvind Karunakaran: Yeah, just out of law school or a couple of years out, right? They're like, why are the paralegals here? They are stepping onto our turf, basically. We will do all the legal research; we are the ones who need to interact with the clients. They don't even want the paralegals in the room. Although technically the paralegals are now trained and capable of doing those advanced tasks, right?

[00:08:18] So the older theories about jurisdictional struggles over power are playing out even more prominently today. And I feel this is all the more important because the way AI would have an impact beyond short-term productivity is if people are able to do more complex work, right? 

[00:08:36] Russ Altman: Yes. 

[00:08:37] Arvind Karunakaran: But if there is pushback and if there are other political power dynamics, that is likely not going to happen or it's going to take a longer time. So we have to be very sensitive to those dynamics too. 

[00:08:48] In our research, we argue that we should think about redesigning multiple different roles, attuned to the power dynamics. The law firm initially thought they only had to rethink the paralegal's role, that all the paralegals could now do legal research and things like that. But they didn't think about the attorneys. So now they're thinking about how to redesign the attorney's role as well, so the attorneys could do even more complex tasks, rather than redesigning just one role. 

[00:09:13] Russ Altman: Yes. 

[00:09:13] Arvind Karunakaran: So I think those are the dynamics that we are observing, consistent with all the older theories. 

[00:09:16] Russ Altman: This is fascinating, because what I hear you saying is that there are these ripple effects. So they thought of the first-order effects, so to speak, on the paralegals. And in fact, it sounds like a subset of the paralegals adopted this new challenge and were very excited. But then they start stepping on the toes of, well, I mean, I don't know anything about how a law firm works. 

[00:09:37] But now I can imagine that these second- and third-year associates start doing things that their mid-tier or high-tier partners do. And all of a sudden the entire organization is threatened with respect to traditional roles. 

[00:09:50] Arvind Karunakaran: Absolutely. 

[00:09:51] Russ Altman: How well are we prepared for this? So you study this. Uh, is there a playbook where the heads of the law firm can say, okay, we're now dealing with problem number seven, this is in chapter six? Or are we like, oh my goodness, this is a whole new world? The reason I ask is that I could imagine they understand the idea that things could be disrupted. But that's very different from understanding exactly how they should reorganize the company to anticipate these disruptions.

[00:10:21] Arvind Karunakaran: Absolutely. I think, again, it's quite early stages. The companies are in the process of figuring it out. Uh, so they are looking at it in a more piecemeal manner, how this particular technology might impact this particular role, and not at a very systemic level. So, yeah, a complete sort of systemic change in terms of how roles are designed, and how even the organizational structure, the reporting structure, 

[00:10:46] Russ Altman: Right. 

[00:10:46] Arvind Karunakaran: Uh, it needs to be rethought. So there is no playbook. Uh, hopefully if you ask me in a couple of years. We are studying multiple companies, trying to think about what the new types of training and redesign processes are. Yeah. 

[00:11:02] Russ Altman: So, um, another word that comes up in your work a lot is accountability. 

[00:11:06] Arvind Karunakaran: Yeah. 

[00:11:07] Russ Altman: Uh, and we're worried about this in many senses. And I'm not even sure I fully understand what accountability means to you. I know part of it has to do with, you know, just a professional work product and professionalism. And of course, even in the example that you just gave, there's the accountability of the paralegals to do a good job.

[00:11:28] You know, you could tell me, for example, that the paralegals do something, but the second- and third-year associates need to apply a legal eye in evaluating the paralegals' work, and that could change what their job is. So anyway, tell me what accountability is in general, to make sure I have it right. And then, how might all of this be changing accountability dynamics? 

[00:11:50] Arvind Karunakaran: Absolutely. Uh, so accountability has unfortunately become a buzzword. It's used a lot. The way I understand it is that, as an individual, be it an employee or a manager, there are certain stakeholders who are your audiences, and you try to be sensitive to their concerns. For the work you produce, you take some level of ownership, rather than just, you know, trying to cover your back. 

[00:12:16] Russ Altman: Right. 

[00:12:16] Arvind Karunakaran: And, uh, so that's what we mean by accountability. Some system of checks and balances that ensures that, you know, people do a good job and don't get away easily. 

[00:12:28] Russ Altman: Yes. 

[00:12:28] Arvind Karunakaran: If they basically do something terrible, right? So in the context of AI, and algorithmic decision making more broadly, we see that play out again quite a bit. On the one hand, people are a little bit averse to using some of the decisions that these algorithms recommend. But once they overcome that initial leap of trust, if things go wrong, I've seen many instances where they point out that it is actually the technology, or the vendor who developed the technology, and not them. So even the locus of accountability, who's responsible if something goes wrong, has become much murkier. 

[00:13:11] Uh, yeah, so two things happen here, just to summarize. One, people don't want to take ownership of their decisions. They just pass the buck, saying, oh, I didn't want to make this decision, because I'm worried that if something goes wrong I would be blamed. Or two, the ownership they would ideally like to take over the quality of their products, that is no longer there, because they're not even sure whether whatever they're producing is just their output, or an output of some complex assemblage of tools and technologies, right? 

[00:13:44] Russ Altman: Yes, and I think that is what managers are very worried about: this issue of blaming. Like, I used AI, and if I get away with it, and if you think my work product is good, then everything is fine. But if you don't think my work product is good, I'm going to blame the AI, instead of taking responsibility for the fact that I made the decision to use the AI in the first place. And, you know, you and I, as educators, know that the same exact thing arises with student work. 

[00:14:14] If a student has used generative AI for their assignment and they get an A, then everything is fine. But I had a case where somebody clearly used ChatGPT for a writing assignment, and in the references there were papers by John Doe and Alice Smith. And they came to me and said, why did I get such a bad grade? I said, did you even look at your bibliography? 

[00:14:37] Arvind Karunakaran: Yeah. 

[00:14:38] Russ Altman: You clearly didn't do any of this research. And, uh, they did declare that they had used ChatGPT, so I didn't have to report them for cheating under the rubric of the rules we had established in the class. But this professionalism and taking responsibility is, I think, what is getting managers everywhere very worried, because you need to be able to count on your employees. And there's trust, another word that you write about a lot. This accountability, I think, really goes to the core of professionalism. 

[00:15:07] Now, I want to do a little pivot, but I don't think it's going to be a huge pivot. Because I know that a lot of your work is on platforms. 

[00:15:13] Arvind Karunakaran: Yes.

[00:15:14] Russ Altman: Uh, and they have disrupted a lot of the workplace. And I know they're also impacted by AI. But let's start the conversation about platforms by just defining what you mean by platforms in your work, and why you find them so important to study. 

[00:15:28] Arvind Karunakaran: Absolutely. So by platforms, I use the definition that economists use. There is a computer science version of the definition too. But economists think about platforms as some form of intermediaries that connect two or more sets of actors. So they view Apple iOS as a platform, because it connects third-party app developers with consumers such as you and me. 

[00:15:51] Russ Altman: Okay. And Uber, would Uber be a platform? 

[00:15:52] Arvind Karunakaran: Uber would be a platform. So you can think of platforms in two ways. One, a platform performs a matching function, matching two or more sets of actors. Second, it allows people to extend its capabilities, like Apple iOS or Android. 

[00:16:04] You can use some basic functionalities and extend them. Uh, so those are the two. I've studied both these types of platforms, with a specific focus on cloud computing platforms and gig work platforms such as Upwork and companies like that.

[00:16:20] Russ Altman: And so what do you find fascinating about them? Why are they worthy of your study? 

[00:16:23] Arvind Karunakaran: I mean, economically, it's neither a firm nor a marketplace, right? It is some kind of interesting hybrid. They have a marketplace that they manage, but not in the way a typical market operates, right? It's very managed; it's almost like a walled garden, right?

[00:16:42] So that's interesting, which means that all the issues we talked about, power and accountability and trust, play out very prominently in those settings. Two, a lot more people are reliant on platforms. I mean, I just took note of how many platforms I use in a day for my everyday work and life. There are quite a few, actually. And yet there's not so much attention placed in social science, economics, and sociology research on how we understand and manage these platforms, as opposed to firms, on which there's a lot of work, right? How we incorporate accountability, for instance, is a big topic.

[00:17:18] Russ Altman: Yes.

[00:17:18] Arvind Karunakaran: So that's what fascinated me to understand what works, what doesn't. How could we make these platforms better? Not just for the platform owner, but for all the actors involved. For the gig workers, for the app developers, and so forth. 

[00:17:31] Russ Altman: The platforms seem, again, I'm not an expert, but they seem to concentrate a lot of power in the people who are building the platform. They can have amazing multipliers, where they make an investment in the platform but then get access to a huge workforce. And I'm guessing this is interesting to you because of the authority; they wield lots of power. I think most Uber drivers would say that the Uber algorithm is incredibly important to their life, literally, in terms of earnings power. The app developers are acutely aware of any change to the app store and its rules. So, do you find these things problematic, or is it just something that needs to be managed but is not fundamentally a problem? 

[00:18:18] Arvind Karunakaran: I mean, it's a complicated answer, obviously. My sense is, I mean, the counterfactual is, what would the gig workers do without these platforms, right? 

[00:18:28] Russ Altman: Yeah.

[00:18:28] Arvind Karunakaran: So in some sense, it is obviously providing them an opportunity. And I've studied a lot of freelancers working in India and other places. It provides huge economic opportunities for mobility. So that's definitely there. Uh, but is it sustainable, in some sense? I've seen instances where an algorithmic change happens overnight and where people show up in the results really changes. Like in the context of restaurants, 

[00:18:58] Russ Altman: Yes. It can be devastating. 

[00:18:59] Arvind Karunakaran: It can be devastating. It can be almost existential for them, whether it's showing them on page one versus page four. And that happens through, at least to them, no fault of theirs. That's one. Second, there's so much precarity and uncertainty that the workers on these platforms experience, right? So could we make their life a little bit easier and more predictable, at least? Uh, so those are some of the things I'm interested in.

[00:19:24] Russ Altman: This is The Future of Everything with Russ Altman, more with Arvind Karunakaran next.

[00:19:43] Welcome back to The Future of Everything. I'm Russ Altman. I'm speaking with Arvind Karunakaran from Stanford University. 

[00:19:48] In the last segment, we explored how AI is being used across many organizations, and how issues of trust, accountability, authority, and power dynamics in the workplace are affected by the introduction of AI. It's very different from other technologies that have turned the world upside down in the past. 

[00:20:08] In this segment, Arvind will tell us about some initial observations about experimentation with AI within the workplace, and how best to encourage it among employees. He'll also give managers some tips about how they can think about AI and make sure it's used productively in the workplace.

[00:20:28] You're actually studying how people are using AI in the workplace now and the kinds of experiments they're doing to learn how AI can help them in their mission. Tell us what you've learned. 

[00:20:39] Arvind Karunakaran: Absolutely. Uh, to give a little bit more context: in my view, one thing that makes Gen AI quite unique, compared with other types of technologies, is that it does not have, relatively speaking, any prebuilt features or functionalities per se. If you think of a CRM or an ERP system or a medical decision support system, it has features and functionalities; you could train the users on what those are and let them use it, build communities of practice, and things like that. 

[00:21:09] But with Gen AI, there are no fixed features and functionalities. They get discovered in use, as people use it more, embedded as a part of their everyday workflow. Only then do they come to realize, oh, I could use it for this purpose, for this task, that task. But they will not even try out these technologies, let alone experiment, if they don't trust them. 

[00:21:32] The trust is not just in the technology alone, but rather in the messaging that management gives about what they're trying to use AI for. What our team has observed studying the implementation of AI, where it goes really well, where people are experimenting widely, as opposed to where they're not even logging on to it, basically boils down to the previous conversation we had, Russ: how is it positioned? 

[00:22:02] If it is positioned just as a productivity or efficiency improvement tool, which is how a lot of organizations and managers position it, use Gen AI to do your work faster, you would think that people would be impressed by it or want to try it out. But actually it creates some kind of a psychological barrier. 

[00:22:22] People ask, okay, what if I am more productive today? What if we as a group of paralegals, for instance, are more productive today? What would happen to us as a group two years, five years from now? Are we going to be less powerful? Are we going to get consolidated, or even replaced, right? 

[00:22:41] Russ Altman: Right. 

[00:22:42] Arvind Karunakaran: So therefore the messaging matters. What we found was that pure productivity-enhancement messaging oftentimes does not cut it in encouraging people to experiment with and use the technology. Rather, what worked was some kind of job-enrichment framing or positioning, right? 

[00:23:00] Where you could use Gen AI not just to do your boring tasks faster, but also to create space for the more interesting, more complex tasks you would then be able to do. We've seen instances across multiple contexts, the law firm context we talked about, but also in ad agencies: people are a lot more willing to experiment if you frame it as job enrichment, right? 

[00:23:26] Russ Altman: Yeah, that really does make sense. 

[00:23:28] Arvind Karunakaran: Yes, yes. 

[00:23:29] Russ Altman: And, uh, you've said the word messaging a couple of times, and it seems incredibly important, because the management needs to make it clear that you will not be penalized for playing around with the AI. Because it's going to look like you're playing around; that's what all research initially looks like, until you make it more formal. And they might say, well, you could have been working on that brief, but instead you were playing around. So they really need to say, we will embrace your playing around, and we'd love to know what you learn, blah, blah, blah. And are people starting to do this in an organized way? 

[00:24:04] Arvind Karunakaran: Absolutely. I mean, not so organized yet, but common findings are emerging, so perhaps in the future. I've observed instances where companies are not only allowing people to experiment, but also formally assigning time where people could actually learn this new technology, play around.

[00:24:24] And, uh, as a part of their everyday time reporting, that helps a lot. Some form of continued commitment from management on training and reskilling investments matters a lot for what actually motivates people to use this technology more. 

[00:24:38] Russ Altman: Yes. 

[00:24:39] Arvind Karunakaran: Uh, in part because it's also very easy to lose trust, given what we know about the issues with hallucination. So in the same law firm, in one of the divisions, the managers positioned the technology just as a productivity improvement tool. The very first time the technology throws up a wrong contract clause, for instance, people say, oh, see, you know, this is not reliable, 

[00:25:02] Russ Altman: And it's one strike and you're out.

[00:25:04] Arvind Karunakaran: One strike and you're out or two strikes, you're not interested. 

[00:25:07] Russ Altman: Yeah. 

[00:25:07] Arvind Karunakaran: Whereas in the other division, where the managers framed it more as a job enrichment tool, use it to do more complex tasks, they were willing to try to understand why it hallucinates, where it works. There was even some kind of common knowledge sharing among the paralegals. So how you motivate people to use it and experiment matters a lot for, in turn, realizing the value of these technologies.

[00:25:30] Russ Altman: The points you're making are so salient. Just to mention, I'm on a committee where we're evaluating the use of AI at the university by our colleagues on the faculty. And we interviewed many, and I was shocked at the number of colleagues who have never tried ChatGPT or something like it and have shown no interest in that.

[00:25:49] And, you know, of course it's fine. These are very distinguished colleagues; I would never tell them what to do. But I was stunned, at least in a few cases, by a kind of apparent lack of curiosity. But maybe it was more than that. It might have been more of these trust issues that you're pointing to. And maybe they've actually done a calculation, because none of them, none of them, are idiots. They might have said, you know what, I'm going to stay away from that technology for now, until I and others understand it better. And that leads me to my last set of questions, which is: I'm sure you have managers coming to you saying, what should I do about AI in my workplace?

[00:26:25] I want to be a good manager. Um, you just outlined one of the things they might do, this idea of actually giving permission and time. Uh, but what is the general approach for somebody on the ground who is just trying to get their job done and create a productive and happy workplace? What's the advice about how to think about AI?

[00:26:45] Arvind Karunakaran: Yeah, again, I don't want to give recommendations in a premature manner. But from what we have observed so far, there are things that work, as patchwork, in bits and pieces. One is, again, explicit messaging that it's not just a productivity improvement tool; it is a tool to do more complex work and enrich your job. Oftentimes that has shown positive impact. 

[00:27:12] Second, rethink training in very fundamental ways, right? Corporate training, workforce development, and skill development programs. Right now, that is one area, across multiple industries, where efforts are very limited, right? Typically it's, you watch a video, or there's some informal ad hoc group.

[00:27:33] You could do a lot more. So the other thing we have done, in some experiments internally within companies, is ask: could we use a task-specific chatbot? Say there's someone in the supply chain who does negotiations with vendors; there are a lot of use cases for Gen AI there.

[00:27:53] You develop a negotiation bot, around the task they do almost every day, and use it to teach them how to use Gen AI, as opposed to a generic training video, right? That has shown tremendous improvement in people's willingness to use it, trust it more, experiment, and understand what it is good for and not good for at this point.

[00:28:12] Russ Altman: I'm so happy to hear you say that, because when I have talked to these colleagues who have never interacted with ChatGPT or anything else, my advice is: ask ChatGPT things about an area that you know extremely well. Don't ask it to come up with a shopping list. Don't ask it to help plan your vacation to Paris, although I did that and it worked great. Take the area of your particular expertise and ask it a hard question or give it a hard task, because that's how you can immediately get them engaged; this is what they care about the most. And then, good or bad, I think they will be engaged by the answer. If the answer is much better than they're expecting, you see their eyebrows go up, and all of a sudden they're interested in seeing how far they can go. 

[00:29:02] Arvind Karunakaran: Yeah. 

[00:29:02] Russ Altman: And if the answer is terrible, that actually gives them a little sense of where their expertise is, in some ways, non-obvious. And AI has not yet figured out what they know. So that's super, uh, interesting. Uh, and, um, and very helpful. And what about the platforms? We talked about platforms a moment ago. Is there any kind of special advice about things to be aware of in these platform relationships? The reason I mention it is, as you pointed out, there's this kind of mandatory, usually but not always, software layer that's separating the, uh, employer from the, well, the contractors or the gig workers.

[00:29:43] How should they, either the gig workers themselves or the employers, how should they think about the AI opportunities, given that they have this barrier? 

[00:29:51] Arvind Karunakaran: Oh yeah, uh, there are a lot of fascinating issues emerging there too. One issue, uh, especially in the context of platforms, and gig work platforms specifically, is that, uh, people are still trying to figure out what the acceptable versus unacceptable uses of AI are from a client's perspective.

[00:30:12] So, uh, on one of the platforms, for instance, that does a lot of gig work for freelancing, like graphic design, some of the clients were really unhappy that one of the gig workers using the platform tried to use, like, a Gen AI technology to generate the outputs.

[00:30:32] But others were not, uh, right? So how do we think about what is appropriate or not appropriate? And that changes over time too. So that's a pretty big, uh, issue.

[00:30:44] The second is, uh, a lot of the freelancers, the gig workers themselves, are using Gen AI. And, uh, for the company it's a little bit hard to figure out how much time the work actually takes them,

[00:30:57] Russ Altman: Ah, for payment. 

[00:30:59] Arvind Karunakaran: For payment, right? And this is not just an issue with platforms and gig work; it cuts across both. At one of the ad agencies, uh, we are studying, the graphic designers could do things much faster, right? But there is really no incentive for them to actually report how long it took to generate a particular banner or design using AI. So you have to, again, rethink more structural things around incentives and how you make sure of that.

[00:31:27] Russ Altman: Oh, I love that, because now the fact that they have this very lucrative barrier of software is allowing things to happen on the other side of the software that they can't easily track. And so trust, power, it all shifts in very interesting ways. Well, I'm sure,

[00:31:43] Arvind Karunakaran: Sorry, uh, it's about finding some sort of common ground, right? Right now a lot of the interests are skewed towards the platform owners or, uh, even the managers, for that matter. So where could we find the meeting point where there's something for the workers or the employees, where they get something out of it, like learning new skills.

[00:32:03] Russ Altman: Yes, yes. 

[00:32:04] Arvind Karunakaran: Or more opportunities, and the organization gets something out of it too. That is where I think there is currently a lack of thinking, a lack of research.

[00:32:13] Russ Altman: Yes, this is great. This is the idea of AI as a democratizer in some way, which is not how you usually think about it. 

[00:32:20] Thanks for tuning into this episode. You know, we have more than 250 back catalogue episodes that are available to you on a moment's notice: twenty-four to thirty minute conversations about a wide variety of topics.

[00:32:35] If you're enjoying the show or if it's helped you in any way, please consider rating and reviewing. We love to get those fives, but we'll take whatever's the truth. You can connect with me on X or Twitter @RBAltman, and you can connect with Stanford Engineering @StanfordENG.