
The future of robotics

An expert on robotics says that the recent revolution in large language models is metaphorically – and, in some cases, literally – opening new doors in her field.
Are we making progress in building robots that can help with everyday tasks? | iStock/blacklight_trace

Guest Jeannette Bohg is an expert in robotics who says a transformation is happening in her field, brought on by recent advances in large language models.

The LLMs have a certain common sense baked in and robots are using it to plan and to reason as never before. But they still lack low-level sensorimotor control – like the fine skill it takes to turn a doorknob. New models that do for robotic control what LLMs did for language could soon make such skills a reality, Bohg tells host Russ Altman on this episode of Stanford Engineering’s The Future of Everything podcast.



[00:00:00] Jeannette Bohg: Through things like ChatGPT, we have been able to do reasoning and planning at the high level, meaning at the level of symbols, which is very well known in robotics, in a very different way than we could before.

[00:00:17] Russ Altman: This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you enjoy The Future of Everything, please hit follow in whatever app you're listening to right now. This will guarantee that you never miss an episode. 

[00:00:29] Today, Professor Jeannette Bohg will tell us about robots and the status of robotic work. She'll tell us that ChatGPT is even useful for robots. And that there are huge challenges in getting reliable hardware so we can realize all of our robotic dreams. It's the future of robotics. 

[00:00:48] Before we get started, please remember to follow the show and ensure that you'll get alerted to all the new episodes so you'll never miss the future, and I love saying this, of anything.

[00:01:04] Many of us have been thinking about robots since we were little kids. When are we going to get those robots that can make our dinner, clean our house, drive us around, make life really easy? Well, it turns out that there are still some significant challenges in getting robots to work. There are hardware challenges.

[00:01:20] It turns out that the human hand is way better than most robotic manipulators. In addition, robots break. They work in some situations like factories, but those are dangerous robots. They just go right through whatever's in front of them. 

[00:01:34] Well, Jeannette Bohg is a computer scientist at Stanford University and an expert on robotics. She's going to tell us that we are making good progress in building reliable hardware and in developing algorithms to help make robots do their thing. What's perhaps most surprising is even ChatGPT is helping the robotics community, even though it just does chats. 

[00:01:58] So Jeannette, there's been an increased awareness of AI in the last year, especially because of things like ChatGPT and what they call these large language models. But you work in robotics; you're building robots that sense and move around. Is that AI revolution for chat affecting your world?

[00:02:15] Jeannette Bohg: Yeah, very good question. It definitely does, in surprising ways, honestly. For me, language has always been very interesting, but somewhat in the background of the kind of research I'm doing, which is robotic manipulation. With the rise of things like ChatGPT and large language models, suddenly doors are being opened in robotics that were really pretty closed.

[00:02:46] Russ Altman: Metaphorical or physical or both? 

[00:02:48] Jeannette Bohg: Physically, that's a very good question, because physically robots are very bad at opening doors. But metaphorically speaking, and we can talk about that as well, through things like ChatGPT we have been able to do reasoning and planning at the high level, meaning at the level of symbols, which is very well known in robotics, in a very different way than we could before.

[00:03:12] So let's say, for example, you're in a kitchen and you want to make dinner. There are so many steps that you have to do for that, right? And they don't necessarily have anything to do with how you move your hands and all of this.

[00:03:27] It's really just: I'm laying out the steps of what I'm going to do for making dinner. And this kind of reasoning is suddenly possible in a much more open-ended way, right? Because these language models have this common-sense knowledge baked into them. And now we can use them in robotics to do these task plans, plans that really consist of so many steps, and they kind of make sense. It's not always correct.

[00:03:55] Russ Altman: Right, right. 

[00:03:55] Jeannette Bohg: I mean, if you try ChatGPT, you know, it hallucinates. It makes stuff up. That's actually the challenge in how to use these models in robotics. But the good thing is they open up these doors, metaphorically speaking again, to just do this task planning in an open-ended way. And they also allow a very natural interface between people and robots. That's another,
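The kind of high-level task planning described here can be sketched in a few lines: ask a language model for a symbolic plan, then parse its free-text answer into discrete steps a robot could sequence. This is an illustrative stand-in, not the lab's actual system; the `llm_complete` function below is hypothetical and simply returns a canned response in place of a real model call.

```python
import re

# Hypothetical stand-in for a real LLM call; it returns a canned plan so the
# parsing step below has something to work with. No model is queried.
def llm_complete(prompt: str) -> str:
    return (
        "1. Gather the ingredients from the fridge\n"
        "2. Place a pan on the stove\n"
        "3. Chop the vegetables\n"
        "4. Cook and plate the food"
    )

def plan_task(goal: str) -> list[str]:
    """Ask the (stand-in) LLM for a step-by-step plan and parse it."""
    prompt = f"List the steps, one per line, to: {goal}"
    response = llm_complete(prompt)
    steps = []
    for line in response.splitlines():
        # Accept lines shaped like "3. Chop the vegetables".
        match = re.match(r"\s*\d+\.\s*(.+)", line)
        if match:
            steps.append(match.group(1))
    return steps

steps = plan_task("make dinner")
print(len(steps))  # 4
print(steps[0])    # Gather the ingredients from the fridge
```

The parsed list is the "level of symbols" Bohg mentions: each entry still needs to be grounded in actual motion by a separate, lower-level controller.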

[00:04:26] Russ Altman: Great, that's really fascinating. So if I understood your answer, you said that at a high level, here's kind of the high-level script of how to make dinner, you know, get the dishes, get the ingredients. Implied in your answer, I think, is that there's a level of detail that you need to get the robot to do the right things that it's not yet able to specify.

[00:04:49] Are you optimistic that it will be able to do that? Or do you think it's going to be an entirely different approach to, you know, move the manipulator arm to this position and grasp it gently? Do you think that's going to be in the range of ChatGPT, or will that be other algorithms?

[00:05:03] Jeannette Bohg: Yeah. So I think to some extent, again, this common-sense understanding of the world is in there. For example, the idea that a glass could be fragile and you have to pick it up in a gentle way. Or, let's say, you have to grasp a hammer by the handle, and the tool tip of that tool is over here, or something like this.

[00:05:26] These are things that actually help a robot to also plan its motion. Not just this high-level task planning, but actually understanding where to grasp things and maybe how much pressure to apply. But these models still cannot directly generate an action, right? The action that a robot needs to compute is basically: how do I move my hand? Where exactly, every millisecond, or at least every ten milliseconds or so. And that is not what these models do. And that's totally fine, because to do that, they would need completely different training data that actually has this information in it.
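The gap between a symbolic step and what the controller actually consumes can be made concrete: one commanded motion gets expanded into a dense stream of joint-position targets, one every ten milliseconds. The sketch below uses plain linear interpolation, a simplifying assumption; real controllers use smoother velocity profiles.

```python
def interpolate_trajectory(start, goal, duration_s, dt=0.01):
    """Expand one start-to-goal motion into a command every dt seconds.

    start, goal: joint positions as lists of floats; duration_s: motion time.
    Returns one joint-position command per 10 ms control tick, endpoint included.
    """
    n_ticks = round(duration_s / dt)
    commands = []
    for tick in range(n_ticks + 1):
        alpha = tick / n_ticks  # 0.0 at the start pose, 1.0 at the goal pose
        commands.append([s + alpha * (g - s) for s, g in zip(start, goal)])
    return commands

# A one-second reach for a two-joint arm becomes 101 commands at 100 Hz.
cmds = interpolate_trajectory([0.0, 0.0], [1.0, 0.5], duration_s=1.0)
print(len(cmds))  # 101
print(cmds[-1])   # [1.0, 0.5]
```

This is exactly the kind of millisecond-level supervision that is absent from text corpora, which is why language models alone cannot produce it.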

[00:06:09] The actual motion of the robot arm needs to be given to these models in order to do that kind of prediction. And so one of the biggest challenges in robotics is to get to the same level of data that you have in areas like natural language processing or computer vision, the data that models like ChatGPT have consumed so far, right?

[00:06:38] These models have been trained on trillions of tokens, right? Multiple trillions of tokens. I don't know what the current maximum is, but it's a lot. And in robotics, we have more like on the order of hundreds of thousands of data points. The difference is a factor of millions.

[00:07:06] Russ Altman: Now let me just ask you about that, because I'm surprised you say that. In many cases robots are trying to do things that we see humans do in video all the time. On television you could probably find many, many examples of somebody picking up a glass or opening a door. But it sounds like that's not enough for you. In other words, these pictures of people doing things don't turn into useful training data for the robot. And I guess that kind of makes sense, although I'm a little surprised that we haven't figured out a way to take advantage of all of that human action to inform the robot's actions. So talk to me a little bit about that.

[00:07:43] Jeannette Bohg: Yeah, yeah. This is a very interesting question. So the data that I said is too little right now, in comparison to natural language processing and computer vision, is really data that has been directly collected on the robot.

[00:07:54] Russ Altman: Okay. So it's examples of the robots themselves reaching and touching.

[00:07:58] Jeannette Bohg: Yeah. And that's painstakingly collected with joysticks and things like that, right? It's very tedious. That's why I don't think it's possible to get to the same level of data. But you bring up a very good point, right? Like YouTube. I mean, I'm watching YouTube all the time just to figure out how to do something, right?

[00:08:16] How to repair something, or do this and that. And yeah, we are learning from that, and we are learning when we are growing up from our parents or whoever is showing us how to do things. And we want robots to do exactly the same. That is a super interesting research question. But the reason why it's a research question and not solved is that in a video, you see a hand of a person, for example. But this hand, like our hand, sorry, I actually cut myself.

[00:08:46] Russ Altman: Yes, I see that. For those who are listening, there's a little Band-Aid on Jeannette's hand. 

[00:08:51] Jeannette Bohg: But our hand is actually amazing, right? We have these five fingers. It's even difficult to count how many degrees of freedom and joints our hand has, but it's twenty-seven or something like that. It's soft; it can be very stiff, but it can also be very compliant. It's an amazing universal tool. And our robot hands are completely different. Unfortunately, I don't have one here, but basically it's like a gripper. Very simple. And because of that, it's very limited in what it can do. And it might also need to do the tasks that a person does in a completely different way.

[00:09:30] Russ Altman: I see, I see. 

[00:09:31] Jeannette Bohg: To achieve the same task, if it's even possible at all. And so if a robot looks at a video of a person, it needs to somehow understand: okay, how does this map to my body, right? My body only has two,

[00:09:47] Russ Altman: Yeah, so it's like, if somebody was watching Vladimir Horowitz play the piano, it's not very useful to show them a YouTube of Vladimir and say, just play it like that, because he can do things that we can't do.

[00:09:59] Jeannette Bohg: That's exactly right. And I've heard that Rachmaninoff, for example, had these insanely big hands, and therefore he could play his pieces. Basically, in order to play them, you had to have a very specific distance between your thumb and your pinky, for example,

[00:10:20] Russ Altman: Span, the span of your, 

[00:10:21] Jeannette Bohg: Yeah. 

[00:10:21] Russ Altman: Okay. So that's a really good answer to my question: the videos are relevant, but the robots are not dealing with beautiful human hands. So there would have to be a translation of whatever was happening in the video to their world, and that would be difficult.

[00:10:37] Jeannette Bohg: Yes, that is difficult. But people are looking into this, right? Like that's a super interesting research question on actually how. 

[00:10:43] Russ Altman: And the upside, as we've talked about, is that you would then have lots and lots of training data, if you could crack the code of how to turn human actions in video into instructions for robots. Okay, that's extremely helpful.

[00:10:57] But I want to get to some of the technical details of your work, because it's fascinating. Before we get there, though, another background question about the goal for the robots. I know you've written a lot about autonomous robots, but you've also talked about how robots can work with humans to augment them.

[00:11:16] And I want to ask if those are points on a continuum. It seems like autonomous would be different from augmenting a human, but maybe in your mind they work together. So how should we think about this, and what should we expect the first or second generation of robotic assistants to be like?

[00:11:34] Jeannette Bohg: Yeah, this is a very good question. So first of all, I would say yes, these are points on a spectrum, right? There are solutions on a spectrum from teleoperation, where you basically puppeteer a robot to do something, which is typically done for data collection, to, on the other end of the spectrum, the fully autonomous humanoid that we see in movies. Right.

[00:11:59] Russ Altman: That's like the vacuum cleaner in my living room, my Roomba. 

[00:12:02] Jeannette Bohg: Right, right. Exactly. Yeah. That one is definitely autonomous. 

[00:12:05] Russ Altman: It seems fully autonomous to me. I have no idea when it's going to go on or off or where it's going to go. 

[00:12:12] Jeannette Bohg: Yeah. Nobody knows. Nobody knows. 

[00:12:15] Russ Altman: Forgive me. Forgive me. 

[00:12:16] Jeannette Bohg: You bought it. I also had one once, back in the day, and you know, I just turned it on and then I left, because I knew it would take hours and hours to do what it needed to do.

[00:12:26] Russ Altman: I'm sorry, that was a little bit of a distraction. But yeah, tell me about this spectrum.

[00:12:31] Jeannette Bohg: Yeah. So I think there are ways in which robots can really augment people. For example, theoretically, they could have more strength. It's not my area, but there are lots of people who build these exoskeletons or prosthetic devices, which I actually also find really interesting. They're typically very lightweight and have an easy interface. So that's interesting, but they can also support people who have to lift heavy things, for example. So I think that's one way you can think about augmentation of people to help them. Another one is maybe still autonomous, but it's still augmenting people in a way.

[00:13:15] So one example I want to bring up, and this is a shout-out to Andrea Thomaz and Vivian Chu, who are leading a startup called Diligent Robotics. I recently heard a keynote from Andrea at a conference, and I thought they did something really smart, which is they went first into hospitals to observe what nurses are doing all day, right?

[00:13:37] Like, what are they doing with their hours? And to their surprise, what nurses really spend a lot of time on was just shuttling supplies between different places, instead of actually taking care of patients, which is what they're trained to do and really good at. Why are we using them to shuttle stuff around?

[00:13:55] And so what they decided is: oh, we actually don't need a robot to do the patient care or do the stocking or whatever. What we actually need is a robot that just shuttles stuff around in a hospital, where it still needs a hand to push elevator buttons and door buttons and things like that. Or maybe opening a door again, right? Like we had in the beginning. And I thought, oh, this is such a great augmentation, if you want. The nurses can now spend time on what they're really good at, what they're needed for, and what they're trained for, which is patient care, and stop worrying about where the supplies are, or where things like blood samples have to go.

[00:14:36] Russ Altman: And it sounds like it might also create a, I don't know, I'm not going to say anything is easy, but a slightly more straightforward engineering challenge to start. 

[00:14:45] Jeannette Bohg: Right. So I think we're so far away from general-purpose robots, right? I don't know how long it's going to take, but it's still going to take a lot of time. And I think a smart way to bring robotics into our everyday world is through ideas like the ones from Diligent Robotics, where you really go and analyze what people quote-unquote waste their time on. It's not really a waste of time, of course. But, you know, it could actually be done in an automated way, to give people time for the things they're really good at and that robots are still very bad at.

[00:15:18] So I think we will probably, hopefully, see more of this in the future. Very small things. You can think of Roomba, for example, doing something very small, and I don't know how good it is, but it's good enough,

[00:15:37] Russ Altman: Compared to ignoring our floors, which was our strategy for the first twenty-five years, this is a huge improvement. Because now, even if it's not a perfect sweep, it's more sweeping than we would do under normal circumstances. 

[00:15:49] Jeannette Bohg: Yeah, I agree with that. So I think these small ideas, right, that are not, again, this general-purpose robot, but some very smart ideas about where robots can help people with things that they find really annoying, and that are doable for current robotic technology. I think that's what we will see in the next few years. And again, they are still autonomous, but they are augmenting people in this way.

[00:16:16] Russ Altman: Right. That resolves what I thought was a tension; you just explained why it's not really a tension. This is The Future of Everything with Russ Altman. More with Jeannette Bohg next.

[00:16:41] Welcome back to The Future of Everything. I'm Russ Altman, your host, and I'm speaking with Professor Jeannette Bohg from Stanford University. 

[00:16:47] In the last segment, we went over some of the challenges of autonomous versus augmenting robots. We talked a little bit about the data problems. And in this next segment, we're going to talk about hardware. What are robots going to look like? How are they going to work? How expensive are they going to be? I want to get into kind of a fun topic, which is the hardware. You made a brief mention of the hands and how amazing human hands are, but the current robotic hands are not quite human yet.

[00:17:14] Where are we with hardware? What are the challenges, and what are some of the exciting new developments?

[00:17:20] Jeannette Bohg: Yeah. Hardware is hard. That's a saying in Silicon Valley, I've been told recently. But yeah, I think hardware in robotics is one of the biggest challenges. And I think we have very good hardware when it comes to automation in places like factories that are building cars and all of this. It's very reliable, right? And that's what you want. But when it comes to the kinds of platforms that we want to see at some point in our homes or in hospitals, these platforms have to be equally robust and durable and repeatable. And we're not there. We're not there. Literally, I'm constantly talking to my students, and they're constantly repairing whatever new thing is broken again on our robots. I mean, it's constant.

[00:18:12] Russ Altman: But it's interesting to know, just to interrupt you. The folks at Ford Motor Company and the big industrial players have figured it out. Is it a question of budget? Do they just spend a lot of money on these robots, or are they too simple compared to what you need? I just want to explore a little bit why those industrial ones are so good.

[00:18:30] Jeannette Bohg: Yeah, that is a very good question. First of all, they are still very expensive robots. They still cost multiple tens of thousands of dollars. But they also follow a very specific methodology, which is that they have to be very stiff. Not like our arms, which are naturally kind of squishy and give in to whatever we may be bumping into. These robots are not, right? They're going to go, no matter what, to the specific point you sent them to. That is just the way they are built, and maybe that's also why they are so robust. But they are dangerous, right?

[00:19:15] Russ Altman: Yes. 

[00:19:15] Jeannette Bohg: So that's why they're in cages, and people can't get close to them. And that's of course not what we want in the real world. The kinds of robots that we work with in the research world are more geared towards: when can we bring them into someone's home, or have them at least work alongside a person in warehouses or things like that. And this technology, I think, is just not quite as mature and as robust. It's also not produced at the same scale; there are just not as many copies of those as there are of the industrial robots. And I think they're just not as optimized yet.

[00:19:53] Russ Altman: So when you said the robots cost tens of thousands of dollars, are those the robots you're using? 

[00:19:58] Jeannette Bohg: Uh, yeah. 

[00:19:59] Russ Altman: That your students are fixing all day?

[00:20:01] Jeannette Bohg: Yes, unfortunately, this is exactly right. I spent so much money from my lab on buying forty-thousand-dollar robot arms, or seventy-thousand-dollar robot arms. That's the kind of money we need to spend to have the research platforms we need to show and test our results. So, for example, one of the projects we have is a mobile manipulator. It's a robot arm on top of a mobile platform. Think of a Roomba with an arm, maybe just way more expensive. It's more like,

[00:20:36] Russ Altman: A forty thousand dollar Roomba. I gotcha. 

[00:20:39] Jeannette Bohg: At least. Yeah. So think about that. And that project was really fun. It's using this mobile manipulator to clean up your house. So it's basically talking to you to figure out: oh, what are your preferences? Where do your dirty socks go? Where do your empty Coke cans go? And then, from your few examples, it compresses that down to some general categories of where to put stuff.
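The "compress a few examples into general categories" step can be sketched as a two-stage lookup: first map an object to a category, then map the category to a receptacle. In the actual system a language model performs the generalization; here a hand-written table stands in for that model, and the object names and receptacles are made up for illustration.

```python
# Hypothetical category table standing in for the LLM's generalization: from a
# few user examples ("dirty socks go in the laundry basket", ...) the system
# infers rules covering whole categories of objects.
CATEGORY_OF = {
    "sock": "clothing", "shirt": "clothing",
    "coke can": "recyclable", "bottle": "recyclable",
    "lego piece": "toy", "action figure": "toy",
}

# Generalized placement rules distilled from the user's examples.
RECEPTACLE_FOR = {
    "clothing": "laundry basket",
    "recyclable": "recycling bin",
    "toy": "toy chest",
}

def where_to_put(obj: str) -> str:
    """Map a newly seen object to a receptacle via its category."""
    category = CATEGORY_OF.get(obj, "unknown")
    return RECEPTACLE_FOR.get(category, "ask the user")

print(where_to_put("shirt"))   # laundry basket
print(where_to_put("banana"))  # ask the user
```

The payoff of the category layer is that objects the user never mentioned, like the shirt above, still get routed correctly, while genuinely novel objects fall back to asking the user.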

[00:21:05] And so that's the robot we did a project on, and people are very excited about it. They loved it. It's even throwing stuff into bins; it's like a basketball star, in a way. And researchers loved it too, because of this mobile base. Basically, the thing on wheels,

[00:21:29] Russ Altman: Yeah, it can move around. 

[00:21:30] Jeannette Bohg: That one is very unique. It was a donation from a company. It has specific capabilities, but only three of its kind exist in the world, and people can't buy it, which is very disappointing. So again, these are the arms that we are constantly repairing, and it's even scary, because if we lose this platform, we can't do our research.

[00:21:58] Russ Altman: Right.

[00:21:58] Jeannette Bohg: So one of the things I'm doing for the first time in my lab, and again, I'm a computer scientist, not a mechanical engineer, is working with one of my students on how to develop a low-cost version of this mobile base that has these special abilities and is very maneuverable.

[00:22:17] My hope is that with this platform, first, I hope it's reliable, but if not, at least you can cheaply repair it and get in there, even if you're a student who is a computer scientist, not a mechanical engineer. And I hope it allows you to buy many of these platforms rather than just one that you have to baby all the time. We will hopefully open-source all of this design.

[00:22:47] And then what I'm really excited about is to use this low-cost platform to do maybe swarm-based manipulation, with many robots collaborating with each other.

[00:23:00] Russ Altman: So in your current view, what would be the basic functionality of one of these units, or is that flexible? Is it a hand? Is it two hands? Is it mobile like a Roomba?

[00:23:12] Jeannette Bohg: Yeah, you could think of it as a Roomba-plus-plus, basically, which has an arm. So it's not just vacuuming your floor; it's actually putting things away. For those who have children, I think they are always most excited about this, what we call TidyBot, because it puts things into the right places, instead of you stepping on those Lego pieces in the middle of the night, right?

[00:23:39] So that's what we're going for. It would be one mobile base with one arm and one hand. And then let's say you have multiple of them. You could, for example, think of when you have to move, right? I personally think moving to another place is, I mean, the worst, right?

[00:23:59] Russ Altman: Packing, packing and unpacking is the worst. 

[00:24:01] Jeannette Bohg: Packing, unpacking, but also carrying stuff around. So imagine you have this fleet of robots that just helps you get the sofa through these tight spaces and all of this. So that's kind,

[00:24:11] Russ Altman: Paint a picture for me of this version one-point-oh. How tall is it? Are we talking two feet tall or five feet tall? How big is it?

[00:24:19] Jeannette Bohg: Now you're getting me with the feets and the inches. 

[00:24:22] Russ Altman: I'm sorry. You can do meters, whatever works.

[00:24:25] Jeannette Bohg: Okay. Yeah. So the base is actually fairly low, and actually pretty heavy, so that it has a low center of mass. It's probably, I guess, a foot tall. Or let's say twenty centimeters.

[00:24:39] Russ Altman: Yeah. 

[00:24:39] Jeannette Bohg: Um, and then the arm, if it's fully stretched out and just pointing up, it is probably like one and a half meters long on top of that. 

[00:24:48] Russ Altman: That's five feet or so. 

[00:24:50] Jeannette Bohg: Really fully stretched out, which it usually isn't when doing stuff. It's like,

[00:24:54] Russ Altman: But then it could reach things on tables. That's what I was trying to get at. It could reach tables; it could maybe reach into the dryer or the washing machine, or stuff like that. It might be within range.

[00:25:05] Jeannette Bohg: All of this, and also just making your bed.

[00:25:09] Russ Altman: Yeah, I hate that. 

[00:25:11] Jeannette Bohg: Yeah, terrible. 

[00:25:11] Russ Altman: So let me ask, since we're talking about what it looks like. In so much sci-fi, robots seem to have to look like humans. What's your take on that? Is it important that the robot look like a human, or maybe it's important that it not look like a human? Where are you in this whole humanoid debate?

[00:25:29] Jeannette Bohg: Okay, this is a very good question, and I'm probably going to say something contentious, or maybe not, I don't know. But I think building a humanoid robot is really exciting from a research standpoint. And it just looks cool. It gives you these super cool demos that you see from all these startups right now,

[00:25:49] Russ Altman: Right, right.

[00:25:49] Jeannette Bohg: On Twitter and elsewhere. I mean, it looks very cool. I just personally don't think it's the most economical way to think about what the most useful robot is. The arguments are typically: oh, but the spaces that we walk in and work in and live in are all designed for people. So why not make a robot platform that has the same form factor and can squeeze through tight places and use all the tools and all of that? It kind of makes sense to me.

[00:26:25] But again, coming back to my earlier point, I think general-purpose robots are really, really far away. And I think the closer future, not the future of everything, but the future in the next few years, is maybe going to look more like very specific-purpose robots that are maybe on wheels, because that's just easier, right? You don't have to worry about all of that. And they can do relatively specialized things in one environment, like going to a grocery store and doing restocking, or things like that.

[00:27:04] Russ Altman: I've also heard that you have to be careful about making it humanoid, because then humans might impute to it human capabilities and human emotions. And by having it look like a weird device, it reminds you that this is indeed a device, and maybe the user interaction might be more natural and less misled, because you don't treat it like it's a human, and that might not be the goal. In other cases, like care for the elderly, maybe you want it to look humanoid because it might be more natural. But okay, that's a very, very helpful answer.

[00:27:37] Jeannette Bohg: Yeah, I think this is a very good point, actually, that people probably attribute much more intelligence, however we want to define that, to a humanoid robot than to something like the TidyBot that we had, right? Which is just one arm. It really looks very robotic, I have to say.

[00:27:55] Russ Altman: So what is the outlook to finish up in the last minute or so? Where are we with this platform? And when are you going to start shipping? 

[00:28:04] Jeannette Bohg: We published this on Twitter, basically, and there were lots of people asking: how much money? When can I buy this? And yeah, again, we're pretty far away from having a robot that we can just literally give you and it's going to work, right?

[00:28:19] There's so much engineering. I think you can probably compare it to autonomous driving: you get fairly easily to ninety percent, but then the rest of it is all these corner cases that you have to deal with, and it's going to be really hard. So I don't want to make a prediction of when we're going to have this. Again, I think it's going to be more special-purpose robots. And again, maybe a Roomba with an arm is not so far away, right?

[00:28:48] Russ Altman: I love it. I love it. And I know that in the academic world, ninety percent and cheap will lead to a lot of innovation. 

[00:28:56] Jeannette Bohg: Right. That's the other point: when is it affordable, right? Nobody's going to buy a robot that costs as much as a luxury car, right?

[00:29:04] Russ Altman: Right. 

[00:29:05] Jeannette Bohg: That can't even do anything really well. 

[00:29:07] Russ Altman: Right. 

[00:29:08] Thanks to Jeannette Bohg. That was The Future of Robotics. 

[00:29:11] Thanks for tuning into this episode. With more than 250 episodes in our archives, you have instant access to a whole range of fascinating conversations with me and other people. If you're enjoying the show, please consider sharing it with friends, family, and colleagues. Personal recommendations are the best way to spread the news about The Future of Everything. You can connect with me on X or Twitter, @RBAltman. And you can connect with Stanford Engineering @StanfordENG.
