The future of language learning

An expert in language learning says that there is much to be gleaned from studying similarities and differences in learning between children and AI.
How does baby babble turn into a nuanced understanding of language? And what does our understanding of this process mean for AI? | iStock/divulikaia

Cognitive scientist Michael Frank studies differences in how children and AI learn language.

There is a “data gap” between the billions of words ChatGPT has to work with and the millions of words a toddler is exposed to. But, says Frank, children learn in a rich social context that supports their learning. He’s currently conducting the “BabyView Study,” where he puts cameras on young children's heads to help him understand their learning experience, as Frank tells host Russ Altman on this episode of Stanford Engineering’s The Future of Everything podcast.


Transcript

[00:00:00] Michael Frank: There's been a long tradition of trying to use artificial intelligence and, you know, the precursors to the current systems to try to understand kids' learning, with the idea that these could be real scientific models that we could study in detail. And a lot of that history was really cool, but the models we were looking at were not very capable. So suddenly when something like ChatGPT explodes onto the scene, all of a sudden, we're thinking, wow, we could really learn a lot.

[00:00:31] Russ Altman: This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you enjoy The Future of Everything, please follow or subscribe wherever you get your podcasts. This will guarantee that you never miss an episode. 

[00:00:43] Today, Michael Frank will tell us that both children and AI learn language, but there are big differences and some similarities. It's the future of language learning. 

[00:00:55] Before I get started, please remember to follow the podcast if you aren't doing so already. Hit the bell icon if you're listening on Spotify. This ensures that you'll get alerted to all the new episodes and will never miss the future of anything.

[00:01:15] You know, we've all seen the miracle of how children learn languages. As babies, they start out, they babble, they do random things, random noises, but then they start to acquire a few words, they begin to put them together into sentences, and before you know it, they're fluid, they can speak, they can interact. Not only that, they can understand cultural cues, social cues. They even can infer unstated things that are implied by the things that people say to them. 

[00:01:46] Well, Michael Frank is a professor of psychology at Stanford University and an expert on how children acquire language. He studies not only how they learn the words, but he also studies how they use language to understand the world, the social interactions in the world, and basically how the world works.

[00:02:05] So Michael, you study children and how they learn language. This is particularly interesting these days when we're seeing AI systems also learning language. Do they have anything to do with one another? 

[00:02:16] Michael Frank: Absolutely, Russ. Yeah. So, this is fascinating. It's an amazing moment to be in cognitive science of language because we're seeing for the first time other agents that are at least able to produce grammatical sentences of English, and sometimes they even seem to make sense and really, uh, be meaningful in some way. So, uh, there's been a long tradition of trying to use artificial intelligence and, you know, the precursors to the current systems to try to understand kids' learning, with the idea that these could be real scientific models that we could study in detail. And a lot of that history was really cool, but the models we were looking at were not very capable. So suddenly when something like ChatGPT explodes onto the scene, all of a sudden, we're thinking, wow, we could really learn a lot from what it means to just kind of learn English, learn a natural language from observing enough data. Of course, there are still big gaps. 

[00:03:09] Russ Altman: Yeah, so, you know, I have to confess this right at the very start. I have a 16-month-old granddaughter, and so I am watching her in real time, uh, learn language. And I also know a little bit about large language models, and one of the things I know, and I think everybody knows, is they've been exposed to a ton of data. And I watch my granddaughter, and the way she's learning language is not because of being exposed to a ton of data, although it's an extremely rich environment that she's in. So, tell me how you as a kind of professional in this area, how do you try to learn things when the two learning methods seem to be very different?

[00:03:46] Michael Frank: Well, so it creates a really exciting scientific question. We think of this as the data gap between human learners and artificial intelligence. And the big question is what is responsible for that data gap? So, first of all, you want to quantify the size of the gap. So, something like GPT-3, which is now, you know, several generations back, was exposed to 500 billion words. And your 16-month-old, she's probably heard 15 million words. 

[00:04:14] Russ Altman: Wow. 

[00:04:14] Michael Frank: So yeah, so it's more data than you might think. 

[00:04:18] Russ Altman: Probably mostly from me. I'm sorry to say, but that's a whole different. 

[00:04:21] Michael Frank: So yeah, a million words a month is a lot of words, actually.

[00:04:27] Russ Altman: Does that include repeat words?

[00:04:28] Michael Frank: Oh yeah. So that's, uh, being immersed in this, uh, you know, environment where people are talking to you. 

[00:04:35] Russ Altman: Yes. 

[00:04:35] Michael Frank: Or around you and you know, that's counting kind of everything. Probably the amount that's actually. 

[00:04:41] Russ Altman: Okay. 

[00:04:42] Michael Frank: You know, something you could really learn from as a baby, as opposed to your parents talking about the credit card bill is pretty different.

[00:04:47] Russ Altman: Right. 

[00:04:48] Michael Frank: But then you mentioned kind of what are the big differences, right? The language that your granddaughter is hearing is social, it comes in this kind of complex context where there's something happening, something being communicated about. It's grounded in the world around her. It's got all kinds of extra multimodal information, meaning like lots of different sensory modalities. She's got vision, she's got touch alongside the sound. 

[00:05:14] Russ Altman: Yeah.

[00:05:14] Michael Frank: Um, so there's lots more richness to what she's experiencing in those millions of words than what ChatGPT gets from its many hundreds of billions.

[00:05:23] Russ Altman: Great. Okay. So, I want to step back because, you know, you were doing this way before ChatGPT came in, and I want to make sure we look at some of the things you're doing. One of your approaches involves lots of data, you know, um, big data approaches. And so, could you tell us about some of the resources that you've set up, and how you use them to learn about, uh, language acquisition?

[00:05:43] Michael Frank: Absolutely. So, uh, despite the fact that, you know, language learning, if you think about it, is a big data problem, a lot of the earlier approaches to studying language were very bespoke: it was studying one child, maybe, uh, somebody keeping a diary, or doing an experimental study of 15 kids at the nursery school. And one of the things we've learned is that that kind of small data approach doesn't really work because of the diversity of outcomes for kids even in one culture, and then the vast diversity of the languages and cultures that kids are learning in around the world. So, what I've tried to do is really build up databases that give us some rich quantitative information about what language learning looks like around the world.

[00:06:25] One of those, our flagship, is called Wordbank, where we've got maybe close to 100,000 kids' worth of data, um, from something like 50 languages or dialects around the world. Uh, and what we've got actually is what parents say about them. So, we've got parents checking off my child knows ball and dog and, uh, cup, but not alligator. 

[00:06:46] Russ Altman: Okay.

[00:06:46] Michael Frank: Um, and so for each child, we've got this kind of, uh, holistic assessment by the parents, sometimes many times, sometimes just one time, uh, of what they know, what they're saying and what they, what the parent thinks they understand. Um, and that allows us to look at variation in all sorts of really interesting ways.

[00:07:02] Russ Altman: And the parents are good reporters in your experience? Like there's not a lot of, uh, I don't know, projection of their hopes and dreams of the child onto your data set. 

[00:07:11] Michael Frank: Well, um, so this is part of the craft of, uh, kind of doing good research in this area. You don't say, hey, um, you know, how brilliant is your child? Check very brilliant, exceedingly brilliant, or, you know, off the charts and also adorable. Um, you know, what we do is we ask questions about observed behaviors that are happening right now, not retrospectively. And we do assume there are some biases involved here. So, there are certain things we have to take with a grain of salt, but in general, people don't have a specific bias that their child does or doesn't say alligator. 

[00:07:44] Russ Altman: Yeah.

[00:07:44] Michael Frank: They either remember that happening or they don't. And so maybe any individual word is a little noisy. Maybe we're not sure if they actually said, um, you know, cat or not, but averaging over thousands of kids, we can actually get quite a detailed measure and we can learn, you know, unsurprisingly that cat is learned before alligator.

[00:08:04] Russ Altman: And now you mentioned, I think you mentioned that it's almost a hundred thousand kids, and I think you said multiple languages. So, are there differences? Are there easy languages and hard languages to learn? Or for a kid, is it all just like bring it on, I'm going to learn whatever we're talking here. 

[00:08:19] Michael Frank: You know, the amazing thing, and I think this is really a discovery, you know, not just by me, but by language acquisition research in general, is that kids really can learn, you know, the whole variety of human languages. And that's for an interesting reason. It's because languages have evolved to be learned by kids. 

[00:08:35] Russ Altman: Yeah. 

[00:08:36] Michael Frank: So, a language can't survive as a language if it's not learnable. But, uh, you know, largely the thing that pops out of our research is how consistent the process of learning is across very different languages. Now, of course, the details are different, the words are different, the grammar is different, but the general factors that influence language learning around the globe appear to be quite similar.

[00:08:57] Russ Altman: Very interesting. Now, I don't know if this is science or just something I read in People magazine. But is there a time when the babbling, like, so my 16-month-old granddaughter is still babbling. There are a few words, but it's mostly, you know, blah, blah, blah, blah. Does the babbling diverge quickly when the child is destined to learn different languages, or does it all kind of sound the same? And like, when do they start sounding like the language that they're ultimately going to speak?

[00:09:24] Michael Frank: Yeah, they actually really do kind of start along that path. So, my father-in-law is a speech pathologist, and he loves to say that what, um, babies are doing is playing the instrument, right? The way when you pick up a guitar, you could just kind of go, strum, strum, strum, strum, strum. And you get kind of the open strings. You don't get music. 

[00:09:43] Russ Altman: Yes.

[00:09:43] Michael Frank: But you get something that sounds guitar like. And babies are kind of exploring their vocal instrument, or for sign language, uh, acquiring babies, their manual instrument in kind of the same way. So, you see these language-like elements start to emerge as they figure out, oh, that works. Oh, that sounds kind of like what I'm hearing. Uh, it looks kind of like what I'm seeing in the case of the signs. 

[00:10:04] Russ Altman: Great. So that actually makes really good sense. And I love that analogy to instruments. So, in addition to Wordbank, I'm wondering, you made a reference to the fact that when human kids are learning language, it's a multimodal experience. They're seeing things, they're hearing things, they're learning. And I know you've studied this, the social cues. Um, do you have any ability to collect that kind of data? Um, I could imagine it would both be useful for you, and it might give a hint to the AI guys about how they might be able to be more efficient in their learning. So, what's the status of our ability to study these multimodal elements of language learning? 

[00:10:42] Michael Frank: Well, creating those kinds of data sets is one of the biggest challenges right now for us developmentalists. So actually, about 10 years ago, we started creating a resource that's become very useful. Uh, it's called SAYcam. So, the idea was that we had, for the first time, these little cameras that we'd put on kids' heads, and we tried it a bunch in the lab, and it was fun. And then, uh, a couple of parents and I got together. Um, I actually didn't have a kid at that time, that's how long ago it was. And they said, we're going to try this two hours a week, you know, every week from six months until, you know, the kid stops wearing it, sometime around two and a half to three. And we created this corpus of videos from the child's own perspective.

[00:11:22] Russ Altman: Wow. 

[00:11:23] Michael Frank: And critically, uh, these parents were developmental psychologists who were committed to their craft and were willing to let these sometimes somewhat unflattering videos out, um, not in public, but in a research repository for other researchers to use. 

[00:11:37] Russ Altman: Yeah.

[00:11:37] Michael Frank: And as a result, there's really exciting research being done with these videos. 

[00:11:43] Russ Altman: That is really exciting. So, have we gotten to the point where we can, like, get some preliminary results from that? Or is it still very much in the infrastructure building phase? 

[00:11:51] Michael Frank: So, some super cool results are actually coming out really soon, doing machine learning on one kid's experience from this corpus. So, you can really get a deep dive into what you can learn from just a little bit of one child's life. 

[00:12:03] Russ Altman: And I can see that it was, it must have been like tracking their head movements, so you can really see where they're focusing. 

[00:12:08] Michael Frank: But, you know, this was still, by machine learning standards, pretty small data. So, we collected 400 hours, and that's like about eight or 10 percent of one child's waking hours for one year. And so of course, you know, these greedy AI folks, um, so one of my collaborators, a wonderful guy named Dan Yamins, came to me and said, you know, this was great, and I'm sure it was very hard for you to collect, but could you just do something like 10 times as much? So that's really what we're doing now. We've got this study in progress called the BabyView Study, uh, which is essentially a baby GoPro. We hooked up a GoPro rig for, um, very small kids. And we're sending these things out, uh, you know, all over California and all over the US, um, and having parents put them on their kids once or twice or even three times a week and gathering these, uh, data that really reflect what the child's experience is like. And so that's going to create, hopefully, you know, once it's done, an even more useful resource, both for scientists who want to characterize what kids see, and also for AI folks who want to train models a bit more baby-like than ChatGPT. 

[00:13:12] Russ Altman: That leads me actually to the next set of questions I wanted to ask, which is like, when you're getting this data, you're going to see what they're looking at and who they're taking cues from. And I know you've looked at, um, you know, language is not just language. It's a social tool. And you've looked at the social development of children. Can you give us some examples of how language acquisition kind of dovetails with the acquisition of social skills and being able to pick up social skills? I actually think this is very important also, because I've heard that people are worried about, um, cell phones and the potential that they're stunting some development of basic human capabilities in terms of social interactions. So, I don't know if that comes up in your work, but is there anything there to say? 

[00:13:54] Michael Frank: So just starting out with what, uh, social learning looks like. I mean, if you think of the baby as kind of a code breaker, this is kind of funny. But think about, you know, your granddaughter there sitting on the floor and the parents are talking about the credit card bill. How do you break that code? You've never heard of a credit card bill. You can't see one. It's not there. They're talking really fast. But if you think about, um, now her parents talking to her in a kind of a grounded social interaction, there's toys in front of us. They're giving kind of short, repetitive phrases that are really grounded in something that she's interested in, the toy she's playing with right now. 

[00:14:30] Russ Altman: Yes.

[00:14:31] Michael Frank: And you know, that's part of maybe a routine or a game where she can kind of figure out the rules. Um, and they use all these rich cues, like looking at the thing together, they're pointing to it. Um, all of that is going to create a situation where she can break that code. You can figure out what the name of something is, and maybe once you know the name, you can figure out what that word means, like kick the ball, right? So, then you're saying, kick the ball. Yeah, good job. Kick the ball with your foot. Um, so you get this, uh, set of cues that can help you kind of progressively break into more and more of this code when, uh, especially when it's grounded in these kinds of social interactions that follow the child's lead and give them all those social clues and contextual clues.

[00:15:12] Russ Altman: Now, I think you've also done some, what looked to me like, fascinating work showing that children actually evaluate, like, how reliable the person is who they're interacting with. Like, I'm guessing that mom is rock solid, but other people, it's like, I don't know, I don't know how much I'm going to believe what they're saying. So, um, what have you found about kids' ability to kind of, I'll use the word decode, like the, um, the, uh, reliability of someone they're interacting with and how much they can really latch on to the words that they're learning? Any surprises there? 

[00:15:45] Michael Frank: Yeah, so I mean, we don't think that kids are going around being super skeptical of all the people around them, like, I'm not going to learn from you, you know. But at the same time, it is really, you know, interesting and useful generally, as a social agent who's trying to learn about the world, to figure out who to learn from. You know, uh, you want to take lessons with the expert in some sense. 

[00:16:03] Russ Altman: Yes. 

[00:16:03] Michael Frank: And that can just mean which of your peers is better at this skill or kind of more reliable. And so developmental psychologists often use that idea to kind of figure out what kids know and how actively they're seeking out information. Um, so we did this one study where we looked at, you know, kind of how kids, uh, process uncertainty and we'd put out a ball and a, another toy, you know, whatever, uh, like a box or something. We'd say, okay, look at the ball. And the kid goes, okay, here's the ball. And then you say, uh, and where's the blicket? And the kid looks at you like, huh? Which one's the blicket? You got to tell me here. And so, we actually were studying that huh response to try to figure out if the kids were actively tracking and looking to their social partner for more clarifying information and what they would do. And you know, of course, what they do at different ages changes. Maybe the, the littler kids are just puzzled. The bigger kids may actually ask a question or point to it, um, or make a guess and kind of try to get some affirmations. So, you can really see these active learning strategies where the kids are helping figure out that code by their own actions. 

[00:17:09] Russ Altman: You know, just going back to my situation, you know, as the grandfather, I just spent a lot of time just staring at her. Uh, and it's very clear that her mother and her big brother are two incredibly important sources of information that she tracks noticeably more than everybody else in the room. And so, this kind of all, um, makes sense. 

[00:17:31] This is The Future of Everything with Russ Altman, more with Michael Frank next.

[00:17:48] Welcome back to The Future of Everything. I'm Russ Altman and I'm speaking with Professor Michael Frank from Stanford University. 

[00:17:54] In the last segment, Michael told us about some of the ways that he is gathering data in order to learn across multiple cultures and geographies how children learn language.

[00:18:05] In this segment, he's going to tell us about an exciting project where he collaborates with people all over the world to increase the size of the database and the ability to quickly test hypotheses about what's universal and what's specific about childhood language acquisition. 

[00:18:22] Michael, another project you have is called ManyBabies, which I love the name. Uh, what are the goals of ManyBabies and how is it kind of advancing your research? 

[00:18:31] Michael Frank: We are generally interested in trying to understand how child development varies across different cultures and communities and individuals. And, you know, um, if your traditional strategy is studying 16 kids at a nursery school in Palo Alto, you don't get a lot of traction on that kind of topic. So, what we did, uh, you know, starting almost now, you know, seven, eight, nine years ago is sit down with a group of folks interested in this larger scale science and say, how could we distribute the same experiment around the world? How could we get not just a couple of babies, but many babies? 

[00:19:05] Russ Altman: Many babies. Gotcha. 

[00:19:06] Michael Frank: So, we ended up with this consortium of researchers all over the world. Um, and we decide on something that's really critical, something where we really have a lot of uncertainty about exactly what, uh, babies know, um, or what they can do. And then we create a consensus protocol and experiment. Often it's as simple as, you know, playing babies infant-directed speech, like that high, squeaky, like, hi, look at me! Do you see this? Um, and then. 

[00:19:34] Russ Altman: Is that what we call it? That's great. Infant-directed speech. I spend a lot of time doing that. 

[00:19:38] Michael Frank: Oh yeah. They used to call it motherese, but, you know, uh, sometimes dads and granddads are, uh, experts. So, um, our first ManyBabies study was actually studying the global preference of babies for infant-directed speech. And around the world, babies like it when you talk high and squeaky to them. It catches their attention, they focus longer. Even, you know, if in their culture they're not hearing a lot of infant-directed speech, they still like it. Even if the infant-directed speech is in English and they're not learning English, it still sounds kind of appealing to their perceptual system. It draws their attention. 

[00:20:11] Russ Altman: So, this is fascinating because there's a whole bunch there. But, um, in terms of the organization, you've basically created a worldwide network where hypotheses can be tested. You all agree to do it. And then, uh, it also sounds very kind of just, in the sense of justice, because you're doing it not just in an English-centric way, but you're learning much more general principles about human language acquisition. So, it sounds great. Who supports this kind of work? 

[00:20:36] Michael Frank: Well, you know, to be honest, this is actually one of the projects that I've had the hardest time finding financial support for. Uh, you know, and this is maybe a sad fact about the way that science funding works, right? When you say, okay, I've got a consortium of, you know, top universities in the U.S. that's going to study something, let's bring it to the National Institutes of Health or the National Science Foundation. But if you say, I want to give $3,000 to a lab in, you know, um, in Uganda to, uh, redo this study, it gets a lot harder to find somebody who's willing to send that money, even though it's a tiny amount of money and it makes a big difference to that lab to buy them the monitors and the speakers. This kind of global consortium work where you flexibly send a little bit of money to incentivize a collaboration, it's actually very tough to do. But so much of our work has been volunteer, which says a lot about, I think, the enthusiasm globally for this kind of research.

[00:21:28] Russ Altman: Exactly. No, it sounds great. And it really is a good model. It's obviously a good model for making sure that the findings are generalizable. Okay, just moving to another topic, you write a lot about pragmatics in your work, and sometimes you use it in distinction to other concepts like vocabulary. So, can you tell me what pragmatics is in language acquisition and in kids' development? 

[00:21:52] Michael Frank: Yes, so we've been talking a lot about social cognition and the social use of language. And one of the things about being an adult who uses language all the time is we don't even recognize how much we infer about the social context of language. So, like, for example, if I say, you know, I gave an exam in my class and some of the students passed the test, you jump to the conclusion, well, that must have been a hard exam, not all of them passed the test. I didn't say that, it could have been that some, actually even all, of them passed. But so that's an example of a little teeny tiny inference you make, a little jump in logic that you make. And we think of this as kind of the advanced mode of, um, you know, solving all these social inference questions that we were talking about with word learning. 

[00:22:35] Russ Altman: Yeah. 

[00:22:35] Michael Frank: So, what we've been interested in over the past 10 to 15 years is how those abilities to make little social inferences develop in young children. And a lot of the time what we do is just create kind of more kid-friendly ways to ask the question. So instead of talking about some of the students passing the test, we say, you know, uh, here's a puppet with a hat and glasses, and here's a puppet with just glasses. And you say, um, okay, my friend has glasses. Which one is my friend? 

[00:23:03] And the kids, not so much the three-year-olds, but three-and-a-half-year-olds, four-year-olds, they start to say, oh, glasses and not a hat, they make the inference.

[00:23:12] Russ Altman: I see. 

[00:23:13] Michael Frank: And so that's a really kind of fun way to, um, you know, in a kid-friendly way, look at this little logical leap that they're making.

[00:23:21] Russ Altman: Yes. And I know that, uh, in the AI research community, there's a lot of concern that these systems don't have what people are calling common sense. Uh, which is related, I think, to some of the things you're talking about here, where we have these underlying models of the world that are not ever articulated unless, you know, unless it's the study that you're focusing on. And that can lead to, um, weird behavior from AI that you don't usually get from humans. 

[00:23:47] Michael Frank: Yeah, absolutely. And so, kind of at the forefront of AI safety is like, well, okay, the AI should try to infer what you mean and get at your intent rather than the kind of literal, uh, meaning. You don't want to, you know, um, uh, leave some kind of critical loophole in your instructions to the AI, kind of à la 2001: A Space Odyssey, right?

[00:24:09] Russ Altman: Yeah. Right. 

[00:24:10] Michael Frank: So, it's really important that the, uh, these sorts of agents really be thinking in a charitable way and a human like way about the intentions of the other person who's talking to them. And that's what pragmatics is. 

[00:24:21] Russ Altman: Yep, thank you. Okay, so in the last couple of minutes, I want to, this is the segment where I'm going to ask you about like all of these things that parents want to know. Everybody wants their kids to acquire language, they want them to be fluent, they want them to have a big vocabulary. I mean, most people. So, tell me, we always hear that it's important to read to infants. So, it's almost now a truism. Like, I don't know what data this is based on. Is that true? And can you give me a feeling for why it is versus all the other ways you could expose them to language? Like, for example, you could tell me maybe, well, Russ, don't read to a child because it's a two-dimensional book and the world is much richer in three dimensions, and I would rather have them go out and interact with the world.

[00:25:06] So. What about reading to kids? How important is it and why? 

[00:25:09] Michael Frank: So, reading to kids is great. It's really fun to read to kids. They like it. It's enjoyable. Uh, and it's also, uh, you know, an important part of setting them up to learn about not just language, but also different kinds of language, right? You get different syntactic structures, you get different ways of describing the world, fiction, nonfiction, and so forth. Um, and if you think about the code breaking challenge that I was talking about. 

[00:25:31] Russ Altman: Yeah, yeah. 

[00:25:31] Michael Frank: Especially for a little one, a book is a great kind of key to that code. You're really showing them, okay, here's the picture of the thing, we're both looking at it, I'm going to name it for you, I'm going to describe it to you. Maybe it's going to be in a memorable rhyme or a song or something like this. So, books are fabulous. 

[00:25:47] On the other hand, you know, the biggest thing that I learned about parenting and about, uh, kind of input to kids is a little bit of humility. Like, when you look at these giant databases, the thing that pops out is that kids vary tremendously in their routes into language. Some kids learn very quickly, some kids learn slower. Now, uh, if a kid is way on the slow side, you want to talk to a pediatrician or speech-language pathologist and get intervention. But within the normal range, there's a lot of variation, and I think a lot of parents want to just kind of accelerate it, push it forward as fast as possible. But a lot of that is outside of our hands, and so, you know, the thing that's best for the kid and also for the family is to have fun. You know, read a book if you want to read a book, play with the kid if you want to play with the kid, um, you know, enjoy time together, have that rich social interaction in a way that's fun, and not worry about, you know, the big data problem of shoving more words into their ear. 

[00:26:39] Russ Altman: Right. Okay, so forgive me for these next questions, but, uh, how early is it important to expose a baby to language? Uh, I am one of those fathers who spoke to my wife's belly during pregnancy because I wanted the kids to recognize my voice. I mean, any science on that? 

[00:27:00] Michael Frank: Yeah, so, um, even in utero, uh, there is some auditory learning. There are some classic studies that show, um, sensitivity to voice and to the rhythm of speech, um, really right at birth, so from, uh, learning before birth. So that's really true. And, um, one of the things that modern developmental methods show us is that even when babies look like they're, you know, not able to talk when they're six, nine months old, you still get these traces of language knowledge that are building up, and that is really important.

[00:27:32] So it's never too early to interact in this rich social way with language. Um, that doesn't mean you need to read them Anna Karenina. If you want to... 

[00:27:40] Russ Altman: Right. 

[00:27:41] Michael Frank: If you're already reading Anna Karenina and it sounds fun, go for it, I love the novel. But it can be in a way that's appropriate to your child and to your family. 

[00:27:50] Russ Altman: Yes.

[00:27:51] Michael Frank: But it's, you know, it is valuable for the kid. 

[00:27:54] Russ Altman: And so finally, what about kids who are learning multiple languages? Uh, either two languages, like, you know, I don't know, Spanish and English, or sign language and English. Are there any special considerations or understandings that have emerged from those kids? And I know it's much smaller numbers, perhaps, but it's always fascinating to see kids who are in these environments where they're learning multiple languages simultaneously. 

[00:28:18] Michael Frank: Well, it's amazing because it's much smaller numbers in the U.S., but it's probably the modal, it's probably the, you know, the most common way that kids learn in the world. So, actually, bilingualism or multilingualism is really the norm, and the amazing thing is just how easy it is. I guess I like to think about it as like, you know, uh, nobody ever asks, hey, you know, you're riding a bike and learning to swim, but when you're on the bike, don't you, you know, do all this swimming? And you say, no, I'm on a bike. In the same way, if you've got contexts that support different languages, kids learn to use those languages in those contexts, and they do it fluidly and seamlessly. 

[00:28:54] You know, there's a tiny little, um, measurable, but kind of, we think, pretty insignificant delay at the point when the kid is figuring out, oh, there are two languages and you've got to use them in these ways. 

[00:29:05] Russ Altman: Right. 

[00:29:05] Michael Frank: But overall kids really do sort it out in a lot of different situations and they do so without much, uh, you know, without really measurable negative consequences. In fact, with a really huge measurable positive consequence, which is that they learn more languages.

[00:29:21] Russ Altman: So, we, and finally, we all know that kids are amazing at learning languages, especially compared to adults. When does that magical ability to like learn a language, when does it start to taper off and they become more brittle like me, for example? 

[00:29:35] Michael Frank: Well, this is fascinating, right? Because, you know, you could go head-to-head with, um, your granddaughter, and you'd probably memorize a lot more vocabulary words a lot faster if you, you know, both started learning, you know, Sesotho or something. Um, but what would happen is that you'd ceiling out, you'd top out a little bit sooner, because, you know, uh, you start learning these kinds of fixed grammatical rules, maybe your accent never gets that good. And she's going to go slow and steady, but eventually win the race and get a good accent speaking that language and really, um, you know, figure out all those complex endings that go on things.

[00:30:11] So it's less that the ability goes away and more that there are real changes in the way we learn that seem like they're happening gradually, maybe over the course of the teenage years. So, you know, you put an elementary school student in a new language context, and they largely achieve native-like proficiency if they're given long enough. But at least by your 20s, that seems to decrease. 

[00:30:35] Russ Altman: Thanks to Mike Frank. That was The Future of Language Learning. You have been listening to The Future of Everything podcast with Russ Altman. With close to 250 episodes in our library, you have instant access to lots of good stuff about The Future of Everything.

[00:30:51] If you're enjoying the show and learning something, please rate and review. Why not give us five stars? That'll help us grow, and it'll help others learn about the podcast as well. You can follow me on Twitter or X @RBAltman, and you can follow Stanford Engineering @StanfordENG.