
Diyi Yang on language, computers, and societal impact

Yang is a computer scientist who specializes in language technologies, with a focus on building language models that are more socially responsible.
Diyi Yang | Andrew Brodhead

Diyi Yang speaks two – sometimes three – languages. She has lived in different states and countries and knows, from both academic study and personal experience, the wonders and intricacies of the spoken word. 

“I am deeply inspired by the way people who come from different countries interact, exchange information, and socialize with one another using different languages,” said Yang. 

Yang works on natural language processing, which combines mathematical models and linguistic theory to enable computers to communicate, through speech and text, in a human-like way. Specifically, she’s focused on making the technologies that power human-computer (and some human-human) communication more socially responsible through efforts such as quantifying their biases, making sure they include low-resource languages and dialects, and aiming for positive applications, such as use in medical settings and conflict resolution. 

For more about Yang and her work, here’s the who, what, how, and why of Assistant Professor Diyi Yang: 

Who are you? 

I am a computer scientist who loves language. I study and build language technologies to support better human-computer and human-human interaction. I lead the Social and Language Technologies Lab at Stanford CS, where I work with a group of talented students with diverse backgrounds on building socially aware language technologies for social impact. 

What do you research? 

Large language models (LLMs) are artificially intelligent programs that use large amounts of data to perform language-based tasks, and they have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. However, there is growing evidence of, and concern about, the negative aspects of language technologies, such as their lack of awareness of human factors, their biases, and their risks. These issues must be addressed for LLMs to have a larger positive impact. 

My research aims to build socially aware language technologies by developing innovative machine learning algorithms, extending theories in social science and linguistics, and deploying human-centered systems for real-world use, with the goal of both improving machine intelligence and making natural language processing more socially responsible. 

One of our recent research aims is to build machine learning algorithms that prioritize participation from global communities in language technology. To accomplish this, we need to consider how different people vary in their language use, especially in low-resource data domains. For instance, our research has demonstrated that current AI models often exhibit linguistic prejudices. To address this, we develop scalable linguistic resources and efficient transfer learning approaches (techniques that repurpose a machine learning model for a new task) to mitigate these disparities. Our research has contributed inclusive language technologies and linguistic resources for many English dialects, including African American Vernacular English, Chicano English, Indian English, and Appalachian English. We have also developed parameter-efficient fine-tuning techniques to adapt models trained on mainstream data to the speech of underrepresented communities. 
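For readers curious what parameter-efficient fine-tuning can look like in practice, here is a minimal, illustrative sketch (not the lab’s actual code) using the open-source Hugging Face transformers and peft libraries. The base model, LoRA rank, and the idea of a dialect-specific corpus are assumptions made only for illustration; the point is that only a tiny fraction of the model’s parameters needs to be trained to adapt it to a new variety of language.

```python
# Minimal sketch of parameter-efficient fine-tuning with LoRA:
# adapt a pretrained causal language model to a small corpus
# (e.g., a dialect-specific dataset) while keeping the original
# weights frozen. Model name and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "gpt2"  # any causal LM checkpoint would do

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA injects small trainable low-rank matrices into selected layers;
# the pretrained weights stay frozen, so adaptation is cheap to train
# and the resulting adapter is tiny to store and share.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the model

# From here, a standard fine-tuning loop on the new corpus would
# update only the LoRA parameters, leaving the base model untouched.
```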

The other line of our research focuses on building societally important applications with real-world impact, ranging from mitigating biased language to improving patient communication in online cancer support groups. For instance, our work quantifies biases in AI chatbots by showing that existing LLMs are often biased toward Western-centric values and have unintended consequences for global representation. Currently, we are building LLM-empowered AI agents to help people learn diverse social skills via a framework called AI Partner and AI Mentor, in which learners interact with AI partners and receive coaching from AI mentors. So far, we have developed Rehearsal, which helps users practice handling conflicts with a believable simulated AI partner and learn, through feedback from an AI mentor, how to apply specific conflict-resolution strategies. We have also developed CARE, an interactive platform where novice counselors can practice with AI patients and are coached by an AI mentor to improve their counseling skills. We are now deploying these technologies with relevant stakeholders to test how well such systems work in real-world settings. 

How did you end up where you are in your career? 

I’ve always been fascinated by language and how it shapes our interactions with others. After finishing my undergraduate degree at Shanghai Jiao Tong University in China, I moved to Pittsburgh, Pennsylvania, to start graduate school at Carnegie Mellon University. The cultural and language differences were a big part of my transition to Pittsburgh. They also made me realize that language is more than just words and grammar; it is intrinsically linked to both society and culture.

Certainly, there were many challenges, especially for a person whose first language is not English and who is excited about building better language technologies. There were also challenges around how to balance and bridge the technical implementation of algorithms with the abstract social factors embedded in language. Many of these challenges gave me chances to reflect and think, and they gradually transformed into what we are working on now. Today, our lab is working toward advancing AI with a strong emphasis on social awareness and positive impact! 

Why are you passionate about your work?

There are two main aspects that keep me passionate about our work. First, large language models are revolutionizing the way we interact with AI systems, transforming a variety of fields and disciplines. It’s these possibilities, unknowns, and complexities that keep me excited about how we can advance machine intelligence. More importantly, I am passionate about developing a future where humans and AIs can collaborate to achieve greater collective intelligence in a variety of contexts, such as education, healthcare, and the workplace. Second, I enjoy teaching and mentoring! Seeing students succeed in their own paths is incredibly rewarding. Collaboration, especially with domain experts from different research fields, is my favorite way to get at the core of questions at the intersection of computation and society.
