Research & Ideas
Data science takes on racial bias

A group of scholars discuss how medicine, artificial intelligence, criminology, and other fields can better understand and anticipate bias and the ways it manifests in society.

Seeking a safer, fairer, and more just America. | Adobe Stock/Alex, Unsplash/Tingey Injury Law Firm, and Drea Sullivan

Amid a wrenching nationwide conversation on race and policing, a panel of expert data scientists asks, “Are our algorithms racially biased?”

Their answer is, not surprisingly, yes. More importantly, however, they discuss the many ways that data-intensive sciences – from medicine to artificial intelligence to criminology – can better understand and anticipate bias and the ways it manifests in society.

Join moderator Anthony Kinslow II, PhD, an entrepreneur and lecturer in civil and environmental engineering, and his guests: Sharad Goel, an assistant professor of management science and engineering with expertise in computer science, sociology, and the law, and Allison Koenecke, a doctoral candidate in the Institute for Computational and Mathematical Engineering.

Goel founded and directs the Stanford Computational Policy Lab, which specializes in addressing bias in the data platforms that shape governmental decision-making. One of the group’s efforts, the Stanford Open Policing Project, uses data approaches to analyze policing to inform and inspire meaningful reform. Koenecke works on the Fair Speech project, which studies increasingly popular speech recognition algorithms and the ways they may be biased against Black and non-native speech.

In the end, all agree that the problem is real, that it is harmful, and that it must be addressed. The answers will not be easy, but they are within reach, and the reward for success will be a safer, fairer, and more just America.