
Assistant Professor of Management Science and Engineering and, by courtesy, of Sociology and of Computer Science

May 2017
I look at public policy through the lens of computer science. For example, I use data and statistics to detect bias and improve decisions in the criminal justice system.


My background is in math, and I didn’t always work on policy problems. About five years ago I was living in New York City, and was struck by how the courts dealt with stop-and-frisk. A key question at the time was whether the practice violated Fourth Amendment protections against “unreasonable searches and seizures.” Standard practice is for judges to review the facts in a case and then decide. But with millions of recorded stops, there is a limit to that traditional approach. I thought there must be a better way, one that drew on tools from machine learning. My collaborators and I analyzed the data and discovered that nearly half the stops were based on very little objective evidence. Ever since then I’ve been forging ahead on this path of thinking computationally about public policy.
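Roughly, the analysis can be thought of as fitting a model of the ex-ante chance that a stop turns up a weapon, then flagging stops where that chance is very low. The sketch below uses synthetic data, a standard logistic regression, and an illustrative cutoff; the features and numbers are hypothetical stand-ins, not our actual model or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for recorded stops (features are hypothetical,
# e.g., reported suspect behavior, location, time of day).
n = 10_000
X = rng.normal(size=(n, 5))
p = 1 / (1 + np.exp(-(X @ [1.2, 0.8, 0.3, 0.0, 0.0] - 3)))
y = rng.binomial(1, p)  # 1 = stop recovered a weapon

# Model the ex-ante probability that a stop recovers a weapon.
model = LogisticRegression().fit(X, y)
hit_prob = model.predict_proba(X)[:, 1]

# Stops with very low predicted hit probability were made on little
# objective evidence (the 1% cutoff here is purely illustrative).
low_evidence = hit_prob < 0.01
print(f"{low_evidence.mean():.1%} of stops fall below the cutoff")
```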

I’m currently building tools to help judges make better bail decisions – ones that aren’t prone to the implicit biases of unaided human judgments. Computers are typically better than humans at estimating risk. Algorithms can pick out which pieces of information matter and which should be ignored. And, unlike human judges, algorithms apply consistent standards. Analyzing over 100,000 judicial decisions, we find that an algorithm could detain half as many defendants without increasing the number who fail to appear at trial. In other words, a lot of people who pose very little risk are being needlessly detained. There are huge social and financial costs to such unnecessary detention, including family separation and job loss.
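The core idea is risk ranking: estimate each defendant’s chance of failing to appear, then detain only the highest-risk cases under a fixed detention budget. Here is a toy sketch of that recipe with synthetic data; the features, model choice, and 20% budget are all illustrative assumptions, not our actual system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic case records (features are hypothetical, e.g., age,
# prior failures to appear, charge severity).
n = 20_000
X = rng.normal(size=(n, 6))
logit = X @ [1.5, 1.0, 0.5, 0.0, 0.0, 0.0] - 2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = failed to appear

# Fit on past cases, score new ones.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]

# Detain the k highest-risk defendants; release everyone else.
k = int(0.2 * len(risk))  # detention budget (illustrative)
detained = np.argsort(risk)[-k:]
released = np.setdiff1d(np.arange(len(risk)), detained)
print(f"failure-to-appear rate among released: {y_te[released].mean():.1%}")
```

Because the released group excludes the highest predicted risks, its failure rate can stay low even as the number detained shrinks, which is the intuition behind the finding above.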

But it’s important to remember that algorithms aren’t a complete fix. Algorithms are good at narrowly estimating risk, but they can’t set policy. They can’t tell you how many people to detain, or whether we should end money bail altogether, as some cities have done. They can’t tell you how much to invest in pretrial services or what those services should be. And they can’t incorporate every factor in every case, so we still need humans to make the final decisions.

I didn’t always think of myself as an engineer. My perspective shifted when I came to Stanford and started building tools to tackle policy problems. I want to go beyond understanding how systems function; I want to act on those insights to design better systems. To me, that’s what engineering is all about. I work in the Department of Management Science and Engineering (MS&E). The common thread in the department is a collective interest in designing complex systems, from improving the actions of individual decision makers to guiding the emergent behavior of teams, organizations and markets.

On June 19, my colleagues and I will release a massive new dataset of police stops. We’ve spent the last two years collecting and analyzing information on more than 100 million traffic stops across the country. This effort, which we call the Stanford Open Policing Project, is a first-of-its-kind collaboration between the School of Engineering and the Computational Journalism Lab. By sharing the dataset, we hope researchers, journalists, government officials and community members will dig even further into it to understand and improve police practices.
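As a starting point, here is one way someone might load a released stops file and compute simple summaries. The file name and column names below are hypothetical placeholders, not the project’s actual schema.

```python
import pandas as pd

# Hypothetical file of stop records released by the project.
stops = pd.read_csv("state_patrol_stops.csv")

# Stop counts and search rates broken out by driver race
# (column names are assumed for illustration).
summary = (
    stops.groupby("driver_race")
         .agg(n_stops=("driver_race", "size"),
              search_rate=("search_conducted", "mean"))
)
print(summary)
```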
