Simple rules could promote more trust in the criminal justice system
As police departments and criminal courts around the country begin to use complex decision-making software to address racial bias in the judicial system, Sharad Goel, assistant professor of management science and engineering, recommends something simpler: an index card.
According to new research, simple rules (on an index card or a mental checklist) can be nearly as accurate as complex algorithms when designed using a method called “select-regress-and-round,” which selects a handful of key factors, fits a simple regression model to them, and rounds the resulting weights to small whole numbers.
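The select-regress-and-round recipe can be sketched in a few lines of code. The version below is a minimal illustration under stated assumptions, not the authors' implementation: it substitutes a simple correlation ranking for the paper's lasso-based feature selection, fits the logistic regression with plain gradient descent, and runs on synthetic data. The function name and every parameter here are hypothetical.

```python
import numpy as np

def select_regress_and_round(X, y, k=2, M=3, lr=0.1, steps=2000):
    """Sketch of select-regress-and-round:
    1. select: keep the k features most associated with the outcome
       (simplified here to absolute correlation; the paper uses lasso),
    2. regress: fit a logistic regression on the kept features,
    3. round: rescale the coefficients and round them to integers in [-M, M].
    """
    # 1. select features by absolute correlation with the outcome
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(corr)[::-1][:k]

    # 2. logistic regression via batch gradient descent on kept features
    Z = X[:, keep]
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Z @ w))       # predicted probabilities
        w -= lr * Z.T @ (p - y) / len(y)       # gradient step on log loss

    # 3. rescale so the largest weight has magnitude M, then round
    w_int = np.round(M * w / np.max(np.abs(w))).astype(int)
    return keep, w_int

# Toy data: the outcome is driven mainly by the first two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(float)

keep, w_int = select_regress_and_round(X, y, k=2, M=3)
print(keep, w_int)  # the integer weights are the "index card" scoring rule
```

The resulting rule is just a short weighted checklist: add up the integer weights for the factors that apply, and compare the total to a threshold.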
Goel, who runs the Stanford Computational Policy Lab, says that computerized decisions do help curb bias, for instance when a judge is deciding whether a defendant gets released on bail. However, computerized decisions are often opaque: the defendant can’t see how a decision is made. This can lead to an erosion of trust.
Goel, MS&E PhD student Jongbin Jung, and co-authors from NYU, John Jay College and Microsoft Research developed a straightforward procedure for creating simple rules that are easily explainable. For the criminal justice system, the goal is to increase transparency to foster trust, without compromising public safety. “Simple Rules for Complex Decisions” is currently under review, and we talked with Goel to learn more about it.
Tell me about simple rules for complex decisions.
Computers are often much better than humans at determining the likelihood of an event, like the chance that you’ll get into a car accident or that a defendant will fail to appear at their trial. Because of that, there has been a push in criminal justice, medicine, employment and beyond for computer-aided decisions. But one downside of this approach is that these algorithmic decisions are often “black box,” meaning you don’t know how the computer came to its conclusion. In many contexts, including the criminal justice system, that opacity can lead to perceptions of unfairness. You have your day in court and the computer says, “Oh! You’re high risk!” So, the judge isn’t going to grant you bail. People think, “Why should I believe what the computer is saying?” We need a way to explain why a decision has been made so that people feel the process has been fair.
We set out to see how well simple, interpretable rules compare to complex decision algorithms. These simple rules, which you can write down on a note card and explain to people, can dramatically improve upon human decisions. And what’s surprising is that in many cases there is very little loss in accuracy between the simple rules and the complex algorithms.
So, computer-aided decisions can be fair, but people won’t trust them unless they understand them?
In many cases, including the criminal justice system, I don’t think we can say, “Just trust the algorithm — it’ll make the best decision. Don’t worry about the details.” For good reason, people worry that decisions are not always made with their interests in mind. Without transparency, it’s hard for people to trust the system. Our “simple rules” work not only helps people make better decisions, but does so with a transparency that fosters trust.
When you were interviewed on This American Life recently, you said that it seems that many people don’t trust science and statistics anymore. What did you mean by that?
I think it’s tempting for people to dismiss evidence that conflicts with their political or social beliefs. My students and I might spend a couple thousand hours analyzing a dataset, trying to be as careful and objective as possible, but it’s still hard to persuade people that we haven’t stacked the deck, that we’re not driven by a partisan agenda. One of the reasons I’m drawn to engineering as a discipline, and to the “simple rules” work in particular, is that the goal is to come up with concrete solutions to important problems. Giving someone a tool to make better, less biased decisions has more impact than just telling them something they probably don’t want to hear.