Which is more fair: a human or a machine?

A team of researchers harnesses the variability of human decision making to compensate for two flaws in machine-learning models: their inability to factor in the unknown and the unknowable.

Judgment Call: Could AI help us subvert unconscious bias? | Illustration by Stefani Billings

Companies, governments and organizations all over the world use artificial intelligence to help make decisions that were formerly left to human judgment.

Their expectation: that the algorithms underpinning AI are more efficient and more fair than is humanly possible.

But how do humans and machines compare in terms of fairness, that is, in making the optimal decision given the available information? It turns out science can’t say for sure. Humans are prone to all sorts of biases, but algorithms have their own challenges.

These begin with how algorithms are trained, by feeding them “labeled” data. To train a computer to spot the difference between cats and dogs, for instance, the system is fed pictures labeled “cat” or “dog” until the algorithm is able to make that distinction on unlabeled pictures. It’s trickier, however, to train computational systems to perform tasks that require human judgment.
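
To make the cat-and-dog example concrete, here is a minimal sketch of supervised training, using scikit-learn and made-up feature vectors standing in for the labeled pictures (the numbers and features are purely illustrative):

```python
# A minimal sketch of supervised learning on labeled examples.
# The feature vectors below are invented stand-ins for image features.
from sklearn.linear_model import LogisticRegression

# Each row is a (hypothetical) feature vector; labels mark cat (0) or dog (1).
X_train = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)

# Once trained, the model can label pictures it has never seen;
# this should print something like [0 1].
print(model.predict([[0.85, 0.2], [0.15, 0.8]]))
```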

Machine learning specialists can find examples of past decisions made by humans and feed these to an algorithm, but we can only feed computers information about known outcomes.

We can’t know what would have happened if the decision had gone the other way. Computer scientists call this the problem of “selective labeling.”

There is a second challenge to training computational systems to make judgments: There may be no way to know all the factors that influenced the human decision makers whose past actions were presented to the algorithm. Computer scientists call this the problem of “unobserved information.”
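
A toy sketch (not from the paper) can make both problems concrete. In the hypothetical records below, an outcome is recorded only when bail was granted, which is the selective labeling problem, and a factor such as family support never appears as a field at all, which is the unobserved information problem; all field names are invented for illustration:

```python
# Toy illustration of selective labels and unobserved information.
# Each record is a past bail decision; 'outcome' is observed only when
# the defendant was released, and nothing here captures the "soft"
# information (e.g. family support in the courtroom) the judge saw.
past_decisions = [
    {"risk_score": 0.2, "released": True,  "outcome": "appeared"},
    {"risk_score": 0.7, "released": True,  "outcome": "skipped"},
    {"risk_score": 0.9, "released": False, "outcome": None},  # unknowable
]

# Selective labels: only cases with an observed outcome can be learned from.
trainable = [d for d in past_decisions if d["outcome"] is not None]
print(f"{len(trainable)} of {len(past_decisions)} cases carry a label")
```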

Computer scientists have long known about these two related challenges and have tried to take them into account. But in a new paper, Jure Leskovec, associate professor of computer science, and collaborators propose a new approach to this training conundrum: Study the track records of the human decision makers whose past examples will be part of the training mix, and choose those whose past decisions seem to have produced the desired, if fuzzy, outcome, such as fairness.

Hima Lakkaraju, a PhD student in computer science at Stanford, worked with Leskovec on this new approach, which they devised during the course of co-authoring a scientific paper about a situation where selective labeling and unobserved information occurs: the process of granting or denying bail to a defendant charged with a crime and awaiting trial.

Such decisions have historically been made by judges, but in recent years also by algorithms, with the goal of making the system better and thus fairer. By law, defendants are presumed innocent until proven guilty, and bail is generally granted unless the defendant is deemed a flight risk or a danger to the public.

But deciding whether to grant bail is more complicated than distinguishing between cats and dogs. The training system still has to show the algorithm past decisions made by humans. Trainers can label whether the defendants granted bail by a human judge appeared in court or skipped town. But there is no label to identify the defendants who were denied bail and put in jail but would have broken bail had they been set free. Put another way, labeled outcomes are selective: more likely to exist for some defendants than for others.

Likewise, human judges may be influenced by variables that seem reasonable but can’t be recorded in the training materials. Leskovec offered the example of a judge who makes a mental note of whether the defendant’s family appears in court to show support. But the training system may not capture this unobserved information, which played a critical part in the judge’s decision. Algorithms will never record all the “soft” information available to judges, so how can they be expected to make better, fairer decisions?

Leskovec and collaborators propose to take what seems like a problem—the variability and opaqueness of human judgment—and make it an input to the training system. Their method harnesses the natural variability among human judges to train systems on the judgments of the most lenient humans, and then challenges the algorithms to do better. By learning from the most lenient judges, who release the most defendants, the algorithms are able to discover rules that lead to better, fairer decisions.
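
A minimal sketch of that selection step is below, assuming each case record carries a hypothetical judge identifier and a release flag; the researchers’ full procedure is more involved, but the core idea of ranking judges by how often they release defendants and learning from the most lenient looks roughly like this:

```python
# Sketch: build a training set from the most lenient judges' cases.
# Field names (judge_id, released) are hypothetical placeholders.
from collections import defaultdict

def most_lenient_training_set(cases, top_fraction=0.25):
    # Leniency = share of defendants each judge released.
    counts = defaultdict(lambda: [0, 0])  # judge_id -> [released, total]
    for c in cases:
        counts[c["judge_id"]][0] += int(c["released"])
        counts[c["judge_id"]][1] += 1
    leniency = {j: rel / tot for j, (rel, tot) in counts.items()}

    # Keep cases heard by the top fraction of judges, ranked by release rate.
    ranked = sorted(leniency, key=leniency.get, reverse=True)
    lenient = set(ranked[: max(1, int(len(ranked) * top_fraction))])

    # Only released defendants have observed outcomes to learn from.
    return [c for c in cases if c["judge_id"] in lenient and c["released"]]
```

An algorithm trained on this subset sees outcomes for a larger share of defendants than it would under a strict judge, which is what allows it to be challenged to do better than the humans it learned from.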

In addition to bail-setting, the researchers believe this methodology could be applied in other judgment-based scenarios; for example:

  • Banking. Here, increasing the level of fairness might involve training AI to prevent the redlining of poor neighborhoods. That’s the term for the illegal but prevalent practice of avoiding lending in low-income areas because of a real or perceived elevated risk of default. Applying the Stanford fix, AI trainers might seek out successful examples of profitable lending in low-income areas and use those to train the algorithms.
  • Medicine. Patients visit doctors complaining of wheezing and coughing. Doctors must decide whether the symptoms are mild enough to send the patient home without prescribing a powerful and costly asthma drug. The labeled outcome is whether the patient gets better after the mild treatment—or comes back with an asthma attack within two weeks. But no outcome is available for patients for whom the doctor chose to prescribe the powerful drug. In this instance, increasing the level of fairness might mean new ways of combing the records to find doctors with strong track records of making this call well.
  • Education. In some school districts, teachers can decide whether to assign a struggling student to a remedial program or other educational intervention. Although we can see the outcome for a student assigned to the intervention, we can’t see whether students who were not assigned would have done better with the extra assist—a problem of selective labeling. Again, making this fairer would mean looking more carefully at the training examples to make sure the algorithm is being fed the best ones.

The common denominator in these examples is that the decision makers’ deliberate choices determine which outcomes are known.

“Let’s devise ways to give proven decision makers additional weight in teaching machines,” Leskovec said. “It’s a way to bake best practices into the AI systems.”
