AI
Judges remain skeptical that artificial intelligence can make decisions more fairly than they can

By Anna-Leigh Firth

Do you think using artificial intelligence in court – having computers and algorithms offer advice in bail and sentencing decisions, for instance – holds promise for eliminating bias from those decisions?

If you’re doubtful, you’re with the clear majority, 65 percent, of the 369 judges who participated in our emailed Question of the Month survey for January.

Among these skeptics, the consensus seemed to be that AI could be a useful tool for combating bias but should never completely replace a judge’s discretion.

“Human intelligence has been underrated,” wrote one judge anonymously, as was most often the case with the 162 judges who left comments. The judge said it “requires more than an educated guess from a computer” to know, for instance, whether a defendant has a drug or alcohol addiction and whether that person is truly engaged in treatment.

But another judge saw some promise for AI as a reminder or trigger to be on alert for bias.

“AI could inform the judge’s decision by suggesting average or probable outcomes based on historical evidence,” the judge wrote. “It should not replace the judge’s decision but might make one stop and think to reduce the impact of potential unconscious bias.”

Some judges raised the concern that an AI system could itself have biases built in, whether from the implicit biases of its programmers or from the historical data it learns from.

“Many of the programs contain a bias due to past decisions,” wrote John Johnson, a property tax judge in the United Kingdom. “Low-income defendants and people of color are overrepresented in past arrests and criminal histories given the reality of past justice system practice.”

Judges also raised these concerns:

  • In decisions on whether to grant bail, a computer program would likely miss a significant number of cases in which the defendant posed a threat to the public. What then?
  • Judges might become afraid to overrule the computer’s recommendation.
  • The underlying logic of an algorithm would need to be open to the public so it could be examined for bias (today’s commercial systems generally are not open source).
  • Voters have the ability to remove a judge who is not ruling fairly. The same could not be said of software.
  • An AI system would have to be programmed by a diverse staff and monitored to ensure that bias didn’t creep into its learning process.
  • No federal law sets standards or requires inspection of these tools, the way the FDA does with new drugs.

Among judges more hopeful about AI, one mentioned studies showing that AI can be as effective as, or more effective than, judges at estimating flight risk in bail hearings, even when the judge has more information at hand than the machine.

Another judge wrote, “If Facebook and Amazon can figure out what I want to buy next, AI should be able to produce algorithms to reduce bias on a judge’s part….”
