Towards A Principles-Driven Approach to Algorithm-Based Decision-Making in the Justice System

A recent article by Adam Liptak in The New York Times draws attention to the actual practice in Wisconsin of using algorithm-driven predictions in sentencing, and to the pending attempt to obtain review of the practice by the U.S. Supreme Court:

In March, in a signal that the justices were intrigued by Mr. Loomis’s case, they asked the federal government to file a friend-of-the-court brief offering its views on whether the court should hear his appeal.

The report in Mr. Loomis’s case was produced by a product called Compas, sold by Northpointe Inc. It included a series of bar charts that assessed the risk that Mr. Loomis would commit more crimes.

The Compas report, a prosecutor told the trial judge, showed “a high risk of violence, high risk of recidivism, high pretrial risk.” The judge agreed, telling Mr. Loomis that “you’re identified, through the Compas assessment, as an individual who is a high risk to the community.”

The Wisconsin Supreme Court ruled against Mr. Loomis. The report added valuable information, it said, and Mr. Loomis would have gotten the same sentence based solely on the usual factors, including his crime — fleeing the police in a car — and his criminal history.

At the same time, the court seemed uneasy with using a secret algorithm to send a man to prison. Justice Ann Walsh Bradley, writing for the court, discussed, for instance, a report from ProPublica about Compas that concluded that black defendants in Broward County, Fla., “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism.”

There are so many issues bundled in here.

There is the issue of using algorithms at all to make predictions.  This is an issue of accuracy, fairness, and legitimacy.

There is the issue of transparency.  The idea of not knowing the algorithm’s factors and logic seems bizarre, particularly when defended on commercial grounds.  And there is the issue of powerlessness: defendants and others have no control over the fact-finding process.

Finally, there is the deeply disturbing issue of embedded bias, which may be impossible to correct for.  I will deal with the embedded bias issue in more detail in a future post.

Firstly, as to the use of algorithms in making predictions, there is significant evidence that they increase accuracy and fairness.  To be specific, studies have shown that algorithmic predictions and decisions can be more reliable and less prone to bias than human predictions and decisions.

In this research, statistical methods applied to Terry stops showed that cops using a very simple algorithmic tool would make far fewer nonproductive stops than those relying on fast intuition. To be specific:

Remarkably, only 6 percent of stops are needed to recover 50 percent of weapons found under the usual stop-and-frisk policy, and only 58 percent are necessary to recover 90 percent of weapons.
Moreover, to no one’s surprise:
Statistical risk assessments offer an alternative, intriguing possibility for directly determining whether stops are justified. Namely, one can use a predictive model to summarize the available information in terms of the likelihood of stop success, and then interpret “reasonable suspicion” to mean this ex ante likelihood is suitably high (above, say, 1 percent). Taking this approach, we find that 43 percent of CPW stops had less than a 1 percent chance of turning up a weapon. Moreover, we find striking racial disparities. Whereas 49 percent of blacks stopped under suspicion of CPW had less than a 1 percent chance of in fact possessing a weapon, the corresponding fraction for Hispanics is 34 percent, and is just 19 percent for stopped whites.
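The thresholding idea in the quoted passage can be sketched in a few lines of code. This is only an illustration of the concept, not the researchers’ actual model: the feature names, weights, and intercept below are invented, and a real system would estimate them by regression on historical stop data.

```python
import math

# Hypothetical logistic model scoring each stop. All features and
# coefficients are invented for illustration only.
WEIGHTS = {"suspicious_bulge": 2.0, "evasive_movement": 0.8, "high_crime_area": 0.3}
INTERCEPT = -5.5  # hypothetical base rate term; weapons are rarely found

def hit_probability(stop_features):
    """Predicted ex ante probability that a stop recovers a weapon."""
    z = INTERCEPT + sum(WEIGHTS[f] for f in stop_features if f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def stop_justified(stop_features, threshold=0.01):
    """Interpret 'reasonable suspicion' as ex ante likelihood >= threshold."""
    return hit_probability(stop_features) >= threshold

# A stop supported only by location falls below the 1 percent bar,
# while one with stronger individualized indicators clears it.
print(stop_justified(["high_crime_area"]))                    # -> False
print(stop_justified(["suspicious_bulge", "evasive_movement"]))  # -> True
```

The point is not the particular numbers but the structure: the model makes the basis for each stop an explicit, reviewable probability rather than an unarticulated hunch.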

Secondly, as to transparency, let me describe what we did on this front at the Midtown Community Court. When a judge asked us if we could develop an algorithm predicting compliance with alternative sanctions, some of us demurred, not because of its technical difficulty, but because of the fear of people, in effect, being sentenced based on the non-compliance of others.  Then the judge said something that will echo with me for the rest of my life: “I just do not want to set people up for failure.”

Ultimately we built a system with three major features: 1. the probabilities were based on actual data and on factors shown by regression analysis to be critical; 2. the factors’ impacts were shown in histograms, so that those factors could become part of the conversation (counsel might, for example, point out that while a defendant did not have a formal address, he did have a place to live, and ask the judge to change the homelessness setting; you could then literally watch the histograms bounce around to show the new compliance projections); 3. we gave the judges compliance support tools, enabling them, for example, to order reminder phone calls to the defendant.
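The interactive core of that system can be sketched as follows. To be clear, this is a reconstruction of the idea, not the Midtown system itself: the factor names and coefficients are invented, whereas the real model’s weights came from regression on actual court data.

```python
import math

# Hypothetical regression-based compliance model. Factor names and
# coefficients are invented for illustration only.
COEFFICIENTS = {
    "homeless": -1.2,        # assumed negative effect on compliance
    "prior_failures": -0.7,
    "employed": 0.9,
}
INTERCEPT = 0.4

def compliance_probability(defendant):
    """Projected probability of compliance with an alternative sanction."""
    z = INTERCEPT + sum(w * defendant.get(f, 0) for f, w in COEFFICIENTS.items())
    return 1.0 / (1.0 + math.exp(-z))

defendant = {"homeless": 1, "prior_failures": 1, "employed": 0}
before = compliance_probability(defendant)

# Counsel points out the defendant actually has a place to live; flip the
# setting and the projection updates on the spot, in front of the judge.
defendant["homeless"] = 0
after = compliance_probability(defendant)

print(f"projected compliance: {before:.0%} vs {after:.0%}")
```

The design choice that mattered was not the regression itself but that every input was visible and contestable in open court, so the projection could change as the facts were argued.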

The conclusion I draw from this is that transparency, and indeed the discussion it enables, is critical to the effectiveness and legitimacy of these tools.  Proprietary commercial interests can be no excuse for secret government.  Moreover, confidentiality of algorithms is not necessarily required to protect intellectual property; the law can protect such interests without secrecy.  Most patents are public.

Finally, as to embedded bias, let me in this post just note how many deeply entangled levels such bias inevitably has, preserving the harms of the past and projecting them into the future.  The question is whether such embedded bias is better or worse than the bias of individualized discretion.

In a future posting, I will attempt to lay out some principles that should be followed in developing and using such predictive algorithms in the justice system.


About richardzorza

I am deeply involved in access to justice and the patient voice movement.
This entry was posted in Access to Justice Generally, Criminal Law, Discrimination, Research and Evaluation, Security, Technology, Transparency.

2 Responses to Towards A Principles-Driven Approach to Algorithm-Based Decision-Making in the Justice System

  1. Claudia Johnson says:

    Looking forward to the follow up on it!
We should not accept what is less bad in terms of comparing the inclusion of implicit bias into these tools and algorithms vs. individualized implicit bias. Both are anathema to our belief in, and vision of, a free democracy where we are all equal. We should strive to create tools and systems where people are not judged by the color of their skin, their accent, country of origin, class, disability, religion, or gender identity, but by their unique circumstances and abilities. I would like to hear how courts that are buying this type of software to evaluate parole release address the nature of implicit bias in the data sets used to create the tools, and how they are working with the vendors to eliminate it (how do they validate the decisions they make based on those software tools? what does their research show?)

  2. Jim Greiner says:

    Hi, Richard, thanks for this. As you know, this is an issue that the Access to Justice Lab is active on, see our launch announcement of an RCT in this area here:

    The Loomis decision from Wisconsin (ironically, the site of the RCT that we just launched!) is a head-scratcher.
    * If a factor (here, risk assessment scores) can never make a determinative difference in any decision (here, sentencing), then by definition that factor must be irrelevant to the decision. So judges are allowed risk assessments as long as they are irrelevant?
* Secret bases for legal decision making are OK. You have this one covered above.

    A key here is babies and bathwater. If risk assessment scores are transparent and they improve decision making, we should use them. If they are secret or they don’t improve decision making, we shouldn’t.

As for those who think that risk assessment scores are bad because they interfere with individualized decision making in criminal law: the tooth fairy isn’t real. Neither is Santa Claus. Despite this, gnomes don’t steal underwear in the night. I can’t remember when we last had individualized decision making in non-capital criminal cases. As in, I’m not old enough to remember.

Comments are closed.