While I am an advocate of algorithm-driven processes in law, I still have a slightly queasy feeling about the whole idea. It comes from the fear that outcomes will not be “fair,” but will instead be driven by limited and over-determinative factors.
Recent medical research suggests that we may be hugely underestimating the power of protocols to increase fairness.
Specifically, a recent Johns Hopkins study of checklists to drive the prescription of blood thinners turned out to have far broader effects than expected. As the New York Times article, by Jessica Nordell, explains, once the study started:
Whenever a provider is admitting a patient to the hospital, a computerized checklist pops up onscreen. It asks if the patient has specific risk factors for blood clots, or for bleeding from blood thinning medication. Then the system offers a recommended treatment.
No surprise that the number of clots declined.
The Hopkins blood clot prevention checklist has been enormously successful — after the intervention, the incidence of potentially preventable blood clots in medical patients [in the study] dropped to zero.
But much more happened:
Haut is a trauma surgeon, not a bias expert, so gender disparities were the last thing on his mind when he and his team put together a computerized checklist that requires doctors to review blood clot prevention for every patient. “Our goal was not to improve care for men or women or whoever, it was to improve the care of everybody,” he said. But what they found was that after the introduction of the checklist, appropriate treatment for everyone spiked. And the gender disparity disappeared. (Emphasis added.)
Given the prior numbers, that is an enormous change.
At Hopkins, as at many hospitals, both men and women were receiving treatment at less than perfect rates, but while 31 percent of male trauma patients were failing to get proper clot prevention, for women, the rate was 45 percent. That means women were nearly 50 percent more likely to miss out on blood clot prevention.
It turns out that, to be effective, use of the checklist must be universal and mandatory, and that it benefits from extensive consultation and training.
But for us in the legal system, this result suggests that if, for whatever reason, we move towards concrete and specific decision protocols, we might squeeze out much of the unintentional bias that pervades the system. As the article explains:
First, it disentangles the thinking that goes into a medical decision. Typically, clinicians aggregate relevant patient information and use their judgment to arrive at the best course of action. The Hopkins checklist disaggregates that decision into its constituent parts. In a sense, the Hopkins checklist puts the decision about blood clot prevention through a prism, separating out and clarifying the sub-decisions the way a prism separates white light into its rainbow colors. In illuminating each step, the checklist interrupts habitual biases, preventing them from corrupting the decision-making process.
Second, the checklist reduces reliance on human judgment. “The decision support tool makes it very cut and dry — the decision isn’t, ‘Hey, what do you think you should do?’ The decision is — click, click, click, here’s what the computer says to do,” Haut said.
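The “click, click, click” mechanism described above can be sketched in code: a protocol that asks the same explicit yes/no questions of every patient and maps the answers mechanically to a recommendation. The risk factors, contraindications, and recommendations below are hypothetical stand-ins for illustration, not the actual Hopkins criteria.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    immobile: bool         # sub-decision 1: mobility-related clot risk
    prior_clot: bool       # sub-decision 2: history of clotting
    active_bleeding: bool  # sub-decision 3: contraindication to blood thinners

def recommend(p: Patient) -> str:
    """Disaggregate the decision into explicit sub-decisions, then map the
    answers to a recommendation -- no open-ended judgment at the final step."""
    if p.active_bleeding:
        return "mechanical prophylaxis only"  # drug treatment contraindicated
    if p.immobile or p.prior_clot:
        return "pharmacologic prophylaxis"    # at risk, no contraindication
    return "no prophylaxis indicated"

# Every patient is asked the same questions in the same order, so the
# recommendation cannot silently vary with who the patient happens to be.
print(recommend(Patient(immobile=True, prior_clot=False, active_bleeding=False)))
# -> pharmacologic prophylaxis
```

The point of the sketch is the structure, not the medicine: each question is answered separately and visibly, which is what “puts the decision through a prism” and leaves no step where habitual bias can quietly operate.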
That sounds like an almost perfect description of the problem in the legal system.
Indeed, one can imagine such an approach being used in triage, in intake, in caseflow allocation, in sentencing, parole and probation, in child support decisions, in domestic violence order conditions, in visitation decisions, in eviction extensions, etc. These are all areas in which extensive discretion is applied, and there is good reason to suspect various forms of pervasive unintentional bias.
Now, most of these systems still require facts to be “found,” so a risk of bias remains at that step in the process, and some fact finders might consciously or unconsciously tilt their “facts” to reach the result they feel is fair. But overall, you are ahead.
I wonder whether there have been any studies of whether racial disparities are reduced after formula-driven child support decisions are implemented, or of what happens when domestic violence laws are changed so that findings trigger dispositions.
In any event, let’s remember three things: one, algorithms may increase fairness; two, we can study results to make sure that is the case; and three, moving to such systems may well mean that less heavily trained advocates can carry the process, since far less effort has to go into appealing to a perhaps spuriously neutral discretion.