Nate’s 2016 polls-only prediction is that Clinton has a 77.6% chance of winning.
For those of us who talk about how triage is critical, his methodology page is well worth some attention. It is highly sophisticated, and reminds us just how complex predictions are, particularly when you want to make not only general policy choices, but also individual choices based on them.
And that, of course, is exactly what the medical system does, and has to do, all the time. Moreover, it is just what the legal system does all the time, usually without the benefit of statistics. This is not only true in probation and sentencing decisions, but also in the often unconscious triage decisions made day after day in self-help centers, legal aid programs, and clerk’s offices.
So, I ask, can we start to develop a model for when we can or cannot rely on a prediction? In the triage context, the way to ask the question is: “when are we certain enough that a more expensive assistance model is worth it?” Note that this is at the same time a simpler and a more complex question than whether the assistance would make a difference, or even how much.
Of course, in all social science the “confidence” level is calculated all the time, and thresholds are used to justify group decisions. But we are constitutionally (and I mean the phrase only metaphorically here, I think) deeply inhibited from saying, “well, since 95% of the time service A will be sufficient, we are just going to use it for everybody in that group.” Although remember how low the standard for effective assistance of counsel is in both the criminal and civil contexts.
The problem with that reluctance, at least when there are major cost differences and significant resource constraints, is that while a few people get an even higher chance of success, the vast majority get less of a chance. The third world medical analogy is exact. Countries could spend all their limited resources on a few people and do nothing for the many, or they could spend it all evenly, or they could muddle through with an informal and non-transparent allocation system, which is what I expect they do. (I feel acutely the benefit I get from this advantage. My hemoglobin target is now 9 (and the subject of potential upward negotiation). I understand that in India you only get a transfusion at 6. At 6, I simply would not be blogging at all.)
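To make the tradeoff concrete, here is a minimal arithmetic sketch. All of the numbers (budget, per-person costs, success rates) are invented for illustration, not drawn from any real program; the point is only that under a fixed budget, a cheaper service that suffices most of the time can produce far more good outcomes in total than an expensive service reserved for a few.

```python
# Hypothetical illustration of triage under a fixed budget.
# Every number below is an assumption chosen for the example.

BUDGET = 100_000       # total funds available
COST_BRIEF = 100       # per-person cost of brief/self-help service
COST_FULL = 1_000      # per-person cost of full representation
P_BRIEF = 0.95         # chance the brief service is sufficient
P_FULL = 0.99          # chance full representation is sufficient

# How many people each strategy can serve with the same budget.
served_brief = BUDGET // COST_BRIEF   # 1,000 people
served_full = BUDGET // COST_FULL     # 100 people

# Expected number of people who end up adequately served.
good_outcomes_brief = served_brief * P_BRIEF   # 950.0
good_outcomes_full = served_full * P_FULL      # 99.0

print(good_outcomes_brief)  # 950.0
print(good_outcomes_full)   # 99.0
```

Under these made-up numbers, spending everything on full representation gives its 100 recipients a slightly better individual chance, but the brief-service strategy yields nearly ten times as many expected good outcomes overall. Which is, of course, exactly the uncomfortable choice the post describes.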
So, I would say the following. We need our decision-making systems to be transparent. We need any algorithms to be legitimate and to reflect the best knowledge. We need to be honest about any uncertainty. We need to provide a method for review of decisions, to increase both accuracy and legitimacy. We need ongoing review of protocols and algorithms. We must keep a human element or option, even as we apply rules electronically.
Perhaps most importantly of all, we must educate the public and decision-makers about the relationship of the resource limitations to the choices that are being made, and the broader interests at stake. We can learn from medical public policy research, which has shown that members of the public think very differently about things like the use of antibiotics when they are shown the long-term public benefits of more restrictive policies. That is, they are willing to be given less themselves if it keeps the world as a whole safer (“Evidence of physical harm to individuals or the community led to increased acceptance of limits.”). Interestingly, the public accepts the role of the professionals as general arbiters. So, hopefully, people might be willing to get less help if they knew that resulted in more justice overall. This, of course, is fully consistent with the “public trust and confidence” research showing that people care more about the overall fairness of the process than about whether they themselves win.
I agree 100% that any system has to be transparent to the user. The user will need to know, in clear, concrete language, why they are ranked the way they are. In addition, the system will need to be adjusted routinely, to make sure that once outcomes data come in, the method by which decisions are made reflects reality, not assumptions based on implicit bias.
This question, “when are we certain enough that a more expensive assistance model is worth it?”, is the hardest to answer. Worth it for whom? For:

* the organizational interest involved,
* the funders,
* the group with the most power or that manages the funding,
* the individual,
* the legal nonprofits,
* the lawyer, or
* the non-lawyer assistant?
Given our long history of discrimination and prejudice against women, minorities, people from other countries, religious groups, and disabled communities, implicit bias will creep into the value judgment in “worth it”.
Once something is based on biased assumptions about “the other” (SRLs, court users, low-income people, immigrants, minorities, women), it takes decades, if not centuries, first to acknowledge the bias and then to fix it. The biased system becomes the norm, and then we are back again creating and funding a system that does not promote equal justice. We need to be proactive about not doing this, and transparency to the end users will be the first step. The second is constant iteration based on data and feedback. The third is ensuring that whoever designs the system assembles a diverse group of advisors, so that a perspective of equality is included in the design from the start. An unrepresentative group will only stay in its “group think”.