The more I think and talk with folks about the proposed single measure for access to justice, the more I think we might be on to something: a measure that could tell us what we want to know, and that would allow comparison of innovations, and even of the state of access, across a wide variety of contexts, procedural systems, and cases.
The measure, to put it simply, would be:
The percentage of people for whom the facts and law are sufficiently before the decision-maker that the case can be decided on the facts and the law.
For steps prior to completion of the consideration by the decision-maker:
The percentage of people for whom, assuming the remaining steps function appropriately, the previously completed steps have been such that the case will ultimately, if completed, have had the facts and law sufficiently before the decision-maker that the case will have been decided on the facts and the law.
The primary method for calculating this score would be to interview the person and come up with a yes/no assessment. That interview, remember, would be very like a normal intake interview conducted by an attorney, exploring the same areas, and with the same thoroughness.
So, for cases decided after hearing, the review would be whether the decision-maker actually heard the evidence needed to make an appropriate decision. For cases decided without hearing, the review would be of whatever paperwork (or tech equivalent) had been gathered.
For measuring the ATJ score of earlier steps, the reviewer would compare the information that had been gathered with that needed to provide a sufficient chance that the information would ultimately be gathered. (This is slightly flexible, because it might be, for example, that judges in one court were known to be really good at exploring matters. In such a court, lack of completeness in forms would have less of an impact on the scoring.)
Notwithstanding this, the forms example helps show how powerful this measure could be for assessing innovations and changes. A new form draft would be tested by being given to half of a group, and comparison of the scores for those using the new form and the old one would be a simple measure of its impact.
Indeed, in this example, one could measure both the score of the form alone, and the access score of the combination of the form and the hearing. The comparison of the after-hearing scores would be a measure of the impact on access as a whole, and therefore ultimately more important. (The study protocol would require separate test and control groups, because the research interview would distort subsequent results.) The post-form research result would be completed more quickly, and could be used in the ongoing development process.
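To make the arithmetic concrete, here is a minimal sketch of the form comparison described above. The reviewer verdicts and group sizes are entirely made up for illustration; the only real logic is that the ATJ score is the percentage of yes assessments, and the form's impact is the difference between the two groups' scores.

```python
# Sketch of the form A/B comparison described above (illustrative data only).
# Each reviewer assessment is a yes/no: were the facts and law sufficiently
# before the decision-maker? The ATJ score is simply the percentage of yeses.

def atj_score(assessments):
    """Percentage of cases assessed 'yes' (facts and law sufficiently presented)."""
    return 100.0 * sum(assessments) / len(assessments)

# Hypothetical reviewer verdicts: True = yes, False = no.
old_form = [True, False, False, True, False, False, True, False]
new_form = [True, True, False, True, True, False, True, True]

old = atj_score(old_form)   # 37.5
new = atj_score(new_form)   # 75.0
print(f"old form: {old:.1f}%  new form: {new:.1f}%  impact: {new - old:+.1f} points")
```

In practice the two groups would of course need to be far larger and randomly assigned, as the post notes, but the comparison itself stays this simple.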
The measure is intentionally simple, and as such runs risks (fixable ones, actually) of inconsistency. But the simplicity is a very important feature, and researchers have many methods for ensuring consistency among reviewers. Moreover, the simple yes/no conclusion could be supplemented or even replaced by more precise assessment tools, all leading to that final assessment.
What this process does not measure is the percentage of people with justiciable issues who in fact do seek redress and get into the funnel above. Nor does it measure the “justice” of the ultimate result, or ultimate compliance. But the point is that these will need to be measured differently, and then all three integrated.
p.s. I am sure that a process like this would provide very useful information to use in improving judicial education.
Hi, Richard, I did some thinking on this. As Claudia suggests, this proposal is aimed at one purpose of A2J, namely, to attempt to induce the decision maker to render the legally correct adjudicatory output. There are other purposes, such as procedural justice purposes, that focus not at all on adjudicatory outputs, and for which the nature of the information before the decision maker is irrelevant. With respect to affecting adjudicatory outputs, there are other ways to attempt to measure whether that purpose is being served well, such as the one that Gerry proposes (the both-sides-have-a-competent-lawyer test). Gerry’s proposal is best put into practice via a randomized study, as I tried to do in the District Court housing study in Massachusetts. RCTs are expensive and require the cooperation of the Bar, which is hostile to rigorous empiricism. Another potential issue with the both-sides-have-a-competent-lawyer test is that it is a blunt measurement. If there is a problem, one does not know where the problem arises. Is it the decision maker? Is it the information that the decision maker perceives? I’ve encountered some court systems where my guess would be one (the courts are getting garbage from one or both sides) and some where it would be the other (the courts are lawless).
Your measure has the benefit of not requiring Bar participation, and it isolates one aspect of the adjudicatory process, which is the information received. For it to work, we have to know pretty precisely what information an adjudicator needs, and we have to think that we can elicit that information via a (single?) litigant interview. Large portions of the legal system (e.g., discovery, fact investigation by lawyers and assistants to lawyers) are set up on the premise that such is not possible. I actually think it is possible in a large percentage of a large number of case types. Regardless of whether they admit it, legal aid attorneys have commoditized not just intake but also the facts they focus on, the legal arguments they make, and all other aspects of the litigation process. So I like your measure for what it can tell us.
As you can probably tell from the above, I’m not a fan of an effort to produce a single measure for access to justice. To measure whether something is good or bad, we have to know what that thing is for, i.e., its purpose. When a thing (here, access to justice) has multiple purposes, attempting a single measure is a fool’s errand. Is the peanut a good food? It’s not just that a peanut is a healthy source of nutrients for some people and deadly poison for those allergic to it. It’s that I can’t tell you whether a peanut is “good” until you tell me what purpose you want the peanut to serve.
Claudia’s comments resonate with me. Measuring justice by whether the court has sufficient facts to make a decision assesses the process from the point of view of the court. It is also an engineering/analytical/left-brained perspective — that there is a right answer to a legal problem, and if you have the right ingredients (facts) you get to the right answer. Justice is much more nuanced. The proposal also assumes that the process is supposed to be adversarial, and that cases that do not come before a trier of fact are truncated. Let’s develop a measure that envisions justice as a conversation by which we define who we are as a polity. Justice as a measure of how dynamic we are as a society – fluid, organic, interactional.
Quick response to the very helpful comments. I think that there is no need for a participation arm, because that is one of the forces that impact whether facts and law are adequately before the decision maker. No way the facts and the law get there without participation, surely.
As to measuring “justice” of result, I meant to suggest at the bottom of the post that that should be measured too, as should the extent to which people know of the remedy.
Para below is from my post:
“What this process does not measure is the percentage of people with justiciable issues who in fact do seek redress and get into the funnel above. Nor does it measure the “justice” of the ultimate result, or ultimate compliance. But the point is that these will need to be measured differently, and then all three integrated.”
Thanks so much for contributing to the debate. Responses? What am I missing?
I want to offer a different candidate for a single measure (perhaps just to see it shot down so I can stop thinking about it). There are several problems with “the facts and law are sufficiently before the decision-maker.” For example, most cases don’t get that far. In addition, as Claudia’s comment suggests, the element of customer choice seems to be absent. Third, as you note, the measure is employed before “justice” is reached.
We have an adversarial system. One path we could take is to change at least some categories of cases to inquisitorial — skip the adversarial jousting to test the facts. But assuming we stay adversarial, perhaps we should include that feature in any measure. Hence, let’s measure whether the outcome is “just” or, if we are testing the benefits of change, whether the outcome is more like the outcome that would be predicted if each party were represented by a competent and zealous lawyer.
Advantages: it assesses the system based on results, it can include customers who never participate or who drop out at some point, it includes judicial and administrator behavior in the final stages of the case, and it makes room for “illogical” or “personal” choices by consumers. Finally, it can be applied to a far larger share of cases.
Is it possible to apply such a measure? Good question (about both of the proposed measures). In the 1980s the ABA’s Delivery of Legal Services Committee ran a study of three compensated delivery systems in San Antonio. It used a panel of attorneys to review the legal work in the cases for quality of advocacy (not the same thing as outcome). I think the assessments were sufficient to support comparisons of the legal work.
Problems? Cost, control groups, subjective measures, and the interviews as interventions that spoil the data about the outcome. But perhaps someone could figure those out.
I was following what you had posted in the prior post on this topic:
“what percentage of people could reasonably be expected to participate meaningfully and sufficiently in the process, as changed in comparison to before the change, and only with assistance from free and actually available resources. For there to be “sufficient meaningful participation” it would be necessary that the case has been sufficiently presented that the decision-maker was able to accurately make a decision on the facts and the law.”
Now–in this post, I got lost.
“The percentage of people for whom the facts and law are sufficiently before the decision-maker that the case can be decided on the facts and the law.”
This took out whether the person was able to meaningfully participate in the decision-making process. This new definition is very much centered on the decision maker and not the person who needs a fair, open, accurate decision, free of the bias of the system or laws toward those with lawyers and means, and of the implicit bias of the judge.
How about a definition with two prongs (not to sound like a lawyer, sorry):
1. The percentage of people for whom the facts and law are sufficiently before the decision-maker that the case can be decided on the facts and the law, AND
2. The percentage of people who were able to meaningfully participate in the process of the decision with effective assistance in an affordable way.
#1 might require a reality-check factor. The final decision should be one that those ordered to follow it can actually abide by, so that if the person is ordered to do something they can’t comply with (like go to training or therapy when no services are available to them), then the order is deemed to be not good.
#2 would require having metrics that define meaningful participation and what is affordable to that person, so it is not as easy as it sounds and includes subjective metrics.
Putting both together might get us there.
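One way to read “putting both together” is that a case counts toward the combined score only if it satisfies both prongs. Here is a minimal sketch of that reading; the field names and case data are hypothetical, and the all-or-nothing combination rule is just one possible choice (a weighted blend would be another).

```python
# Sketch of the two-prong measure above (hypothetical data and field names).
# A case contributes to the combined score only if both prongs are satisfied.

cases = [
    {"facts_and_law_before_decider": True,  "meaningful_participation": True},
    {"facts_and_law_before_decider": True,  "meaningful_participation": False},
    {"facts_and_law_before_decider": False, "meaningful_participation": True},
    {"facts_and_law_before_decider": True,  "meaningful_participation": True},
]

both = [c["facts_and_law_before_decider"] and c["meaningful_participation"]
        for c in cases]
combined_score = 100.0 * sum(both) / len(cases)
print(f"combined two-prong score: {combined_score:.1f}%")  # 50.0% on this sample
```

Keeping the two prongs separate in the data, rather than collapsing them at assessment time, would also let reviewers see which prong is failing when the combined score is low.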
Having said this, I have this theory/idea that a measure of ATJ is not an index, or one metric, but rather a lattice. A lattice structure, in that it is a three-dimensional arrangement of molecules, atoms, ions, etc. (read: different types of services, different types of lawyers, different areas of law, different types of forum or litigation posture, operating under different types of rules, resource levels, public support, etc.). If we could eventually model the lattice, that would be amazing!
That is an elegant measure! Isn’t that really what it is all supposed to come down to?