While there is now general agreement that outcome measures for civil advocacy legal aid are a good idea, and that measures need to be different in different areas of substantive advocacy, it is apparently hard to get agreement on common measures that can be used in different jurisdictions.
So, I offer a challenge to those working on this and to the advocacy legal aid community:
Either show why the measures in the Massachusetts eviction studies are so bad that they should not be used, offer better ones, or try these as a national outcome measure in eviction cases, so we can learn how they might be improved for this purpose.
The measures were developed by Harvard’s Jim Greiner and a variety of stakeholders for this paper. They have been the subject of extensive debate and analysis. The paper abstract gives both the general methodology and the conclusion.
At least for the clientele involved in this District Court Study – a clientele recruited and chosen by the legal aid provider’s proactive, timely, specific, and selective outreach and intake system – an offer of full representation mattered. Approximately two-thirds of occupants in the treated group, versus about one-third of occupants in the control group, retained possession of their units at the end of litigation. Using a conservative proxy for financial consequences, and based on a subset of cases in which financial issues were at the forefront, treated-group occupants received payments or rent waivers worth on average a net of 9.4 months of rent per case, versus 1.9 months of rent per case in the control group. Both results were statistically significant.
(outcome related to burden on court omitted)
In simple terms, the two measures are retention of possession and value of payments or rent waivers. The methodology for these two outcome measures is described in much more detail in the full paper here.
I do not think that anyone would seriously argue against the proposition that these are valid and valuable measures of the impact of legal aid advocacy — which is not to say that they are the only ones, or that they cannot be improved.
Now, obviously, if one applied these measures across the country, one would discover huge differences: certainly between states, almost certainly within states, and very possibly between programs and between kinds of locations within states. Equally obviously, the first question will be to seek explanations for those differences — particularly which of them can be attributed to differences in the law, differences in court culture, differences in legal aid programs' advocacy, and differences in intake policies.
Surely that is a critical debate, one that should have huge implications for access to justice commissions as they look at the whole system, for courts as they consider their structures, and for advocacy legal aid boards and management as they work to improve their quality and challenge courts to improve their structures. Depriving ourselves of the data to have this debate is, simply put, crippling for access to justice.
Perhaps the unwillingness, so far, to accept such common studies comes from the force of the argument that a measure has to be perfect before it is safe to use. That's the wrong test. The right question is whether a proposed measure is so flawed that it cannot be used. (For some history of the discussion of outcome measures in legal aid, see here.)
I think that the medical analogy is useful. For many years, the five-year survival rate was used as the measure of success of cancer treatment; more specifically, mean survival after treatment began was often used. Now the strong trend is to report both actual survival and an "is it worth it to the patient" test, generally called "health-related quality of life." As one paper put it as far back as 2001:
Over the last decade, clinicians have accepted that while survival and disease-free survival are critical factors for cancer patients, overall quality-of-life is fundamental. This review considers recent developments in the field of quality of life, oncological challenges and future directions.
For a more recent (2011) study of such outcome measures, see here. Here is a report on the development of a disease-specific quality-of-life measure (analogous to a substantive-area-specific outcome measure) that might be methodologically helpful to us, since the patients helped develop the measures (hmm).
The point is that over time we can improve the measures, but only if we get started and see what happens with imperfect or incomplete ones.
So, I repeat the challenge: tell us why the Greiner measures are too bad to use, give us a better set, or try these and see.