Outcome Measures #2: LSC Outcome Measures, Good News, Bad News, and a Challenge

This is number 2 in an occasional series on outcome measures.  Number 1 reiterated how important it is to develop and apply such measures system-wide.

There is good news and bad news on LSC outcome measures.

The good news: LSC is moving to require the collection and use of outcome measures, and has made available the promised Toolkit on outcomes.  The bad news: there are as yet no national outcome measures, meaning that programs can still design all their own measures, and the utility of the process and product is therefore severely limited.

LSC has told its grantees that 1) they must begin collecting outcomes information by June 1, 2016; 2) for 2017 grant applications, they must confirm that they are collecting outcomes; and 3) for 2018 grant applications, they must also provide a narrative of how they are using outcomes information.

The Toolkit, developed after a long process that some of us had originally hoped would produce true national measures, includes listings of possible outcomes by substantive area, two case studies (Cleveland, Ohio, and Virginia), and a resources link.  I understand that additional case studies are to be added.

By far the star of the Toolkit and site is the Cleveland Case Study.  The study explains the reasons for the project, describes the internal process used to develop the outcomes (including, critically, extensive staff input), details how the system operates, and lists lessons learned.  Very important was full integration with the overall case management system.  Staff are enthusiastic.

I would hope that screenshots like this one would prove to all the logic, simplicity, and ease of use of the Cleveland system.

I would also hope that charts like these would sell everyone on the value of the system for showing outcomes.

This chart, comparing client and advocate views of outcomes, shows how useful such systems can be in improving quality — and why we must always include clients in the discussion of what measures to use, and then of actual outcomes.  Why, one asks, are the clients approximately as positive, or more positive, about every outcome area except the one that deals with keeping the person safe in the home?  I would assume this is the Domestic Violence measure, and wonder if the clients are less hopeful because they know the scope of the ongoing risk.  The obvious suggestion would be to ask clients what could be done to improve this outcome.


Surely, the case is made.

The bad news, again, is that we still do not end up with common and comparable outcomes across the system.  I will be honest: I fail to understand why.  I would hope that the illustrations above show the value and potential universality of the approach, and indeed of its details.  The lack of comparable outcomes throughout the system makes it impossible to compare the value of innovations, or to do the kind of preliminary program and state comparability review that could lead to more detailed explorations, and in turn to quality and outcome improvements, or, indeed, to a demonstration of the apparent inequities in the courts of certain states.

While there is now funding to develop e-training materials on the Toolkit (top listing), I very much hope the community will move aggressively to deploy outcome measures nationally.  (Side note: huge credit to LSC for getting out there and bringing in outside funding to support innovations and improvements, as listed in the above link.  This was a potential too long ignored, largely for fear of reducing local fundraising opportunities.  The truth is obviously the opposite.)

Indeed, with respect to the outcome measures, I offer a specific suggestion and a challenge.  Why not simply take the Cleveland system as the presumptive standard, make the changes necessary for it to work nationally, and then require it to be used?  Maybe the group that worked on the process tried this approach, and it failed.  If so, perhaps the larger access to justice community should get an explanation of why it failed.

Similarly, I would offer the same challenge to every legal aid advocacy program, every IOLTA grant maker, and every state access to justice commission: take a look at the Cleveland model and see how, with modifications if really needed, it can be adopted by your program or community.  The burden of justifying any “no” is on you.  If you do feel you need to make a change, minimize it, try to maintain full consistency, and remember the benefits of full comparability.

These are important steps forward.

