Wayne Moore on Commentators’ Concerns about LSC’s Performance Management Strategy

This blog welcomes Wayne Moore’s thoughtful response to the comments submitted on LSC’s Draft Strategic Plan.

Wayne’s comments show that the worries raised by commentators have not been borne out with respect to the similar existing Performance Criteria for Legal Aid. The proposed Standards would merely build on those criteria. Wayne also demonstrates the power of metrics and rewards to give meaning to those Standards. His full comments follow:

++++++++++++++++++++++++++++++++++++

The performance management section of LSC’s strategic plan is the most controversial portion of the plan. This section proposes three distinct activities, namely the establishment of:

  • Standards for effectiveness, efficiency and quality,
  • National metrics including case outcomes and efficiency measurements, and
  • Rewards for good performance as identified by these standards and metrics and penalties for failure to correct identified poor performance

This proposal raised concerns, expressed in the comments submitted to LSC, that these activities as proposed by the Draft did not adequately take into account:

  • Unique mixture of local case priorities and the types of clients served by each LSC grantee
  • Unique cultural, ethnic and legal environment in which they each operate
  • Role that each grantee plays in the continuum of services provided in each state
  • Fact that LSC provides a relatively small portion of total funding in many states
  • Diverse range of services provided by various grantees including assistance with matters, representation in cases, and various forms of impact advocacy; even the complexity of casework varies considerably from brief services to contested litigation and court appeals.
  • That they can conflict with local priorities and strategies
  • That they can cause services to be homogeneous and favor the provision of briefer services
  • That metrics can be used misleadingly by our enemies
  • That they would stifle grantee creativity
  • Fact that one size cannot fit all
  • That attempts to grade or rank grantees can be counterproductive
  • That collecting the necessary data for metrics will be burdensome

However, the concerns raised in the comments demonstrate a misunderstanding of the role that standards, metrics and rewards/penalties play in performance management. While the language used in the plan is confusing, standards are the same as performance criteria, and LSC is proposing that three new criteria be added to the ones that currently exist, namely criteria for efficiency, effectiveness and quality. Commenters were uniformly positive about the usefulness and influence of the existing criteria. They have been used for managing performance by grantees, establishing policy by policy-making entities, and evaluating the performance of all types of legal services programs by other funding entities. There was no mention that the existing criteria raised any of the concerns listed above. Since LSC plans to use the same process for establishing the new criteria, including grantee input, there is no reason to believe that the new criteria will raise these concerns. Certainly everyone recognizes the importance of quality, effectiveness, and efficiency in program performance, regardless of the delivery systems used; case priorities; local strategies; creative methodologies; and cultural, ethnic and legal environment.

While metrics are a new frontier for legal services management, they are well established in both the for-profit and non-profit worlds. Other providers of legal services use them as well. Large law firms rely heavily on billable hours, and prepaid legal services programs use billable hours, case types and case complexity (in the form of case closure codes) for performance management. At the program level, metrics are primarily used to spot potential performance problems so they can be corrected. Poor metrics are nearly always followed by an in-depth inquiry to determine if a problem actually exists. Without metrics, some problems would never be discovered. In fact, the best feature of metrics is that they allow the calculation of national averages, which grantees can use to discover potential problems. It is very hard to judge efficiency and effectiveness in a vacuum. It is much easier to convince staff to change if one can show what other similar programs have accomplished. For example, if one office of a prepaid provider has better metrics than another, an inquiry is made to determine why. Sometimes the differences are due to different client types or even different laws and court procedures, which is fine. But sometimes they are due to correctable deficiencies such as the use of document generators by one and not the other.

Rewards and penalties must be established because, currently, grantees have considerable discretion in deciding whether to adopt the recommendations of evaluators, unless the issue concerns compliance with LSC laws and regulations or the failure to have proper financial controls. One reason for this discretion is that evaluators are good at discovering problems, but often do not have time to develop the best solutions. Also, there is currently no way to assess the effect that the problems have on overall program performance. Metrics will allow evaluators to make these assessments. Consider two programs that handle CSR cases of comparable complexity. Suppose one program handles, on average, 100 more cases per advocate than the other (where the ratio is based on annual CSR case closures and FTE advocates devoted to CSR cases). It may turn out that the program with poorer metrics serves many more limited-English-speaking clients and has to deal with more complex court procedures. Then, the differences are understandable. But, if the reason is that the program has an inefficient intake system, does not resolve most advice-only cases during the client’s first contact with the program, does not use document generators, spends an average of 21 hours on uncontested court cases, and/or doesn’t use total billable hours or hours per case to monitor efficiency and productivity, then this is not acceptable. Furthermore, refusal to correct the problems is also unacceptable because of the resulting severe reduction in services.
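To make the cases-per-advocate comparison concrete, here is a minimal sketch using invented figures rather than any actual grantee data; the gap simply prompts a closer look, as described above, and is not by itself proof of a problem:

```python
# A minimal sketch (hypothetical figures, not actual grantee data) of the
# cases-per-advocate comparison described above. A large gap prompts further
# inquiry; it is not, by itself, proof of a performance problem.

def cases_per_advocate(annual_csr_closures: int, fte_csr_advocates: float) -> float:
    """Annual CSR case closures divided by FTE advocates devoted to CSR cases."""
    return annual_csr_closures / fte_csr_advocates

program_a = cases_per_advocate(annual_csr_closures=5200, fte_csr_advocates=20.0)  # 260
program_b = cases_per_advocate(annual_csr_closures=3200, fte_csr_advocates=20.0)  # 160

gap = program_a - program_b
INQUIRY_THRESHOLD = 100  # the 100-case gap used as an example in the text

if gap >= INQUIRY_THRESHOLD:
    # The inquiry asks whether the gap reflects client mix or court procedures,
    # or correctable problems such as inefficient intake or no document generators.
    print(f"Gap of {gap:.0f} cases per advocate between the two programs: inquire further.")
else:
    print(f"Gap of {gap:.0f} cases per advocate: within the expected range.")
```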

Let’s discuss each of the three components in more detail.

Standards for quality, effectiveness and efficiency

As mentioned in some of the comments, performance criteria already exist for quality, based in part on the ABA Quality Standards for the Provision of Civil Legal Aid. They simply need to be updated. For example, a host of new delivery systems have been developed (pro bono, hotlines, assisted self-help), and programs need to establish different quality control mechanisms that are appropriate for each delivery system. Also, most programs have websites for clients and need to ensure they are written at the proper reading levels; other websites exist that measure reading levels.

While there is some mention of effectiveness and efficiency in the current criteria, these issues are important enough to warrant their own criteria. One reason that the existing criteria are so successful is that they are accompanied by questions that managers and evaluators can use to determine whether the criteria are being met. The range of questions is broad enough to address programs with various case priorities, client types, case complexities, local strategies, etc. In my article, which is attached to my comments on the strategic plan, I offer a sample of some of the questions that can be used for the effectiveness and efficiency criteria.

Metrics for quality, efficiency and effectiveness

Quality Metrics

The main change concerning quality is the reporting of case outcomes. Only by knowing these outcomes does a program know whether it is providing quality services and making a difference in clients’ lives. For example, different hospitals handle a wide range of patient types, illnesses, and service complexities (e.g. surgeries) and operate in various practice environments and neighborhoods. They strive to adopt best-practice protocols, use highly trained and experienced staff, hire good managers and employ state-of-the-art technologies. But this would never be an excuse to forgo the measurement of patient outcomes, regardless of how burdensome it is. Insurance companies have developed nationally applicable outcome codes for hospitals, but hospitals can always supplement them with their own, if desired. These patient outcomes are now provided to consumers so that they can make informed decisions about which providers to use. There is no reason why legal services should be treated differently. In fact, the burden can be reduced by allowing programs to measure a statistically valid sample of their advice and limited action cases. They already know the outcomes of their other cases, but may not code them. Some programs have collected outcomes since 1979[1]; there is considerable experience in using outcomes, and some states (NY, VA) have implemented outcome reporting for all providers in the state. Most programs use outcome codes developed in other states as their starting point and modify them slightly based on state-specific needs. Similarly, all programs can use the LSC outcome codes and supplement them as needed. There are published accounts of programs that did not collect outcomes only to discover later that the outcomes were substandard[2]. This should not happen.
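As a rough illustration of how small a statistically valid sample of advice and limited action cases can be, here is a minimal sketch using the standard sample-size formula for a proportion; the 95% confidence level, 5% margin of error, and caseload figure are my own assumptions, not anything prescribed by LSC:

```python
# Rough illustration: how many advice/limited-action cases a program might need
# to sample to report outcomes with a 95% confidence level and a +/-5% margin
# of error. The confidence level, margin, and caseload are assumptions.
import math

def sample_size(caseload: int, margin: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Standard sample-size formula for a proportion, with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # estimate for a very large caseload
    n = n0 / (1 + (n0 - 1) / caseload)            # adjust for the program's actual caseload
    return math.ceil(n)

# A program closing 6,000 advice and limited-action cases a year would need to
# code outcomes for only about 362 of them, far fewer than coding every case.
print(sample_size(6000))
```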

Efficiency Metrics

In my article mentioned above, I suggest different metrics for measuring efficiency. Multiple metrics are useful, as each one is designed to uncover different types of problems. For example, I recommend: 1) the ratio of managers to non-manager staff attorneys and paralegals (this metric is used by government law offices[3]) and 2) the percentage of staff other than attorneys and paralegals – to identify staffing inefficiencies. I recommend 1) CSR case closures per CSR advocate and 2) the percentage of cases closed by negotiation and by contested court and agency decisions – to compare the efficiency of programs that handle cases with comparable complexity. These are merely starting points for further inquiry. One program may have a higher manager-to-advocate ratio because managers also supervise volunteer attorneys. Another may simply be inefficiently staffed. One program may have fewer case closures per advocate because LEP Asians and Hispanics comprise 60 percent of its clientele. Another may have inefficient intake, fail to monitor time spent on cases, etc. In fact, one purpose of metrics is to discover best practices that can be shared with others.
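A minimal sketch of the staffing and case-mix metrics listed above follows; the staffing counts and closure figures are invented for illustration (the closures-per-advocate ratio was sketched earlier), and the percentages simply raise questions for further inquiry:

```python
# A minimal sketch (invented staffing counts and closure figures, not real
# program data) of the staffing and case-mix metrics suggested above.
# They raise questions for further inquiry; they are not grades.

staff = {"managers": 6, "staff_attorneys": 22, "paralegals": 8, "other_staff": 14}
closures_by_reason = {
    "counsel_and_advice": 3100,
    "negotiated_settlement": 420,
    "agency_decision": 160,
    "contested_court_decision": 210,
    "uncontested_court_decision": 640,
}

advocates = staff["staff_attorneys"] + staff["paralegals"]
manager_ratio = staff["managers"] / advocates  # managers per non-manager advocate

# Staff other than attorneys and paralegals (managers assumed here to be attorneys)
other_staff_pct = 100 * staff["other_staff"] / sum(staff.values())

total_closures = sum(closures_by_reason.values())
contested_pct = 100 * (closures_by_reason["negotiated_settlement"]
                       + closures_by_reason["agency_decision"]
                       + closures_by_reason["contested_court_decision"]) / total_closures

print(f"Managers per non-manager advocate:          {manager_ratio:.2f}")
print(f"Staff other than attorneys/paralegals:      {other_staff_pct:.0f}%")
print(f"Closed by negotiation/contested decision:   {contested_pct:.0f}%")
```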

The whole purpose of the further inquiry is to determine whether the reason is one of the concerns listed above or a performance problem. In this way, one system does fit all, as it does with hospitals, large law firms, prepaid programs and the current performance criteria. Some prepaid programs collect the average billable hours spent on every combination of case type and closure code for every advocate and office[4]. If one office spends more time on advice cases involving child custody, this is investigated during regular evaluations by headquarters. This is not as difficult as it seems, as most cases fall within a dozen or so case types and a half dozen case closure reasons, just as they do with legal services programs.

Effectiveness Metrics

I also believe there should be effectiveness metrics. As commentators have pointed out, programs provide a wide range of services. Most of the metrics I suggest for efficiency pertain to casework, but I believe there should be separate metrics for impact advocacy, client education and materials, and even what LSC calls “matters”. LSC developed a set of metrics for measuring impact advocacy for its Delivery System Study in the 1970s. The purpose was to compare the performance of various delivery models. It developed impact scores by having expert judges assign scores to each of a program’s impact efforts. The same judges scored all the programs, and the scores were divided by the programs’ annual budgets. The scores were not intended to be precise but to separate programs into tiers, with high-scoring programs in the top tier and low-scoring ones in the bottom tier. The staff delivery model was validated, in part, because of its high impact scores compared to Judicare and clinics. Prepaid programs scored low, but were not paid to engage in impact advocacy (a number of prepaid providers are basically staff attorney models like legal services). The same method could be used to score current grantees.

Also, some grantees close far more cases by uncontested court decision than by contested decision. Isn’t this a legitimate indicator of effectiveness, even though there may be legitimate reasons for it? Some programs close far more appeals and contested court litigation cases per advocate than others. Aren’t these the cases most likely to impact people beyond the individual litigants? Is this not an indicator of effectiveness? In some programs, cases closed by negotiation and contested court and agency decisions comprise less than 6 percent of the total caseload, while in others the percentage is over 50. Sometimes this is because the first provider is essentially a hotline, but in others the program is the only full-service provider in the service area. The latter situation might be the result of a local board decision. But if the board found out that other providers closed a comparable number of cases per advocate and handled more complex cases, the board may want to change its decision.
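To illustrate the tiering idea, here is a minimal sketch, with invented scores and budgets, of how judges’ impact scores divided by annual budget could sort grantees into rough tiers rather than precise ranks:

```python
# A minimal sketch (invented scores and budgets) of the tiering approach:
# expert judges score each program's impact efforts, the total is divided by
# annual budget, and programs are sorted into rough tiers, not precise ranks.

programs = {
    # grantee: (sum of judges' impact scores, annual budget in dollars)
    "Grantee A": (340, 2_000_000),
    "Grantee B": (95, 1_500_000),
    "Grantee C": (210, 2_500_000),
    "Grantee D": (60, 1_000_000),
}

# Impact points per million dollars of budget.
scores = {name: total / (budget / 1_000_000) for name, (total, budget) in programs.items()}
ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)

top_half = len(ranked) // 2
for position, (name, score) in enumerate(ranked):
    tier = "top tier" if position < top_half else "bottom tier"
    print(f"{name}: {score:.0f} impact points per $1M budget ({tier})")
```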

Process

There is some concern that the process for implementing the performance management system will be costly and require substantial support for the grantees. But this is not true. I envision the following straightforward process:

  • The new standards could be developed using the same process that was used for the current performance criteria and consist of general concepts followed by a wide range of relevant questions. These criteria would automatically be incorporated into LSC’s evaluation system, as it is based on the performance criteria. LSC evaluators would have to be trained to use the new criteria and metrics.
  • The development of outcome codes could primarily be based on the experience of others and later modified as needed, just as LSC’s case closure codes were.
  • Efficiency and effectiveness metrics can be drawn from the ones I recommend, those used by prepaid programs and/or those recommended by experts in the field. The point is that they are not grades or conclusive proof of the existence of performance problems. They are simply tools for spotting problems that may otherwise go undetected. Sometimes the difference in metrics just reflects a program’s unique circumstances or decisions made by its board. But, sometimes it is caused by verifiable performance problems. The more tools you have, the better able you are to cover all aspects of program performance. They also can help measure the impact that these problems have on overall program performance. Finally, they allow LSC evaluators to avoid the awkward position of recommending specific solutions to detected problems that the grantee may oppose. So long as the grantee addresses the problem areas, LSC can rely on improvements in the metrics as indicators that corrective action has been taken.

All the metrics I propose use information already collected by grantees such as billable hours spent on CSR cases, case closure statistics and codes, and case type codes. The number of FTE attorneys and paralegals devoted to CSR cases would have to be determined, but most other funders require information about the number of staff devoted to their grant activities. The number of billable hours each advocate spends on client services other than cases would have to be collected to calculate total billable hours, but most programs already do this to ensure sufficient resources are devoted to impact advocacy, etc.

Once LSC selects the metrics, its case reporting manual would have to be updated, but substantial program support would not be required.

  • The rewards and penalties can simply be drawn from the ones proposed or ones suggested by others and modified as necessary based on experience. For example, there is merit to the suggestion that LSC allocate funds to help programs correct identified problems like the lack of document generators.

On a final note, any adverse information can be misused by the enemies of legal services to bolster their opposition. But what is even more dangerous is for the adverse information to be discovered first by the enemies. Then, they can bolster their opposition with both the problems and the fact that LSC and the grantee were clueless about them. It is much better for the LSC to say it is aware of the problems and is trying to fix them. Hiding the truth is never a good policy.   

NOTES:

[1] Legal Aid Society of Greater Cincinnati.

[2] Gabriel Hammond, Tides of Change: Access to Justice Programs in Hawaii, Management Information Exchange Journal 47, 49-50 (Summer 2000); Ross Dolloff, Committing to a Broad Range of Strategies That Work, undated (unpublished manuscript, on file with the author).

[3] See James Wilber, Altman Weil, Best Practices of City and County Civil Law Offices, at http://www.altmanweil.com/dir_docs/resource/b0541231-be60-491b-96ab-c6f1d5e1b4c5_document.pdf

[4] UAW Legal Services Plan.

