Mixed Thoughts on Arguments Against Comparability of Data and Goal Setting

A few days ago, I blogged about recent massive reductions in heart attack deaths, and the contribution that the establishment of required reporting of comparable data, and the setting of clear goals, made to those improvements.

While doing so, I planned to include an ironic listing of the kinds of arguments likely to be or to have been used in both legal and medical fields to argue against this powerful innovation approach.

But then I had my doubts, so I polled some folks I respect with a draft, and got very mixed responses, from “go with it — hilarious” to “better coming from someone else.”  It was urged that “the tasks are difficult on the merits,” and that the arguments my irony caricatured as resistance to change might in fact be legitimate as far as they went.  It was argued that what is needed now is thought leadership in developing indicators, rather than criticism for the failure to do so, so that we can transcend the reasonable resistances.  And, indeed, it is true that doing this right is hard.

It set me thinking, and I realized that I wanted so badly to go with the list for not entirely healthy reasons.  I have just sat in too many meetings over the years in which data gathering, and particularly comparative data gathering, has been argued against.  This includes one Summit in the mid-1980s and one in the early 2000s.  In the first case I was probably arguing for caution, in the second less so.  In the second, the facilitator asked something like “is there any way that we could set this up so people would be able to go with this,” and one person replied “no.”

So there is a history there.  Perhaps the most thoughtful comment I got back on my proposed draft was something like:  “Do not fight the last war.”

I would like to think that that is the case, and that with the rapidly growing commitment to research, we can focus on how to ensure true outcome measures and true comparability.  I particularly hope that in the intermediate period, in which outcome measures are not yet fully comparable, we find ways to make them effectively so (such as by looking at changes in different programs’ outcomes over time, and seeing who is most effective at improving theirs).
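To make that intermediate approach concrete, here is a minimal sketch (with entirely hypothetical program names and numbers) of ranking programs by improvement over time rather than by raw outcome levels:

```python
# Hypothetical outcome rates per program per year; the point is to
# compare each program against its own past, not against the others' levels.
outcomes = {
    "Program A": {2013: 0.40, 2014: 0.46},
    "Program B": {2013: 0.62, 2014: 0.63},
}

def improvement(series):
    """Change from the earliest to the latest measured year."""
    years = sorted(series)
    return series[years[-1]] - series[years[0]]

# Rank by who improved most, not by who scored highest.
ranked = sorted(outcomes, key=lambda name: improvement(outcomes[name]), reverse=True)
for name in ranked:
    print(name, f"{improvement(outcomes[name]):+.2f}")
```

In this toy example Program B has the higher raw rate, but Program A tops the improvement ranking — which is the kind of comparison that can remain fair even while the underlying measures are not yet fully comparable across programs.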

In which spirit, I offer my initial ironic thoughts on how people might have argued against the hugely impactful heart attack studies.  (See, for example, an argument here that “the impact for patients was unclear because of heterogeneity in presentation and severity of illness for unselected admissions, and challenges in the definition of ‘specialist’ relative to individual clinical need” [commenting on a time-to-specialist measure].)

“If we produce data it will be used by our opponents.”

“If we produce data our funders will use it to reduce our budget, or reduce our unit payments.”

“An approach like this will mean that doctors will stop trying to help their patients.”

“Doctors and hospitals will stop treating the sicker people.”

“It’s too much of an administrative burden.”

“We report too much to funders already.”

“Different kinds of procedures are just so different that you cannot compare outcomes.”

“Survival is not the only important measure.”

“It’s not fair to judge doctors and hospitals because of failures in other parts of the system.”

“Sometimes patients are to blame for not following orders.”

My favorite, however, remains:

“Every heart is different.”

I have to admit that I too have been fearful of the impact of data on the credibility of directions that I believed were worth following. It is hard not to become vested in such directions, and harder still to balance the risks against the benefits.

I guess the moral is that while these are complex matters, we need to be careful not to let anxiety about downsides blind us to the huge advantages of data in general and outcome comparison in particular.  In the end, it is only by all committing to a data-driven culture that we will build an access to justice system strong enough to take those risks in its stride, because the general case has been so strongly made — by the data itself.

Ideas on how to design an appropriate system of outcome comparison are welcome in the comments.  (Indeed, one of the people who reviewed the draft blog pointed out that an alternative way of approaching the issue would be by articulating proposed measures.)  It will surely be helpful to this discussion when the LSC-led process on improving data gathering publishes its results and recommendations.


About richardzorza

I am deeply involved in access to justice and the patient voice movement.
This entry was posted in LSC, Research and Evaluation.

7 Responses to Mixed Thoughts on Arguments Against Comparability of Data and Goal Setting

  1. Peter Fielding says:

    Richard et al.  I have been staring at the screen for a while.  Shall I … shall I not??  If I respond to the issues you raise, this reply could be long and I will just go on and on.  So what is the thought?  Well, this business of data collection and analysis is complex and complicated.  Definition of terms, means to collect research data, possible integration into standard clinical notation, who collects the data, from where, into what vehicle, analytic methods, promulgation, adoption of results, and then the next round!!  These are just a few of the issues.

    For those who have toiled in these paddy fields there are good things which can happen.  However, many of the “comments” you have made and consider “hilarious” do in fact occur (more than half your list!!!), demotivate the naïve, and provide major challenges of information and people management to someone like myself.  I was quite blind to these issues when I started a multi-centered funded clinical outcome study into system and physician variance related to large bowel cancer treatment by surgeons.

    To illustrate.  First publication in the Lancet.  Strong stuff, because we reported surgeon-specific results.  Important and large differences between surgeons in both short term (post-operative) and longer term (patient prognosis) outcomes.  A few days after publication I got a phone call from the BBC.  How could I resist chatting with Aunty (the affectionate term used for the BBC in its heyday)?  This young but well informed reporter came into my ridiculously small research office.  He sat down and the small physical distance between us was a little uncomfortable.

    Usual questions about methods and results, but none about why I thought this work was worthwhile (first red flag!).  After about 20 minutes he straightened his back (second warning sign) and asked to see the surgeon-specific results BY NAME.  (Flags a-waving and bells a-clanging.  What the hell!!)

    “Not available.”  But you have them?  Of course.  Why will you not provide them to me?  Not in the public interest.  Oh really.  You think that keeping the results secret is in the public interest??  Blah blah for a few minutes.  That’s the trouble with you doctors.  You always think you know best.  I will go to your Dean and demand the information!  Then we shall see what’s in the public interest!!!  Sorry you feel that way.  I am sure you can see yourself out!!

    OMG.  OoMmGg.  Never will I do that again!!

    And so I learned that good deeds may go punished.  The medical profession found it difficult to embrace transparency in the middle 1970s, although our group of 92 surgeons had completely endorsed the view that our individual results should be published.  Currently, physician-specific results are increasingly coming into the public domain and are subject to all the simplistic interpretations I was trying to avoid.  However, that’s progress in the messy real world.  I prefer to pick up the pieces rather than maintain the paternalistic view that all doctors know best while their methods, both in detail and in principle, can be so different and their results are at least bimodal if not trimodal in form.

    Perhaps I am off the point but I felt like telling the story.

    Peter Fielding

    • richardzorza says:

      This is very helpful — and underlines that my history of facing what I experienced as defensiveness has made it easy for me to oversimplify.  Clearly we need buy-in to be successful, but we also need to create a culture of expectation, and in the US that has tended to come at least in part from partial outsiders, such as funders and insurers.

  2. Toby Grytafey says:

    I think the most feasible first step to collecting better data is to create a non-profit organization to support an open source software community to create free tools needed for courts to provide a better online docket with more data. The dual purposes of the software and the community would be collecting better data and providing parties (and the advocates and lawyers who assist them) better access to information.

    Mandating uniform data standards would be so much more efficient and technically easy that it’s hard to settle for a more piecemeal solution. But in a nation of 50 states and thousands of municipal courts, we may have no choice but the piecemeal solution. But the piecemeal solution itself won’t work without the participation of a broad community.

    All of the pieces needed to build an open source community for open legal data are there.  The non-profit could use bulk data access rules in many different states to get access to enough municipal court data to get the ball rolling and demonstrate the potential value of the project.

    The fact that many courts could save money by foregoing their contracts with private vendors and using the open-source online docket software would give volunteer coders an incentive to get involved with the project. Due to their expertise in the software earned by helping to create it,

    There will be a lot of work in getting the different kinds of data available from different courts to work with the open source software, and much of this work will require some legal knowledge.  This will be an excellent opportunity for law students to learn how the law actually works, as well as the software needed to understand the law today.

    Volunteer attorneys and law students in the community can build a database of knowledge, which will help develop a reputation of understanding specific courts. Maybe the non-profit corporation could partner with the organizations doing this work, such as Avvo.

    The non-profit could use software to allow the creation of education materials and videos that are easily mass customizable for different courts, to allow us to provide specific legal education for specific courts.

    The non-profit could help the legal aid community promote the use of pro se forms with QR codes or bar codes, in order to help build more machine-readable data into the system.  To be successful, I believe that we have to have pro se forms available in different ways – A2J, paper, forms that legal aid attorneys want to use.

    To the excellent point made by Lisa Rush – we need to better understand how and when to use data.  The non-profit could promote “legal carpentry” in the vein of Software Carpentry, an organization that helps scientists learn the computer skills they need to be better scientists.

    The non-profit could continuously use the limited data created and utilized to push for legislative action to mandate this for all courts, but we need to do whatever we can actually do, before that happens.

    • James Burdick says:

      Whenever beginning a new computer data system, start small, leaving openings for expansion.  Two states that foundered in establishing ACA exchanges were Oregon and Maryland, both very supportive of the law.  I suspect that in their enthusiasm they built too large from the start, and had to retrench and limit themselves to basics first.
      In medicine we have the problem of multiple privately originated databases with different data coding, which may be similar to the problem of translating the same bit of legal information across different states that use different identifiers.  In medicine there are now successful examples of centralized electronic health record systems that employ a central platform and a rigorous resolution of meaning, programmed for items of data input to the platform, so that the data on patient diagnoses, lab values, and so on are the same within the storage and analysis done centrally.  Information flowing back to the individual doctor or hospital database is then reverse-translated into the necessary format for the particular local electronic health record system.  Apps can be created that access the platform for specific needs.  The analogy used is that varieties of cell phones all talk with each other through the central phone system.
      Too bad we didn’t start with a simple uniform national system in electronic medical records 10 years ago, but that is another topic …
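      The translation pattern described here (local codes mapped into one canonical vocabulary, then reverse-translated on the way back) can be sketched roughly as follows; the site names and codes are purely hypothetical:

```python
# Toy sketch of a central platform with canonical coding.
# Each local system keeps its own vocabulary; the platform stores
# everything under one canonical code and translates both ways.
LOCAL_TO_CANONICAL = {
    "hospital_a": {"MI": "heart_attack"},      # hypothetical local codes
    "hospital_b": {"410.9": "heart_attack"},
}

def to_canonical(site: str, local_code: str) -> str:
    """Translate a site's local code into the platform's canonical code."""
    return LOCAL_TO_CANONICAL[site][local_code]

def from_canonical(site: str, canonical_code: str) -> str:
    """Reverse-translate a canonical code back into a site's local format."""
    reverse = {canon: local for local, canon in LOCAL_TO_CANONICAL[site].items()}
    return reverse[canonical_code]

# Two sites recording the same diagnosis end up identical centrally:
assert to_canonical("hospital_a", "MI") == to_canonical("hospital_b", "410.9")
```

      The same shape would apply to court data: each jurisdiction keeps its own docket codes, while central storage and analysis work on one shared vocabulary.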

  3. Lisa Rush says:

    One of our reference attorneys made a really concentrated effort to collect data on our occupational driver’s license (driver’s license recovery) pilot project.  We combined her stats with those collected by our courts to show a drop in re-arrests for driving without a license (because those who went through the program could drive legally).

    As a result, the budget office recommended the program be funded as a regular, on-going program. That is great!

    But I do wish for training on ways to identify what information to collect, what not to bother with, and how to automate or routinize the collection. Collecting stats is added workload and I don’t want to take time away from direct help unless I can be reasonably certain that the information will be valuable.

  4. James Burdick says:

    Richard et al:
    The heart story cited is apropos and I do not understand the dismissiveness you encountered.
    The following may be apparent to all and your problem hostage to other issues, but in case it is useful:
    The use of outliers to find ways to move the middle of the curve in the desired direction is a well-established principle in medicine, an important part of Continuing Quality Improvement.
    Can the LSC as a whole agree to a specific, measurable need for which you can set a very challenging but theoretically possible level of improvement? Is there any money for simple data gathering?
    If so, the breakthrough collaborative, as an example, is a mechanism to consider. You might be able to get Don Berwick and the Institute for Healthcare Improvement or some such CQI outfit to look at the issue and provide advice about this. I was involved with a collaborative a few years ago that succeeded in producing a real uptick in deceased organ donation, surely as complex a problem as the legal issues you are considering. The breakthrough collaborative approach has been used in a variety of non-medical issues as well.
    Jim Burdick

  5. Claudia says:

    Hi Richard,

    I cut my teeth before going to law school as a payment policy analyst at the predecessor of what is now MedPAC, looking at Medicare payments to make policy recommendations to what is now CMS and Congress.  So your piece triggered these reactions:

    “If we produce data our funders will use it to reduce our budget, or reduce our unit payments.”

    Not true.  If your caseload is objectively more demanding, if you are in a rural area, if you are teaching students—the payment system increases the payment rates and makes special allocations for these unique characteristics to keep these valuable activities open.
    Payment systems are iterative and not static. They adjust.
    “An approach like this will mean that doctors will stop trying to help their patients.”
    Not true.  Once you know what is effective and what is not—you will focus on doing the most effective work for your clients and will stop pursuing strategies that are rooted in history and not data.  You will innovate more, improve, change, and provide better services.  Data will spur innovation.

    “Doctors and hospitals will stop treating the sicker people.”

    When the diagnosis-related group (DRG) payment system was implemented by Medicare in the late 1980s/early 1990s—the criticism was “sicker and quicker”—meaning patients were discharged quicker and sicker due to the inability to use the old inefficient payment method.
    The payment system reacted pretty quickly to this criticism: new approaches and rules were adopted to prevent patient dumping and premature inpatient discharge, to ensure that patients were not denied care when they came to the emergency room without insurance, and to incorporate better discharge planning.  Emergency room transfer rules were modified to prevent patient dumping, and cost shifting was identified.
    Payment systems are not static.  Policy makers are not callous—quality is always important and a goal for most agencies funding services.

    In tandem a parallel quality assurance group emerged—the Joint Commission on Accreditation of Hospitals (JCAH)—which to this day continues to push really intelligent, well thought out best practices for medical institutions, and sets a bar for quality.

    “It’s too much of an administrative burden.”

    It is more burdensome (is irresponsible too strong a word?) to all of us as a society to provide services without data and quality metrics—than to crunch data and use it to provide services in a rational way.

    The agencies funding the services will need to invest in the research aspects—to make sure their payments promote access, quality, and innovation.  As more is known about the benefits of legal services, and how they impact health outcomes and community stability, private foundations will join to seed these efforts—for example, groups like the California Endowment, which is a pioneer in looking at the connection between health outcomes and legal services in medical-legal clinics.  This will open new revenue sources for legal services (from the health care system, the court system, the housing system, and other federal and county funding streams for youth, etc.).

    “Different kinds of procedures are just so different that you can not compare outcomes.”

    Agreed. You don’t compare a hysterectomy to treating a cavity.

    If you are using the same type of camera or imaging—maybe, if you are looking at uses/performance of specific tools.

    But yes—this is a given.  You design your evaluation carefully with input from professional evaluators and statisticians.  Lawyers should not design evaluations without drawing expertise from evaluation professionals.

    “Survival is not the only important measure.”

    Agreed.  You need to ask the right questions.  That is harder done than said—but it is doable.

    Quality of life before death is also important, ability to stay employed and support your family is also important. You need to ask the right questions. And if you find out you measured the wrong thing—you can share that with others—so they can ask better questions and design better evaluations and better samples.

    “Its not fair to judge doctors and hospitals because of failures in other parts of the system.”

    Life is not fair.  The point of research is not to judge, but to learn.  Take a Montessori approach—develop the love of learning.  Become curious.  If results are negative, do the most human thing—learn from it, get up, and adapt, change, grow.

    “Sometimes patients are to blame for not following orders.”

    There are multiple reasons why some patients are not able to follow orders.  The point of research is not to assign blame, but to understand what characteristics, individually and together, put a person at more risk of not following recommendations, and then figure out what the professionals can do to ameliorate those risks and improve compliance.

    My favorite, however, remains:
    “Every heart is different.”

    Agreed.  Research does not eliminate the need for professional judgment.  Judgment is required all the time to figure out when to follow the recommended approach and when to deviate from it for good reasons.  That is the fun part, and part of being accountable.  Document the reasons, talk to the experts, document your results, test the new approach, validate it—and then the new approach will be applied to similar different hearts.  Over time we create a body of knowledge that is changing, expanding, and improving—improving the way services are provided and making the system easier to understand and more effective.

    My last comment—as Muad’Dib said in Dune:

    “I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path”…..

    Happy Fourth of July ! Stay fearless in the land of the free.

Comments are closed.