A few days ago, I blogged about recent massive reductions in heart attack deaths, and about how the establishment of required reporting of comparable data, and the setting of clear goals, contributed to those improvements.
While doing so, I planned to include an ironic list of the kinds of arguments likely to be used, or to have been used, in both the legal and medical fields against this powerful innovation approach.
But then I had my doubts, so I polled some folks I respect with a draft, and got very mixed responses, ranging from “go with it — hilarious” to “better coming from someone else.” It was urged that “the tasks are difficult on the merits,” and that the arguments I had caricatured as resistance to change might in fact be legitimate as far as they went. It was argued that what is needed now is thought leadership in developing indicators, rather than criticism for the failure to do so, so that we can transcend the reasonable resistances. And, indeed, it is true that doing this right is hard.
It set me thinking, and I realized that I wanted so badly to go with the list for not entirely healthy reasons. I have just sat in too many meetings over the years in which data gathering, and particularly comparative data gathering, has been argued against. These include one Summit in the mid-80s and one in the early 2000s. In the first case I was probably arguing for caution; in the second, less so. In the second, the facilitator asked something like “is there any way that we could set this up so people would be able to go with this,” and one person replied “no.”
So there is a history there. Perhaps the most thoughtful comment I got back on my proposed draft was something like: “Do not fight the last war.”
I would like to think that that is the case, and that with the rapidly growing commitment to research, we can focus on how to ensure true outcome measures and true comparability. I particularly hope that in the intermediate period, in which outcome measures are not yet fully comparable, we find ways to make them effectively so (such as by looking at changes in different programs’ outcomes over time, and seeing who is most effective at improving theirs).
In that spirit, I offer my initial ironic thoughts on how people might have argued against the hugely impactful heart attack studies. (See, for example, an argument here that “the impact for patients was unclear because of heterogeneity in presentation and severity of illness for unselected admissions, and challenges in the definition of ‘specialist’ relative to individual clinical need” [commenting on a time-to-specialist measure].)
“If we produce data it will be used by our opponents.”
“If we produce data our funders will use it to reduce our budget, or reduce our unit payments.”
“An approach like this will mean that doctors will stop trying to help their patients.”
“Doctors and hospitals will stop treating the sicker people.”
“It’s too much of an administrative burden.”
“We report too much to funders already.”
“Different kinds of procedures are just so different that you cannot compare outcomes.”
“Survival is not the only important measure.”
“It’s not fair to judge doctors and hospitals because of failures in other parts of the system.”
“Sometimes patients are to blame for not following orders.”
My favorite, however, remains:
“Every heart is different.”
I have to admit that I too have been fearful of the impact of data on the credibility of directions that I believed were worth following. It is hard not to become invested in such directions, and harder still to balance the risks against the benefits.
I guess the moral is that while these are complex matters, we need to be careful not to let anxiety about downsides blind us to the huge advantages of data in general and outcome comparison in particular. In the end, it is only by all committing to a data-driven culture that we will build an access to justice system strong enough to take those risks in its stride, because the general case has been so strongly made — by the data itself.
Ideas on how to design an appropriate system of outcome comparison are welcome in the comments. (Indeed, one of the people who reviewed the draft blog pointed out that an alternative way of approaching the issue “would be by articulating proposed measures.”) It will surely be helpful to this discussion when the LSC-led process on improving data gathering publishes its results and recommendations.