Paul Krugman has a brilliant and sobering column on what he calls the Excel Depression.
Core idea: the austerity policies adopted worldwide were driven by academic work claiming that such policies are beneficial.
Finally, Ms. Reinhart and Mr. Rogoff allowed researchers at the University of Massachusetts to look at their original spreadsheet — and the mystery of the irreproducible results was solved. First, they omitted some data; second, they used unusual and highly questionable statistical procedures; and finally, yes, they made an Excel coding error. Correct these oddities and errors, and you get what other researchers have found: some correlation between high debt and slow growth, with no indication of which is causing which, but no sign at all of that 90 percent “threshold [claimed in the earlier work].” [Bold added.]
In response, Ms. Reinhart and Mr. Rogoff have acknowledged the coding error, defended their other decisions and claimed that they never asserted that debt necessarily causes slow growth. That’s a bit disingenuous because they repeatedly insinuated that proposition even if they avoided saying it outright. But, in any case, what really matters isn’t what they meant to say, it’s how their work was read: Austerity enthusiasts trumpeted that supposed 90 percent tipping point as a proven fact and a reason to slash government spending even in the face of mass unemployment.
It's scary, and it should lead all of us to check and recheck our work, particularly when our data are used in policy debates. It should also make us quick to rebut excessive claims drawn from ambiguous or uncertain data.
Moreover, as we move to dynamic triage and other more complex systems, the risks become greater and harder to monitor. We need to build in a wide variety of checking mechanisms. At the same time, we must remember that current systems are making errors all the time, and not become paralyzed.
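The spreadsheet error at the heart of this episode is exactly the kind of mistake a simple automated check can catch. Here is a minimal sketch in Python, with made-up numbers and a hypothetical `checked_mean` helper (not the actual Reinhart-Rogoff data or formulas), showing how a stale hardcoded range silently drops observations and how one assertion can flag it:

```python
# Hypothetical growth figures, one per country (illustrative only).
growth_by_country = {
    "A": 2.1, "B": -0.3, "C": 1.7, "D": 0.9, "E": 2.4,
}
values = list(growth_by_country.values())

# Spreadsheet-style bug: a hardcoded range covers only the first
# three rows, like a formula dragged over too few cells.
buggy_mean = sum(values[:3]) / 3

# Correct average over every observation.
correct_mean = sum(values) / len(values)

def checked_mean(data, used):
    """A minimal checking mechanism: refuse to report a mean unless
    every observation was actually included in the computation."""
    assert len(used) == len(data), (
        f"averaged {len(used)} of {len(data)} observations"
    )
    return sum(used) / len(used)
```

Here the buggy and correct means disagree, and `checked_mean(values, values[:3])` raises an `AssertionError` instead of quietly reporting a number. The point is not this particular helper but the habit: build the check into the pipeline so dropped rows fail loudly before the result reaches a policy debate.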