To understand the importance of this, it helps to know that in the US there is an obsession with government action crowding out private sector provision of goods, coupled with the notion that crowding out is universally bad. It is true that if government provides something at low or no charge, it can take business away from beneficial private markets: even if the market is providing a better product at a lower social cost, people might abandon the private alternative once the government starts giving it away. But crowding out can be beneficial, since for many goods, like medical care financing, government can do a better job than the private sector. Frankly, it would be great if employer-provided health insurance, which is incredibly inefficient (it costs more than it would cost government and cripples many businesses with a huge competitive disadvantage), were entirely wiped out by a government alternative.
But the obsessive worry about crowding out reaches such crazy extremes that there are those on the political right who argue that if only the government would stop providing highways and schools, the private sector could do a better job. So the press in the US, which skews to the right, picked up on the McKinsey study and touted it as evidence that medical financing reform was bad.
It quickly became apparent that McKinsey could not back up their claim. Greg Sargent of the Washington Post posted an early summary:
But as a number of critics were quick to point out, McKinsey’s finding is at odds with many other studies — and the company did not release key portions of the study’s methodology, making it impossible to evaluate the study’s validity.

Based on that, Krugman commented:
There’s now been a new twist in this story. I’m told that the White House, as well as top Democrats on key House and Senate committees, have privately contacted McKinsey to ask for details on the study’s methodology. According to an Obama administration official and a source on the House Ways and Means Committee, the company refused.
One has to assume that there was something terribly wrong with the study. At any rate, nobody should be citing it until or unless McKinsey comes clean. Oh, and if you ask me, this is a lot more important than some sex scandal.

Krugman went on to note Brian Beutler's post:
But multiple sources both within and outside the firm tell TPM the survey was not conducted using McKinsey’s typical, meticulous methodology. …. And that’s created a clamor within the firm at high levels to set the record straight. “This particular survey wasn’t designed in a way that would allow it to be peer review published or cited academically,” said one source familiar with the controversy. …. Reached for comment today, a McKinsey spokesperson once again declined to release the survey materials….

At The Incidental Economist blog (which I recently picked on in my not-yet-finished comments about the cost of smoking, but will quote positively here), one author called it "Dangerous faux research" and wrote:
Look, anybody can say what they like on a topic. They can put out a glossy report. They can claim they did a “survey” to make it sound scientifically rigorous. They can talk to the media all about it. They can stand behind their good name and reputation, if they have one. But when what they’re saying runs counter to previous experience and other credible estimates, they’d better have a good explanation. But, McKinsey has no explanation. None. They’re stonewalling.
….You know what would happen to me if I tried that? Suppose I sent my new results to a journal, results that were very different from that of others, and said, “Trust me. They’re good.” Well, my paper would be laughed out of the editorial office. And that’s as it should be. That would not be research. That would be the opposite of research. That would be indistinguishable from making things up.

The primary reason I am writing about this is not to point out a bit of bad research about health economics, but to contrast the reaction with the typical reaction to research in health science itself, which often produces results that are equally unexpected, consistently fails to report key bits of the methodology, and whose authors are rarely honest enough to respond to requests to fill in what is missing. You are familiar with the reaction to that: nothing.
We generally have about as much idea about what produced health science research results as we do about what McKinsey did. And though I largely agree with that last quote – there is plenty of junk science published in health economics journals, and lots of study inputs come from black-box sources – a study with mystery methodology is not exactly the same as making things up, though we cannot know for sure. It is certainly true that anyone can put out a glossy report, trade on their reputation, and make a bad survey sound like science – that pretty much sounds like health science to me. The only thing missing in health science is the apparent consternation, evident in the previous quote, of an institution concerned with its reputation; I can think of only a handful of such cases ever.
In fairness, the information missing from the McKinsey study is not as subtle as the problems with most health research. McKinsey omitted extremely basic information, which is all the more glaring given that their study was basically the answer to a single yes-or-no question. For a list, see this story in Time (which another Incidental Economist blogger quite amusingly described with: "Kate Pickert committed an actual act of journalism, and tried to get McKinsey to give her the necessary information. So far, they have refused. Her whole piece is worth reposting, so just go read it now.").
Still, health science reports are sometimes missing this same information, and they are almost always missing equally critical information. Any expert can recognize this, but the reaction is pretty much nothing: reporters just report the new bit of "truth". Subject-matter expert bloggers might point out that the result is odd, but they ignore the failure to report the methods. As for the government demanding more information, yeah right. The government accepts black-box health analyses as readily as the press does and, indeed, produces more of them than anyone else. It seems the only time they bother to probe is when the study result has no practical implications but might affect their political bickering.
A more subtle point is just as damning. Notice that the fight over the McKinsey results arises because their estimate is substantially larger than previous ones. When health research is reported, this is generally not even noticed. The review of previous studies in a new research report, let alone in the press reports about it, almost never distinguishes between big and small effects. The common statement, "this result is consistent with previous studies that found an elevated risk…", might mean that the other studies estimated a completely different level of risk (much lower or higher), but only the fact that the new result is also elevated is considered. By the standards of health science, the McKinsey report would not have been controversial; it would have been "consistent with" previous studies showing that some employers will cut coverage.
It is pretty clear that the current fight is being carried out by pundits interested in how we pay for health care who have no clue that the problems with the McKinsey study are pretty much de rigueur for the studies used to decide what to do with that money.