Edit: In retrospect, this post might give some readers the impression that I think something is wrong with the study (the unqualified comparisons to R-R, the general cynicism, etc.). It's more a concern about the way journalists depict nuanced statistical realities. Also, I didn't consider the possibility that Medicaid isn't all it's cracked up to be. This study brings some harsh truths to bear, and maybe further consideration is required. But I'll say this: for some, Medicaid is worth it just for the risk insurance it provides the poor (who can then invest in equity markets, helping entrepreneurs and general welfare). Or, as Paul Krugman points out, fire insurance doesn't prevent fires.
So by now everyone has probably heard of the oh-so-devastating Oregon Health Study (OHS). Before I begin, let me point out that this is exactly the kind of research we need to inform evidence-based policy. But science is nuanced with caveats, and those caveats never make it through to articles published by statistically illiterate journalists. Poor bloggers and health researchers are left to do garbage cleanup, by which time the general public already believes nonsense. Note, there are some excellent pieces by the likes of Megan McArdle and Peter Suderman. But standard op-eds aren't nearly so in-depth, and that's what's read by the mass public.
It’s also important to note that if we based all our policy on RCTs, we’d have substantially more robust daycare programs, consumer protection laws, and, oh, did I mention, preventative care. Wait, what?
Good science is a lot more sophisticated than “proving things”. Let’s say I have a set of beliefs regarding the socioeconomic and medical effects of Medicaid. I state a null hypothesis, which is that Medicaid does not have a significant (more on this later) effect on my variables of interest, and a maintained hypothesis, which is that there is such an effect of the policy on those variables.
To prove something works, we have to significantly reject the null in favor of the maintained hypothesis. OHS failed to do this. But devious journalists peddling myths will try to tell you this wonderful study proved that Medicaid does not work. Actually, it just couldn’t prove that it does. In a world of informed people: big difference. In the real world: mass communication of misinformation.
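The distinction hangs on statistical power: an underpowered study will routinely fail to reject a null even when a real effect exists. Here’s a minimal sketch of that arithmetic for a two-arm trial. The effect sizes and sample sizes below are my own illustrative assumptions, not OHS numbers:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_prop(p_treat, p_control, n_per_arm, z_crit=1.96):
    """Approximate power of a two-sided test for a difference in proportions."""
    se = math.sqrt(p_treat * (1 - p_treat) / n_per_arm
                   + p_control * (1 - p_control) / n_per_arm)
    z = (p_treat - p_control) / se
    return norm_cdf(z - z_crit) + norm_cdf(-z - z_crit)

# Hypothetical effect: coverage lifts a good-outcome rate from 40% to 44%.
print(power_two_prop(0.44, 0.40, 500))     # roughly 0.25: "not significant" most of the time
print(power_two_prop(0.44, 0.40, 20_000))  # near 1.0: the same real effect is almost always detected
```

With 500 people per arm, a perfectly real 4-point improvement gets flagged as significant only about a quarter of the time. “Couldn’t reject the null” is what failure looks like in a small sample, not proof the effect is zero.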
Let’s synopsize the study. Medicaid (quoted):
- Reduced observed rates of depression by 30%.
- Virtually eliminated out-of-pocket catastrophic medical expenditures.
- Increased the probability that people report themselves in good to excellent health (compared with fair or poor health) by 25 percent.
- Increased the likelihood of using outpatient care by 35 percent, using prescription drugs by 15 percent, but did not seem to have an effect on use of emergency departments.
- Medicaid significantly increased the probability of being diagnosed with diabetes.
Okay, and what “didn’t” it do?
- Medicaid has no statistically significant effect on measured blood pressure, cholesterol, [a measure of diabetic health]
Woah… Are you actually telling me journalists can totally distort scientific findings and influence policy in a severely detrimental way?! Of course, anyone who paid attention to the whole Reinhart & Rogoff debacle isn’t going to be surprised. But we have the social memory of a duck (I’m assuming that’s low; please correct me if I’m wrong). The myth-peddling will begin. The Wall Street Journal which, by the way, might as well draw graphs with crayons…
…will tell us that Obamacare is all for nothing. David Brooks will tell us that our policy should equal (Obama + Heritage)/2 because, hey, if it’s centrist it must be good. We’ll be told that the government is irresponsibly spending money, helping big corporations while the poor man suffers. Doom I say. DOOM.
But this should actually – if marginally – strengthen the argument that Medicaid is good.
Here’s Brad DeLong with a breath of serious sensibility:
Having [OHS] I conclude […] that the jury is still out on most of the effects, [and] that the success of the Medicaid expansion in reducing depression and improving financial security are rock-solid [and] by themselves worth more than the net cost of the expansion.
Therefore I wish to withdraw my tweet from last night:
@tylercowen @petersuderman the case for Medicaid expansion is *marginally* less strong than I had thought…
And substitute for it, instead, the correct:
.@tylercowen @petersuderman CORRECTION: I have read the paper and evaluated its power, the case for Medicaid is stronger than I had thought.
Please read this again. And if echoes of “Obama that Kenyan Communist…” still ring in your head, read it again. And, once you’re done, read the absolutely fantastic piece by Aaron Carroll and Austin Frakt:
So chill, people. This is another piece of evidence. It shows that some things improved for people who got Medicaid. For others, changes weren’t statistically significant, which isn’t the same thing as certainty of no effect. For still others, the jury is still out. But it didn’t show that Medicaid harms people, or that the ACA is a failure, or that anything supporters of Medicaid have said is a lie. Moreover, it certainly didn’t show that private insurance or Medicare succeeds in ways that Medicaid fails.
Read it through. It’s worthwhile. If your opinions are derpish it will fix them. And if they’re not, you can smash the next person who cites “that R-R OHS study” as evidence for “fiscal consolidation ending Medicaid”.
And there’s also the important point that statistical significance isn’t always… what’s the word… significant? For one, OHS is a two-year study, and preventative healthcare isn’t an overnight thing: its payoff arrives, and is cheaper for society, in the long run. Oh, and even if it doesn’t cost less, it is ipso facto better. It’s good not to get sick. It’s also (believe it or not!) good when poor people don’t get sick.
Stop fetishizing statistical significance. I can find a significant correlation between the number of Facebook users and the Greek debt crisis. Conversely, a policy can be causal and important without any observed significant correlation. It’s common knowledge that correlation doesn’t imply causation, but it’s much less appreciated that causation doesn’t guarantee observable correlation, either. Let’s create an example from this study. (Adapted from a real-life example from Steve Pizer.)
x = some public health outcome [binary]
y = Medicaid [binary]
z = something else really important [binary]
Now, how do we construct this into a model? Truth table time!
Okay, so: x = (1 - z)*y + error
Cool, but why does this matter? If we ignore (that is, don’t estimate) z, we may see no correlation between Medicaid and a given public health outcome. But there is plenty of economic theory and empirics telling us that “no relationship” is not the case. We have, therefore, a strong experimental and theoretical prior that the relationship between x and y is important. And if we observed z, statistical techniques could tell us the real association between Medicaid and public health. (Pizer’s original example was about illness and death.)
So actually, OHS should just inform us that there are other, important and significant variables out there. Maybe we should spend some time thinking about what those might be. But we certainly shouldn’t throw out years of evidence and belief based on one study, that might have a totally different interpretation. Has no one stopped to think, “Hmmm. Maybe we’re not measuring something important?”
Or, much more importantly, have any journalists stopped to ask, “Hmmm. What are the caveats here?” Significance is not the be-all and end-all. As Justin Wolfers notes, with a big enough data set we can find questionable but significant correlations between arbitrary variables. It’s kind of like a normal number: somewhere in its decimal expansion, every finite string of digits appears. This means Pi (assuming, as conjectured, that it’s normal) contains your life story. It predicts the future. And the past. Somewhere. But that doesn’t mean much (it is beautiful, though).
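Wolfers’ point is easy to reproduce. Screen enough pure-noise variables against any outcome and roughly 5% of them will clear the standard significance bar by chance alone. A hedged sketch (the sample and variable counts are arbitrary choices of mine):

```python
import math
import random

random.seed(2)
n, k = 200, 400    # 200 observations; 400 noise variables unrelated to the outcome

outcome = [random.gauss(0, 1) for _ in range(n)]

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# |r| > 1.96 / sqrt(n) is roughly the two-sided 5% significance cutoff.
cutoff = 1.96 / math.sqrt(n)
hits = sum(
    abs(corr(outcome, [random.gauss(0, 1) for _ in range(n)])) > cutoff
    for _ in range(k)
)
print(hits, "of", k, "pure-noise variables test 'significant'")  # expect about 5%, i.e. ~20
```

Twenty-odd “significant findings”, every one of them garbage. Without theory to filter them, significance stars are cheap.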
All this underscores the importance of theory. I know people tend to bash this stuff in favor of “hard empirics”. But experiments mean nothing without a framework in which to understand them. Sometimes if it feels wrong, it is wrong. And none of this is to mention the big correlations between Medicaid, financial security, and depression. A 30% decrease in observed rates of depression. That’s big news. That’s fewer Adam Lanzas. This is, as a stoner would say of his joint, good shit.
And when ACA rolls out, it will just get better. Watch out for the ideologues peddling myths and the journalists that let them.