Editor’s note: This article was originally published here by contributing writer Emily Lu on her blog, Medicine for Change.
Like every other health policy nerd out there, I’ve been following the debate over the Oregon Medicaid experiment results about as closely as most Chicagoans followed the Bulls game. For those not up to speed, here’s a quick replay:
- Due to financial constraints, Oregon was able to expand Medicaid enrollment by only about 10,000 people in 2008, even though many more were eligible, so the state allocated the new slots by lottery.
- Social scientists quickly realized that the lottery created the closest thing anywhere to a true “randomized controlled trial” of Medicaid (or of any health insurance), and began tracking the outcomes of the 10,000 enrollees who won the lottery and of some of those who did not.
- They published their results last week, and now everyone is convinced either that Medicaid definitely works or that it definitely doesn’t. Some have described the study as a Rorschach test for whether you are someone who believes in Medicaid (and, by extension, health care reform) or someone who doesn’t.
The results, put most simply in the study’s own words, are that the authors:
found no significant effect of Medicaid coverage on the prevalence or diagnosis of hypertension or high cholesterol levels or on the use of medication for these conditions. Medicaid coverage significantly increased the probability of a diagnosis of diabetes and the use of diabetes medication, but we observed no significant effect on average glycated hemoglobin levels or on the percentage of participants with levels of 6.5% or higher. Medicaid coverage decreased the probability of a positive screening for depression (−9.15%; p=0.02), increased the use of many preventive services, and nearly eliminated catastrophic out-of-pocket medical expenditures.
As others have already blogged extensively, there are very good mathematical reasons why the study did not reach statistical significance even though many of the outcomes it measured moved in the hoped-for direction. Mainly, the study was underpowered (that is, it did not include enough people to detect effects of plausible size), particularly since the experimental group turned out to be relatively healthy: only about 7% had hypertension and only about 5% had diabetes! And that is before getting to the numerous other limitations revealed when one looks at the details of the study design in the appendix. The rough sketch below gives a sense of the power problem.
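For readers who want a concrete feel for what “underpowered” means here, below is a back-of-the-envelope power calculation in Python using a standard two-proportion z-test approximation. Every number in it (the prevalence, the effect size, the sample size per arm) is an illustrative assumption, not a figure from the Oregon study; the point is only to show how a low-prevalence condition plus a modest effect leaves a trial with little chance of reaching statistical significance.

```python
# Illustrative only: approximate power of a two-sided two-proportion z-test.
# The prevalence, effect size, and sample size are assumptions chosen for
# illustration, not numbers taken from the Oregon Medicaid study.
from math import sqrt
from scipy.stats import norm

def two_proportion_power(p_control, p_treated, n_per_arm, alpha=0.05):
    """Approximate power to detect a difference between two proportions."""
    diff = abs(p_treated - p_control)
    se = sqrt(p_control * (1 - p_control) / n_per_arm
              + p_treated * (1 - p_treated) / n_per_arm)
    z_crit = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    return norm.cdf(diff / se - z_crit)

# Suppose a condition affects ~7% of the uninsured group and coverage trims
# that to ~6.3% (a 10% relative reduction), with ~6,000 people per arm.
print(round(two_proportion_power(0.07, 0.063, 6000), 2))
# -> roughly 0.3, far short of the conventional 80% power target
```

Under those made-up assumptions, the chance of detecting a real effect is only about one in three: that is the sense in which a rare condition in a relatively healthy sample leaves a study underpowered.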
Other writers, especially physicians, have simply thrown up their hands and said, “Look, we never said that health insurance was anything more than a first step toward better health care for all.” It reduces the financial burden of care and prevents catastrophic out-of-pocket medical expenditures. It increases access to preventive services and starts people on the necessary medications for chronic conditions like diabetes. What else could you expect from simply providing health insurance, when we haven’t yet solved the puzzles of health care quality or care coordination?
I love the world of health services research, and I respect all the commentators I’m linking to here, but I can’t help feeling that all this policy-wonk inside baseball is missing the point. How am I supposed to explain this to my patients, or to my friends who aren’t also social scientists? Am I supposed to walk them through how this highly limited study with gold-standard methodology came up with this convoluted result and what it does (and doesn’t) mean?
I think there are two fundamental problems here. First, there is the over-emphasis on medical disease outcomes as the primary result of the study. Yes, diabetes and hypertension control are important. Sure, we would all like to see evidence that coverage lowers people’s Framingham risk scores for heart disease. Yet all of these outcomes are arguably less important, given their low prevalence in this population, than the condition that was relatively common here: depression. And guess which leading cause of death and disability in the US this study was actually seen to address? Depression. As the study authors note, there was a statistically significant decrease in the proportion of patients screening positive for depression among those with Medicaid compared with those without. This is no small feat, and it should be celebrated rather than ignored.
Second, and perhaps more importantly for students of medicine like myself, the arguments around this study serve as a reminder of how far academia can drift from the point: what does all this mean for patient care? At the end of the day, do I, as a future primary care doctor, care more about how much health care costs and whether it affects heart disease specifically, or about whether my patients will be able to access care when they need it and derive some clinical benefit from it? (My cardiology colleagues may see it differently, but they would understand if the example were another specific disease, like cancer.) The policy wonks may focus on costs and heart disease, but as a health care provider, I need to focus on what enables all my patients to access better health care.
I applaud the study authors for the careful research they have done, and I welcome continued discussion of how it informs future research. However, for physicians and others trying to decide, based on this evidence, what is or isn’t true, we need to remind ourselves that equitable health care access for all is still the right thing to support. These results may be one more piece of the research literature that helps us understand what that means, but they don’t change what the right thing to do is, for all our patients.