Does Medicaid reduce crime?
Adventures in evidence-based policy
A few days ago I did what any normal person does on a Wednesday night and took two printed-out NBER working papers to the local brewery to dig in. Both relate to evidence-based public policy.
One is related to Medicaid and its impacts on crime. The other focuses on 'nudge' experiments and what happens when you try to scale them. More on that in a future post.
But first, does Medicaid coverage reduce crime?
Medicaid’s impact on crime
This paper, “The Effect of Medicaid on Crime: Evidence from the Oregon Health Insurance Experiment” (Amy Finkelstein, Sarah Miller & Katherine Baicker), tackles a claim that’s become increasingly popular in policy circles: that Medicaid doesn’t just improve health or financial stability. It also reduces crime.

Why might we think that? Some studies suggest it does. And we know Medicaid improves mental health and helps people struggling with addiction. Plus, it improves people’s financial situation.
Theoretically, wouldn’t you expect people with more money, better mental health, and less addiction to commit less crime? Not a crazy theory.
Oregon’s 2008 Medicaid lottery gives the authors a rare chance to test that cleanly. Oregon randomly assigned thousands of people to a limited Medicaid expansion. Because the lottery created something close to a randomized experiment, you can actually see whether gaining coverage changes interactions with the criminal justice system.
Across a big sample of low-income adults, the basic finding is pretty stark: Medicaid just doesn’t move crime rates. Charges, cases, and convictions are basically identical between the lottery winners and non-winners. The confidence intervals are tight enough that they can rule out large effects. If insurance matters at all here, it’s modest.
Sometimes in a big general-population study like this, a null overall effect can mask a real effect in a sub-group. What’s striking here is that the null result holds even for the groups you’d most expect to be affected: people with prior charges and prior convictions (i.e. people already involved with the criminal justice system) look the same with or without Medicaid.
Against the grain
This cuts against a run of quasi-experimental work I mentioned above, which suggests that Medicaid does reduce crime. But those studies tend to focus on narrower, higher-risk groups (like those re-entering society after incarceration), and often use imperfect data (like crime reports with spotty coverage).
When you line up their effect sizes against the Oregon RCT, the Oregon confidence intervals essentially exclude most of the big, dramatic reductions other papers claim.
The upshot isn’t necessarily that “Medicaid doesn’t matter at all for crime.” It might be closer to “whatever benefits exist are concentrated in specific populations that are hard to measure at scale.” The Oregon experiment studies a general low-income adult group. Low-income people who qualify for Medicaid may have more contact with the justice system than the average person, but nowhere near as much as the reentry populations studied elsewhere.
It is worth noting that Medicaid still improves mental health, reduces financial strain, and increases outpatient care. Those effects are still real. They just don’t seem to translate into broad reductions in crime at the population level.
As clean causal evidence goes, this is about as close as we get to a definitive finding in social science. It’s mostly a story about limits, not hidden crime-fighting power.
Why evidence-based policy matters
This finding fits a broader pattern I've been tracking after reading Vital City/Niskanen’s illuminating symposium on Megan Stevenson’s work last year.
In short, evidence-based policy is vital, because a lot of what we think works in the social world doesn’t actually. I wrestled long-form with these ideas here and here, but here’s a quick distillation:
When we use the highest quality methods (e.g., RCTs) to evaluate the impact of social policies, they usually don’t work.
Even when we have really high quality evidence for a particular program, it usually fails when we try to scale it up or try it in a new place.
As Stevenson puts it in her excellent paper, the interventions that fail typically rely on what she calls a “cascade effect”: the idea that if we fix one area of someone’s life, it will have enduring ripple effects and set them on a new path. Basically, a domino effect.
Put another way, when we try to fix someone’s health by improving education, it doesn’t usually work. Maybe we can focus on their health and improve their health, or focus on their education and improve their education, but the kind of spillover effects that make sense in theory often don’t pan out in practice.
OpenResearch's recent studies of Universal Basic Income showed the same pattern. They found that cash transfers gave people more financial security and allowed them to spend more, but the hoped-for effects on mental health, children’s education, and other outcomes failed to materialize.
I unpacked these findings in another series of blog posts (here’s post 1 and post 2), and I think they pose a significant challenge to very common frameworks for understanding poverty and health, particularly interventions targeted at Social Determinants of Health.
How does this study fit in?
This high quality study supports Megan Stevenson’s critique of “cascade effects.” The story is the same as in many other policy failures: improving people’s health insurance didn’t change their criminal activity. It’s high quality evidence, written up rigorously by authors who engage well with counter-arguments, and it’s worth taking seriously.
More targeted interventions might still work, like focusing on high quality health insurance coverage for those at highest risk of committing crimes. But that hypothesis also looks worse after this paper.
To be clear, I still think expanding Medicaid is good. But we shouldn’t sell it as a panacea for other social problems. If we set aside the hope for a cascade effect, we should simply expect health policy to improve health. That’s perfectly fine, but we should stop overpromising.