Image source: Urban Institute

Dr Paul Duignan on Outcomes (bit wonkish):

Paul Krugman today argues that Obamacare has had an impact on the percentage of uninsured people in the US. He provides classic examples of the type of reasoning behind two of the seven impact evaluation designs that are used within outcomes theory to *attribute* improvements in an outcome to an intervention: *time-series designs* and *constructed comparison group designs*.

Outcomes theory is an approach which provides a formal and systematic way of thinking about the problems you face when working with outcomes and the interventions that it is hoped will improve them. What we are discussing here is one specific topic in outcomes theory: how and when we can attribute a change in an outcome to a particular intervention. In the outcomes theory Outcomes System Diagram, impact evaluation designs are the *second* place you go when you want to attribute improvements in an outcome to an intervention. The principle from outcomes theory that covers this is discussed formally in this link.

Using an outcomes theory approach, your *initial* tactic when wanting to attribute outcomes to an intervention such as Obamacare is to see whether you have a *controllable indicator* reaching to the top of your outcomes model. Usually you won't, but it's worth thinking this way if you want to be systematic in your approach to outcomes work. If you do have a controllable indicator reaching high up your outcomes model (a visual model setting out all of your outcomes and the steps you believe will lead to them), then attribution of improvements in outcomes is simple.

Simply measuring the controllable indicator and showing that it has occurred establishes attribution of an improvement in the outcome *by virtue of its measurement*. By definition, a controllable indicator is caused by the intervention (that is what *controllable* means), so there is no possible dispute about attributing it to the intervention. If it sits at the 'top' of your outcomes model, then you've established high-level attribution.

An example of an intervention with a controllable indicator near the top of its outcomes model is immunization for diseases where a course of immunization gives a high rate of protection (e.g. measles, mumps, and rubella, where more than 95% of children who complete a course of immunization are protected). This means that just measuring the controllable indicator that you've immunized a certain number of children proves, by virtue of its measurement, that you've reduced morbidity amongst that group (the high-level outcome). In other words, by just measuring the number of children immunized you have achieved attribution of improvements in the high-level outcome to your intervention, end of story.
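The immunization logic can be sketched in a few lines. This is a minimal illustration only; the function name, efficacy figure, and counts are invented for the example (the text says "more than 95%", so 0.95 is used as a stand-in):

```python
# Hypothetical illustration of attribution "by virtue of measurement" with a
# controllable indicator. All numbers here are invented for the example.

EFFICACY = 0.95  # stand-in protection rate for a full course (e.g. MMR)

def protected_children(num_immunized: int, efficacy: float = EFFICACY) -> int:
    """Measuring the controllable indicator (children immunized) directly
    implies the high-level outcome (children protected), because the
    immunized -> protected link is established by prior clinical evidence,
    not by the evaluation itself."""
    return round(num_immunized * efficacy)

print(protected_children(10_000))  # 9500 children protected
```

No comparison group or time series is needed here: the measurement of the indicator itself carries the attribution.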

Of course, in the case of proving whether Obamacare is affecting the outcome of reducing the percentage of uninsured people in the U.S., there will be a range of factors influencing this outcome. By definition, this makes the percentage of uninsured people a *not-necessarily controllable indicator* when looking at Obamacare as an intervention. Because of this, merely measuring that the number of uninsured people is falling, as is the case with Obamacare, does not, in itself, establish that Obamacare is causing this to happen. In other words, we have an attribution problem.

In such situations, outcomes theory tells us that there's only one other tactic we can employ to establish attribution: looking at what is possible in terms of more one-off impact evaluation designs. (If you did not go to the link before where this is set out formally as an outcomes theory principle, have a look now if you have time.)

Paul Krugman in his blog post is trying to argue that Obamacare is making a difference. He's attempting to counter those who claim that the reduction in the number of uninsured people is simply a result of the economy improving and not a result of Obamacare. In outcomes theory terms, he is trying to 'attribute' a reduction in the uninsured to the introduction of Obamacare.

He takes the graph above which shows a fall in the number of uninsured and in response to those who are saying: 'It's not Obamacare, it's the improving economy', he replies: 'But it isn't. The decline is too sharp, too closely associated with the enrollment period to be driven by the at best gradual improvement in the job market.'

This is classic *time series analysis* reasoning. The logic of the time series approach to impact attribution evaluation is that you have a series of observations *plus* a clear point in time when an intervention commenced. This means that if you look at a graph of a high-level outcome at the point when the intervention started (or after some credibly argued lag required for the intervention to kick in) and you see the outcome improving, then you can claim that you've established attribution because of this coincidence of timing. There are various ways that statistics can be used on time series with lots of data points, but the basic reasoning is what we see Krugman using here.

He then adds to his line of impact attribution argument by adopting the rationale behind another of the seven possible impact evaluation design types used in outcomes theory - *constructed comparison group* impact evaluation designs. He takes a graph produced by the Urban Institute (where, by the way, I had the pleasure of undertaking a Fulbright Senior Scholar Award a few years ago). This graph breaks down reductions in those who are uninsured based on whether states are helping implement Obamacare or blocking it. This is done by looking at whether they are expanding Medicaid (*helping states*) or not (*blocking states*).

Image source: Urban Institute

This graph provides a conceptually different line of argument attempting to attribute improvements to Obamacare. It allows a comparison arising from a 'naturally occurring experiment' which is one way in which a constructed comparison group impact evaluation design can be used. Naturally occurring experiments can be contrasted to specifically set up *true randomized experiments* (the first type of impact evaluation design within outcomes theory) in that there's no experimenter who assigns units to either receive the intervention or to remain as untreated controls.

Looking at the Urban Institute graph, Krugman's logic here is that: '. . . an improving economy can't explain why the decline in uninsured is three times as large in pro-Obamacare states as it is in anti-reform states.'

Note that from a technical point of view, although the graph shows a series of observations, this constructed comparison group design does not have to rely (in contrast to the time series design) on multiple measurements over time. It could, in theory, just use a 'before and after' observation.
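The 'before and after with a comparison group' logic is what statisticians call a difference-in-differences. Here is a minimal sketch of that reasoning; the function and the percentages are invented for illustration, not taken from the Urban Institute data:

```python
# Minimal difference-in-differences sketch of the constructed comparison
# group logic. All percentages are invented for illustration.

def did(treated_before: float, treated_after: float,
        control_before: float, control_after: float) -> float:
    """Change in the treated group minus change in the control group.
    The control group's change stands in for what would have happened
    to the treated group anyway (e.g. from an improving economy)."""
    return (treated_after - treated_before) - (control_after - control_before)

# % uninsured: states expanding Medicaid vs. states blocking it (invented,
# with the treated decline three times the control decline, as in the quote).
effect = did(treated_before=16.0, treated_after=10.0,   # helping states
             control_before=16.0, control_after=14.0)   # blocking states
print(effect)  # -4.0
```

The shared economic trend appears in both groups and cancels out of the difference; what remains (-4.0 percentage points here) is the decline beyond what the comparison states experienced.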

Arguments about impact attribution, where, as is often the case, you don't have a controllable indicator near the top of an outcomes model (as there is in the case of immunization), always have to look at the possibility of impact attribution evaluation for establishing attribution. You can rely either on a single impact attribution design or, as here, on a combination of designs, which is the basis of the argument Paul Krugman advanced today.

Paul Duignan, PhD. You can follow me on Twitter.com/PaulDuignan or contact me here. Discuss this post in the Linkedin DoView Community of Practice at http://tinyurl.com/doviewplanningln.

Back to the DoView Blog.
