126 A Preference for Difference-in-Difference over ANCOVA in Non-Random Designs

Schedule:
Friday, January 13, 2017: 3:30 PM-5:00 PM
Riverview I (41st floor) (New Orleans Marriott)
Cluster: Research Design and Measurement
Speaker/Presenter:
Roderick A. Rose, PhD, University of North Carolina at Chapel Hill
Significance. Social work research tests interventions that promote social change, which is fundamentally a causal question. To draw causal inferences from statistical associations, researchers typically rely on randomized designs. When randomization is not possible or not ethical, non-random designs are used instead, but the methods chosen must fit how the data were generated. If the non-random intervention is associated with measured confounders of the outcome, it is straightforward to condition the estimation of the treatment effect on those confounders by including them in a regression model. Trouble occurs when confounders are not measured.
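
A minimal sketch of that kind of covariate adjustment, assuming a hypothetical data-generating process; the variable names (treat, confounder, outcome) and parameter values are illustrative, not taken from the session itself:

```python
# Sketch: adjusting for a measured confounder by including it as a
# covariate in an OLS regression. All names and values are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)                              # measured confounder
treat = (confounder + rng.normal(size=n) > 0).astype(float)  # assignment depends on it
outcome = 0.5 * treat + 1.0 * confounder + rng.normal(size=n)

X = sm.add_constant(np.column_stack([treat, confounder]))
fit = sm.OLS(outcome, X).fit()
print(fit.params[1])  # estimate of the 0.5 treatment effect, confounder held fixed
```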

One popular design that is discussed widely in major design texts is the two-period pretest-posttest, or ANCOVA, design. Implicit in the ANCOVA design is the critical assumption that any unmeasured confounders of the posttest are completely mediated by the pretest. By conditioning on the pretest, we assume that this confounding is eliminated and that the ANCOVA treatment effect is therefore unbiased.
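
A minimal sketch of the two-period ANCOVA estimator under a hypothetical setup in which the assumption holds; the names pre, post, and treat are illustrative assumptions:

```python
# Sketch: ANCOVA regresses the posttest on treatment, conditioning on
# the pretest. Names and parameter values are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
pre = rng.normal(size=n)
treat = (pre + rng.normal(size=n) > 0).astype(float)  # assignment related to pretest
post = 0.5 * treat + 0.7 * pre + rng.normal(size=n)

X = sm.add_constant(np.column_stack([treat, pre]))
ancova = sm.OLS(post, X).fit()
print(ancova.params[1])  # treatment effect; unbiased only when the pretest
                         # fully mediates the unmeasured confounders
```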

This assumption may be faulty in many settings. Intuition suggests that these confounders may have a direct effect on the posttest; prior research supports this, and the pretest is often a poor proxy for unmeasured correlates of the treatment effect. Worse yet, the pretest and posttest may be affected by a common unmeasured variable. Causal (directed acyclic) graphs show that such a common cause, though unrelated to treatment, can become associated with it once the pretest, a collider on that path, is conditioned on. In effect, conditioning on the pretest has the potential to make the bias worse.

An alternative method, which has become a standard approach in econometrics and is very simple to implement in two-period designs, is to estimate the treatment effect with difference-in-difference (DD). DD controls for all unmeasured correlates of the treatment effect except those that are unique to the treatment group in the treatment period. Although this caveat underscores that DD is not a perfect method, it is an unambiguous improvement over ANCOVA in these non-random settings.
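
A minimal sketch of the two-period DD estimator in both its difference-of-gains form and the equivalent interaction-regression form; function names and arguments are hypothetical:

```python
# Sketch: two-period difference-in-difference. The arrays pre, post,
# and treat (0/1 group indicator) are hypothetical inputs.
import numpy as np
import statsmodels.api as sm

def did_means(pre, post, treat):
    """DD as a difference of gains: (treated gain) - (control gain)."""
    t = treat == 1
    return (post[t].mean() - pre[t].mean()) - (post[~t].mean() - pre[~t].mean())

def did_regression(pre, post, treat):
    """Equivalent long-format regression: the coefficient on the
    group-by-period interaction is the DD estimate."""
    n = len(treat)
    y = np.concatenate([pre, post])
    group = np.tile(treat, 2)
    period = np.repeat([0.0, 1.0], n)
    X = sm.add_constant(np.column_stack([group, period, group * period]))
    return sm.OLS(y, X).fit().params[3]
```

Both forms give the same point estimate in the two-period, two-group case; the regression form generalizes to additional covariates and periods.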

Content. This is an applied workshop focusing on the use of DD in non-random intervention research settings. I will review prior research, provide theoretical evidence of the potential for problems with ANCOVA, and present findings from Monte Carlo simulations that verify this potential. These simulations provide insight into the magnitude of the bias that can be expected under a range of associations among the variables described above; a sketch of such a simulation appears below.
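
The abstract does not report the simulation design, so the following is only a sketch of the kind of comparison described, with the data-generating process and all parameter values assumed purely for illustration: an unmeasured variable u drives treatment assignment, the pretest, and the posttest, so the pretest is just a noisy proxy for u.

```python
# Monte Carlo sketch comparing ANCOVA and DD when an unmeasured common
# cause u affects both test occasions and is associated with treatment.
# The DGP and all parameter values are assumptions for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
TRUE_EFFECT, n, reps = 0.5, 500, 1000
ancova_est, dd_est = [], []

for _ in range(reps):
    u = rng.normal(size=n)                              # unmeasured common cause
    treat = (u + rng.normal(size=n) > 0).astype(float)  # confounded assignment
    pre = u + rng.normal(size=n)                        # noisy proxy for u
    post = TRUE_EFFECT * treat + u + rng.normal(size=n)

    # ANCOVA: condition on the pretest
    X = sm.add_constant(np.column_stack([treat, pre]))
    ancova_est.append(sm.OLS(post, X).fit().params[1])

    # DD: difference of gains; the time-invariant u cancels
    t = treat == 1
    dd_est.append((post[t].mean() - pre[t].mean())
                  - (post[~t].mean() - pre[~t].mean()))

print("ANCOVA bias:", np.mean(ancova_est) - TRUE_EFFECT)
print("DD bias:    ", np.mean(dd_est) - TRUE_EFFECT)
```

Because u is time-invariant and enters both periods additively, it cancels in the gain scores, so DD recovers the assumed effect while ANCOVA retains residual confounding from the imperfect pretest proxy.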

Implications. Coleman’s work on the effect of private schooling on achievement was groundbreaking but flawed because it relied on poor assumptions about the pretest. Lord’s paradox, in which one set of data can give different answers depending on whether ANCOVA or difference scores are used, has been known for some time. Social work research is replete with promising interventions studied under non-random designs using ANCOVA in settings where DD would be better. Although there are tradeoffs, and ANCOVA is acceptable in randomized designs, DD should become a standard part of doctoral education and should be the preferred method in non-random designs.
