Methods: This presentation focuses on the strongly ignorable treatment assignment assumption and the stable unit treatment value assumption, two fundamental assumptions embedded in all evaluation studies. It shows why quasi-experimental studies are likely to violate both assumptions and why regression fails in the analysis of data generated by such designs (i.e., the presence of an endogeneity problem). The presentation discusses four methodological strategies for enhancing the internal validity of quasi-experimental studies: propensity score methods, particularly recent advances detailed in the second edition of Propensity Score Analysis (Guo & Fraser, 2014); the instrumental variable approach; the regression discontinuity design; and the interrupted time series design. Because the design of a quasi-experimental study is also key, the presentation discusses six design tasks: conceptualizing the observational study as having arisen from a complex randomized experiment; understanding the hypothetical randomized experiment that led to the observed data set; evaluating whether the data set's sample sizes are adequate; knowing who made decisions about treatment assignment and what measurements were available to them; examining whether key covariates are measured well; and evaluating whether balance can be achieved on key covariates. Finally, the presentation reviews common pitfalls in quasi-experimental studies that fail to control for selection bias.
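The first of the strategies above, propensity score matching, can be sketched in a few lines of code. The illustration below is a minimal, hypothetical example with invented data and function names, not the procedure detailed in Guo & Fraser (2014): it fits a simple logistic model of treatment on a covariate by gradient descent, then performs greedy 1:1 nearest-neighbor matching without replacement on the estimated scores.

```python
import math
import random

def fit_logistic(X, z, lr=0.1, epochs=2000):
    """Fit logistic regression of treatment z (0/1) on covariates X
    by batch gradient descent; returns [intercept, coefficients...]."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)
    for _ in range(epochs):
        grad = [0.0] * (p + 1)
        for xi, zi in zip(X, z):
            eta = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            pi = 1.0 / (1.0 + math.exp(-eta))   # predicted P(z=1 | x)
            err = pi - zi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

def propensity(w, xi):
    """Estimated propensity score for one observation."""
    eta = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-eta))

def nn_match(scores, z):
    """Greedy 1:1 nearest-neighbor matching without replacement:
    each treated unit gets the unused control closest in score."""
    controls = {i for i, zi in enumerate(z) if zi == 0}
    pairs = []
    for i in (i for i, zi in enumerate(z) if zi == 1):
        if not controls:
            break
        j = min(controls, key=lambda k: abs(scores[k] - scores[i]))
        pairs.append((i, j))
        controls.remove(j)
    return pairs

# Simulated data: treatment assignment depends on the covariate,
# which is exactly the selection problem matching tries to address.
random.seed(0)
X = [[random.gauss(0.0, 1.0)] for _ in range(200)]
z = [1 if random.random() < 1.0 / (1.0 + math.exp(-x[0])) else 0 for x in X]

w = fit_logistic(X, z)
scores = [propensity(w, xi) for xi in X]
pairs = nn_match(scores, z)
```

After matching, one would compare outcomes within the matched pairs and, as the design tasks above emphasize, check that the matched sample achieves balance on the key covariates before interpreting any treatment effect.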
Conclusions and Implications: Researchers using a quasi-experimental design should always be cautious about potential mismatches between the assumptions of an evaluation model and the real data, be willing to employ corrective methods that are more robust to selection bias, discuss the limitations of a quasi-experimental study explicitly and transparently with all stakeholders, and perform sensitivity analyses to ensure that the estimates produced by the analysis are robust, efficient, and consistent.