An Overview of Emerging Methods That Address Selection Bias in Quasi-Experimental Programs
Society for Social Work and Research 20th Annual Conference - Grand Challenges for Social Work: Setting a Research Agenda for the Future

Schedule:
Friday, January 15, 2016: 2:45 PM
Meeting Room Level, Meeting Room 15 (Renaissance Washington, DC Downtown Hotel)
Blair D. Russell, PhD, Data Analyst, Washington University in St. Louis, St. Louis, MO
Shenyang Guo, PhD, Professor, Washington University in St. Louis, St. Louis, MO
Background and Purpose: When social work evaluations and causal analyses cannot be implemented as randomized clinical trials, researchers must consider a quasi-experimental design together with a method for addressing selection bias. To identify treatment effects, researchers must make assumptions about the data collected through such designs, and in practice the empirical data always violate one or more of those assumptions. The core challenge of quasi-experimental design lies in dealing with these violations and, hence, in controlling for selection bias in a way that enhances the evaluation’s internal validity. Technically, this involves employing robust and efficient estimators that identify treatment effects under clearly stated statistical assumptions. This presentation discusses two fundamental assumptions and the likelihood of violating them in empirical data. It considers why and how the presence of selection bias invalidates conventional regression and regression-type models, and it reviews recent econometric and statistical advances that can be employed to address the challenges embedded in studies using a quasi-experimental design.
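To make the endogeneity problem concrete, the short Python simulation below (an illustration added here, not material from the presentation; all variable names and parameter values are assumptions) shows how an unobserved confounder that drives both treatment assignment and the outcome causes a naive regression of the outcome on treatment to overstate the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved confounder: influences both treatment assignment and outcome.
u = rng.normal(size=n)

# Treatment assignment depends on u, so assignment is NOT strongly ignorable.
treat = (u + rng.normal(size=n) > 0).astype(float)

# Outcome: the true treatment effect is 1.0, but u also raises the outcome.
y = 1.0 * treat + 2.0 * u + rng.normal(size=n)

# Naive OLS slope of y on treatment alone: cov(y, treat) / var(treat).
naive = np.cov(y, treat)[0, 1] / np.var(treat, ddof=1)
print(f"true effect: 1.00, naive regression estimate: {naive:.2f}")
# The naive estimate lands well above 1.0 because treated cases have
# systematically higher u: classic selection bias / endogeneity.
```

Because the confounder is unobserved, simply adding measured covariates to the regression cannot remove this bias; that is the gap the corrective strategies discussed below are meant to fill.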

Methods: This presentation focuses on the strongly ignorable treatment assignment assumption and the stable unit treatment value assumption (SUTVA), two fundamental assumptions embedded in all evaluation studies. It shows why quasi-experimental studies are likely to violate both assumptions and why regression does not work in the analysis of data generated by such designs (i.e., why an endogeneity problem arises). The presentation will discuss four methodological strategies for enhancing the internal validity of quasi-experimental studies: propensity score methods, particularly the recent advances detailed in the second edition of Propensity Score Analysis (Guo & Fraser, 2014); the instrumental variables approach; the regression discontinuity design; and the interrupted time series design. The design of quasi-experimental studies is also key, and the presentation will discuss six design tasks: conceptualizing the observational study as having arisen from a complex randomized experiment; understanding the hypothetical randomized experiment that led to the observed data set; evaluating whether the data set’s sample sizes are adequate; knowing who made decisions about treatment assignment and what measurements were available to them; examining whether key covariates are measured well; and evaluating whether balance can be achieved on those covariates. Finally, the presentation will review common pitfalls of quasi-experimental studies that fail to control for selection bias.
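As a brief illustration of the first strategy (a sketch added here, not code from the presentation or from Guo & Fraser, 2014), the snippet below estimates a propensity score with logistic regression and applies inverse-probability-of-treatment weighting to recover a treatment effect when selection is on an observed covariate; the covariate, sample size, and effect size are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Observed confounder: drives both treatment assignment and outcome.
x = rng.normal(size=n)
treat = (x + rng.normal(size=n) > 0).astype(float)
y = 1.0 * treat + 2.0 * x + rng.normal(size=n)   # true effect = 1.0

# Step 1: model the propensity score P(treat = 1 | x).
ps = (LogisticRegression()
      .fit(x.reshape(-1, 1), treat)
      .predict_proba(x.reshape(-1, 1))[:, 1])

# Step 2: inverse-probability-of-treatment weights for the ATE.
w = treat / ps + (1.0 - treat) / (1.0 - ps)

# Step 3: weighted difference in mean outcomes. Under strong ignorability
# (selection on observables only), this approximately recovers the effect.
ate = (np.sum(w * treat * y) / np.sum(w * treat)
       - np.sum(w * (1.0 - treat) * y) / np.sum(w * (1.0 - treat)))
print(f"true effect: 1.00, IPTW estimate: {ate:.2f}")
```

An unweighted comparison of the same data would be biased in the same way as the regression example above. The instrumental variables, regression discontinuity, and interrupted time series strategies address settings where the confounding cannot be captured by observed covariates.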

Conclusions and Implications: Researchers using a quasi-experimental design should always be cautious about potential mismatches between the assumptions of an evaluation model and the real data; be willing to employ corrective methods that are more robust to selection bias; discuss the limitations of a quasi-experimental study explicitly and transparently with all stakeholders; and perform sensitivity analyses to ensure that the findings are robust and that the estimators that produced them are efficient and consistent.
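As one hedged illustration of what such a sensitivity analysis might look like (a simple tipping-point sketch added here, not a method prescribed by the presenters), the snippet below re-computes a treatment effect estimate under increasingly strong hypothetical hidden bias, showing how large an unmeasured confounder's influence would have to be to drive the estimate to zero:

```python
import numpy as np

def tipping_point_effect(y, treat, gamma):
    """Difference in means after removing a hypothetical hidden bias of
    size gamma from the treated group's mean outcome."""
    return (y[treat == 1].mean() - gamma) - y[treat == 0].mean()

rng = np.random.default_rng(2)
n = 5_000
treat = rng.integers(0, 2, size=n).astype(float)
y = 1.0 * treat + rng.normal(size=n)          # true effect = 1.0

# How strong would hidden bias have to be to overturn the finding?
for gamma in (0.0, 0.5, 1.0, 1.5):
    est = tipping_point_effect(y, treat, gamma)
    print(f"assumed hidden bias {gamma:.1f} -> adjusted effect {est:+.2f}")
```

If only an implausibly large hidden bias would erase the estimated effect, the finding can be reported as robust; if a small one would, the limitations should be discussed with stakeholders as recommended above.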