Modeling Treatment Effect Heterogeneity: Recent Advances Using Propensity Scores

Sunday, January 18, 2015: 10:00 AM-11:45 AM
Balconies N, Fourth Floor (New Orleans Marriott)
Cluster: Poverty and Social Policy
Shenyang Guo, PhD, Washington University in Saint Louis
Background and Purpose

In social work research and evaluation, researchers often need to test for heterogeneous treatment effects. This need stems from substantive theories and from the designs of observational studies in which study participants are hypothesized to respond differentially to treatments, interventions, experiments, or other stimuli. For instance, in a program evaluation, researchers often need to identify who benefits most from the intervention, revealing the full range of complexity of treatment effects rather than reaching a simple determination of whether the treatment is effective. To address effect heterogeneity, studies have found that including interactions of the treatment indicator with participants’ characteristics in an outcome model is problematic (Crump, Hotz, Imbens, & Mitnik, 2008; Elwert & Winship, 2010; Xie, Brand, & Jann, 2012). Recent advances in program evaluation with propensity scores show that including an interaction of the treatment indicator with the propensity score is the most efficient and interpretable way to model effect heterogeneity, because the propensity score summarizes the full range of covariates that produce selection bias. This workshop reviews the latest developments in this area and provides participants with methods to test for, and to model, effect heterogeneity.
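The interaction-with-propensity-score idea can be sketched in a few lines. The following is a minimal simulated example in Python (the workshop's own computing session uses Stata); the data, coefficients, and variable names are purely illustrative assumptions, and the propensity score is taken as known here, whereas in practice it would be estimated (e.g., by logistic regression).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))          # propensity score (known here; estimated in practice)
t = rng.binomial(1, p)            # treatment assignment depends on p (selection)
# The simulated treatment effect grows with the propensity score: effect = 1 + 2*p
y = 0.5 + (1 + 2 * p) * t + 1.5 * p + rng.normal(scale=0.5, size=n)

# Outcome model with a treatment-by-propensity-score interaction:
#   y = b0 + b1*t + b2*p + b3*(t*p)
# b1 is the effect at p = 0; b3 is how the effect changes as p rises.
X = np.column_stack([np.ones(n), t, p, t * p])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b.round(2))  # roughly [0.5, 1.0, 1.5, 2.0]
```

A significant coefficient on the `t * p` term is the signature of effect heterogeneity: the treatment effect varies systematically with the propensity of receiving treatment.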


The workshop focuses on the following topics: 1) why modeling effect heterogeneity is important, and what kinds of research questions in social work warrant investigation with this method; 2) Crump et al.’s (2008) nonparametric tests for detecting the existence of treatment effect heterogeneity; 3) Xie et al.’s (2012) three methods for modeling effect heterogeneity; 4) an illustrative example; and 5) implications and conclusions.
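To convey the logic of testing for heterogeneity, the sketch below uses a simple parametric analogue of the constant-conditional-average-treatment-effect test: compare a restricted model in which the effect is constant against an unrestricted model that lets the effect vary with a covariate, via an F-test. Crump et al.'s (2008) actual tests are nonparametric (built on series regression), so this Python snippet, with simulated data and an assumed single covariate, only illustrates the spirit of the procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
t = rng.binomial(1, 0.5, size=n)                        # randomized, for simplicity
y = 1.0 + x + (1.0 + 1.5 * x) * t + rng.normal(size=n)  # true effect varies with x

def rss(X, y):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

ones = np.ones(n)
X_r = np.column_stack([ones, t, x])          # restricted: constant treatment effect
X_u = np.column_stack([ones, t, x, t * x])   # unrestricted: effect varies with x
rss_r, rss_u = rss(X_r, y), rss(X_u, y)

q, k = 1, X_u.shape[1]                       # q = number of restrictions tested
F = ((rss_r - rss_u) / q) / (rss_u / (n - k))
pval = stats.f.sf(F, q, n - k)
print(F, pval)  # large F, tiny p-value: reject constant-effect null
```

Rejecting the restricted model is evidence that a single average treatment effect misrepresents the data, which motivates the modeling strategies covered next.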


The illustrative example evaluates the impact of poverty on children’s academic achievement. Using the 1997 Child Development Supplement (CDS) to the Panel Study of Income Dynamics (PSID) and the core PSID annual data from 1968 to 1997 (Hofferth et al., 2001), analysis with Crump et al.’s nonparametric tests rejects the null hypothesis of a zero conditional average treatment effect (p < .05) and the null hypothesis of a constant conditional average treatment effect (p < .10); effect heterogeneity may therefore exist in this sample. Using Xie et al.’s stratification-multilevel method, the study finds that the effects of poverty on children’s letter-word identification scores in 1997 are not homogeneous and vary with the propensity of welfare program participation.
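The core mechanics of the stratification-multilevel approach can be sketched as follows: estimate propensity scores, cut the sample into propensity-score strata, estimate the treatment effect within each stratum, and then model the trend in effects across strata. This Python sketch uses simulated data (not the PSID/CDS data), takes the propensity score as known, and replaces the level-2 multilevel model with a simple trend regression across stratum ranks, so it is an illustration of the idea rather than Xie et al.'s full estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-1.5 * x))      # propensity score (known here; estimated in practice)
t = rng.binomial(1, p)
# Simulated effect declines with the propensity score: effect = 3 - 2*p
y = 2.0 - 1.0 * p + (3.0 - 2.0 * p) * t + rng.normal(scale=1.0, size=n)

# Level 1: difference in means within propensity-score quintile strata.
edges = np.quantile(p, [0, 0.2, 0.4, 0.6, 0.8, 1.0])
strata = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, 4)
effects = np.array([y[(strata == s) & (t == 1)].mean()
                    - y[(strata == s) & (t == 0)].mean() for s in range(5)])

# Level 2: linear trend of stratum-specific effects across stratum ranks.
ranks = np.arange(1, 6)
slope, intercept = np.polyfit(ranks, effects, 1)
print(effects.round(2), round(float(slope), 2))  # effects fall as rank rises
```

A nonzero level-2 slope indicates that the treatment effect varies systematically with the propensity of treatment, which is the pattern the workshop's example finds for poverty and letter-word identification scores.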


From this workshop, participants will learn: 1) how to use recent advances in propensity score analysis to address important research questions; and 2) tests and modeling strategies, with hands-on experience, for addressing fundamental issues encountered in program evaluation and in research concerning causality.

Pedagogical Techniques

The workshop will use PowerPoint slides to present the main content and a Stata computing session to illustrate the testing and modeling process.
