Session: Pros and Cons of Propensity Score Analysis: Current Debates (Society for Social Work and Research 21st Annual Conference - Ensure Healthy Development for All Youth)

203 Pros and Cons of Propensity Score Analysis: Current Debates

Schedule:
Saturday, January 14, 2017: 9:45 AM-11:15 AM
Mardi Gras Ballroom C (New Orleans Marriott)
Cluster: Health
Speakers/Presenters:
Shenyang Guo, PhD, Washington University in Saint Louis and Mark W. Fraser, PhD, University of North Carolina at Chapel Hill
Background and Purpose

Propensity score analysis (PSA), an enormously popular method of processing data for causal inference, has been increasingly applied in social work research to address challenging research questions. A recent journal review found that it is among the ten most frequently applied quantitative methods in social work research (Guo, 2014). Despite its popularity, debate about its usefulness in addressing selection bias has never ceased since Rosenbaum and Rubin published their seminal paper in 1983. Proponents believe that the method “has matured to the extent that it has much to offer the empirical researchers” (Imbens & Wooldridge, 2009). Opponents find that the method cannot reproduce the findings generated by a randomized controlled trial and therefore exhibits significant bias. Most recently, Gary King and Richard Nielsen (2016) circulated a controversial paper entitled “Why Propensity Scores Should Not Be Used for Matching,” showing that the method “often accomplishes the opposite of its intended goal – increasing imbalance, inefficiency, model dependence, and bias.” Given the popularity of PSA in social work research and the sharply contradictory opinions among methodologists, it is important to share the latest debates on this topic with social work researchers. This workshop reviews those debates to provide participants with a summary of the pros and cons of using PSA; in particular, it focuses on the core methodological issue of endogeneity in observational studies. Implications of the debates and caveats for applying PSA are offered at the end of the workshop.

Contents

The workshop will provide a summarized review of the following studies: 1) Imbens & Wooldridge’s (2009) review of the program evaluation methods employed in economics and the three methods recommended on the basis of that review; 2) Freedman and Berk’s (2008) simulation study of the inverse probability of treatment weights (IPTW) estimator; 3) Stürmer and colleagues’ (2006) systematic review showing that PSA does not provide substantially different estimates of treatment effects than conventional regression and multivariable methods; 4) Shah and colleagues’ (2005) systematic review showing that PSA gives results similar to traditional regression modeling in observational studies; and 5) Guo & Fraser’s (2015) simulation study comparing five PSA methods with regression modeling.
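
To make the IPTW estimator discussed in item 2 concrete, the following is a minimal sketch, not drawn from any of the reviewed studies, of how an IPTW estimate of an average treatment effect can be computed on simulated data. The data-generating process, variable names, and use of Python with numpy and statsmodels are all illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical observational data: one confounder x affects both
# treatment assignment and the outcome.
n = 5000
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.5 * x))        # true propensity depends on x
t = rng.binomial(1, p_treat)                # treatment indicator (0/1)
y = 2.0 * t + 1.5 * x + rng.normal(size=n)  # true treatment effect = 2.0

# Step 1: estimate propensity scores with a logistic regression of t on x.
ps_model = sm.Logit(t, sm.add_constant(x)).fit(disp=0)
e_hat = ps_model.predict(sm.add_constant(x))

# Step 2: form inverse probability of treatment weights.
w = t / e_hat + (1 - t) / (1 - e_hat)

# Step 3: weighted difference in group means as the ATE estimate.
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(f"IPTW estimate of the average treatment effect: {ate:.2f}")
```

The reviewed simulation work examines how weighting estimators of this kind behave under less favorable conditions; the sketch assumes a correctly specified propensity model purely for illustration.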

Results

The literature review leads to the following findings: 1) the statistical assumptions embedded in a PSA model are extremely important, and users must pay attention to possible violations; 2) just as with a sound regression analysis, users of PSA should follow the suggested steps to ensure that the analysis is conducted with rigor and, in particular, that it accomplishes its intended goal of balancing the data (see the sketch after this paragraph); 3) although propensity score matching may fail in some circumstances, other PSA methods (e.g., optimal matching, IPTW, and subclassification) may not and should be seriously considered in empirical analysis; and 4) researchers should be sensitive to unobserved heterogeneity and hidden selection bias.
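
As a companion to finding 2, here is a minimal, hypothetical sketch of one common balance diagnostic, the standardized mean difference of a covariate between treatment groups before and after weighting. The function name and the 0.1 rule of thumb are illustrative assumptions rather than recommendations from the reviewed studies.

```python
import numpy as np

def standardized_difference(x, t, w=None):
    """Standardized mean difference of covariate x between treated (t == 1)
    and control (t == 0) cases, optionally using analysis weights w."""
    if w is None:
        w = np.ones_like(x, dtype=float)
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    # Scale by the pooled (unweighted) standard deviation of the covariate.
    s = np.sqrt((x[t == 1].var(ddof=1) + x[t == 0].var(ddof=1)) / 2)
    return (m1 - m0) / s

# Using x, t, and the IPTW weights w from the earlier sketch:
# d_before = standardized_difference(x, t)     # imbalance in the raw data
# d_after = standardized_difference(x, t, w)   # imbalance after weighting
# A common rule of thumb treats |d| below roughly 0.1 as acceptable balance.
```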

Implications

The workshop concludes by reviewing the 20 common pitfalls in observational studies identified by Guo & Fraser (2015), underscoring the caveats of applying PSA.
