Session: Design of Observational Studies and Its Applications to Social Work Research (Society for Social Work and Research 15th Annual Conference: Emerging Horizons for Social Work Research)

Schedule:
Thursday, January 13, 2011: 3:30 PM-5:15 PM
Florida Ballroom III (Tampa Marriott Waterside Hotel & Marina)
Cluster: Research on Social Work Education
Speaker/Presenter: Shenyang Guo, PhD, Professor, University of North Carolina at Chapel Hill, Chapel Hill, NC
The randomized clinical trial (RCT) is the gold standard for program evaluation and causal inference. In many practical settings, however, implementing an RCT is infeasible or unethical. Although progress has been made in developing statistical approaches, such as propensity score matching, directed acyclic graphs, and marginal structural models, to make causal inference more valid, criticisms and skepticism about the utility of these new methods persist. Recently, a movement toward more rigorous design of observational studies has emerged. This workshop aims to highlight important issues in the design of observational studies by focusing on the following three topics.

1. Recent Advances

In 2008, Donald Rubin, an eminent statistician at Harvard University, published the article “For objective causal inference, design trumps analysis.” Paul Rosenbaum (2010), who developed propensity score methods with Rubin, published an important book entitled “Design of Observational Studies.” I will offer an overview of the important issues described in these works, with emphasis on the definition, scope, key considerations, and choice of analytic methods for the design of an observational study.

2. Six Steps of the Design

Following Rubin and Rosenbaum, I will describe six steps in the design of an observational study. These steps attempt to answer the following questions: (a) How can an empirical project using an observational dataset be conceptualized as having arisen from a complex randomized experiment in which the rules used to assign the treatment conditions have been lost and must be reconstructed? (b) What was the hypothetical randomized experiment that led to the observed dataset? (c) Are the sample sizes in the dataset adequate? (d) Who were the decision makers for treatment assignment, and what measurements were available to them? (e) Are key covariates measured well? (f) Can balance be achieved on key covariates?
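
To make question (f) more concrete, the following is a minimal sketch, in Python with pandas and scikit-learn, of one way covariate balance might be checked: estimate propensity scores with a logistic regression, form 1:1 greedy nearest-neighbor matches, and compare standardized mean differences on the covariates before and after matching. The data frame df, the treatment indicator treat, and the covariate names x1-x3 are hypothetical placeholders, not variables from the studies discussed in this workshop.

```python
# Minimal sketch of a covariate balance check (question f), assuming a pandas
# DataFrame `df` with a binary (0/1) treatment column `treat` and hypothetical
# covariates `x1`, `x2`, `x3`.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def standardized_mean_difference(data, covariate, treat_col="treat"):
    """Absolute standardized mean difference between treated and control groups."""
    treated = data.loc[data[treat_col] == 1, covariate]
    control = data.loc[data[treat_col] == 0, covariate]
    pooled_sd = np.sqrt((treated.var() + control.var()) / 2)
    return abs(treated.mean() - control.mean()) / pooled_sd

def greedy_match(data, covariates, treat_col="treat"):
    """1:1 greedy nearest-neighbor matching (without replacement) on the propensity score."""
    ps_model = LogisticRegression(max_iter=1000).fit(data[covariates], data[treat_col])
    data = data.assign(pscore=ps_model.predict_proba(data[covariates])[:, 1])
    treated = data[data[treat_col] == 1]
    controls = data[data[treat_col] == 0]
    matched_rows = []
    for _, row in treated.iterrows():
        if controls.empty:
            break
        j = (controls["pscore"] - row["pscore"]).abs().idxmin()  # closest control
        matched_rows.extend([row, controls.loc[j]])
        controls = controls.drop(index=j)  # matching without replacement
    return pd.DataFrame(matched_rows)

covariates = ["x1", "x2", "x3"]  # hypothetical covariate names
matched = greedy_match(df, covariates)
for cov in covariates:
    before = standardized_mean_difference(df, cov)
    after = standardized_mean_difference(matched, cov)
    print(f"{cov}: SMD before matching = {before:.3f}, after matching = {after:.3f}")
```

A common rule of thumb treats absolute standardized differences below about 0.1 as adequate balance; covariates that remain imbalanced after matching suggest that the design itself, not just the analysis, needs to be revisited.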

3. Illustration

I will describe two distinct social work research projects to illustrate how the above questions can be addressed and what strategies can be used to deal with empirical challenges. The first is an evaluation of a Social and Character Development (SACD) project sponsored by the U.S. Department of Education. The SACD project originally employed a cluster randomized trial; however, because some school districts contained only a small number of schools, the randomization was compromised. Additionally, the evaluation data are challenged by attrition of study subjects and inconsistent ratings of students' outcomes across raters. As a consequence, the evaluation is an observational study and must address various types of selection. The second is a study using data from the Panel Study of Income Dynamics to investigate multigenerational dependence on cash assistance programs and its impact on children's academic achievement. Because there is no way to randomly assign study subjects to poor and nonpoor conditions, the study must rely on propensity score analysis to draw causal inferences. I will focus on four design issues in the illustration: sample sizes corresponding to various types of propensity score models, choices among analytic methods (i.e., greedy matching, optimal matching, matching estimators, and propensity score weighting), post hoc sensitivity analysis to examine hidden selection bias, and the limitations of each study design.
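
As a concrete illustration of one of the analytic choices named above, the sketch below shows propensity score weighting in its simplest form: estimate propensity scores with a logistic regression, construct inverse-probability-of-treatment weights, and compute a weighted difference in mean outcomes as an estimate of the average treatment effect. The data frame df and the column names treat, y, and x1-x3 are hypothetical placeholders; this is an illustrative Python sketch, not the analysis code used in either study.

```python
# Minimal sketch of propensity score weighting (IPTW).  Assumes a pandas
# DataFrame `df` with hypothetical columns: binary (0/1) treatment `treat`,
# outcome `y`, and covariates `x1`, `x2`, `x3`.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

covariates = ["x1", "x2", "x3"]  # hypothetical covariate names

# Step 1: estimate propensity scores e(X) = Pr(treat = 1 | X).
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treat"])
pscore = ps_model.predict_proba(df[covariates])[:, 1]

# Step 2: inverse-probability-of-treatment weights for the average treatment
# effect: w = treat / e(X) + (1 - treat) / (1 - e(X)).
w = df["treat"] / pscore + (1 - df["treat"]) / (1 - pscore)

# Optional: cap extreme weights to stabilize the estimate.
w = w.clip(upper=np.quantile(w, 0.99))

# Step 3: weighted difference in mean outcomes as an ATE estimate.
treated = df["treat"] == 1
ate = (np.average(df.loc[treated, "y"], weights=w[treated])
       - np.average(df.loc[~treated, "y"], weights=w[~treated]))
print(f"IPTW estimate of the average treatment effect: {ate:.3f}")
```

Greedy matching, optimal matching, and matching estimators replace the weighting step with an explicit pairing of treated and control observations; a post hoc sensitivity analysis then asks how strong a hidden, unmeasured source of selection would have to be to alter the estimated effect.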
