Society for Social Work and Research

Sixteenth Annual Conference: Research That Makes A Difference: Advancing Practice and Shaping Public Policy
11-15 January 2012 | Grand Hyatt Washington | Washington, DC

16606 Using Instrumental Variable Models to Learn From Experimental Evaluation of Social Programs

Schedule:
Saturday, January 14, 2012: 5:30 PM
Independence E (Grand Hyatt Washington)
Mark E. Courtney, PhD, Professor, University of Chicago, Chicago, IL
Andrew E. Zinn, PhD, Senior Researcher, University of Chicago, Chicago, IL
Purpose: Even within the context of randomized experimentation, questions may arise concerning the impact of an evaluated program that random assignment alone cannot answer. This paper presents examples of the use of instrumental variable (IV) models to address these types of questions using data from the federal Multisite Evaluation of Foster Youth Programs. In the first example, IV models are used to assess the impact of a classroom-based independent living skills training program in the presence of high levels of program participation among control group youth. In the second example, IV models are used to understand the extent to which a positive relationship between receipt of case management services and subsequent college attendance is moderated by youths' involvement with the child welfare system after age 18.

Methods: Data for these analyses come from two evaluations of independent living programs for foster youth. The first program, Life Skills Training (LST), is a classroom-based life skills training program serving foster youth in Los Angeles. The second program, Adolescent Outreach Program (AOP), provides intensive case management to youth placed in family foster homes in Massachusetts. In both evaluation sites, youth were randomized to program and control groups. Youth in both groups were interviewed in person three times over a period of two years, and data were collected concerning a number of different outcome domains related to the transition to independence. Baseline response rates averaged over 90 percent and follow-up response rates averaged over 80 percent.

In the evaluation of LST (N = 411), 24 percent of control group youth violated the random assignment protocol by participating in the LST program. As a result, differences in outcomes between assignment groups were not thought to be reliable indicators of the impact of LST program participation. Using random assignment as a means to instrument program participation, IV models were estimated to examine the impact of program participation on youth outcomes.
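The logic of instrumenting participation with assignment can be illustrated with a small sketch. The code below uses entirely synthetic data (the crossover rate of 0.24 is taken from the abstract; the program-group uptake rate, the true effect size, and the outcome model are illustrative assumptions, not figures from the evaluation) and computes the classic Wald IV estimator: the intent-to-treat effect divided by the assignment-induced difference in participation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Z = random assignment (1 = program group), D = actual LST participation.
z = rng.integers(0, 2, n)
# Control youth cross over with probability 0.24 (per the abstract);
# program-group uptake of 0.9 is a purely illustrative assumption.
d = np.where(z == 1, rng.random(n) < 0.9, rng.random(n) < 0.24).astype(int)

# Hypothetical outcome with a known participation effect of 2.0.
true_effect = 2.0
y = 1.0 + true_effect * d + rng.normal(0.0, 1.0, n)

# The IV (Wald) estimator uses only the assignment-induced variation
# in participation, so crossover does not bias it.
itt = y[z == 1].mean() - y[z == 0].mean()          # intent-to-treat effect
first_stage = d[z == 1].mean() - d[z == 0].mean()  # uptake difference
wald = itt / first_stage                           # IV estimate of the effect

print(f"ITT: {itt:.2f}, first stage: {first_stage:.2f}, IV: {wald:.2f}")
```

With substantial crossover, the ITT estimate is attenuated relative to the participation effect, while the Wald ratio recovers it (under the usual instrument-validity and monotonicity assumptions). With a binary instrument and binary treatment this ratio is numerically identical to two-stage least squares.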

In the evaluation of AOP (N = 179), program youth were found to be more likely than control youth to attend college and to stay in foster care past their 18th birthday, which raised questions about whether continued child welfare system involvement, rather than AOP per se, was the operative mechanism leading to college attendance. Using random assignment as a means to instrument continued system involvement, IV models were estimated to examine its impact on college outcomes.

Results: Results of the IV models of LST program participation and youth outcomes are consistent with findings based on standard intent-to-treat analyses: no program impact on any targeted outcome. Results of the IV models of continued child welfare system involvement and youths' college outcomes suggest that continued involvement significantly moderates the relationship between AOP participation and college outcomes.

Implications: This study illustrates the utility of IV models in experimental evaluation of social programs. IV models can generate unbiased estimates of program effects even when adherence to the experimental condition is not ideal and can help identify mechanisms underlying experimental effects.
