Abstract: Compared to What? Exploring the Value of an RCT in Interpreting Program Delivery in a Home Visiting Context (Society for Social Work and Research 23rd Annual Conference - Ending Gender Based, Family and Community Violence)

Schedule:
Friday, January 18, 2019: 6:15 PM
Union Square 1 Tower 3, 4th Floor (Hilton San Francisco)
Rob Fischer, PhD, Associate Professor, Case Western Reserve University, Cleveland, OH
Elizabeth Anthony, PhD, Research Assistant Professor, Case Western Reserve University, Cleveland, OH
Background and purpose: This paper highlights the value of comparative methodologies in the evaluation of human services programs. Given the realities of under-staffed, under-resourced social service agencies, program evaluators are often unable to implement a rigorously designed evaluation, settling instead for single-group pre-test/post-test comparisons. We present the results and challenges of a longitudinal, randomized field experiment testing the effectiveness of a stress-reduction curriculum embedded in an existing home visiting program for low-income mothers. In light of attrition and null findings, even among the subgroup of women who completed the intervention, we reflect on the methods used to judge the efficacy of home visiting as a model of service delivery.

Methods: The study successfully enrolled 311 mothers involved in home visiting into a randomized experiment testing the effectiveness of a specialized curriculum (3 for Me!) embedded in home visiting. Mothers were assessed at baseline and then repeatedly, using a repeated-measures design, over the six-month period of potential exposure to the curriculum. Outcomes assessed included retention in home visiting, achievement of personal goals, mothers' perceived stress, emotions, and affect, as well as mothers' experience with the curriculum.

Results: With regard to the primary outcomes, the following findings emerged: (1) retention in home visiting – no differences were seen between the groups, but retention in both study groups (Treatment and Control) was slightly higher than in standard home visiting; (2) achievement of personal goals – for mothers in the Treatment group, goal completion increased over the course of the study, exceeding 80% by the final session; however, goal completion was not associated with positive changes in perceived stress; (3) perceived stress, emotion, and affect – no statistically significant differences emerged between groups in perceived stress or in positive and negative affect; and (4) program experience – Treatment-group mothers rated their experience with the PMI curriculum highly, with more than 75% rating the content as helpful and indicating that they were likely to use the content in the future.

Conclusions and implications: Given the financial and human investments involved, and the potential impacts on the well-being of those served, evaluators are responsible for crafting a comparison group that minimizes selection-bias concerns in an attempt to estimate the elusive counterfactual. This paper compares the findings from the RCT to what would have been reported had the study been conducted as a single-group pre/post design. The comparison demonstrates that the RCT prevented the partners from drawing improper conclusions about program effectiveness. We also explore the issues associated with crafting meaningful findings from an RCT that finds no difference between groups, and we describe avenues for disseminating this type of experience and the lessons learned. We suggest that, after thoroughly examining the possibility of Type II error, evaluators must further explore treatment elements to ensure findings can inform practice.