Abstract: Examining Response Shift in the Evaluation of Self-Reported Program Impact within Child Welfare Settings (Society for Social Work and Research 21st Annual Conference - Ensure Healthy Development for all Youth)

446P Examining Response Shift in the Evaluation of Self-Reported Program Impact within Child Welfare Settings

Schedule:
Saturday, January 14, 2017
Bissonet (New Orleans Marriott)
Jody Brook, PhD, Assistant Professor, University of Kansas, Overland Park, KS
Becci A. Akin, PhD, Assistant Professor, University of Kansas, Lawrence, KS
Margaret H. Lloyd, MS, PhD Candidate, University of Kansas, Overland Park, KS
Jackie Bhattarai, MS, Graduate Research Assistant, University of Kansas, Lawrence, KS
Tom McDonald, PhD, Dean of Research, University of Kansas, Lawrence, KS
Background: Debate has existed in the literature for over 60 years regarding the use of prospective versus retrospective pretesting when assessing self-reported program impact. It has been argued that, due to both response shift and the involuntary, high-stakes nature of many human service settings, retrospective pretesting may be more appropriate. Despite this, the use of prospective pretesting as standard practice remains largely unquestioned and is often incorporated into individual program and cross-site evaluation designs. To enhance the rigor of a program evaluation of the Strengthening Families Program (SFP), researchers incorporated both prospective and retrospective pretesting into the evaluation design. SFP was implemented in a Midwestern state from 2008-2012 with substance abuse-affected, foster care-involved families seeking family reunification.

Method: Using data provided by 411 caregivers, program effectiveness on the SFP self-report questionnaire was measured in the areas of parenting, family functioning, child behaviors, and parent substance use. Repeated-measures t-tests were used to examine mean differences, and a parallel effect size was computed with Cohen's d for repeated-measures designs to gauge the magnitude of the difference between ratings.
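
For readers less familiar with these statistics, the sketch below illustrates the general computation; it is not the authors' analysis code. It assumes two arrays of matched pretest and posttest scores for the same caregivers, and it computes Cohen's d for repeated measures as the mean difference divided by the standard deviation of the difference scores (the d_z variant); other repeated-measures d formulations exist.

# Minimal sketch (illustrative only, not the study's code): paired t-test and
# Cohen's d for a repeated-measures design, given matched pretest and posttest
# scores for the same respondents.
import numpy as np
from scipy import stats

def paired_change(pretest, posttest):
    """Return the paired t statistic, p-value, and Cohen's d (d_z)."""
    pretest = np.asarray(pretest, dtype=float)
    posttest = np.asarray(posttest, dtype=float)
    diff = posttest - pretest
    t_stat, p_value = stats.ttest_rel(posttest, pretest)
    # d_z: mean of the difference scores divided by their standard deviation.
    d_z = diff.mean() / diff.std(ddof=1)
    return t_stat, p_value, d_z

# Hypothetical usage with made-up scores (not study data):
pre = [2.1, 3.0, 2.5, 3.4, 2.8]
post = [2.9, 3.2, 3.1, 3.6, 3.3]
print(paired_change(pre, post))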

Results: Statistical significance testing of mean scores revealed no differences between the prospective and retrospective pretests for the family and parent scales, a few differences on the child scales, and notable differences in the substance use domain. Differences appeared even more prominently in effect sizes. Effect sizes based on the retrospective pretest were larger than those based on the prospective pretest for all but one scale (tobacco use). Further, relative differences between the prospective and retrospective effect sizes were notable. More than half the time, the retrospective pretest effect size increased enough that, by conventional guidelines (Cohen, 1988), it shifted from small to medium or from medium to large. This pattern was most evident for the family and parent scales.

Implications: The ability to assess program impact through the analysis of change scores before and after program participation depends on both the qualities of the instrument and characteristics of the evaluation design, such as the timing of instrument administration. Collectively, these findings suggest that specific domains were affected differentially by the measurement approach. This analysis shows that a response shift occurred on the family and parent scales but was only partially observed on the child and substance use scales. Upon completing the intervention, parents appear to have recalibrated their perceptions, concluding that their pre-intervention family interactions and parenting practices were not as strong as originally reported. Response shift theory would attribute this change in perception to exposure to the family skills training intervention, during which participants gained knowledge of effective parenting and became more aware of their own abilities. This study suggests that program evaluators should weigh the advantages and disadvantages of a retrospective pretest during design planning.