Abstract: Rater Effects and Dosage: Lessons Learned from a Recess-Based Intervention to Build Protective Factors (Society for Social Work and Research 20th Annual Conference - Grand Challenges for Social Work: Setting a Research Agenda for the Future)


Schedule:
Thursday, January 14, 2016: 2:30 PM
Meeting Room Level-Mount Vernon Square A (Renaissance Washington, DC Downtown Hotel)
* noted as presenting author
Sarah Accomazzo, PhD, Postdoctoral Scholar, University of California, Berkeley, Oakland, CA
Valerie B. Shapiro, PhD, Assistant Professor, University of California, Berkeley, Berkeley, CA
Sophie Shang, International Baccalaureate Diploma, Undergraduate Student, University of California, Berkeley, Berkeley, CA
Jennette Claassen, MSW, Director of Evaluation, Playworks, Oakland, CA
Background and Purpose: The Grand Challenge of “Unleashing the Power of Prevention” calls for the (1) systematic assessment of protective factors to guide the implementation of effective prevention programs, and (2) infrastructure to support the high quality implementation of preventive interventions (Hawkins, et al., 2015).

The Devereux Student Strengths Assessment (DESSA) is a behavior rating scale used to assess protective factors within children in kindergarten through 8th grade (LeBuffe, et al., 2009). This psychometrically sound tool has been expert-reviewed as one of the most practical tools available for assessing social emotional competence in youth (Haggerty, et al., 2011; Denham, 2015). The DESSA can be used as a needs assessment to guide the implementation of prevention programs.

One school-based prevention program that builds youth protective factors is Playworks (playworks.org). Playworks places full-time staff (“coaches”) in low-income schools to provide opportunities for organized play, including recess activities and after-school programming, and trains older students (“junior coaches”) to serve as play leaders. The program theory postulates that this programming will increase the protective factors of junior coaches. However, it is challenging to find a practical needs assessment tool intended for program staff and to implement Junior Coach programming consistently across schools.

This study uses DESSA data from the needs assessment of Junior Coaches. Although the DESSA has been normed for use with teachers and staff, no inter-rater reliability study has been conducted that directly compares teacher and staff ratings of the same child. This paper also explores levels of implementation by measuring the dosage of Playworks’ Junior Coach programming.

Methods: The needs assessment included students at 31 Northern California schools. Junior Coaches included in the needs assessment (N=430) were a racially diverse (11% Asian; 11% Black; 41% Hispanic; 9% White) group of 4th (41%) and 5th (59%) grade students, 53% of whom were female. Sixty percent of the students were eligible for free or reduced-price lunch. Descriptive statistics and multivariate analyses were conducted to explore differences between teacher and staff raters and in programming dosage.

Results: Ratings of Junior Coaches’ overall social emotional competence differed significantly between teachers and staff (d=.56, p<0.001). Teachers rated Junior Coaches more favorably (T-score 54.5, SD=9.3) than Playworks staff did (T-score 52, SD=7.7). According to teacher raters, 29% of Junior Coaches had stronger-than-typical protective factors, 65% had typical levels of protective factors, and 8% had weaker-than-typical protective factors.

Youth were offered varying doses of Junior Coach programming (range: 1-38 hours). On average, youth attended 87% (21 hours) of the intervention sessions (e.g., job skills, conflict management, leadership training) offered. Sixty percent of the variance in total intervention hours was attributable to schools. Quality of intervention implementation was generally rated as “meeting expectations” by external observers (range 1-4; M=3.01, SD=05).

Conclusion: Dosage varied widely by school, suggesting that closer implementation monitoring may be needed. Although teacher and staff ratings differed consistently, staff appear to be reliable raters, given the approximation of their ratings to national norms.