Abstract: Evaluation Efficacy in an Intervention Program for Adolescents (Society for Social Work and Research 20th Annual Conference - Grand Challenges for Social Work: Setting a Research Agenda for the Future)

247P Evaluation Efficacy in an Intervention Program for Adolescents

Schedule:
Friday, January 15, 2016
Ballroom Level-Grand Ballroom South Salon (Renaissance Washington, DC Downtown Hotel)
* noted as presenting author
Deborah J. Monahan, PhD, Professor, Syracuse University, Syracuse, NY
Vernon Greene, PhD, Professor, Syracuse University, Syracuse, NY
Background and Purpose: Small, non-profit agencies are often challenged when implementing rigorous evaluation designs intended to examine the efficacy of their programs. They frequently lack the staff resources to conduct quality assurance and have minimal resources to rigorously assess program efficacy. The program examined here, a 3-year federally funded intervention designed to promote adolescent health and reduce pregnancy, was implemented as a randomized field experiment to provide internally valid estimates of its impact on several outcome measures. The intervention procedures were analyzed after the program's completion.

Methods: Evaluation measures included Evaluator Observation Logs, Youth Educator Logs, and Family Contact Logs. Evaluator Observation Logs assessed educator drift and assured greater congruence with intervention activities; classroom observations lasted the full class period (two hours) and assessed instructional quality. Youth Educator Logs assessed congruence between intervention activities and an evidence-based model. Family Contact Logs documented the amount and nature of family contacts to facilitate recruitment and reduce attrition. Family Contact Logs were collected in every wave of data collection, for a total of 850 logs across 291 different households. The logs had high face validity, measuring the number, content, and duration of each contact with a particular program household.

Results: The use of these logs helped to facilitate a more systematic and replicable implementation of the program at five sites throughout the community. This process took over 18 months and required weekly team meetings. Adherence to the intervention design was challenging and involved all levels of staff in the program. The analysis of these procedures could help other small, non-profit agencies that have little experience implementing randomized field experiments. The Youth Educator Logs, along with the Evaluator Observation and Attendance Logs, were instrumental in the decision to restructure the program into an 8-week intervention. Educators' comments revealed participant fatigue stemming from redundancies in the curriculum. Further, these logs consistently demonstrated that the number of lessons/activities for any one class period needed to be reduced.

Conclusions and Implications: The framework for the assessment of program fidelity was adapted from Dusenbury, Brannigan, Falco, and Hansen (2003). The results show that program differentiation, as measured by the observation log (10% of classes), reached its goal by Wave 3 of data collection. The same pattern was seen for program exposure, as measured by the weekly educator log, and for the quality indicator, as measured by the family contact log. When implementing an experimental design requiring procedural rigor in a traditionally service-driven agency, two key components should be present: intensive training in intervention replication and a robust process evaluation. Intensive training should include all key players, from agency directors to program and center staff, and should be ongoing within the organization throughout the duration of the project.