199P
Program Fidelity in an Early Adolescent Intervention

Schedule:
Friday, January 16, 2015
Bissonet, Third Floor (New Orleans Marriott)
Deborah Monahan, PhD, Professor, Syracuse University, Syracuse, NY
Vernon Greene, PhD, Professor, Syracuse University, Syracuse, NY
Background and Purpose: Small, non-profit agencies are often challenged when implementing rigorous evaluation designs intended to examine the efficacy of their programs. They frequently lack the staff resources to conduct quality assurance and have minimal capacity to rigorously assess program efficacy. In the increasingly competitive arena of federal funding, it is important to implement programs with rigorous evaluations. This program was implemented as a randomized field experiment to provide internally valid estimates of the impact of the intervention on several outcome measures. The fidelity of the intervention procedures was analyzed in a program designed to promote adolescent health and reduce pregnancy.

Methods: Process evaluation measures included Evaluator Observation Logs, Youth Educator Logs, and Family Contact Logs. Evaluator Observation Logs were used to track educator drift and assure greater congruence with intervention activities. Each classroom observation covered the full class period (two hours) and assessed instructional quality. Youth Educator Logs were used to assure greater congruence between intervention activities and the evidence-based model. Family Contact Logs documented the amount and nature of family contacts to facilitate recruitment and reduce attrition. These were collected in every wave of data collection, for a total of 850 Family Contact Logs covering 291 different households. The logs had high face validity, measuring the number, content, and duration of each contact with a particular program household.

Results: The use of these logs helped to facilitate a more systematic and replicable implementation of the program at five sites throughout the community. This process took over 18 months and required weekly team meetings. Adherence to the intervention design was challenging and involved all levels of staff in the program. The analysis of these procedures could help other small, non-profit agencies that have little experience implementing randomized field experiments. The Youth Educator Logs, along with the Evaluator Observation and Attendance Logs, were instrumental in the decision to restructure the program into an 8-week intervention. The educators’ comments revealed participant fatigue stemming from redundancies in the curriculum. Further, these logs consistently demonstrated that the number of lessons/activities for any one class period needed to be reduced.

Conclusions and Implications: The framework for the assessment of program fidelity was adapted from Dusenbury, Brannigan, Falco and Hansen (2003) and shows that program differentiation, as measured by the observation log (10% of classes), reached its goal by Wave 3 of data collection. The same pattern is seen in the dimension of program exposure, as measured by the weekly educator log, and in the quality indicator, as measured by the family contact log. When implementing an experimental design requiring procedural rigor in a traditionally service-driven agency, two key components should be present: intensive training in replicating the intervention and a robust process evaluation. Intensive training should include all key players, from agency directors to program and center staff, and should be ongoing within the organization throughout the duration of the project.