Saturday, 14 January 2006 - 12:00 PM
54P

Practice Wisdom in Program Evaluation: Views of Exemplary Prevention Practitioners

Arlene Weisz, PhD, Wayne State University, and Beverly Black, Wayne State University.

Purpose: This poster addresses the scarcity of published research about what actually works to prevent youth dating violence and sexual assault. The few existing evaluation studies assessing intermediate and long-term effects of dating violence prevention programs often report inconsistent results (Frazier, Valtinson & Candell, 1995; Lonsway, 1996) and almost never provide details about program content. Practitioners rarely have the time or opportunity to share their thoughts on program evaluation. This presentation compiles practice wisdom on program evaluation from respected youth dating violence and sexual assault prevention programs across the U.S.

Method: We sought names of exemplary local programs from statewide sexual assault and domestic violence coalitions and interviewed staff from 52 programs in 22 states. The interview guide included questions about whether and how the practitioners conducted program evaluations and about their successes and challenges in doing so. The interviews were transcribed, and the responses of these practice experts were organized according to major issues and themes.

Results: Most programs are expected to conduct program evaluation in order to obtain funding, despite having limited funds and time for these evaluations. Most prevention programs are presented to youth in schools, where time is limited. Evaluations that attempt to include both pre- and posttests strain program staff who are already trying to present complex and controversial information during the short periods allotted to them. Interviewees held conflicting views about whether required evaluations yield information that truly helps them improve their programs. Very few practitioners attempted quasi-experimental evaluations, and none of the respondents conducted an experimental evaluation. Practitioners who hired outside evaluators or received consultation from researchers hired by state coalitions seemed to have more confidence in evaluation findings than those who conducted evaluations without researchers' help. Some practitioners complained that a survey could not truly capture the effective aspects of their programs. They also noted the difficulty of measuring behavioral change, especially of determining whether violent behavior was prevented. Some practitioners believed that if even a few victimized youth seek help because of a prevention presentation, the program has been effective.

Conclusions: Programs sometimes gain reputations as exemplary without conducting comprehensive evaluations. Some excellent programs seem to rely on the charisma and experience of their staff, which may make them difficult to sustain or replicate. Although researchers provided valued assistance to some programs, a gulf persists between researchers and practitioners in this field (National Violence Against Women Prevention Research Center, 2001; Reid, 1994). Practitioners often lack confidence in their own evaluation skills and also have difficulty understanding the language and concepts of researchers. Social work researchers who understand practitioners' needs and views on evaluation are better positioned to infuse thoughtful evaluation practices into prevention programming.
