Abstract: Variations in Outcome Evaluation Capacity and Practices Among Human Service Organizations (Society for Social Work and Research 22nd Annual Conference - Achieving Equal Opportunity, Equity, and Justice)

Schedule:
Sunday, January 14, 2018: 8:22 AM
Independence BR B (ML 4) (Marriott Marquis Washington DC)
Alicia Bunger, MSW, PhD, Assistant Professor, Ohio State University, Columbus, OH
Erin Tebben, PhD Student, Ohio State University, Columbus, OH
Hannah MacDowell, BA, Research Assistant, Ohio State University, Columbus, OH
Ashleigh Hodge, MSW, Doctoral Student, Ohio State University, Columbus, OH
Yiwen Cao, MSW, Graduate Student, Ohio State University, Columbus, OH
Christy Kranich, MSW, Evaluation Project Coordinator, Ohio State University, Columbus, OH
Background:

Human service organizations face increasing pressure to demonstrate that their services improve client outcomes. In line with these accountability pressures, and consistent with an institutional perspective, organizations often collect vast amounts of information about clients and services to comply with monitoring and reporting expectations. However, it is well documented that organizations struggle to use these data to draw meaningful inferences about program effectiveness, suggesting limited evaluation capacity in the field (Carman, 2007; Despard, 2016). This study examines and explains variations in evaluation capacity among human service organizations, which can inform organizational interventions for building evaluation capacity.

Methods:

This study takes place within a regional system of community-based children and youth service organizations contracted by a public child welfare agency in the urban Midwest (U.S.). A sequential explanatory mixed-methods design (quan-qual) was used (Creswell & Plano Clark, 2007). Quantitative data were gathered from 29 organizational directors (57% response rate) through an online survey assessing evaluation capacity. Responses were used to categorize organizations into two evaluation capacity groups: high (excellent or good) and low (needs improvement or significant attention). Next, organizations from each group were purposively selected for follow-up interviews (n=8) with management teams. Interviews gathered examples of recent evaluations and explored the rationale for evaluation decisions. Interviews were recorded, professionally transcribed, and coded using an iterative, cross-case comparative approach.

Results:

Overall, most organizations rated their evaluation capacity favorably. The majority reported collecting information about outcomes (83%), using this information to refine existing programming (90%), and rating their evaluation capacity as good or excellent (65%). Qualitative analysis showed that teams from high-capacity organizations (n=5) tended to describe evaluating program effectiveness, maintaining internal quality improvement procedures, and sustaining ongoing external relationships with funders that enhance their evaluation capacity. These organizations also tended to attribute their evaluation practices to organizational cultures that value evaluation and quality improvement, suggesting internal motivations. In contrast, teams from low-capacity organizations (n=3) tended to discuss the need to develop a strong internal evaluation culture and noted that their evaluation practices were often responses to external pressures from funders and accrediting bodies.

Conclusion:

Despite reports in the literature highlighting poor organizational capacity to evaluate programs, most organizations in this study rated their abilities favorably, suggesting the need for further research on organizational leaders' understanding and knowledge of evaluation. In this study, organizations with high self-reported evaluation capacity were distinguished by internal cultures that value evaluation and by funder relationships they leverage to support evaluation activities, whereas organizations motivated primarily by external pressures tended to have lower capacity. These results suggest that institutional pressures alone may be insufficient for building evaluation capacity. Instead, as funding streams become more closely tied to programmatic outcomes, funders might collaborate closely with organizations to eliminate evaluation barriers and emphasize the use of outcome data for continuous quality improvement and program effectiveness. Because strong evaluation has been linked with improved programming and implementation (Brown & Kiernan, 2001), eliminating evaluation barriers within human service organizations can have profound implications for programmatic quality and survival.