Effects of a Random Assignment Capacity-Building Intervention on Nonprofits' Evaluation Capacity Using Efficacy Subset Analysis

Schedule:
Thursday, January 15, 2015: 3:30 PM
Preservation Hall Studio 9, Second Floor (New Orleans Marriott)
Mathieu R. Despard, MSW, Clinical Associate Professor, University of North Carolina at Chapel Hill, Chapel Hill, NC
Background and Purpose:

At the organizational level, evidence-based practice (EBP) is conceptualized as the use of the best available evidence to improve programs (Austin, 2008). Engaging in program evaluation and developing a learning culture in human service organizations (HSOs) may help promote EBP (Austin, 2008; Gambrill, 2006; Maynard, 2010; Plath, 2012). However, nonprofit HSOs, especially smaller ones, struggle with evaluation (Innovation Network, 2012; Leake et al., 2007; Pejsa, 2011) but may benefit from capacity-building assistance. The purpose of this study was to assess whether capacity building can help smaller nonprofit HSOs engage in evaluation.

Methods:

Data for this study were drawn from the Compassion Capital Fund Demonstration Program Impact Study (ICPSR 29481). Nonprofit HSOs were randomly assigned to a treatment (N=237) or control (N=217) group during an application process for capacity-building assistance (group training, technical assistance, and/or targeted funding) from ten intermediary organizations. Organizational representatives were surveyed about organizational practices in five capacity areas at baseline and at follow-up 15 months later. Building on the findings of Minzner et al. (2013) concerning overall treatment effects, this study used efficacy subset analysis, applied a different method for estimating difference-in-differences, and controlled for the amount of grant assistance received. Logistic regression with covariate control, sampling weights for nonresponse at follow-up, and robust standard errors to adjust for clustering by intermediary was used to assess outcomes related to engaging in evaluation.
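
As a rough illustration of this modeling approach (a sketch, not the study's actual code), the following Python example fits a weighted logistic regression with intermediary-clustered robust standard errors using statsmodels. All variable names (treated, young_org, small_org, grant_amount, weight, intermediary) are hypothetical stand-ins for the study's measures, and var_weights is used here as a simple proxy for the nonresponse sampling weights; a full survey-weighted estimator would differ.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical extract of the ICPSR 29481 follow-up data.
    df = pd.read_csv("ccf_followup.csv")

    # Logistic regression: treatment indicator plus covariates
    # (organization age, revenue size, grant amount received).
    model = smf.glm(
        "measures_outcomes ~ treated + young_org + small_org + grant_amount",
        data=df,
        family=sm.families.Binomial(),
        var_weights=df["weight"],  # stand-in for nonresponse sampling weights
    )

    # Robust standard errors adjusted for clustering by intermediary.
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["intermediary"]})
    print(result.summary())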

Results:

Comparing treatment group organizations (N=120) that received evaluation assistance with control group organizations (N=141) that were not exposed to evaluation assistance, statistically significant effects were found for measuring (β=.952, SE=.438, OR=2.59, p<.05) and recording (β=1.12, SE=.473, OR=3.08, p<.05) client outcomes, but not for receiving client feedback about services. Younger organizations (5 years old or less) were less likely to measure outcomes (p<.001) and to receive client feedback (p<.05). Smaller organizations (less than $100,000 in revenue) were less likely to measure outcomes (p<.05). Treatment group organizations were also more likely than control group organizations to have increased their focus on strengthening the organization’s overall evaluation capacity (β=.689, SE=.259, OR=1.99, p<.01) and on incorporating new service approaches to improve quality and effectiveness (β=.673, SE=.287, OR=1.96, p<.05), but not on collecting more information about clients.
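
For interpretation, each odds ratio is the exponentiated logit coefficient, OR = exp(β); for example, exp(.952) ≈ 2.59, meaning treatment group organizations had roughly 2.6 times the odds of measuring client outcomes relative to comparable control group organizations.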

Conclusions and Implications:

Compared with a prior study of this program (Minzner et al., 2013), this study used a different method for estimating difference-in-differences, controlled for grant amounts, and examined efficacy subsets rather than overall treatment effects, and it found different results concerning engagement in evaluation. Capacity-building assistance that targets engagement in evaluation may be more effective than broad-based assistance in helping small nonprofit HSOs move closer toward engaging in EBP. As federal and state governments continue to encourage EBP and tie it to funding, they should also support capacity building for nonprofit HSOs. However, nonprofit HSOs with low revenues or short operating histories may benefit less from this assistance.