Methods: First, the evaluation infrastructure was established through content analysis of 30 funded contracts to identify themes to include in the evaluation metrics. Next, the evaluation used a mixed-methods approach incorporating both qualitative and quantitative components. Quantitative data included secondary and primary data. Secondary data comprised the CTG grantees’ monthly deliverables and quarterly outcome reports. Primary data were collected via a survey administered to key informants of the funded community-based agencies. Qualitative data were collected via semi-structured interviews with these key informants. Quantitative data were analyzed using descriptive statistics; qualitative data were analyzed through thematic analysis.
Results: The researchers developed the evaluation infrastructure, including specific evaluation metrics and scoring rubrics. The evaluation metrics comprise eight components: staffing, service delivery, program data quality, marketing, population focus, participant outcomes, engagement, and social impact. Applying these metrics and rubrics, the researchers found that grantees performed well on all scorable evaluation components, including staffing, service delivery, program data quality, marketing, population focus, participant outcomes, and engagement. However, the available data did not allow for scoring several items on the evaluation metrics, such as evidence-based practice (EBP), participant demographic data, collaboration and partnership, and county-level impact. The key informant survey and interviews showed that most respondents were confident in their ability to achieve their program goals. They had collaborations with health organizations and community-based organizations. In addition, they reported facing COVID-19-specific challenges, and more than half discussed changes they had to make to continue their services. Grantees demonstrated high levels of competence in conducting formative evaluations (i.e., needs assessments and process evaluations). However, they expressed less confidence in conducting summative evaluations (i.e., assessing short/medium-term outcomes or long-term impacts).
Conclusions and Implications: Overall, the CTG grantees were successful in accomplishing the tasks in their contracts. It was evident that the grantees could benefit from training on health disparities, social determinants of health, grant proposal writing, and partnership building. The evaluation also found limited use of EBPs among CBOs. This finding highlighted the importance of helping CBOs identify and implement EBPs. Lastly, the evaluation found a lack of data to measure the community-level impact of CBOs on reducing health disparities. This finding highlighted the importance of establishing data dashboards using data from county public health departments. Moreover, accessing such data at the census-tract or zip-code level will allow evaluators to assess the community-level impact of CBOs. Overall, this study demonstrated the use of science to support CBOs and state grant makers in eliminating health inequalities.