Abstract: Building Evidence in YARH: Usability Tests, Formative Evaluations, and the Evidence-Building Trajectory in Child Welfare (Society for Social Work and Research 25th Annual Conference - Social Work Science for Social Change)



Schedule:
Wednesday, January 20, 2021
Liz Clary, MA, Researcher, Mathematica, Chicago, IL
Background and purpose: YARH (Youth At-Risk of Homelessness) has two main goals: The first is for grantees to design comprehensive service models intended to prevent homelessness among youth and young adults involved in the child welfare system. The second is to test these models to build the evidence base on promising strategies that support these youth. The Children’s Bureau, within the Administration for Children and Families (U.S. Department of Health and Human Services), first funded two-year planning grants to 18 organizations. Six of them received funding for the second phase, a four-year initial implementation grant (2015–2019). Mathematica was contracted to provide evaluation technical assistance to the YARH grantees, helping them test components of their interventions and design and execute their formative evaluations.

Methods: After providing an overview of YARH, we will describe how Mathematica partnered with each YARH grantee to provide evaluation-related technical assistance. This assistance included helping grantees identify their target populations, develop theories of change and logic models, and learn more about those populations. During the second phase of YARH, dedicated liaisons from Mathematica worked with grantees on “usability tests,” small tests of very specific pieces of the intervention to make sure they worked as intended. We supported grantees as they began using a continuous quality improvement (CQI) system to help monitor fidelity and other elements of implementation, and they adjusted their models as needed. We also worked with grantees to develop formative evaluations to learn whether the interventions could be implemented as intended in their communities. We encouraged grantees to focus on the following questions:

  • What do you hope to learn about your program’s implementation?
  • How will you know if the key components of your program are being delivered as intended?
  • How will you know if your program is starting to show the intended effects?

Results: Grantees undertook an array of usability tests to determine the feasibility of specific elements of their models. Additionally, they conducted formative evaluations to understand what supports and structures were needed to implement their models with fidelity. Two of those grantees will share what they learned through their formative evaluations.

Conclusions and implications: Providing a structure for grantees to learn about and test components of their interventions helped them strengthen those interventions and conduct formative evaluations that are beginning to show that the interventions are having the desired effects.