Abstract: Early Testing Mechanisms for Assessment of Initial Implementation Outcomes on Adult & Child Survivor-Centered Approach (Society for Social Work and Research 25th Annual Conference - Social Work Science for Social Change)


542P Early Testing Mechanisms for Assessment of Initial Implementation Outcomes on Adult & Child Survivor-Centered Approach

Schedule:
Tuesday, January 19, 2021
* noted as presenting author
Linda Chimwemwe G. Banda, MSW, Graduate Research Assistant, University of Kansas, Lawrence, KS
Becci A. Akin, PhD
Juliana M. Carlson, PhD
Cheryl Holmes, MPA, Research Project Director, University of Kansas, Lawrence, KS
April Diaz, MSW, MA, Graduate Research Assistant, University of Kansas School of Social Welfare, Lawrence, KS
Background/Purpose:

Despite the common co-occurrence of domestic violence and child welfare involvement, few interventions have been rigorously tested to address the needs of and improve outcomes for this population. In response, the Children’s Bureau funded the Quality Improvement Center on Domestic Violence in Child Welfare (QIC-DVCW) to test an adult and child survivor-centered approach (Approach) to help families experiencing domestic violence who are involved in the child welfare system. Following implementation science frameworks that emphasize staged implementation (Fixsen, Blase & Van Dyke, 2019), this paper presents findings on the QIC-DVCW’s initial implementation and early testing. The aim of this study was to build knowledge about early testing as a mechanism for assessing initial implementation and for applying implementation science best practices of staging and improvement cycles.

Methods: Five early testing metrics were established for monthly tracking in each of three QIC-DVCW project sites using the Approach. Sites represented one Midwestern state and two Atlantic coast states. Early testing metrics comprised implementation data on coaching (i.e., the percentage of supervisors participating in coaching) and fidelity, because these were viewed as markers of successful implementation and necessary evidence for ensuring high-fidelity implementation. Four fidelity metrics measured: 1) whether fidelity was assessed by supervisors; 2) whether the fidelity checklist was fully completed; 3) which fidelity domains scored highest on a nine-point scale; and 4) which fidelity domains scored lowest on that scale. Data on fidelity were also used as a feedback loop to coaches so they would know which domains/topics needed more support. Data on metrics were collected over six months, analyzed with descriptive statistics, and reported back to sites monthly. Sites used early testing metrics to understand achievement of implementation milestones, monitor fidelity, and address implementation obstacles.
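To make the descriptive monitoring concrete, the sketch below illustrates how a single site-month summary of this kind could be computed: a checklist completion rate plus the highest- and lowest-scoring fidelity domains. The data, domain labels, and record structure are hypothetical assumptions for illustration only; this is not the project's analysis code or data system.

```python
# Minimal sketch of monthly descriptive summaries for early testing metrics.
# All records, field names, and scores below are hypothetical examples.
from statistics import mean

# Hypothetical records: one fidelity checklist per supervisor per site-month.
checklists = [
    {"site": "A", "month": "2020-01", "completed": True,
     "scores": {"Working with adult survivors": 7,
                "Work with person using violence and coercion": 4,
                "Overall": 7}},
    {"site": "A", "month": "2020-01", "completed": False, "scores": {}},
]

def monthly_summary(records, site, month):
    """Summarize one site-month: completion rate and mean score per fidelity domain."""
    subset = [r for r in records if r["site"] == site and r["month"] == month]
    completed = [r for r in subset if r["completed"]]
    completion_rate = len(completed) / len(subset) if subset else 0.0

    # Pool scores by domain across completed checklists, then average.
    pooled = {}
    for record in completed:
        for domain, score in record["scores"].items():
            pooled.setdefault(domain, []).append(score)
    domain_means = {domain: mean(scores) for domain, scores in pooled.items()}

    highest = max(domain_means, key=domain_means.get) if domain_means else None
    lowest = min(domain_means, key=domain_means.get) if domain_means else None
    return {"completion_rate": completion_rate, "domain_means": domain_means,
            "highest_domain": highest, "lowest_domain": lowest}

print(monthly_summary(checklists, "A", "2020-01"))
```

In practice, a summary like this would be produced for each site each month and fed back to coaches and local implementation teams, consistent with the feedback loop described above.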

Results:

Across the three sites, results on the coaching metric showed that the percentage of supervisors participating in coaching was much lower than expected and inconsistent over time. With fidelity checklists, we found that many were not completed when expected and that completion rates fluctuated dramatically month to month; however, those that were completed had no missing data. Two fidelity domains scored highest across sites: ‘Working with adult survivors’ and ‘Overall.’ The domain ‘Work with person using violence and coercion’ scored lowest across sites. Results were shared monthly with local implementation teams, where improvement cycles could be designed and initiated. In one site, a project champion facilitated the use of early testing data, and that site demonstrated improvements in fidelity metrics.

Discussion/Implications:

This study adds to the growing literature on implementation. Given that initial implementation is known to be troublesome because practices are not yet stable or routine (Fixsen et al., 2019), monitoring early testing metrics helped identify which implementation components were proceeding as expected, whether fidelity was at acceptable levels, and which domains of fidelity were most challenging to implement. These data-driven strategies also demonstrate how project champions and continuous improvement cycles may be needed to support implementation of innovative practices. Implications for research, policy, and practice will be discussed.