Abstract: The Evidence for Risk Assessment Tools Used in CPS Investigations: A Scoping Review (Society for Social Work and Research 27th Annual Conference - Social Work Science and Complex Problems: Battling Inequities + Building Solutions)


Schedule:
Friday, January 13, 2023
Encanto B, 2nd Level (Sheraton Phoenix Downtown)
Claire McNellan, MPH, Doctoral Student, University of North Carolina at Chapel Hill, Chapel Hill, NC
Daniel Gibbs, MSW, JD, Doctoral Candidate, University of North Carolina at Chapel Hill, Chapel Hill, NC
Emily Putnam-Hornstein, PhD, John A. Tate Distinguished Professor for Children in Need, University of North Carolina at Chapel Hill, Chapel Hill, NC
Ann Knobel, Graduate Student, The Pennsylvania State University
Background and Purpose. Risk assessment tools are commonly used by child protective services (CPS) agencies to augment decision-making about alleged child maltreatment. The design and use of these tools are evolving alongside technological advances that offer new ways of using administrative data to evaluate risk factors in a household. Risk assessment tools are also an essential component of implementing the Family First Prevention Services Act (FFPSA). As conversations about these tools shift in focus, a compilation and reevaluation of the literature is warranted. In this scoping review, we comprehensively surveyed the quantitative literature on tools used by CPS agencies to measure risk, examining the evidence for (or against) the validity and reliability of these tools both overall and within demographic subgroups.

Methods. We conducted a scoping review because it offers a flexible approach for systematically mapping evidence when the relevant body of literature is wide-ranging and heterogeneous but addresses a specific, practice-oriented question (Arksey & O’Malley, 2005; Levac et al., 2010). We developed a protocol in alignment with the PRISMA-ScR guidelines (Tricco et al., 2018). To be included, a study had to be a quantitative evaluation of a risk assessment tool used by a CPS agency and published between 1990 and May 2021. Screening proceeded in multiple phases with at least two screeners at each phase: 2,155 unique studies underwent double-independent title and abstract screening, and 160 studies advanced to full-text review. A standardized data extraction spreadsheet was used to collect information from the included studies.

Results. A final sample of 25 studies was included for review and data extraction: 21 peer-reviewed journal articles, two doctoral dissertations, and two reports. We present an overview of the consensus-based, actuarial, and automated algorithmic risk assessment tools evaluated in these studies, summarize the methods used to assess validity and reliability, and review the conceptual dimensions of risk captured by each tool. Much of the literature focused on relative validity and reliability, specifically comparisons between actuarial and consensus-based tools. With few exceptions, the studies offered little evidence that tools are equally predictive across demographic subgroups.

Conclusions and Implications. Much of the literature validating risk assessment tools is dated and has not been updated, which may limit the generalizability of this already dubious evidence of effectiveness to current practice conditions. Several studies were described as pilot or preliminary work, yet no subsequent research confirmed their findings. Studies assessing tool validity and reliability are heterogeneous in design, suggesting a lack of agreement about how these tools should be evaluated. The field should clarify objectives for the use of risk assessment tools and establish consensus around evidence standards. Agencies should be cautious about overreliance on tools for which evidence is limited.