Abstract: Understanding AI Integration in Child Welfare Practice: Experiences and Attitudes of Legal Professionals (Society for Social Work and Research 30th Annual Conference)

Schedule:
Thursday, January 15, 2026
Marquis BR 13, ML 2 (Marriott Marquis Washington DC)
Matthew Trail, Research Fellow, Max Planck Institute, Germany
Daniel Gibbs, Assistant Professor, University of Georgia, Athens, GA
Background: AI-based tools have become part of daily life in many human services systems, yet empirical evidence on practitioners’ experiences using them for real-world decisions is scarce. This gap makes it difficult for educators, policymakers, and practitioners to craft informed responses to AI integration that uphold shared values of ethics and effectiveness. Challenges within the child welfare and juvenile justice systems are particularly complex because these systems sit at the high-stakes interprofessional intersection of social work and the law. This presentation explores legal professionals’ exposure to AI in these systems, their attitudes toward AI tools, and their views on the accuracy an AI tool must demonstrate before it can be relied on in real-world decisions.

Methods: The study team distributed an online survey to child welfare legal professionals in the United States between September 2024 and February 2025. The survey contained questions about professionals’ (1) backgrounds, (2) exposure to AI tools in practice, (3) attitudes toward and knowledge of AI, and (4) willingness to rely on tools described in vignettes with varying error rates. A representative sample of 76 legal professionals completed the questionnaire. The research team conducted descriptive, subgroup, and difference-in-differences regression analyses to characterize patterns in respondents’ experiences and the change in their trust as the error rate of hypothetical AI tools was manipulated.
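To make the analytic approach concrete, the following is a minimal sketch of a difference-in-differences specification consistent with this design; the variable names and functional form are illustrative assumptions, not the study’s reported model:

\[
\mathrm{Trust}_{ij} = \beta_0 + \beta_1\,\mathrm{Accuracy}_j + \beta_2\,\mathrm{Group}_i + \beta_3\,(\mathrm{Accuracy}_j \times \mathrm{Group}_i) + \varepsilon_{ij}
\]

Here, \(\mathrm{Trust}_{ij}\) is respondent \(i\)’s reported willingness to rely on the tool described in vignette \(j\), \(\mathrm{Accuracy}_j\) is the manipulated accuracy level of that vignette’s hypothetical tool, and \(\mathrm{Group}_i\) indicates a respondent subgroup (e.g., gender or experience level). The quantity of interest is \(\beta_3\): the subgroup difference in how trust changes as accuracy improves. Given repeated vignette ratings from the same respondents, clustering standard errors within respondents would be a natural choice under these assumptions.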

Results: Only 16% of respondents reported knowing that an AI tool had been used in one of their cases, and fewer than 10% reported using one in their own case decision-making. Fewer than one-third of those who reported AI use stated that it was discussed in court proceedings. Overall attitudes toward AI tools were slightly positive, but respondents had moderate-to-low levels of knowledge and consistently reported concerns about ethical impacts. Younger, male, and state agency-employed attorneys were more likely to express positive views or greater knowledge. Most practitioners held high standards for AI models’ accuracy, viewed model explainability as a crucial condition for use in real-world cases, and believed AI use to be more appropriate for safety and well-being decisions than for high-stakes permanency decisions such as the termination of parental rights. Female and more experienced attorneys showed the greatest relative gains in trust toward AI as the hypothetical tools’ accuracy improved.

Implications: These results suggest that reliance on AI, or at least explicit knowledge and discussion of AI-driven decision-making, remains fairly low in high-stakes child welfare and legal practice settings. As legal professionals confront AI use in these environments, educators and policymakers must prepare them to ensure that systems’ standards for ethical and accurate high-stakes decision-making are maintained. Further, “shadow AI” practices used covertly by practitioners in many jurisdictions must be brought into the open through rigorous yet safe discussion so that optimal AI integration practices can be understood and advanced in local contexts. Without such efforts, these technologies risk becoming invisible and inadequate substitutes for the human judgment and high-quality casework that shape clients’ lives.