Methods: The study team distributed an online survey to child welfare legal professionals in the United States between September 2024 and February 2025. The survey asked about professionals’ (1) backgrounds, (2) exposure to AI tools in practice, (3) attitudes toward and knowledge of AI, and (4) willingness to rely on tools described in vignettes with varying error rates. A representative sample of 76 legal professionals completed the questionnaire. The research team conducted descriptive, subgroup, and difference-in-differences regression analyses to characterize respondents’ experiences and how their trust changed as the error rates of hypothetical AI tools were manipulated.
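The abstract does not report the exact model specification; as a minimal sketch only, assuming trust was measured per respondent per vignette, a difference-in-differences specification for this design could take the form

$$\text{Trust}_{iv} = \beta_0 + \beta_1\,\text{ErrorRate}_v + \beta_2\,\text{Group}_i + \beta_3\,(\text{ErrorRate}_v \times \text{Group}_i) + \varepsilon_{iv},$$

where $i$ indexes respondents, $v$ indexes vignettes, $\text{Group}_i$ is a subgroup indicator (e.g., gender or years of experience), and the interaction coefficient $\beta_3$ captures how the change in trust across error rates differs between subgroups. All symbols here are illustrative assumptions, not taken from the study.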
Results: Only 16% of respondents reported knowing that an AI tool had been used in one of their cases, and fewer than 10% reported using one in their own case decision-making. Fewer than one-third of those who reported AI use stated that it was discussed in court proceedings. Overall attitudes toward AI tools were slightly positive, but respondents had moderate-to-low levels of knowledge and consistently reported concerns about ethical impacts. Younger, male, and state agency-employed attorneys were more likely to express positive views or greater knowledge. Most practitioners held AI models to high standards of accuracy, viewed model explainability as a crucial prerequisite for use in real-world cases, and believed AI use to be more appropriate in safety and well-being decisions than in high-stakes permanency decisions such as the termination of parental rights. Female and more experienced attorneys showed the greatest relative gains in trust toward AI as hypothetical tools’ accuracy improved.
Implications: These results suggest that reliance on AI, or at least explicit knowledge and discussion of AI-driven decision-making, remains fairly low in high-stakes child welfare legal practice. As legal professionals confront AI use in these environments, educators and policymakers must prepare them to ensure that systems meet standards for ethical and accurate high-stakes decision-making. Further, the “shadow AI” practices that practitioners covertly employ in many jurisdictions must be brought into the open through rigorous yet safe discussion so that optimal AI integration practices can be understood and advanced in local contexts. Without such efforts, these technologies risk becoming invisible and inadequate substitutes for the human judgment and high-quality casework on which clients’ lives depend.