Predictive risk models (PRMs) are increasingly used in child welfare systems to assess the likelihood of future maltreatment. While often framed as neutral tools to support decision-making, these models draw on racially stratified data and operate within systems already shaped by surveillance and structural inequity. This study investigates how one PRM used by New York City’s Administration for Children’s Services (ACS) functions within an algorithmic ecology of regulation, with particular attention to racialized outcomes and the lived experiences of Black parents. Research questions include: (1) How is the PRM designed and operationalized? (2) How does the model engage with race, racism, and risk? (3) What are the lived experiences of parents who were assigned high-risk classifications? (4) What are the ethical implications of using predictive models in preventive services?
Methods:
This qualitative study used a two-pronged design. First, a document analysis was conducted on more than 1,200 pages of public records, technical reports, contracts, policies, and internal presentations related to ACS’s PRM. Second, 39 in-depth interviews were conducted with 12 Black parents impacted by the model; each participant completed three interviews to support reflection and phenomenological depth. Reflexive thematic analysis, guided by the algorithmic ecology framework, was applied to the documents, while interpretative phenomenological analysis (IPA) was used to explore parent narratives. Critical Race Theory informed both analytic approaches.
Results:
Findings reveal that the PRM embeds systemic racism through proxy variables, including zip code, service use, and case history, and operates within data infrastructures that disproportionately impact Black families. Although race is not a direct variable in the model, parents described “feeling” the effects of algorithmic judgment through service mandates, caseworker interactions, and risk-based decisions they could not see but were acutely aware of. Several parents reported that being flagged by ACS increased their priority for public housing, linking predictive tools to broader systems of carceral care. Others described coercive experiences during investigations, particularly in contexts of domestic violence, where system involvement heightened rather than reduced their risk. At the time of the study, ACS had expanded from two to five PRMs, including models that prioritize families for housing access and one that identifies families unlikely to be regulated. These developments raise ethical concerns about the expanding reach of algorithmic governance.
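The proxy-variable mechanism described above can be made concrete with a toy sketch. The Python example below is purely illustrative and hypothetical: it uses synthetic data and a generic logistic regression, not ACS’s model, features, or data. Its only assumption is that prior system contact is recorded more often in heavily surveilled neighborhoods; under that assumption, a model that never sees race still assigns systematically higher scores along lines of neighborhood surveillance, which is how racial disparity can enter a nominally race-blind tool.

```python
# Illustrative sketch only: synthetic data, not the ACS model or its features.
# Shows how a "race-blind" risk model can still produce racially patterned
# scores when a proxy variable (here, a neighborhood indicator) carries the
# imprint of historically unequal surveillance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical neighborhoods: 1 stands in for heavily surveilled areas.
neighborhood = rng.integers(0, 2, size=n)
# Assumption of the sketch: prior system contact and recorded service use are
# logged more often in heavily surveilled areas, independent of underlying need.
prior_contact = rng.binomial(1, np.where(neighborhood == 1, 0.5, 0.1))
service_use = rng.binomial(1, np.where(neighborhood == 1, 0.4, 0.2))

# Outcome label: a re-report to the agency, which itself depends on
# surveillance intensity, so the label reflects system behavior, not harm.
p_rereport = 0.05 + 0.15 * prior_contact + 0.10 * neighborhood
rereport = rng.binomial(1, p_rereport)

# Train a model that never sees race; it only sees the proxies.
X = np.column_stack([neighborhood, prior_contact, service_use])
model = LogisticRegression().fit(X, rereport)

scores = model.predict_proba(X)[:, 1]
print("mean risk score, low-surveillance neighborhoods :",
      round(scores[neighborhood == 0].mean(), 3))
print("mean risk score, high-surveillance neighborhoods:",
      round(scores[neighborhood == 1].mean(), 3))
# The score gap tracks surveillance history, so any group concentrated in
# the high-surveillance neighborhoods inherits higher "risk" by construction.
```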
Conclusions and Implications:
This study demonstrates that PRMs in child welfare are not standalone tools; they are embedded in complex institutional ecologies that reproduce racialized harm. Predictive tools quantify suffering and embed prior surveillance into present-day risk calculations. These findings call on social workers to critically examine their role in algorithmic decision-making and to advocate for information justice, including transparent disclosure of how family data are used. Future research should explore how PRMs influence frontline practice and how workers interpret algorithmic outputs. Policies must also address the coercive entanglement between services, data, and family regulation, particularly in areas such as mental health and housing access.