Session: Using Human-Centered AI to Detect Racial Bias and Adapt Teacher Student Models for Equitable Decision-Making in Social Organizations (Society for Social Work and Research 30th Annual Conference Anniversary)

342 Using Human-Centered AI to Detect Racial Bias and Adapt Teacher Student Models for Equitable Decision-Making in Social Organizations

Schedule:
Sunday, January 18, 2026: 11:30 AM-1:00 PM
Liberty BR O, ML 4 (Marriott Marquis Washington DC)
Cluster: Race and Ethnicity
Organizer:
Estefanía Palacios, MSW, Boston College
Speakers/Presenters:
Estefanía Palacios, MSW, Boston College and Diego Pavez, BA, BCI Bank
Social service organizations in the AI era may consider using artificial intelligence systems to allocate resources to their target populations; however, there is a risk that these automated decisions may replicate or even amplify existing racial biases, leading to inequitable outcomes. This workshop is designed as a practical training session for social work professionals to acquire methodological skills that enable them to detect, analyze, and mitigate such biases by applying the principles of Human Centered AI. In this workshop, an innovative methodology based on the use of standardized prompts will be presented; these prompts describe fictional applicant profiles for social benefit programs, such as food assistance, community health services, and job training, in which key demographic variables (age, gender, income, education level, and location) remain constant while only the race/ethnicity variable is systematically varied. This controlled variation will allow participants to compare the AI systems decisions when the applicants self-identified race is altered in isolation. To contextualize and validate the findings, current demographic data from the Public Use Microdata Sample (PUMS) 2023 will be used, facilitating a comparison between the automated decisions and the actual population distribution. During the workshop, participants will first learn how to identify and quantify racial biases by analyzing the differences in responses produced by various high-usability AI models, both open source and proprietary, from different geographic regions. For instance, they will evaluate the eligibility of a hypothetical applicant who meets objective vulnerability criteria, with only the race variable modified, to determine whether the model delivers divergent outcomes based solely on this difference. Next, the workshop will address the critical examination of pre-trained ("teacher") models versus the possibility of adapting or training context-specific ("student") models using local data; it will be emphasized that teacher models, having been developed from large, general datasets, may incorporate inherent biases, whereas student models, developed with context-specific information, better reflect the unique realities and needs of local communities. Practical case studies and strategies will be provided to help organizations learn how to develop and supervise these adapted models. Finally, the workshop will emphasize the importance of applying Human Centered AI principles, ensuring that technology is used ethically, transparently, and inclusively. Guidelines and a checklist of best practices will be shared so that social work practitioners can incorporate these principles into the continuous review and adjustment of AI systems, ensuring that automated decisions align with values of social justice and human dignity. As a result, participants will leave the workshop with practical tools to monitor, evaluate, and correct potential biases in resource allocation, promoting equitable decision-making in their organizations and advancing the practice of social work in an era of AI.
See more of: Workshops