Abstract: AI in Supporting LGBTQ+ Communities (Society for Social Work and Research 30th Anniversary Annual Conference)
Schedule:
Saturday, January 17, 2026
Marquis BR 8, ML 2 (Marriott Marquis Washington DC)
Dget Downey, MSW, PhD Student, New York University, NY
Background and Purpose: As artificial intelligence (AI) becomes increasingly integrated into social work, concerns have emerged about its potential to replicate and exacerbate existing inequities. For LGBTQ+ communities, particularly those facing intersecting forms of marginalization such as racism, ableism, or classism, AI systems often reinforce bias, erase identities, or misclassify data in harmful ways. This review examines how AI can be designed and deployed to ethically support LGBTQ+ individuals. Grounded in social work, queer/trans theory, and data justice, this review argues for an approach to AI development that centers equity, participatory design, and lived experience as essential factors in mitigating harm and advancing fairness in digital and mental health spaces. It also positions LGBTQ+ communities not as passive recipients but as essential co-creators of technology systems that impact their lives.

Methods: This conceptual review synthesizes findings from interdisciplinary research across computational social science, digital ethics, and LGBTQ+ studies. Through an integrative literature review and case analysis of recent applications, including AI-driven mental health tools, content moderation systems, educational platforms, and advocacy technologies, the review identifies structural patterns of exclusion and proposes actionable strategies for inclusive AI development. It draws on empirical findings from peer-reviewed studies, participatory design toolkits, and community-led technology initiatives to outline a justice-oriented framework. The review also integrates social work’s historical and ethical commitments to marginalized populations as a guiding lens for evaluating AI.

Results: Findings reveal that many AI systems fail to reflect the complexities of LGBTQ+ lives due to underrepresentation in training data, lack of inclusive design practices, and insufficient attention to power dynamics. For instance, chatbots and mental health apps often misgender users or offer biased advice, while content moderation algorithms flag queer language as inappropriate. Educational tools may reinforce cisnormative assumptions, and predictive safety planning technologies risk over-surveillance of already marginalized communities. However, community-informed interventions, such as multilingual, culturally adaptive AI tools and participatory frameworks, demonstrate promising outcomes when LGBTQ+ stakeholders are included throughout development. Key recommendations include embedding intersectional ethics at the outset, ensuring data privacy through community governance, and evaluating AI systems using metrics defined by LGBTQ+ users themselves.

Conclusions and Implications: Ethical AI development for LGBTQ+ populations is not merely a technical challenge; it is a social and political imperative. Social work offers a powerful foundation for designing systems that prioritize autonomy, dignity, and safety. This review calls for interdisciplinary collaboration among technologists, researchers, and LGBTQ+ communities to co-create tools that are structurally accountable and culturally responsive. The implications extend to practice, policy, and education. Social workers must advocate for anti-oppressive technology use, educators must integrate AI literacy and ethics into training, and AI developers must treat LGBTQ+ individuals not as edge cases but as core collaborators. Through intentional, justice-driven innovation, AI can meaningfully support LGBTQ+ well-being and help envision a technological future rooted in equity, safety, and liberation.