AI-Powered Documentation in Mental Health: Opportunities, Challenges, and Ethical Considerations

Society for Social Work and Research 30th Annual Conference

Schedule:
Friday, January 16, 2026
Liberty BR I, ML 4 (Marriott Marquis Washington DC)
Dania Lerhman, Doctoral Student, Fordham University, New York, NY
Elizabeth Matthews, PhD, Associate Professor, Fordham University, New York, NY
Lauri Goldkind, PhD, Professor, Fordham University, New York, NY
Background: The use of artificial intelligence in behavioral health practice is rapidly proliferating. AI-generated progress notes, which use large language models (LLMs) to summarize clinical sessions in near real-time, are an emerging but understudied application of this technology. While some research has found that AI is capable of generating high-quality, accurate clinical notes, ambivalent attitudes towards technology and low AI literacy among mental health clinicians will influence how these tools are integrated into practice. Thus far, no work has explored the real-world implementation of AI-generated notes among practicing therapists. This qualitative study examined the experiences of 17 mental health clinicians using one AI-generated note platform in order to identify the perceived benefits, challenges, and limitations of integrating these tools into routine care.

Methods: Semi-structured interviews were conducted with 17 licensed therapists who had experience using an AI clinical note platform. Interviews took place over Zoom and were transcribed for analysis. A modified grounded theory approach was used to identify key themes across respondents.

Results: Respondents described several ways in which AI notes improved the quality of care. First, clinicians reported significant reductions in documentation burden. Delegating note writing to AI allowed them to remain more present and engaged during sessions while also reducing the job-related stress associated with administrative work. Second, AI-generated notes were often perceived as clearer, more concise, and more professional than those written manually, bolstering clinicians' confidence in both their documentation and their interventions. Third, the clinical interpretations and suggestions generated by the platform were often enlightening; respondents likened AI feedback to that of a supervisor, offering a unique opportunity to see their therapeutic interventions from a different perspective, evaluate their clinical decision-making, and identify potential blind spots in their treatment approach. Despite these benefits, participants expressed significant concerns about client confidentiality, data security, and the future impact of AI on the field. The lack of well-articulated policies and best practices guiding AI use was a notable barrier to more robust adoption of these tools. Technological shortcomings, such as voice misidentification and potential biases in documentation, were also noted.

Conclusions: Respondents generally found AI notes to be accurate and complete. Managing the pressure of clinical documentation requirements was a significant challenge for respondents in this sample, indicating that AI note-taking platforms, particularly high-quality ones, have the potential to meet a pressing need among practitioners. Respondents raised important questions about how to define safe and responsible AI use and how to ensure that mental health-focused technological innovations are appropriately regulated, noting that guidance from policy and the profession lags behind the rapid pace of AI adoption. Best practices that protect both clinicians and clients against potential harms, including data breaches, algorithmic bias, and misinformation, are urgently needed.