Methods: Semi-structured interviews were conducted with 17 licensed therapists who had experience using an AI clinical note platform. Interviews were conducted via Zoom and transcribed for analysis. A modified grounded theory approach was used to identify key themes across respondents.
Results: Respondents described several ways in which AI notes improved the quality of care. First, clinicians reported significant reductions in documentation burden. Delegating clinical note-writing to AI allowed them to remain more present and engaged during sessions, while also reducing job-related stress associated with administrative work. Second, AI-generated notes were often perceived as clearer, more concise, and more professional than those written manually, bolstering clinicians' confidence in their documentation and clinical interventions. Third, the clinical interpretations and suggestions generated by the platform were often enlightening; respondents likened AI feedback to that of a supervisor, offering clinicians a unique opportunity to see their therapeutic interventions from a different perspective, evaluate their clinical decision-making, and identify potential blind spots in their treatment approach. Despite these benefits, participants expressed significant concerns about client confidentiality, data security, and the future impacts of AI on the field. The lack of well-articulated policies and best practices guiding AI use was a notable barrier to more robust adoption of these tools. Technological shortcomings, such as voice misidentification and potential biases in documentation, were also noted.
Conclusions: Respondents generally found AI notes to be accurate and complete. Managing the pressure of clinical documentation requirements was a significant challenge for respondents in this sample, indicating that the availability of AI notetaking platforms, particularly high-quality ones, has the potential to meet a pressing need among practitioners. Respondents raised important questions about how to define safe and responsible AI use and how to ensure that mental health-focused technological innovations are appropriately regulated, noting that guidance from policymakers and the profession lags behind the rapid pace of AI adoption. Best practices that protect both clinicians and clients against potential harms, including data breaches, algorithmic bias, and misinformation, are urgently needed.