Motivational Interviewing (MI) is an established, evidence-based practice in child welfare settings (U.S. Department of Health and Human Services, n.d.), yet building sustainable skills to fidelity remains difficult because of the resource intensity of training and supervision (Davis et al., 2021). Traditional methods often lack consistent, immediate, individualized feedback, making it challenging to build MI proficiency. Moreover, evaluating fidelity to MI practices through expert review is time-consuming and costly, placing additional strain on practitioners and potentially diverting resources from client care.
The Virtual Motivational Interviewing (VMI) app presents a promising alternative. VMI is a skills training app that incorporates deliberate practice (i.e., repeated, scaffolded practice with feedback) for learning MI (McDonald et al., 2021). Initial research has shown VMI to be a viable app that can increase skill and self-efficacy related to MI, offering a potentially more accessible and scalable solution (Benson et al., 2025).
With advancements in artificial intelligence (AI), chatbots are emerging as tools that can simulate client interactions and potentially deliver real-time feedback, including fidelity assessments such as the Motivational Interviewing Treatment Integrity (MITI) framework. VMI aims to integrate a realistic AI chatbot to provide advanced practice opportunities in a safe environment, while also providing immediate feedback on both overall skill improvement and fidelity to MI using evidence-based frameworks such as the MITI.
Methods
This study aimed to evaluate both the believability, or realistic nature, of VMI’s AI chatbot and its accuracy in providing MITI-based scoring and feedback compared to expert human raters. A total of 100 participants—including students, professors, practitioners, clinicians, and social workers—were recruited through a webinar and voluntary outreach. Using a mixed-methods design, transcripts of participants’ interactions with the chatbot were analyzed by both the chatbot and MITI expert reviewers to assess interrater reliability in MITI scoring and feedback. Participants also completed surveys measuring the believability and co-presence of the chatbot.
Results
Initial results revealed some discrepancies between the chatbot’s MITI scoring and human expert evaluations. Based on these findings, the AI chatbot’s prompting algorithms were revised and then reassessed. After revisions, the feedback and scoring provided by the chatbot and human experts showed an interrater reliability rating indicative of a “high” degree of agreement. Additionally, participants rated VMI’s chatbot as “believable” to “very believable,” indicating that the interaction felt authentic, realistic, and engaging.
Conclusion and Implications
The findings suggest that AI-driven chatbots can serve as effective, believable tools for practicing MI in realistic scenarios while offering reliable feedback consistent with fidelity assessments such as the MITI. This approach has the potential to significantly reduce the burden of traditional training methods by lowering costs and time demands, while increasing access to quality skill-building opportunities. VMI, equipped with the AI chatbot, represents a scalable, resource-efficient solution for MI training and supervision, with strong potential for broader application. VMI can be leveraged as a low-barrier tool for both ongoing training and longitudinal fidelity assessment of MI skills.