Methods: A 60-minute pilot exercise was designed for students to explore the applications of a chatbot in mental health diagnosis. The in-class exercise consisted of providing ChatGPT-3.5 with a clinical case without racial identifiers and then prompting it for a diagnosis using our guided steps. The same clinical case with racial identifiers added was then tested to identify potential differences in the chatbot's responses. Students were asked to provide verbal feedback and immediate reactions during the exercise; afterwards, they also wrote a reaction paper. The sample included 19 students enrolled in a master's-level mental health disorders course in the School of Social Work at a large Midwestern public university.
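To make the with/without-identifier comparison concrete, the sketch below shows how such a paired-prompt comparison could be scripted against the same model family. The vignette text, prompt wording, and client usage are illustrative assumptions only, not the study's actual classroom materials or guided steps; it assumes the official OpenAI Python client and an API key in the environment.

```python
# Illustrative sketch only: the vignette, prompt, and identifier wording are
# hypothetical placeholders, not the study's actual materials.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

CASE_NEUTRAL = (
    "A 24-year-old graduate student reports a 6-week history of low mood, "
    "poor sleep, loss of appetite, and withdrawal from friends."
)
# Same case with a racial identifier added, mirroring the exercise's second pass.
CASE_WITH_IDENTIFIER = CASE_NEUTRAL.replace("A 24-year-old", "A 24-year-old Black")

GUIDED_PROMPT = (
    "Based on this clinical case, list the most likely DSM-5 diagnoses, the "
    "criteria each meets, and what additional information you would need:\n\n{case}"
)

def get_diagnosis(case_text: str) -> str:
    """Send one version of the vignette and return the model's diagnostic response."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the GPT-3.5 family used in the exercise
        messages=[{"role": "user", "content": GUIDED_PROMPT.format(case=case_text)}],
        temperature=0,  # reduce run-to-run variation so the two outputs are comparable
    )
    return response.choices[0].message.content

# Run both versions and inspect the outputs side by side for differences.
for label, case in [("without identifier", CASE_NEUTRAL),
                    ("with identifier", CASE_WITH_IDENTIFIER)]:
    print(f"--- Diagnosis ({label}) ---")
    print(get_diagnosis(case))
```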
Results: Major themes showed that students are strongly cautious about AI biases and ethical considerations, even though they have limited knowledge of and little experience with ChatGPT. This wariness kept them from deeper critical thinking about case conceptualization and from building comfortable confidence and certainty in GPT-based chatbot responses. While this lack of certainty stimulated some critical thinking at the information literacy level, especially in evaluating information sources, it prevented students from engaging in deeper critical thinking about the actual case. However, we found that awareness of the user-AI relationship can be effectively raised through guidance: most students were able to reflect on their relationship with AI to learn more about themselves, the AI, and their interactions, which may make future AI use more effective and comfortable.
Conclusion: The pilot exercise reveals the importance of accurately gauging students' understanding of GPT-based chatbots and the implications this can have for the design of educational exercises. Learning about GPT-based chatbots should not be confined to the classroom, because students also need to be prepared to engage with these tools in their future professional careers. Lastly, our study underscored the need to develop a customized chatbot that eliminates biased information and provides references, which would ensure the credibility of information sources. This project also offers exploratory implications for (1) how we train social work students in the mental health field, given that technology will inevitably shape future clinical diagnosis and service delivery, and (2) how to support students in working productively with generative AI technology as a tool for growth rather than a substitute for their own work.