Abstract: Exploring How GPT-Based Chatbots Can Grow Students' Critical Thinking, Enhance Clinical Case Conceptualization, and Increase Awareness of Ethical Concerns/Biases (Society for Social Work and Research 29th Annual Conference)


403P Exploring How GPT-Based Chatbots Can Grow Students' Critical Thinking, Enhance Clinical Case Conceptualization, and Increase Awareness of Ethical Concerns/Biases

Schedule:
Friday, January 17, 2025
Grand Ballroom C, Level 2 (Sheraton Grand Seattle)
* noted as presenting author
Tuyet Mai Hoang, PhD, Assistant Professor, University of Illinois at Urbana-Champaign, Urbana, IL
Yali Feng, PhD, Associate Professor, University of Illinois at Urbana-Champaign, Urbana, IL
Ainslee Wong, BS, Student, University of Illinois at Urbana-Champaign, Urbana, IL
Tori Ferrara, Student, University of Illinois at Urbana-Champaign, Urbana, IL
Background and Purpose: The advancement of generative artificial intelligence (AI) technology opens doors to innovative approaches in various fields, including social work and mental health. This project explores how GPT-based chatbots can help students engage in critical thinking and reflection as they work through mental health clinical case studies and determine mental health diagnoses. Our research questions are: (1) How can a GPT application be used as an educational tool for our social work students to grow in the current digital working landscape? and (2) What are students’ areas of growth and challenges when engaging with a GPT-based chatbot? The current study’s goal is to explore how GPT-based chatbots can help grow students’ critical thinking, enhance clinical case conceptualization, and increase awareness of ethical concerns/biases.

Methods: A 60-minute pilot exercise was designed for students to explore the applications of a chatbot in mental health diagnosis. The in-class exercise consisted of providing ChatGPT-3.5 with a clinical case without racial identifiers and then prompting it for a diagnosis through our guided steps. The same clinical case with racial identifiers added was then tested to identify potential differences in the chatbot's responses. Students were asked to provide verbal feedback and immediate reactions during the exercise, and afterwards each student wrote a reaction paper on the exercise. The sample included 19 students enrolled in a master's-level mental health disorder course in the School of Social Work at a large Midwestern public university.

Results: Major themes showed that students were strongly cautious about AI biases and ethical considerations, even though they had limited knowledge of and experience with ChatGPT. This caution prevented them from engaging in deeper critical thinking about case conceptualization and from building comfortable confidence and certainty in the GPT-based chatbot's responses. While this lack of certainty prompted some critical thinking at the information-literacy level, especially in evaluating information sources, it kept students from engaging in deeper critical thinking about the case itself. However, we found that guidance can effectively raise students' awareness of reflecting on the user–AI relationship. Most students were able to reflect on their relationship with AI to learn more about themselves, the technology, and their interactions with it, which may make future AI use more effective and comfortable.

Conclusion: The pilot exercise reveals the importance of accurately gauging students’ understanding of GPT-based chatbots and the implications this can have for the design of educational exercises. Learning about GPT-based chatbots should not be conceptualized only within the classroom, because our students also need to be trained for engagement with these tools in their future professional careers. Lastly, our study underscored the need to develop a customized chatbot to eliminate biased information and provide references, which would ensure the credibility of information sources. This project also offers exploratory implications for (1) how we train our social work students in the mental health field, given that technology will inevitably impact future clinical diagnosis and delivery of services, and (2) how to support students in working productively with generative AI technology as a tool for growth rather than a substitute for their own work.