Abstract: Evaluating Learnings in a Computer-Simulated Clinical Training Environment: A Multi-Method Case Study (Society for Social Work and Research 26th Annual Conference - Social Work Science for Racial, Social, and Political Justice)

507P Evaluating Learnings in a Computer-Simulated Clinical Training Environment: A Multi-Method Case Study

Schedule:
Saturday, January 15, 2022
Marquis BR Salon 6, ML 2 (Marriott Marquis Washington, DC)
* noted as presenting author
Kang Sun, PhD, Pre-doc Fellow, University of Illinois at Urbana-Champaign, Champaign-Urbana, IL
Valerie Cintron, MSW, Teaching Assistant Professor, University of Illinois at Urbana-Champaign, Champaign, IL
Background and Purpose: Computer simulation is increasingly used as a teaching strategy for training social work students' clinical practice skills. However, this growth has yet to be fully incorporated into current pedagogical research. One important step toward such incorporation is to comprehensively evaluate students' learning experiences within a computer-simulated environment.

This study evaluated students' computer simulation-mediated learning experiences through a pre- and post-test design. The surveys were further substantiated with program-generated evaluation metrics, records of simulated sessions, and a six-page reflective essay assignment from each participant. These multiple methods triangulated objective data with subjective data to evaluate students' learning experiences.

Methods: The project was introduced to a graduate-level clinical training class on cognitive and behavioral (CB) techniques at the first class meeting. Six students participated in this study. I used a pre- and post-test design to compare students' overall expectations with their simulation technology experiences, objective data on students' learning behaviors provided by the computer simulation program, and subjective data from students' reflection papers. The computer simulation records and students' reflections were coded using sentence-by-sentence open coding, axial coding, and then selective coding. The emergent themes yielded by the different methods were compared for either consolidation or revision. All research procedures were approved by the Human Subject Research Board.

Findings: All students expressed discrepancies between their expectations and their experiences of the simulated session. They reported multiple attempts to practice CB methods, motivated both by scores and by their self-identified attitudes toward learning. Students used their scores as a feedback mechanism to adjust their analyses and decisions in choosing assessment and intervention techniques. At the same time, they critiqued the limited response options and language choices provided by the computer simulation. Most students drew on past life or career experiences when adjusting their assessment, intervention, and language choices, and they further adjusted these choices in subsequent trials of the session.

Conclusion and Implications: Students' learning experiences show both the effectiveness and the limitations of using computer simulation for beginning CB methods training. The effectiveness of the simulated learning environment is manifested in students' multiple attempts to practice CB methods and their creative use of scores to form feedback loops. All students adjusted their specific choices in the process, and some developed a high-level understanding of what CB methods are. Although the simulation program's limited choices prevented students from developing personal styles in conducting the session, the program structured students' responses to include the essential components of CB application. However, the program lacked options for asking follow-up questions and for presenting the more complex circumstances that simulate real-life sessions. The simulation reflections show that students critically appraised both the benefits and the limitations of the program. Their critiques of the program's limits, such as the lack of complex circumstances, limited response choices, and limited follow-up question options, suggest that these limitations can be mitigated through students' critical thinking practices.