Methods: A total of 102 participants (32 clinicians and 70 clinicians-in-training) completed two simulated sessions (i.e., both the cognitive and behavioral simulations for a single randomly selected diagnosis) and filled out an experience questionnaire, which included quantitative items and open-ended qualitative questions exploring the acceptability and usability of the simulation and evaluation tool. Prior to using the simulations, 18 participants also submitted audio recordings of sessions with their own clients. All simulated and real sessions were transcribed, and the transcripts were coded by human coders using the ACCS.
Results: Reliability of the human-coded scores for the 22 items of the social anxiety disorder exposure simulation (n=32) was high (Cronbach’s α=0.98), exceeding that reported by Muse et al. (2016). Differences between simulation-generated and human-coded scores averaged 0.85 scoring-metric points (median=0.85, SD=0.16). Our standard error of measurement (SEM) of 2.3 (95% CI=4.6) was largely in line with three of the four SEM estimates reported by Muse et al. The intraclass correlation (ICC; consistency type, accounting for random error) between human-coded recorded sessions and human-coded simulation sessions across anxiety disorders and treatment focus was moderate (ICC=0.51). Participants agreed on a 5-point Likert scale that the cognitive restructuring (M = 3.91, SD = 0.79) and exposure (M = 3.95, SD = 0.83) simulation sessions aligned with their real-life treatment sessions. The majority of participants stated that they would recommend the simulation tool to other trainees (63.7%, M = 3.62, SD = 1.03) and that it improved their confidence in working with anxiety disorders (68.3%, M = 3.57, SD = 1.04). Themes from the open-ended qualitative questions included satisfaction with the realism of the simulated sessions, interactivity of the tool and platform, variety within the training experience, and session structure.
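For readers unfamiliar with the metric, the reported CI can be reconstructed from the SEM under classical test theory (a hedged sketch: the scale SD and reliability inputs are not given in this abstract, and we assume the reported CI value is the half-width of an approximately 95% band of ±2 SEM around an observed score): SEM = SD√(1 − r_xx), so 95% CI half-width ≈ 2 × SEM = 2 × 2.3 = 4.6.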
Implications and Conclusions: Overall, findings demonstrate the high internal consistency and acceptability of the evaluation tool, with participants agreeing that the simulations align with their real-life practice experiences. Given that providers and clinical trainees may not have enough face-to-face treatment opportunities with clients for CBT skill acquisition, particularly in the COVID-19 context, computerized simulations with assessment offer an accessible opportunity to practice skills and receive real-time feedback. Our future research will examine the effectiveness and implementation of this simulation tool across different anxiety disorders and user (i.e., provider, trainee) skill levels.