Abstract: Evaluating Performance and Reflection through Objective Structured Clinical Examinations for Social Work (Society for Social Work and Research 14th Annual Conference: Social Work Research: A WORLD OF POSSIBILITIES)

Schedule:
Saturday, January 16, 2010: 4:30 PM
Bayview B (Hyatt Regency)
Marion Bogo, MSW, AdvDipl SW, University of Toronto, Professor, Toronto, ON, Canada
Cheryl Regehr, PhD, University of Toronto, Professor, Toronto, ON, Canada
Carmen Logie, MSW, University of Toronto, Doctoral student, Toronto, ON, Canada
Ellen Katz, MSW, University of Toronto, Doctoral student, Toronto, ON, Canada
Maria Mylopoulos, PhD, Hospital for Sick Children, Educational Researcher, Scientist, Toronto, ON, Canada
Glenn Regehr, PhD, University of Toronto, Professor, Toronto, ON, Canada
Purpose: Social work educators need standardized, valid, and reliable outcome assessments of students' practice competence. This study aimed to establish the reliability and construct validity of an Objective Structured Clinical Examination (OSCE), already used in other health professions and adapted for social work.

Methods: In an OSCE, performance is assessed in a series of “stations” during which students interact with a standardized actor/client trained to enact a clinical situation typical for the profession. Students receive orienting information about each scenario, then conduct a 15-minute interview while a faculty member or field instructor independently observes and evaluates their performance. The social work OSCE was adapted to include a 15-minute post-interview reflective dialogue with the examiner, who then rates how students conceptualize, reflect on, and assess their own practice.

Based on earlier work (Authors 2004; 2006), two rating tools were developed to evaluate candidates. The first tool evaluates 10 dimensions of performance relevant to the interaction with the client. The second tool evaluates 8 dimensions of performance relevant to the subsequent reflective dialogue.

To establish reliability and construct validity, 24 volunteers representing three levels of education and practice were recruited to participate in a 5-station OSCE: 11 second-year MSW students, 7 recent MSW graduates, and 5 practitioners with at least five years' experience post-MSW. Participants were commingled, and raters in the stations were blinded to participants' training/experience level.

Results: For both the clinical and reflection rating tools, internal consistency was high (mean Cronbach's alpha across the five stations of 0.92 and 0.93, respectively), suggesting that the dimensions within each rating tool were assessing a common ability. The correlation between the two rating tools at a given station was somewhat lower (average of 0.68 across the five stations), suggesting that the two aspects of performance (clinical and reflection) were related but distinct.

The reliability of each rating tool across stations was moderate, with a 5-station Cronbach's alpha of 0.55 for the clinical rating tool and 0.48 for the reflection rating tool. This provides some evidence for the generalizability of individual scores across the range of scenarios tested, but also implies that a number of stations are required to obtain a generalizable evaluation of a student's overall ability.
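The internal-consistency and cross-station figures above are Cronbach's alpha values. As an illustration only (not the authors' analysis code), alpha for a participants-by-items matrix of ratings can be computed directly from the standard formula, where k is the number of items; the function name and sample data below are hypothetical:

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of rows, one row of item ratings per participant.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])                  # number of items (rating dimensions)
    # Sample variance of each item across participants
    item_vars = [statistics.variance(col) for col in zip(*scores)]
    # Sample variance of each participant's total score
    totals = [sum(row) for row in scores]
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical data: 3 participants rated on 2 dimensions.
# Perfectly consistent items yield alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

The same formula applies whether the "items" are the dimensions within one rating tool (internal consistency at a station) or the station-level scores themselves (reliability across stations).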

Comparing the performance of candidates at the three experience levels, analysis of variance revealed a significant effect of experience for the clinical ratings (p<.05) and a marginally significant effect of experience for the reflection ratings (p=.06). Post hoc analysis revealed no differences between current students and recent graduates, but significantly higher scores on both measures for the experienced practitioners compared to the other two groups, suggesting some evidence of construct validity.

Implications: Early results are promising in that the OSCE is feasible and appears to measure both practice and reflective skills effectively. Further research assessing the reliability of the examination with more targeted populations is necessary, but this work shows promise for using the OSCE to supplement field evaluations and to evaluate multiple aspects of competence in social work.