Friday, 14 January 2005 - 12:00 PM
This presentation is part of: Poster Session I
Assessing the Reliability of Field Instructor Evaluations of Student Performance
Cheryl Regehr, PhD, University of Toronto; Marion Bogo, MSW, University of Toronto; Mike Woodford, MSW, University of Toronto; and Glenn Regehr, PhD, University of Toronto.
Purpose: CSWE Educational Policy Standards require outcome data concerning the practice readiness of graduates of professional social work programs. It is therefore imperative that educators develop effective measures for evaluating student field performance. Presently, controversy exists regarding the reliability and validity of available measurement tools. Tools emanating from a Competency Based Education (CBE) perspective, which define skills in clear, measurable terms, have good reliability but questionable validity. Less structured methods of evaluation, while perhaps more authentic in assessing readiness for independent practice, are often suspect as to their reliability. This study attempted to determine the consistency of evaluations across experienced field instructors when they were not provided with explicit CBE criteria. We wished to determine the inter-rater reliability of evaluations that relied upon critical reflection and professional expertise.
Method: Through in-depth interviews, 19 experienced field instructors provided a total of 57 profiles of students they had supervised. Using data abstraction, the researchers reduced these profiles to 20 cases representative of students of varying competency. Ten experienced instructors who were not involved in the initial qualitative component were individually asked to read the 20 student cases, generate categories or groupings reflecting differing levels of student performance, and provide descriptors that differentiated the resulting groups. Instructors then ranked all 20 cases relative to one another. Inter-rater reliability in the groupings and rankings of the vignettes was assessed.
Results: Findings demonstrated remarkable inter-rater reliability among the participating field instructors, not only for the rankings of the vignettes but also for the spontaneous groupings generated. In addition, there was considerable similarity in the descriptors of the individually created groupings, which were consistently based primarily on the students' motivation, relational capacity, and integrity, and secondarily on concrete skills.
Implications: Even in the absence of explicit CBE criteria for student evaluation, experienced social work field instructors were able to agree on what constituted exemplary performance, on which students were likely to develop into good social work professionals with additional training and supervision, and on which students were clearly unsuitable for practice. This suggests that a more authentic evaluation process, one that takes advantage of supervisors' critical reflection and professional expertise, is possible. Some potential evaluation strategies that capitalize on this possibility will be discussed.