Abstract: An Innovative Online Tool to Evaluate Student Practice Competence in the Field (Society for Social Work and Research 14th Annual Conference: Social Work Research: A WORLD OF POSSIBILITIES)

11555 An Innovative Online Tool to Evaluate Student Practice Competence in the Field

Schedule:
Saturday, January 16, 2010: 5:00 PM
Bayview B (Hyatt Regency)
Cheryl Regehr, PhD, University of Toronto, Professor, Toronto, ON, Canada
Marion Bogo, MSW, AdvDipl SW, University of Toronto, Professor, Toronto, ON, Canada
Glenn Regehr, PhD, University of Toronto, Professor, Toronto, ON, Canada
Barbara Muskat, PhD, Hospital for Sick Children, Academic and Clinical Specialist, Toronto, ON, Canada
Background and Purpose:

Schools of social work rely heavily on field instructor ratings to determine students' competence to practice, but evidence for the reliability and validity of these ratings is weak at best. Concerns include the inability of rating scales to differentiate among students, as demonstrated by low variability of scores and universally high performance ratings (Authors, 2002; Lager & Robbins, 2004; Raskin, 1994; Wayne, Bogo, & Raskin, 2006). This paper presents research on the development and testing of a new evaluation tool that does not rely on traditional rating scales or field instruction conferences to provide the final evaluation of student performance. Rather, a tool was constructed that allows instructors to represent their students' performance in a manner more congruent with the way they describe student competence, and to prepare this report without negotiating ratings with the student.

Methods:

A four-step, multi-year process was undertaken to develop a new tool for evaluating student field performance. This process involved: 1) gathering information from experienced field instructors in order to create 20 realistic student vignettes representing the range of student field performance; 2) asking other experienced field instructors to independently rank the vignettes and sort them into categories reflecting the various levels of performance; 3) using grounded theory to examine the original 57 descriptions and determine dimensions of competence; and 4) creating and testing a new Practice Based Evaluation (PBE) Tool consisting of six dimensions of practice competence, each with five levels of performance.

In the current study, a new online tool was developed incorporating this previous research. For each dimension of the PBE tool, all phrases describing student behavior and competence from the five levels of performance were placed in alphabetical order without associated numeric values. Field instructors evaluated their current student on each of the six dimensions by selecting, from a pull-down menu, the phrases that best described the student. Students used the same tool to evaluate their own performance. The computer then automatically assigned a rating from 1 to 5 for each dimension based on the level originally assigned to each phrase: the score for each dimension was the average of the levels of all phrases selected for that dimension, and the final overall score was the unweighted average of the scores across the six dimensions.
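The scoring logic described above can be illustrated with a brief sketch. The Python fragment below is a hypothetical reconstruction: the dimension names, phrases, and level assignments are invented placeholders rather than actual PBE Tool content; only the averaging rules follow the description in the preceding paragraph.

```python
from statistics import mean

# Each dimension maps alphabetized descriptive phrases to the performance
# level (1-5) originally assigned to that phrase. Phrases and levels here
# are invented examples, shown for two dimensions only.
PHRASE_LEVELS = {
    "conceptualizing practice": {
        "applies theory inconsistently": 2,
        "integrates multiple frameworks independently": 5,
        "links theory to practice with guidance": 3,
    },
    "clinical relationships": {
        "builds rapport with most clients": 4,
        "struggles to engage reluctant clients": 2,
        "sustains working alliances in complex situations": 5,
    },
    # ... the remaining dimensions would follow the same pattern.
}

def dimension_score(dimension: str, selected_phrases: list[str]) -> float:
    """Average the hidden 1-5 levels of the phrases selected for one dimension."""
    return mean(PHRASE_LEVELS[dimension][p] for p in selected_phrases)

def overall_score(selections: dict[str, list[str]]) -> float:
    """Unweighted average of the dimension scores."""
    return mean(dimension_score(d, phrases) for d, phrases in selections.items())

# Example: phrases an instructor might pick from the pull-down menus.
selections = {
    "conceptualizing practice": ["links theory to practice with guidance"],
    "clinical relationships": [
        "builds rapport with most clients",
        "sustains working alliances in complex situations",
    ],
}
print(overall_score(selections))  # (3.0 + 4.5) / 2 = 3.75
```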

Results:

A total of 120 students and their field instructors participated in a study examining the new tool. Internal consistency for the total scale was .78 for student-generated scores and .82 for field instructor-generated scores. The correlation between student and instructor scores was .64. The new tool discriminated among levels of student field performance better than a previously used competency-based tool, with a significantly lower mean (4.25 vs. 4.70, p < .01), a larger variance (0.36 vs. 0.32), and a substantially reduced ceiling effect on scores.
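For readers unfamiliar with these psychometric summaries, the following minimal sketch shows how such statistics are typically computed. It assumes the internal-consistency coefficient is Cronbach's alpha and the student-instructor correlation is Pearson's r (the abstract does not name the specific statistics), and it uses fabricated placeholder data, not the study data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = evaluations, columns = the six dimension scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 dimension scores for a handful of student/instructor pairs;
# the actual study collected data from 120 pairs.
rng = np.random.default_rng(0)
instructor = np.clip(rng.normal(4.0, 0.6, size=(8, 6)), 1, 5)
student = np.clip(instructor + rng.normal(0.0, 0.4, size=(8, 6)), 1, 5)

print(cronbach_alpha(instructor))            # internal consistency of one rater group
print(np.corrcoef(instructor.mean(axis=1),
                  student.mean(axis=1))[0, 1])  # student-instructor correlation
```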

Conclusions:

The new tool appears to represent an innovative model for rating students' performance that avoids the problems associated with traditional rating scales. Findings reveal good internal consistency, inter-rater reliability, and discriminative ability.