Abstract: Professional Competency Assessment Methods and Measures: A Systematic Review (Society for Social Work and Research 21st Annual Conference - Ensure Healthy Development for all Youth)

Professional Competency Assessment Methods and Measures: A Systematic Review

Schedule:
Friday, January 13, 2017: 4:10 PM
Preservation Hall Studio 10 (New Orleans Marriott)
* noted as presenting author
James W. Drisko, PhD, Professor, Smith College, Northampton, MA
Background/Purpose:  Accredited social work programs must demonstrate that their students have mastered a broad set of competencies.  CSWE's EPAS (2015) requires two summative measures of nine competencies, with at least one measure based on real or simulated practice. In practice, social work programs use a wide variety of measures to assess competency. There is, however, very little empirical research supporting CSWE's assessment approach or examining the validity and reliability of most measures.  Yet improving the quality of competency assessment is vital to improving the profession's educational outcomes and to protecting the public we serve.  This study examined the kinds of measures of competence reported in the literatures of social work, psychology, medicine, and nursing since 1995, and the empirical support for the validity and reliability of these measures of professional competence.

Methods:  A systematic review of the competency assessment literature was completed.  The terms "competency assessment," "+measures," and "+approaches" were searched in four databases.  PubMed revealed 6,167 publications on competency assessment; nursing's CINAHL revealed 1,044; PsycINFO, 1,552; and Social Work Abstracts, 19, for a total of 8,782 publications. Including references cited in these publications, 42 relevant studies of competency measures were located.

Results:   Neither a clear, empirically based approach to competency assessment nor many specific, valid and reliable measures are apparent in the literatures of social work and closely allied professions.  Only two RCTs and no quasi-experimental studies on professional competency assessment were located:  training improved mentoring ability versus controls (Pfund et al., 2014), and simulations proved effective as part of practicum training on procedural performance (Watson et al., 2012).  Forty studies and synthetic reviews of the psychometric properties of relevant competency measures were located.  Objective structured clinical examinations have been examined most often across these professions and show good inter-rater reliability and, in social work, moderate correlations with field evaluation scores (r=.23 on performance and r=.37 on reflection) (Bogo et al., 2012).   Field instructor ratings have face validity, but ratings are often inflated, and field measures generally lack sensitivity (Regehr et al., 2007) and may not effectively address all areas of competence.  Standardized measures, such as the SWEAP, have face validity as tests of knowledge, but lack construct and concurrent validation and do not examine learner performance. Self-assessments can enhance self-efficacy (Holden, 2006) but are better suited as formative than summative/outcome measures (EdCan, 2009).  Portfolios are questioned in terms of both validity and inter-rater reliability (EdCan, 2009).  Holistic measures are few and address limited areas.  Measures of assessor competence are also emerging (Etheridge, 2009).

Implications: Further development and testing of measures for the EPAS social work competencies is needed.  Current measures of the social work competencies are generally face valid but emphasize knowledge over performance.  The psychometric properties of these measures are generally untested; where tested, testing was limited to a very specific area of social work.  No approach has demonstrated superiority over others.  Use of multiple formative and summative measures appears optimal.  Better, empirically validated competency measures are needed to assess educational outcomes in a meaningful manner.