Measuring Motivational Interviewing Competence and Fidelity for School-Based Applications

Schedule:
Saturday, January 17, 2015: 9:20 AM
Preservation Hall Studio 10, Second Floor (New Orleans Marriott)
Jon Lee, PhD, Assistant Professor, University of Cincinnati, Cincinnati, OH
Andy Frey, PhD, Professor, University of Louisville, Louisville, KY
Jason Small, Scientist, Oregon Research Institute, Eugene, OR
Background and Purpose. Motivational interviewing (MI) is a promising and innovative approach for developing or enhancing school-based interventions, particularly those designed to increase the fidelity of evidence-based practices that promote academic achievement through changes in teacher or parent behavior. As specialized instructional support personnel begin learning and using MI techniques in school settings, there is a growing need for context-specific measures of initial MI skill development. Unfortunately, no instruments designed for school-based settings have been available; to date, the only systematic attempt to measure the MI-related skills of school-based personnel is that of Frey et al. (2013). In this presentation, we will (a) discuss the theoretical distinction between skill acquisition and proficiency, (b) distinguish between training methods that impart initial skill development and those that facilitate long-term mastery, and (c) describe the iterative adaptation and testing of two measures of MI skill acquisition.

Methods. We adapted the Helpful Response Questionnaire (HRQ; Miller, Hedrick, & Orlofsky, 1991) and the Video Assessment of Simulated Encounters (VASE-R; Rosengren et al., 2005). Each measure was modified for school settings and aligned with the most recent conceptualization of MI (Miller & Rollnick, 2012). We consulted with the original authors of both instruments to inform the adaptation process and pilot tested the adapted measures with 12 early childhood consultants who participated in a pilot study of an MI training module for school-based personnel. HRQ and VASE-R data were coded by two raters. As preliminary evidence for the reliability of the adapted measures, we examined internal consistency and inter-rater reliability. We then examined within-subject effects in a general linear model framework to assess each measure's sensitivity to training effects.
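
The abstract does not tie these analyses to a specific software package; the sketch below is a minimal, hypothetical illustration of how the reliability statistics described (coefficient alpha and intra-class correlations) could be computed, assuming item- and rater-level scores stored in pandas DataFrames and using the pingouin library. File and column names are placeholders, not artifacts of the study.

```python
# Minimal, hypothetical sketch (not the authors' code) of the reliability
# analyses described above, using pandas and the pingouin library.
import pandas as pd
import pingouin as pg

# Internal consistency: wide-format item scores (one row per participant,
# one column per HRQ item, for a single rater's codes).
hrq_items = pd.read_csv("hrq_rater1_items.csv")  # hypothetical file
alpha, ci = pg.cronbach_alpha(data=hrq_items)
print(f"HRQ coefficient alpha = {alpha:.2f}, 95% CI = {ci}")

# Inter-rater reliability: long-format total scores
# (one row per participant x rater pair).
hrq_totals = pd.read_csv("hrq_totals_long.csv")  # hypothetical file
icc = pg.intraclass_corr(data=hrq_totals, targets="participant",
                         raters="rater", ratings="hrq_total")
print(icc[["Type", "ICC", "CI95%"]])
```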

Results. We examined internal consistency separately for each rater. Coefficient alpha ranged from .71 to .76 for the HRQ and from .77 to .81 for the VASE-R. Intra-class correlations (ICCs) were in the acceptable range for both measures: ICCs for the HRQ ranged from .54 to .95, inter-rater reliability for the HRQ total score was excellent (ICC = .92), and ICCs for the VASE-R ranged from .79 to .99. The within-subject partial r effect sizes for the HRQ and VASE-R total scores were large (partial r = .92 and .90, respectively).
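
The abstract does not state how the partial r effect sizes were derived. One common convention for a single-degree-of-freedom within-subject effect in a general linear model is r = sqrt(F / (F + df_error)); the hypothetical sketch below illustrates that conversion. The F value and error degrees of freedom shown are illustrative only and are not results from the study.

```python
# Hypothetical illustration of one common way to obtain a partial r effect
# size for a single-df within-subject (e.g., pre/post) effect in a GLM:
# r = sqrt(F / (F + df_error)). The numbers below are illustrative only.
import math

def partial_r(f_stat: float, df_error: int) -> float:
    """Partial r for a within-subject effect with one numerator df."""
    return math.sqrt(f_stat / (f_stat + df_error))

# With 12 participants measured pre and post, df_error = 11; an F near 61
# would correspond to a partial r of about .92.
print(round(partial_r(61.0, 11), 2))  # 0.92
```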

Conclusions and Implications. The resulting measures were developed to evaluate the initial MI skill development of school-based personnel. We believe these measures have substantial face and content validity and are useful in their current state for program evaluation and for supporting the professional development of MI practice in school settings. However, substantial further testing is required to establish their reliability and validity in the context of basic or applied research.