Practice Evaluation Strategies Among Licensed Clinical Social Workers
Method: Twelve licensed clinical social workers were recruited to participate in a three-hour focus group on the use of practice evaluation strategies in clinical social work practice. Participants consisted of six male and six female social workers (racially and ethnically diverse, ages 26–65). Primary inclusion criteria were at least two years of post-licensure experience and current, full-time employment in direct social work practice. Participants responded to twenty-five questions drawn from twenty years of social work literature and research on practice evaluation strategies among clinical social workers. The focus group recording was transcribed and analyzed by three members of the research team using the qualitative software package Atlas.ti. Conceptually clustered matrices were then constructed to detect patterns in themes across focus group participants.
Results: The analyses yielded descriptive information about why the social workers endorsed the informal-interactive tool preference. Prominent reasons for the preference included: (a) it fits the exigencies of clinical settings, (b) consultations with colleagues offer a more accurate assessment of clients than standardized instruments, (c) trainers in evidence-based algorithms do not seem confident about the algorithms when training experienced clinicians, (d) national research organizations do not keep practice guidelines current, (e) clinical expertise exceeds what software and single-subject designs can provide, and (f) client outcomes did not improve after clinicians supplemented informal-interactive tools with formal-analytic tools.
Conclusion and Implications: This study provides nuanced information for social work researchers interested in why, nearly a decade into the evidence-based practice movement in social work, the informal-interactive tool preference among social workers remains a stand-alone method for evaluating practice effectiveness.