Session: Utility of Benchmarking to Evaluate the Implementation of Research Supported Interventions (Society for Social Work and Research 23rd Annual Conference - Ending Gender Based, Family and Community Violence)

Schedule:
Friday, January 18, 2019: 8:00 AM-9:30 AM
Union Square 23/24 Tower 3, 4th Floor (Hilton San Francisco)
Cluster: Research Design and Measurement (RD&M)
Speakers/Presenters:
Allen Rubin, PhD, University of Houston, Bruce Thyer, PhD, Florida State University, Joanne Yaffe, PhD, University of Utah and Danielle Parrish, PhD, Baylor University
A recent article in Social Work promoted benchmarking as a practitioner-friendly way to evaluate practice empirically (Rubin & von Sternberg, 2017). The rationale for this approach recognizes the difficulty of implementing control groups in evaluations conducted in community-based, non-research practice settings. When a control group is not feasible in such settings, benchmarking allows the evaluator to compare the magnitude of change from pretest to posttest (a within-group effect size) for an empirically supported intervention against an established benchmark for that intervention derived through meta-analytic techniques. The studies providing the benchmarks are not pure meta-analyses because they do not ask whether the intervention in question is effective. Instead, they aggregate how much pre-to-post change occurred within the treatment groups of the RCTs that already supported the intervention's efficacy.

The rationale for the benchmarking approach also recognizes that research-supported interventions often are not implemented effectively in non-research, service-oriented settings, regardless of the extent and strength of their RCT research support. This gap has been attributed to the difference between the relatively ideal treatment conditions in RCTs and the more problematic conditions in service-oriented settings, such as limitations in training, supervision, and caseload size and diversity. Consequently, decision makers and service providers in the latter settings cannot simply assume that an intervention they are providing is being provided effectively just because RCTs have supported its efficacy. It follows that they should evaluate how well they are implementing it. Benchmarking offers one potential measure of high-quality implementation, and benchmarks can be developed for any intervention or program that has been found efficacious across multiple studies.

In the past few years, three benchmarking studies have been published in social work journals, and a fourth is in press, providing benchmarks for four different practice contexts and interventions. However, some reviewers of these studies have expressed considerable skepticism about the feasibility of calculating within-group effect sizes in service-oriented settings that have limited statistical resources. Doubt has also been expressed because the benchmarking studies are not pure meta-analyses.
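
To make the comparison concrete, the sketch below illustrates the core arithmetic under one common convention: the within-group effect size is the mean pre-to-post change divided by the pooled standard deviation of the pretest and posttest scores (conventions differ, and Rubin & von Sternberg, 2017, should be consulted for the formula used in the benchmarking studies themselves). All scores and the benchmark value here are hypothetical.

import math
import statistics

def within_group_effect_size(pre, post):
    # Mean pre-to-post change divided by the pooled SD of the
    # pretest and posttest scores. Note: some benchmarking studies
    # instead use the pretest SD alone as the denominator.
    mean_change = statistics.mean(post) - statistics.mean(pre)
    pooled_sd = math.sqrt(
        (statistics.stdev(pre) ** 2 + statistics.stdev(post) ** 2) / 2
    )
    return mean_change / pooled_sd

# Hypothetical agency data: symptom scores, lower = improvement.
pre_scores = [30, 28, 35, 40, 33, 29, 38, 31]
post_scores = [22, 20, 30, 28, 25, 21, 29, 24]
d_agency = abs(within_group_effect_size(pre_scores, post_scores))

# Hypothetical benchmark aggregated from RCT treatment arms.
BENCHMARK_D = 1.2

print(f"Agency within-group effect size: {d_agency:.2f}")
print(f"RCT-derived benchmark: {BENCHMARK_D:.2f}")
if d_agency >= BENCHMARK_D:
    print("Implementation meets or exceeds the benchmark.")
else:
    print("Implementation falls short of the benchmark.")

A fuller evaluation would place a confidence interval around the agency estimate rather than compare point values, but the comparison above conveys the basic logic that makes the approach feasible without a control group.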

This roundtable session will begin with a description of the rationale and methodology of the benchmarking approach by one presenter, followed by a critique from a second presenter, and then a reaction from a neutral third presenter who has worked with the Campbell and Cochrane Collaborations. The first two presenters will then respond to the third presenter's reaction, after which a fourth presenter will moderate and encourage audience participation in the discussion. Our goal is to generate awareness of the benchmarking procedure, discuss its pros and cons, and stimulate its future use by social work researchers and practitioners who see its value and seek to strengthen implementation evaluations conducted in settings where control groups are not feasible.
