Research That Matters (January 17-20, 2008)


Council Room (Omni Shoreham)

Evidence-Based or Biased? The Quality of Published Reviews of Evidence-Based Practices

Julia Littell, PhD, Bryn Mawr College.

Background: A growing body of empirical evidence shows that methods of research synthesis matter: the methods used to identify, analyze, and synthesize results of research on intervention effects can affect the conclusions of a systematic review or meta-analysis. However, it is unclear whether central issues in research synthesis methodology are adequately addressed in published reviews of research on "evidence-based" programs.

Purpose: Two studies were conducted to assess the methods used in published reviews of research on the effects of a prominent evidence-based program. The purpose was to describe the methods used to identify, analyze, and synthesize results of empirical research on intervention effects, and to determine whether published reviews are vulnerable to various sources and types of bias.

Methods: Study 1 extracted and analyzed information on the methods, sources, and conclusions of 37 published reviews of research on the effects of one model program. Study 2 compared the results of one published randomized controlled trial with summaries of that trial that appeared in 13 published reviews. Both studies used descriptive statistics and content analysis.

Results: Study 1: Published reviews varied in the transparency of their inclusion criteria, their strategies for locating relevant published and unpublished data, the standards they used to evaluate evidence, and the methods they used to synthesize results across studies. Most reviews relied solely on narrative analysis of a convenience sample of published studies; none used systematic methods to identify, analyze, and synthesize results. Study 2: When the results of a single study were traced from the original report to summaries in published reviews, three patterns emerged: a complex set of results was simplified, non-significant results were ignored, and positive results were overemphasized. Most reviews used a single positive statement to characterize the decidedly mixed results of the study. This suggests that reviews were influenced by confirmation bias: the tendency to emphasize evidence that supports a hypothesis and to ignore evidence to the contrary.

Conclusions and implications: Published reviews may be vulnerable to the biases that scientific methods of research synthesis were designed to address. This raises important questions about the validity of traditional sources of knowledge about "what works," and suggests the need for a renewed commitment to using scientific methods to produce valid evidence for practice.