Abstract: Violations of Construct Validity Invariance and Variability in the Results of a Meta-Analysis (Society for Social Work and Research 14th Annual Conference: Social Work Research: A WORLD OF POSSIBILITIES)

Schedule:
Friday, January 15, 2010: 10:00 AM
Seacliff B (Hyatt Regency)
William R. Nugent, PhD, Professor, University of Tennessee, Knoxville, TN
Meta-analysis is a fundamental tool for identifying and justifying evidence-based practices. A central concept in meta-analysis is the “effect size,” and a basic presumption is that effect sizes based upon different measures are comparable. Recent theoretical work has shown that effect sizes based upon different measures are comparable only when construct validity invariance (CVI) holds across the measures (Nugent, 2006). A recent simulation showed that the standardized mean difference (SMD), a commonly used effect size in meta-analyses of treatment outcome research, can vary substantially across measures failing to meet CVI. For example, the population true score SMD for the same between-population comparison, but based upon different measures failing to meet CVI, can range from .40 to .86 (Nugent, 2009). These discrepancies in effect sizes across different measures raise the possibility that the outcomes of a meta-analysis may differ substantially when effect sizes are based upon measures violating CVI. However, no research has investigated this possibility.
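
The quantity at issue is the population standardized mean difference: the difference between two population means divided by a pooled standard deviation. The brief sketch below uses entirely hypothetical loadings, not the model or parameters from Nugent (2006, 2009), to illustrate how two measures of the same construct can yield different true score SMDs for the same underlying between-population comparison when one measure mixes in construct-irrelevant variance:

```python
# A minimal sketch of why true-score SMDs can disagree across measures of the
# same construct when construct validity invariance (CVI) fails.
# The loadings and variances below are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                      # large samples approximate population values

# Latent construct scores for a treated and a control population.
theta_t = rng.normal(0.5, 1.0, n)  # treated group, latent mean shifted by 0.5
theta_c = rng.normal(0.0, 1.0, n)  # control group

# Measure A: true scores load only on the target construct (CVI reference).
tauA_t, tauA_c = theta_t, theta_c

# Measure B: true scores mix the construct with a construct-irrelevant stable
# factor, so B violates CVI relative to A (the weights are assumptions).
tauB_t = 0.6 * theta_t + 0.8 * rng.normal(0.0, 1.0, n)
tauB_c = 0.6 * theta_c + 0.8 * rng.normal(0.0, 1.0, n)

def true_score_smd(t, c):
    """Standardized mean difference on true scores, pooled SD in the denominator."""
    pooled_sd = np.sqrt((t.var(ddof=1) + c.var(ddof=1)) / 2)
    return (t.mean() - c.mean()) / pooled_sd

print("SMD based on measure A:", round(true_score_smd(tauA_t, tauA_c), 3))  # ~0.50
print("SMD based on measure B:", round(true_score_smd(tauB_t, tauB_c), 3))  # ~0.30
```

Both SMDs describe the same latent comparison; the discrepancy comes entirely from how each measure relates to the construct.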

This paper presents results from a simulation study of the discrepancies introduced into the results of meta-analyses when effect sizes are based upon measures violating CVI. Seventeen populations of true scores were created, modeling sixteen different studies comparing the effects of different treatments. The population true score SMD for each simulated study was computed based upon several different hypothetical measures, some of which met CVI and others of which did not. The variability in the results of meta-analytic analyses and comparisons of these effect sizes, as a function of violations of CVI across the measures, was then investigated.
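
As a rough illustration of this kind of design, with hypothetical sample sizes, loadings, and effect magnitudes rather than the parameters actually used in the study, the sketch below simulates a handful of two-group studies, computes each study's SMD twice (once from a measure that tracks only the target construct, once from a measure that violates CVI in some studies), and pools each set of effect sizes with an ordinary inverse-variance fixed-effect mean:

```python
# Hypothetical simulation sketch: meta-analyze the same set of studies twice,
# once with all studies measured under CVI and once with CVI violated in some.
import numpy as np

rng = np.random.default_rng(1)
n_per_group = 200
true_deltas = [0.2, 0.3, 0.4, 0.5, 0.6]   # latent treatment effects, one per study

def cohen_d(x, y):
    """Standardized mean difference with the pooled standard deviation."""
    sp = np.sqrt(((len(x) - 1) * x.var(ddof=1) + (len(y) - 1) * y.var(ddof=1))
                 / (len(x) + len(y) - 2))
    return (x.mean() - y.mean()) / sp

def fixed_effect_mean(ds, n1, n2):
    """Inverse-variance weighted mean of d using its usual large-sample variance."""
    ds = np.asarray(ds)
    v = (n1 + n2) / (n1 * n2) + ds**2 / (2 * (n1 + n2))
    w = 1.0 / v
    return float((w * ds).sum() / w.sum())

d_cvi, d_mixed = [], []
for k, delta in enumerate(true_deltas):
    theta_t = rng.normal(delta, 1.0, n_per_group)   # latent construct, treated group
    theta_c = rng.normal(0.0, 1.0, n_per_group)     # latent construct, control group

    # Measure respecting CVI: observed score = construct + measurement error.
    xt = theta_t + rng.normal(0, 0.5, n_per_group)
    xc = theta_c + rng.normal(0, 0.5, n_per_group)
    d_cvi.append(cohen_d(xt, xc))

    # In some studies, swap in a measure whose true score mixes the construct
    # with an irrelevant factor, violating CVI (weights are assumptions).
    if k % 2 == 1:
        yt = 0.5 * theta_t + 0.9 * rng.normal(0, 1, n_per_group) + rng.normal(0, 0.5, n_per_group)
        yc = 0.5 * theta_c + 0.9 * rng.normal(0, 1, n_per_group) + rng.normal(0, 0.5, n_per_group)
        d_mixed.append(cohen_d(yt, yc))
    else:
        d_mixed.append(d_cvi[-1])

print("Pooled d, all studies meet CVI:   ",
      round(fixed_effect_mean(d_cvi, n_per_group, n_per_group), 3))
print("Pooled d, some studies violate CVI:",
      round(fixed_effect_mean(d_mixed, n_per_group, n_per_group), 3))
```

The two pooled estimates summarize identical latent treatment effects; any gap between them is attributable to the measurement structure alone.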

The results showed that the outcomes of meta-analyses of the effect sizes varied substantially as a function of violations of CVI, leading to contradictory conclusions about relative treatment effectiveness. For example, in one analysis the relative ordering of the effect sizes, in terms of which showed more effective and which showed less effective treatments, was completely reversed simply as a function of the measures the effect sizes were based upon when CVI was violated. Other results suggested that correlations between variables representing study characteristics and effect size magnitude, such as might be obtained in a moderator analysis, may change sign as a function of violations of CVI. Thus, for example, a given variable might be identified as associated with smaller treatment outcomes when the effect sizes are based upon one set of measures, but as associated with larger outcomes when the effect sizes are based upon a different set of measures. Proposed tests of CVI are presented and discussed. The implications of these results for the use of meta-analysis as a method for justifying evidence-based practices are considered.
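
The moderator-analysis point can be made concrete with a toy check like the one below; the moderator values and effect sizes are invented purely for illustration and are not results from the study:

```python
# Toy illustration (made-up numbers): the correlation between a study-level
# characteristic and effect size can differ in sign depending on which set of
# measures the effect sizes are based upon.
import numpy as np

dose = np.array([10, 20, 30, 40, 50])                        # hypothetical moderator
d_measure_set_1 = np.array([0.25, 0.32, 0.41, 0.48, 0.60])   # effect sizes, measure set 1
d_measure_set_2 = np.array([0.58, 0.47, 0.40, 0.33, 0.24])   # same studies, measure set 2

r1 = np.corrcoef(dose, d_measure_set_1)[0, 1]
r2 = np.corrcoef(dose, d_measure_set_2)[0, 1]
print(f"moderator correlation, measure set 1: {r1:+.2f}")    # positive
print(f"moderator correlation, measure set 2: {r2:+.2f}")    # negative
```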

References

Nugent, W. (2006). The comparability of the standardized mean difference effect size across different measures of the same construct: Measurement considerations. Educational and Psychological Measurement, 66(4), 612–623.

Nugent, W. (2009). Construct validity invariance and discrepancies in meta-analytic effect sizes based on different measures: A simulation study. Educational and Psychological Measurement, 69(1), 62–78.