10634 Construct Validity Invariance and Discrepancies in Dichotomous Effect Sizes across Different Measures

Schedule:
Saturday, January 17, 2009: 4:30 PM
Mardi Gras Ballroom C (New Orleans Marriott)
William R. Nugent, PhD, Professor, University of Tennessee, Knoxville, TN
Meta-analysis is becoming an important tool for identifying and justifying evidence-based practices. A fundamental concept in meta-analysis is the effect size, and a basic presumption is that effect sizes based upon different measures have been placed onto a common metric and are thereby rendered directly comparable. Recent theoretical work has shown that effect sizes based upon different measures are directly comparable only when two invariance conditions hold: construct validity invariance and reliability invariance (Nugent, 2006). Construct validity invariance is especially important. Recent research has shown that effect sizes for the same “effect” can vary substantially across measures that fail to meet construct validity invariance. For example, population construct-level standardized mean difference (SMD) effect sizes for the exact same population comparison, but based upon different measures that fail to meet construct validity invariance, can range from .40 to .86. This range covers about 1.6 standard deviations in a distribution of SMD effect sizes reported by Lipsey and Wilson (2001) from 300 studies of psychological and educational interventions. Recent research further suggests that the discrepancies in population construct-level correlation effect sizes across measures failing to meet construct validity invariance may be even greater.
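To make the mechanism concrete, the following is a minimal simulation sketch, assuming a latent-variable setup in which one measure reflects only the target construct while a second mixes in an unrelated construct. The 0.5 SD shift, the mixing weight, and the sample size are illustrative assumptions, not values taken from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent target construct: treatment population shifted by 0.5 SD.
target_ctrl = rng.normal(0.0, 1.0, n)
target_trt = rng.normal(0.5, 1.0, n)

# An unrelated construct, unaffected by the intervention.
nuisance_ctrl = rng.normal(0.0, 1.0, n)
nuisance_trt = rng.normal(0.0, 1.0, n)

def smd(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    return (y.mean() - x.mean()) / pooled_sd

# Measure A reflects the target construct alone (construct validity holds).
print(smd(target_ctrl, target_trt))  # approximately 0.50

# Measure B mixes in the unrelated construct (construct validity fails).
mix = 0.6  # hypothetical loading on the target construct
b_ctrl = mix * target_ctrl + np.sqrt(1 - mix**2) * nuisance_ctrl
b_trt = mix * target_trt + np.sqrt(1 - mix**2) * nuisance_trt
print(smd(b_ctrl, b_trt))  # approximately 0.30: same "effect," smaller SMD
```

Both measures have unit variance, so the entire discrepancy comes from how much of the measured score actually reflects the construct the intervention changed.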

This paper presents results from a simulation study of the discrepancies introduced into dichotomous outcome effect sizes, for a given “effect” but based upon different measures, when construct validity invariance fails to hold. This research is the first to examine the variability in dichotomous outcome effect sizes as a function of measures failing to meet construct validity invariance. In the simulation, two populations of scores were generated that were identical except in the proportions of persons experiencing a dichotomous outcome, such as recidivism or recovery from an illness. A number of measures hypothetically used to make inferences about the occurrence or non-occurrence of the outcome of interest were also simulated; some met construct validity invariance, and some failed, in different ways and to different degrees, to meet this invariance condition. Several single-population and multiple-population dichotomous outcome effect sizes were then computed from the different measures, including the odds, the log-odds, the odds ratio, and the relative risk.
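Below is a brief sketch of how the named effect sizes might be computed in a simulation of this kind. The function names, the choice to represent a construct validity failure as imperfect sensitivity and specificity of the measure, and the proportions .40 and .25 are all illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def effect_sizes(p1, p2):
    """Single- and two-population dichotomous effect sizes."""
    odds1, odds2 = p1 / (1 - p1), p2 / (1 - p2)
    return {
        "odds_1": odds1,
        "odds_2": odds2,
        "log_odds_1": np.log(odds1),
        "log_odds_2": np.log(odds2),
        "odds_ratio": odds1 / odds2,
        "relative_risk": p1 / p2,
    }

def observed(p, sensitivity, specificity):
    """Proportion an imperfect measure classifies as having the outcome."""
    return sensitivity * p + (1 - specificity) * (1 - p)

# True outcome proportions (e.g., recidivism) in the two populations.
p1, p2 = 0.40, 0.25

# A measure meeting the invariance condition (perfect classification)...
print(effect_sizes(p1, p2))
# odds_ratio = 2.0, relative_risk = 1.6

# ...versus a hypothetical measure that misclassifies some cases.
q1 = observed(p1, sensitivity=0.80, specificity=0.90)
q2 = observed(p2, sensitivity=0.80, specificity=0.90)
print(effect_sizes(q1, q2))
# odds_ratio drops to about 1.62 and relative_risk to about 1.38
```

Even this single, uniform misclassification pattern moves every one of the effect sizes; measures failing the invariance condition in different ways would scatter them further.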

Results showed substantial variability across these effect sizes, all representing the same “effect,” as a function of measurement procedure when construct validity invariance did not hold. The discrepancies in the single-population effect sizes, such as the odds, were especially pronounced when the construct validity invariance condition was violated. The implications of these results for variability in the outcomes of a meta-analysis as a function of measurement procedure are considered, as are the implications for the use of meta-analysis as a tool for identifying and justifying evidence-based practices.
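To illustrate why a single-population effect size such as the odds can shift so sharply, consider a worked example based on the standard misclassification identity; the numbers are hypothetical and are not taken from the paper's results. For a measure with sensitivity Se and specificity Sp applied to a true outcome proportion p, the observed proportion is

\[
q = \mathrm{Se}\,p + (1 - \mathrm{Sp})(1 - p).
\]

With a low true rate, say \(p = .05\), \(\mathrm{Se} = .80\), and \(\mathrm{Sp} = .90\),

\[
q = (.80)(.05) + (.10)(.95) = .135,
\qquad
\frac{q}{1-q} = \frac{.135}{.865} \approx .156
\quad \text{versus} \quad
\frac{p}{1-p} = \frac{.05}{.95} \approx .053,
\]

so the observed odds is roughly triple the true odds. When the true rate is low, false positives from imperfect specificity dominate the observed proportion, which is one way a single-population effect size can be distorted far more than the ratio-type effect sizes built from two populations.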

References

Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.

Nugent, W. R. (2006). The comparability of the standardized mean difference effect size across different measures of the same construct: Measurement considerations. Educational and Psychological Measurement, 66(4), 612-623.