Society for Social Work and Research

Sixteenth Annual Conference Research That Makes A Difference: Advancing Practice and Shaping Public Policy
11-15 January 2012 | Grand Hyatt Washington | Washington, DC

58 Correction of Rater Effects in Longitudinal Research with Three Innovative Methods

Friday, January 13, 2012: 10:00 AM-11:45 AM
McPherson Square (Grand Hyatt Washington)
Cluster: Research Design and Measurement
Speaker/Presenter:
Shenyang Guo, PhD, University of North Carolina at Chapel Hill
Problem Statement. This workshop addresses a crucial problem in longitudinal research that is largely ignored in practice: the bias produced by rater effects. It examines the adverse consequences of ignoring rater effects and presents robust approaches to correcting them. Investigators conducting longitudinal survey research are increasingly using data rated by multiple raters, but the psychometric properties of these data, particularly rater effects on statistical models depicting change, are less explored and seldom treated formally. Many studies using multiple-rater data assume that measurement error in the ratings is absent or negligible and apply hierarchical linear modeling (HLM) to the research (Guo & Hussey, 1999). The workshop challenges this assumption and seeks a statistical approach to disentangle rater effects from the study subjects' true change.

Methods. Guided by generalizability theory (Cronbach, Gleser, Nanda, & Rajaratnam, 1972), the workshop presents three newly developed methods to correct for rater effects: the cross-classified random effects model (CCREM; Raudenbush & Bryk, 2002), local linear regression (lowess; Fox, 2000), and the multitrait-multimethod model (MTMM; Bollen & Paxton, 1996).
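For readers unfamiliar with CCREM, one standard longitudinal specification in the cross-classified notation of Raudenbush and Bryk is sketched below. The symbols here are illustrative and are not taken from the workshop materials: subjects and raters are treated as two crossed random factors, so that each rating carries both a subject growth component and a rater component.

```latex
% Rating of subject i at occasion t made by rater j:
Y_{t(ij)} = \pi_{0i} + \pi_{1i}\,\mathrm{time}_{t} + \theta_{j} + e_{t(ij)}
% Subject-level growth parameters, random across subjects:
\pi_{0i} = \beta_{00} + u_{0i}, \qquad \pi_{1i} = \beta_{10} + u_{1i}
% Crossed rater effect and residual:
\theta_{j} \sim N(0, \tau_{\theta}), \qquad e_{t(ij)} \sim N(0, \sigma^{2})
```

A conventional 2-level HLM omits the crossed term \theta_{j}, folding rater severity or leniency into the subject trajectories; when rater assignment is not random, that omission biases the estimated change parameters.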

Findings. (1) A Monte Carlo study simulating nine settings of data creation under two sampling schemes confirms that a 2-level HLM produces biased estimates and misleading significance tests, and should be replaced by CCREM. (2) Using the dataset of the Social and Character Development (SACD) program, a comparison study shows a striking difference between CCREM and HLM in the change pattern of the treated versus control groups: when CCREM is used to correct for rater effects, the treated group's change on an outcome is upward (i.e., doing better), rather than downward (i.e., doing worse) as depicted by HLM. The SACD program was funded by the U.S. Department of Education and the Centers for Disease Control and Prevention, and has cost more than 33 million dollars. The project's report did not confirm that the SACD intervention was effective (IES, 2010, p. xlvii); however, our reanalysis using the CCREM correction suggests that this finding may be partially attributable to the failure to control for rater effects in data analysis. (3) Similar Monte Carlo studies using lowess and MTMM show promising corrective properties, though future research on these two models is needed.
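The mechanism behind finding (2) can be mimicked with a minimal Monte Carlo sketch. All numbers below are invented for illustration, and a simple within-rater fixed-effects correction stands in for the full CCREM; this is not the workshop's actual model or data. When harsher raters are disproportionately assigned to treated subjects, the naive group comparison is biased downward and can even flip sign, while removing each rater's additive bias recovers the true effect.

```python
import numpy as np

# Minimal illustration (invented numbers): ratings contaminated by rater
# "severity" effects whose assignment is confounded with treatment status.
rng = np.random.default_rng(0)

n_raters, obs_per_rater = 50, 40
true_effect = 0.5                        # true treatment effect on the outcome

bias = rng.normal(0.0, 1.0, n_raters)    # each rater's severity/leniency
rater = np.repeat(np.arange(n_raters), obs_per_rater)

# Harsher raters (negative bias) are assigned treated subjects more often,
# so rater effects masquerade as a (negative) treatment effect.
p_treat = 0.5 - 0.3 * np.tanh(bias[rater])
treat = rng.binomial(1, p_treat).astype(float)
y = true_effect * treat + bias[rater] + rng.normal(0.0, 0.3, rater.size)

# Naive estimate: difference in group means, ignoring raters entirely.
naive = y[treat == 1].mean() - y[treat == 0].mean()

# Rater-corrected estimate: demean outcome and treatment within each rater
# (a fixed-effects "within" estimator), absorbing each rater's additive bias.
y_dm, t_dm = y.copy(), treat.copy()
for r in range(n_raters):
    m = rater == r
    y_dm[m] -= y[m].mean()
    t_dm[m] -= treat[m].mean()
corrected = (t_dm @ y_dm) / (t_dm @ t_dm)

print(f"naive estimate:     {naive:+.2f}")      # biased; may even be negative
print(f"corrected estimate: {corrected:+.2f}")  # close to the true +0.50
```

The sign reversal in this toy setup parallels the SACD reanalysis: correcting for rater effects turns an apparently negative (or null) treatment trend into a positive one.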

Significance. Correction of rater effects is crucial in social work research and evaluation. Discussion of the adverse consequences of ignoring rater effects will be of interest and value to many researchers in intervention research, particularly those working with longitudinal rating data in educational psychology, school-based intervention, substance abuse treatment, and related fields. The study of rater effects once again shows a fundamental challenge facing social scientists: social science differs from natural science because its study subjects are human beings, and the researchers/observers are human beings too. Compared with the natural sciences, we are far from measuring social phenomena objectively. Hence, the raters' impact on measurement error cannot be assumed to be random, and should be explicitly controlled for in longitudinal inquiry.
