School Climate or School Climates? Testing the Fit and Invariance of One Multilevel School Climate Measure

Schedule:
Friday, January 16, 2015: 5:50 PM
La Galeries 2, Second Floor (New Orleans Marriott)
Sarah Fierberg Phillips, PhD, Research Director, Tripod Project for School Improvement, Cambridge, MA
Ronald F. Ferguson, PhD, Senior Lecturer in Education and Public Policy, Harvard University, Cambridge, MA
Background: In its recently revised standards for school social work, NASW promotes a multi-tiered practice model that explicitly encourages practitioners to engage in universal school climate interventions. Although school climate has been associated with key indicators of healthy development, the assumption that an overarching school climate exists remains largely untested. Few studies have examined within-school variation in perceptions of school climate, and almost no published studies have tested the invariance and population heterogeneity of popular school climate measures. This study aims to address these gaps in extant research.

Methods: Secondary data from the Tripod Project for School Improvement were used to develop, confirm, and test the invariance and population heterogeneity of a six-item school climate measure designed to capture student-student and student-teacher relationships. Exploratory factor analysis (EFA) was performed using two-level structural equation models and data from all sixth- through twelfth-grade students surveyed in fall 2012 (N=27,419 students, 102 schools). Confirmatory factor analysis (CFA) repeated this analysis with spring 2013 data (N=142,954 students, 576 schools). Finally, multiple-indicators, multiple-causes (MIMIC) models clustered students in schools to examine differences in factor means and item intercepts by student grade, race/ethnicity, socioeconomic status, gender, and dropout risk.
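The two-level models above were presumably estimated in dedicated SEM software; as a simplified, single-level illustration of the EFA step, the sketch below simulates six climate items loading on two latent factors (hypothetical student-student and student-teacher relationship factors, with made-up loadings and noise) and recovers the two-factor structure with scikit-learn's FactorAnalysis. It is a sketch of the idea, not the authors' analysis.

```python
# Single-level EFA sketch: recover two latent factors from six simulated items.
# The study itself used two-level structural equation models; this simplified
# example only illustrates the factor-recovery idea. All loadings, noise
# levels, and item assignments are invented for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 5000

# Two hypothetical latent factors.
f_ss = rng.normal(size=n)  # student-student relationships
f_st = rng.normal(size=n)  # student-teacher relationships

# Six items: three loading on each factor.
items = np.column_stack([
    0.90 * f_ss + rng.normal(scale=0.4, size=n),
    0.80 * f_ss + rng.normal(scale=0.4, size=n),
    0.85 * f_ss + rng.normal(scale=0.4, size=n),
    0.90 * f_st + rng.normal(scale=0.4, size=n),
    0.80 * f_st + rng.normal(scale=0.4, size=n),
    0.85 * f_st + rng.normal(scale=0.4, size=n),
])

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)
loadings = fa.components_.T  # shape: (6 items, 2 factors)

# Each item should load primarily on its own factor (order/sign of the
# recovered factors is arbitrary, so we compare dominant columns).
dominant = np.argmax(np.abs(loadings), axis=1)
print(dominant)
```

With a clean simulated structure like this, the first three items share one dominant factor and the last three share the other, mirroring the two within-school factors the EFA identified.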

The CFA and MIMIC sample was 19.66% White, 45.82% African-American, 10.85% Latino, and 48.18% male. The average student was in ninth grade, and her most educated parent attended some college. Dropout risk captured whether a student reported a prior-term grade point average in the C range or below and responded “Mostly True” or “True” to the statements “I don’t really care whether I arrive on time to this class” and “My behavior is a problem for the teacher in this class.” The distribution of dropout risk was: .97% high risk, 7.42% moderate risk, 25.37% low risk, and 70.18% no risk.
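The abstract lists three risk indicators (low GPA and agreement with the two statements) and four risk levels, but does not state how the levels were derived. The sketch below shows one plausible operationalization that simply counts indicators; the function name, arguments, and counting rule are assumptions for illustration, not the authors' documented coding.

```python
# Hypothetical coding of the dropout-risk variable. Counting how many of the
# three indicators are present is an assumption, not the study's documented rule.
RISK_LEVELS = {0: "no risk", 1: "low risk", 2: "moderate risk", 3: "high risk"}
AGREE = {"Mostly True", "True"}

def dropout_risk(gpa_c_or_below: bool,
                 dont_care_on_time: str,
                 behavior_problem: str) -> str:
    """Return a risk level from three survey-based indicators."""
    indicators = (
        int(gpa_c_or_below)
        + int(dont_care_on_time in AGREE)
        + int(behavior_problem in AGREE)
    )
    return RISK_LEVELS[indicators]
```

For example, a student with a low GPA who agreed with both statements would fall in the (rare, .97%) high-risk group under this scheme.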

Results: EFA indicated that the best-fitting model for the data included two within-school factors and one between-school factor (χ2(15)=415.07, p≤.001; RMSEA=.03, CFI=.95, SRMR Within=.03, SRMR Between=.05). CFA results were similar, indicating good to excellent fit on all indices except chi-square (χ2(15)=1911.67, p≤.001; RMSEA=.03, CFI=.95, SRMR Within=.03, SRMR Between=.05). However, MIMIC models identified significant differences in factor means and item functioning by dropout risk and most dimensions of student background. While demographic differences were generally small, ranging from .04 to .24 points on a five-point scale, differences by dropout risk were larger, ranging from .02 to 1.17 points.

Conclusions: Results underscore the necessity of testing the invariance and population heterogeneity of school climate measures. While strong EFA and CFA results like those presented in this study might lead practitioners to select one measure over another, failure to examine between-group differences may lead to the widespread adoption of measures that work differently in different populations. The fact that invariance and population homogeneity were not supported in this study challenges the assumption that an overarching school climate exists. Students of different backgrounds appear to experience the same school differently. Consequently, targeted, rather than universal, school climate interventions may be most effective.