Saturday, 15 January 2005 - 8:00 AM

This presentation is part of: Intervention Research Using Advanced Statistical Methods

Power in Social Work Research: Techniques for Designing Intervention Studies with Multilevel Data Structures

Roderick A. Rose, MS, Jordan Institute for Families, School of Social Work, University of North Carolina at Chapel Hill, and Gary L. Bowen, PhD, University of North Carolina.
Purpose. Social work research for informing evidence-based practice (EBP) increasingly makes use of multilevel data structures and modeling techniques to study ecological and contextual effects, but analytical methods for designing such studies with sufficient rigor and power are not always readily available. In particular, no method is available for estimating the power of a multilevel design consisting of three levels of data: time within subjects within clusters. This type of design occurs in many settings, e.g., school intervention studies (time within students within schools). We proposed to study the effects of a whole-school intervention program on student-level outcomes over a period of four years, yielding such a data structure, but a preliminary study was needed to determine the number of schools necessary to achieve the desired level of statistical power given a realistic range of effect sizes.

Methods. Although no method for analyzing power in a three-level context was readily available, power analysis methods were available for two simpler two-level frameworks: longitudinal designs and cluster-randomized designs. Consequently, we used a heuristic that combined elements of the power analysis methods for these two designs. To obtain the unknown inputs to power in this context, we conducted a calibration study on a similar population of students receiving a similar educational intervention, estimating the parameters with a model reflective of the analytical model for our proposed study.

Results. We obtained the needed inputs (variance estimates at the time, student, and school levels) from the calibration study and entered them into the three-level power calculation. The effect size represented the difference between the experimental and control schools in the average change in students' end-of-grade math scores over the intervention period.
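The abstract does not give its formulas, but the heuristic it describes (combining longitudinal and cluster-randomized elements) can be sketched with a normal-approximation power calculation for a school-level difference in students' linear growth. Everything below is an illustrative assumption, not the authors' calibrated model: the parameter names (`tau_school` and `tau_student` for school- and student-level slope variances, `sigma2_e` for time-level residual variance) and all example values are hypothetical.

```python
from statistics import NormalDist


def power_three_level(delta, tau_school, tau_student, sigma2_e,
                      n_schools, n_students, timepoints, alpha=0.05):
    """Approximate power for a balanced cluster-randomized trial whose
    treatment effect is the between-arm difference in mean linear growth
    (time within students within schools). Normal approximation; a sketch,
    not the authors' method."""
    nd = NormalDist()
    # Spread of the (equally spaced) time scores around their mean.
    t = list(range(timepoints))
    tbar = sum(t) / len(t)
    sst = sum((x - tbar) ** 2 for x in t)
    # Variance of the two-arm slope contrast: school-level slope variance,
    # plus student-level slope variance and time-level noise shrunk by the
    # per-school sample size, all shrunk by the number of schools per arm.
    var_contrast = 4.0 * (tau_school
                          + (tau_student + sigma2_e / sst) / n_students) / n_schools
    lam = delta / var_contrast ** 0.5              # noncentrality parameter
    z = nd.inv_cdf(1 - alpha / 2)                  # two-sided critical value
    return nd.cdf(lam - z) + nd.cdf(-lam - z)


# Hypothetical inputs: effect 0.2, 16 schools, 50 students each, 4 waves.
print(round(power_three_level(0.2, 0.01, 0.05, 0.5, 16, 50, 4), 3))
```

As the structure of `var_contrast` shows, adding schools shrinks all three variance components, while adding students or waves only shrinks the lower-level ones, which is why the number of schools dominates power in such designs.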
We used conservative estimates of the inputs to mitigate the effect that uncertainty would play in interpreting the final power estimates. Finally, we considered strategies for reducing the required sample size for a given set of power inputs. For instance, with an unbalanced sample of three control schools for every experimental school, only 11 schools would be needed to detect a given effect size, compared with 15 schools under a balanced sample. These findings were instrumental in receiving the grant award for our proposed study.

Implications. Social work interventions should be selected for their effectiveness in addressing client needs and achieving practitioners' ultimate outcome goals; this is a central premise of evidence-based practice. Unfortunately, experimental studies that randomly assign participants to control and intervention groups are rare in social work research, perhaps because of the demanding sampling and evaluation designs they require in the context of limited resources. Careful attention to the methods discussed in this paper will help social work researchers develop intervention study proposals that examine contextual effects on both individual- and time-level parameters and attain sufficient power to demonstrate effects.
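The sample-size planning described above, finding the smallest number of schools that reaches a target power, can be sketched as a simple search over candidate designs. This is a self-contained illustration under the same assumed normal-approximation model and hypothetical variance inputs as before; it handles only the balanced case and does not reproduce the abstract's 11-versus-15 comparison, which depended on the study's own calibrated inputs.

```python
from statistics import NormalDist


def schools_needed(delta, tau_school, tau_student, sigma2_e,
                   n_students, timepoints, target=0.80, alpha=0.05):
    """Smallest even number of schools (balanced arms) whose approximate
    power to detect a school-level difference in linear growth of size
    `delta` reaches `target`. All variance inputs are assumed values."""
    nd = NormalDist()
    t = list(range(timepoints))
    tbar = sum(t) / len(t)
    sst = sum((x - tbar) ** 2 for x in t)
    # Per-school variance of an estimated growth rate.
    cluster_var = tau_school + (tau_student + sigma2_e / sst) / n_students
    z = nd.inv_cdf(1 - alpha / 2)
    for j in range(4, 201, 2):                 # try 2, 3, ... schools per arm
        var_contrast = 4.0 * cluster_var / j   # (1/(j/2) + 1/(j/2)) * cluster_var
        lam = delta / var_contrast ** 0.5
        if nd.cdf(lam - z) + nd.cdf(-lam - z) >= target:
            return j
    return None                                # not reachable within 200 schools


# Hypothetical inputs: 50 students per school, 4 waves, 80% power target.
print(schools_needed(0.2, 0.01, 0.05, 0.5, 50, 4))
```

A solver like this makes it easy to tabulate required schools across a realistic range of effect sizes, which is exactly the kind of evidence a grant proposal's power section needs.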