Session: Beyond Power Analysis: Measuring Sign and Magnitude Errors Using R (Society for Social Work and Research 28th Annual Conference - Recentering & Democratizing Knowledge: The Next 30 Years of Social Work Science)


36 Beyond Power Analysis: Measuring Sign and Magnitude Errors Using R

Schedule:
Thursday, January 11, 2024: 3:15 PM-4:45 PM
Capitol, ML 4 (Marriott Marquis Washington DC)
Cluster:
Organizer:
Charles Auerbach, PhD, Yeshiva University
Speakers/Presenters:
Charles Auerbach, PhD, Yeshiva University; Christine Vyshedsky, PhD, Yeshiva University; and Hanni Flaherty, PhD, Yeshiva University
Power and sample size analyses focus on statistical significance, i.e., correctly rejecting the null hypothesis. Power analysis is a critical technique for planning studies, and its most common use is to determine the number of subjects needed to detect an effect of a given size. However, power calculations do not tell us how likely we are to overestimate an effect or to estimate an effect with the opposite sign. The paper "Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors" by Andrew Gelman and John Carlin (2014) introduces design calculations that help prevent researchers from being misled by statistically significant results arising from studies with small samples and highly variable measurements. The degree of exaggeration and the likelihood of sign errors in research findings can be estimated by simulation.

The purpose of this workshop is twofold: to teach effective techniques for power analysis and to go beyond power analysis to assess Type S and Type M errors. The workshop has four objectives:

1. Participants will understand when to use power analysis to inform their research.
2. Participants will learn to use R to perform power analysis for several commonly used statistical techniques.
3. Participants will learn how to use simulation to estimate the likelihood of magnitude and sign errors.
4. Participants will learn how to interpret and assess the results of a power analysis and how to report the findings, depending on the type of report they are producing.

In this workshop, we will take a hands-on approach to meet these objectives. We will begin by discussing how to install and get started with R, a free, open-source statistical programming language. Next, we will review the conceptual framework for power analysis and how to calculate the different types of effect sizes needed in various research situations. Using R's "pwr" package, the presenters will then demonstrate how to calculate power for the t-test, one-way ANOVA, proportions, chi-square (χ2), and correlations (a minimal sketch appears after this abstract). Next, using R's "retrodesign" package, we will draw on the authors' research to demonstrate how to simulate magnitude and sign errors (see the second sketch below); we will include several examples and scenarios participants can use in their own research. Finally, the presenters will discuss how best to report power analysis and design-calculation results depending on the research purpose. The presenters will make all datasets and scripts used during the workshop available to attendees so that they can begin calculating statistical power and design errors in R on their own once the conference has concluded.
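
As a taste of the power calculations covered in the workshop, the following is a minimal sketch using the "pwr" package. The specific effect sizes (d = 0.5, f = 0.25, h = 0.2, w = 0.3, r = 0.3) and the .05/.80 significance and power targets are illustrative placeholders, not values drawn from the workshop materials.

# install.packages("pwr")  # uncomment if 'pwr' is not yet installed
library(pwr)

# Two-sample t-test: sample size per group for a medium effect (d = 0.5)
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80, type = "two.sample")

# One-way ANOVA with 3 groups and a medium effect (f = 0.25)
pwr.anova.test(k = 3, f = 0.25, sig.level = 0.05, power = 0.80)

# Difference between two proportions (effect size h on the arcsine scale)
pwr.2p.test(h = 0.2, sig.level = 0.05, power = 0.80)

# Chi-square test with 2 degrees of freedom and effect size w = 0.3
pwr.chisq.test(w = 0.3, df = 2, sig.level = 0.05, power = 0.80)

# Correlation: sample size needed to detect r = 0.3
pwr.r.test(r = 0.3, sig.level = 0.05, power = 0.80)

# Conventional "small/medium/large" effect-size benchmarks
cohen.ES(test = "t", size = "medium")

Leaving each call unassigned prints the full calculation, including the required sample size, which is typically what is reported in a proposal.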
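
The Type S and Type M calculations can also be reproduced without the "retrodesign" package. The function below is a self-contained sketch adapted from the design-calculation approach in Gelman and Carlin (2014): given a hypothesized true effect A and the standard error s of its estimate, it returns the power of the design, the probability of a sign (Type S) error among significant results, and the expected exaggeration ratio (Type M error). The example inputs A = 0.5 and s = 0.5 are hypothetical.

# Design calculations for Type S (sign) and Type M (magnitude) errors,
# adapted from Gelman and Carlin (2014). A is the hypothesized true effect,
# s the standard error of its estimate, df the degrees of freedom.
retrodesign <- function(A, s, alpha = 0.05, df = Inf, n.sims = 10000) {
  z <- qt(1 - alpha / 2, df)             # critical value
  p.hi <- 1 - pt(z - A / s, df)          # P(significantly positive estimate)
  p.lo <- pt(-z - A / s, df)             # P(significantly negative estimate)
  power <- p.hi + p.lo
  typeS <- p.lo / power                  # P(wrong sign | significant)
  estimate <- A + s * rt(n.sims, df)     # simulated effect estimates
  significant <- abs(estimate) > s * z
  exaggeration <- mean(abs(estimate)[significant]) / A   # Type M error
  list(power = power, typeS = typeS, exaggeration = exaggeration)
}

# Hypothetical example: true effect of 0.5 estimated with a standard error of 0.5
retrodesign(A = 0.5, s = 0.5)

The returned list reports the power of the design, the probability that a statistically significant estimate has the wrong sign, and the average factor by which significant estimates overstate the true effect.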