Abstract: Adding a Layer of Depth to Quantitative Research: A Strategy for Coding Open-Ended Questions (Society for Social Work and Research 15th Annual Conference: Emerging Horizons for Social Work Research)

14847 Adding a Layer of Depth to Quantitative Research: A Strategy for Coding Open-Ended Questions

Schedule:
Thursday, January 13, 2011: 4:30 PM
Grand Salon D (Tampa Marriott Waterside Hotel & Marina)
Stephen Ellenbogen, PhD, Assistant Professor, Memorial University of Newfoundland, St. John's, NF, Canada; Christine Wekerle, Professor, University of Western Ontario, London, ON, Canada; and Nico Trocme, PhD, Professor, McGill University, Montreal, QC, Canada
Purpose: Requiring data that can readily be converted into values and used in statistical analyses, quantitative researchers often administer measures with predetermined answers. This can be problematic if the range of possible answers is unknown or exceedingly large. Also, the selection and ordering of answers might lead to biased responding. One way to overcome these challenges is to administer open-ended questions and then code the answers. However, some have depicted this strategy as impractical and subject to bias and misinterpretation (Rubin and Babbie, 2011). Describing their efforts to develop a cognitive measure, the authors reject this assertion and present a relatively simple and effective strategy for coding open-ended questions.

Methods: To assess violence outcome expectancies (a cognition pertinent to the study of aggression), a sample of 136 adolescents was asked what they thought would happen if they acted aggressively towards their best friend, boyfriend/girlfriend, and caregiver. Respondents could provide up to 24 short answers. To create a quantitative measure, the following process was implemented: (1) preliminary coding by researchers, (2) grouping of categories and creation of a coding strategy, (3) preliminary coding by research assistants, (4) testing of inter-rater reliability, (5) discussion of issues with research assistants (e.g., inter-rater disagreements, ambiguous responses, unclassifiable responses), (6) refinement of the coding strategy, (7) final coding of responses, and (8) final testing of inter-rater reliability.

Results: Respondents generated a number of unanticipated answers, which necessitated the creation of several new categories. After refinement, the final coding strategy contained 39 categories spread across 8 broad themes. Despite this complexity, satisfactory agreement was obtained in the final testing of inter-rater reliability (Kappa = .834).
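The abstract does not specify how the agreement statistic was computed; as a minimal sketch, assuming two raters each assigned one category label per response and that Cohen's kappa was the agreement index used, the reliability check could be run as follows (category labels below are hypothetical, not drawn from the study's coding scheme):

```python
# Illustrative sketch only: inter-rater agreement for two coders' category
# assignments, computed with scikit-learn's Cohen's kappa implementation.
from sklearn.metrics import cohen_kappa_score

# Hypothetical category labels assigned to the same five open-ended responses
# by two independent raters.
rater_a = ["retaliation", "relationship_loss", "guilt", "no_consequence", "retaliation"]
rater_b = ["retaliation", "relationship_loss", "guilt", "retaliation", "retaliation"]

# Cohen's kappa = (observed agreement - chance agreement) / (1 - chance agreement)
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.3f}")
```

Values above roughly .80 are conventionally read as strong agreement, which is consistent with the .834 reported here.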

Implications: This study demonstrated that answers to complex open-ended questions can be reliably coded for use in quantitative research. Moreover, the iterative process required to create the coding scheme yields a new understanding of the concept being measured and of differences in how people perceive the social world. Nevertheless, given the time and energy required to transform the data, researchers must judge whether the potential benefits merit the investment.