Increasing the Transparency of Decision-Making in Qualitative Research: Team-Based Coding in a Digital Age
(Society for Social Work and Research 22nd Annual Conference: Achieving Equal Opportunity, Equity, and Justice)

Schedule:
Saturday, January 13, 2018: 5:06 PM
Independence BR C (ML 4) (Marriott Marquis Washington DC)
Ashleigh Hodge, MSW, Doctoral Student, Ohio State University, Columbus, OH
Yiwen Cao, MSW, Graduate Student, Ohio State University, Columbus, OH
Alicia Bunger, MSW, PhD, Assistant Professor, Ohio State University, Columbus, OH
Christy Kranich, MSW, Evaluation Project Coordinator, Ohio State University, Columbus, OH
Hannah MacDowell, BA, Research Assistant, Ohio State University, Columbus, OH
Background and Purpose: Social work research increasingly emphasizes collaboration among multiple researchers, so qualitative data analysis often occurs within a project team. A team-based approach can enhance study validity because multiple team members develop and apply codebooks and interpret the findings. Although studies often report basic methodological details, such as the number of coders and the level of inter-rater agreement, the specific procedures used to reach agreement are rarely described. While advances in software facilitate team-based coding and analysis, the validity of findings based on team coding often suffers when it rests on a generic statement such as: “inter-rater agreement was discussed to reach consensus.” It is therefore critical to establish, and to report, the procedures through which inter-rater agreement is reached during coding in a team context. This paper contributes to the literature by illustrating an approach used by a team of researchers to establish the reliability and consistency of coding agreement, and to resolve disagreement, using analytic software.

Methods: A team-based coding approach was developed to facilitate a research team’s analysis of twelve 90-minute focus groups conducted with child welfare workers and behavioral health clinicians. After reviewing the literature on the reliability of coding in qualitative research, we developed a systematic approach for assessing the consistency of inter-rater agreement, then implemented and revised it while coding several focus group transcripts. Qualitative data and the coding process were managed using ATLAS.ti v.6.2 (desktop) and Dedoose (an online platform), and agreement was calculated using Excel spreadsheets.

Results: First, all research team members read the transcripts and then discussed major conceptual categories and definitions for each code; this generated an initial list of codes. Second, at least two team members applied the codebook to test its usefulness and made the necessary revisions. Third, two team members independently applied the revised codebook. Fourth, a third team member calculated the agreement rate for each code; after this first round of coding, average inter-rater agreement on a transcript ranged from 30% to 60%. Fifth, each transcript was reviewed, and any code with below 80% agreement was discussed by the two independent coders with a third team member present to resolve differences. This procedure was applied iteratively until at least 90% agreement was reached.
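The abstract does not specify the agreement formula used in the Excel spreadsheets; the following is a minimal sketch in Python, assuming simple percent agreement per code (the share of excerpts on which two coders made the same apply/don’t-apply decision) and entirely hypothetical coder data and code names, to illustrate the per-code calculation and the 80% discussion threshold described in steps four and five:

```python
# Sketch of per-code percent agreement between two coders.
# Assumption: each coder's output is a dict mapping excerpt IDs to the set
# of codes that coder applied. The formula (simple percent agreement on
# apply/don't-apply decisions) is an assumption, not taken from the abstract.

def percent_agreement(coder_a, coder_b, codebook):
    """Return {code: percent agreement} over excerpts coded by both coders."""
    excerpts = sorted(set(coder_a) & set(coder_b))
    rates = {}
    for code in codebook:
        # Count excerpts where both coders made the same decision for this code.
        matches = sum(
            (code in coder_a[e]) == (code in coder_b[e]) for e in excerpts
        )
        rates[code] = 100.0 * matches / len(excerpts)
    return rates

# Hypothetical example: two coders, three excerpts, two codes.
coder_a = {"ex1": {"collaboration"}, "ex2": {"barriers"}, "ex3": set()}
coder_b = {"ex1": {"collaboration"}, "ex2": set(), "ex3": set()}

for code, rate in percent_agreement(coder_a, coder_b,
                                    ["collaboration", "barriers"]).items():
    flag = "discuss" if rate < 80 else "ok"
    print(f"{code}: {rate:.0f}% ({flag})")
```

Under these assumptions, a code such as the hypothetical “barriers” (67% agreement) would be flagged for the consensus discussion described in step five, while “collaboration” (100%) would not.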

Conclusions and Implications: This team-based coding approach was developed to increase transparency in the inter-rater agreement process. Because it facilitates discussion among coders about their coding decisions for specific excerpts, the resulting reliability rates reflect genuine consensus among coders that a statistical measure alone may not capture. These team-based coding techniques may help enhance research quality. In our use of both software programs, we found that Dedoose simplified the calculation of agreement rates throughout the coding process. Lessons learned about team-based coding with ATLAS.ti and Dedoose will be discussed.