
739P Systematic Review Management Software: An Exploration of Core Functionality and Risks of Automation Bias

Schedule:
Sunday, January 20, 2019
Continental Parlors 1-3, Ballroom Level (Hilton San Francisco)
Nathaniel Dell, AM, LMSW, Doctoral Student, Saint Louis University, St. Louis, MO
Brandy Maynard, PhD, Associate Professor, Saint Louis University, St. Louis, MO
Madeline Stewart, BA, Intern, Places for People, Inc., St. Louis, MO
Background and Purpose: Systematic review (SR) methods are increasingly employed in the social sciences to answer practice and policy questions, identify evidence gaps, and advance knowledge through evidence synthesis. Like any rigorous empirical research, SRs can be challenging to plan and execute; however, SR management software has been developed to streamline and automate different review processes. While such tools may promote efficient review management, programs vary in functionality, pricing, and support. Moreover, reviewers may be unaware of what currently exists or unsure of which product, if any, best supports their SR. The purpose of this study is to (a) identify, evaluate, and summarize available software specific to social sciences research that supports reviewers through core SR processes and (b) discuss potential biases introduced into the review when core SR processes are automated.

Methods: Web searches were conducted in Nov-Dec 2017 to identify available SR management software (e.g., SRToolbox [systematicreviewtools.com]). Literature on SR software development published since 2008 was searched to identify eligible products. Additionally, Campbell Collaboration systematic reviews published between 2013 and Jan 2018 were surveyed to assess reporting of the software used to conduct each review. Software was included in this review if it supported at least four of six core components of the review process (i.e., protocol development, study search, abstract/full-text screening, data extraction, quality assessment, and synthesis) and was specific to the conduct of reviews in the social sciences. Two authors evaluated software functionality, ancillary features (such as text mining and study deduplication), pricing, and support. Cognitive load theory, originating from instructional design, was applied to theorize how over-reliance on automated processes may, under certain conditions, increase risks of error.

Results: Seven SR software products used in the social sciences are described and evaluated. Text-mining features are available in only two of these products, although several natural language processing and machine-learning products are available as stand-alone tools. Protocol development was supported in 43% of products, and direct import of data from databases was available in 57%. While several free products exist, 71% of those included in this study carried varying access costs. Campbell reviews (n=73) reported the use of EPPI-Reviewer4, RevMan, or DistillerSR in 51% of reviews, with 93% of reviews reporting any software used in the review process, typically reference management or statistical software. Biases may be introduced into the review through two means: those intrinsic to the process of monitoring automated tasks and those arising from the way automated information is presented to the reviewer for verification.

Conclusions and Implications: Transparency and reproducibility of systematic reviews may be aided by reporting the software used to manage the review, particularly for automated study search/selection, data extraction, and quality assessment procedures. Such transparency will grow in importance as automation of core processes expands across platforms. Previous research has advocated for more precise testing of the recall and precision of products that automate review processes. Additionally, designers should consider how the presentation of data may increase the risk of missing errors in study selection or data extraction.