A real-world evaluation of the implementation of NLP technology in abstract screening of a systematic review
- PMID: 37230483
- DOI: 10.1002/jrsm.1636
Abstract
The laborious and time-consuming nature of systematic review production hinders the dissemination of up-to-date evidence synthesis. Well-performing natural language processing (NLP) tools for systematic reviews have been developed, showing promise to improve efficiency. However, the feasibility and value of these technologies have not been comprehensively demonstrated in a real-world review. We developed an NLP-assisted abstract screening tool that provides text inclusion recommendations, keyword highlights, and visual context cues. We evaluated this tool in a living systematic review on SARS-CoV-2 seroprevalence, conducting a quality improvement assessment of screening with and without the tool. We evaluated changes to abstract screening speed, screening accuracy, characteristics of included texts, and user satisfaction. The tool improved efficiency, reducing screening time per abstract by 45.9% and decreasing inter-reviewer conflict rates. The tool preserved the precision of article inclusion (positive predictive value; 0.92 with the tool vs. 0.88 without) and recall (sensitivity; 0.90 vs. 0.81). The summary statistics of included studies were similar with and without the tool. Users were satisfied with the tool (mean satisfaction score of 4.2/5). We also evaluated an abstract screening process in which one human reviewer was replaced with the tool's votes, finding that this maintained recall (0.92 for one person plus the tool vs. 0.90 for two tool-assisted humans) and precision (0.91 vs. 0.92) while reducing screening time by 70%. Implementing an NLP tool in this living systematic review improved efficiency, maintained accuracy, and was well-received by researchers, demonstrating the real-world effectiveness of NLP in expediting evidence synthesis.
Keywords: abstract screening; living literature review; natural language processing; systematic review; text classification.
© 2023 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
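
The abstract reports screening accuracy as positive predictive value (precision) and sensitivity (recall). The following is a minimal sketch, not the authors' code, of how these two metrics are computed from per-abstract include/exclude decisions; the screening votes and gold-standard labels below are hypothetical placeholders, not data from the review.

    # Minimal sketch (hypothetical data): computing the metrics reported in the
    # abstract -- positive predictive value (precision) and sensitivity (recall) --
    # from binary include/exclude screening decisions.

    def screening_metrics(decisions, gold_standard):
        """Return (ppv, sensitivity) for binary screening decisions.

        decisions     -- list of bools: True if the screener (human or tool) voted to include
        gold_standard -- list of bools: True if the abstract truly meets inclusion criteria
        """
        tp = sum(d and g for d, g in zip(decisions, gold_standard))
        fp = sum(d and not g for d, g in zip(decisions, gold_standard))
        fn = sum(g and not d for d, g in zip(decisions, gold_standard))
        ppv = tp / (tp + fp) if (tp + fp) else 0.0          # precision
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall
        return ppv, sensitivity


    if __name__ == "__main__":
        # Hypothetical screening votes for ten abstracts.
        tool_assisted = [True, True, False, True, False, True, True, False, True, False]
        truth         = [True, True, False, True, False, True, False, False, True, True]
        ppv, sens = screening_metrics(tool_assisted, truth)
        print(f"PPV (precision): {ppv:.2f}")        # 0.83 on this toy data
        print(f"Sensitivity (recall): {sens:.2f}")  # 0.83 on this toy data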
