Syst Rev. 2021 Apr 5;10(1):98. doi: 10.1186/s13643-021-01632-6.

Successful incorporation of single reviewer assessments during systematic review screening: development and validation of sensitivity and work-saved of an algorithm that considers exclusion criteria and count

Nassr Nama et al. Syst Rev.

Abstract

Background: Accepted systematic review (SR) methodology requires citation screening by two reviewers to maximise retrieval of eligible studies. We hypothesized that records could be excluded by a single reviewer without loss of sensitivity under two conditions: the record was ineligible for multiple reasons, or the record was ineligible for one or more specific reasons that could be reliably assessed.

Methods: Twenty-four SRs performed at CHEO, a pediatric health care and research centre in Ottawa, Canada, were divided into derivation and validation sets. Exclusion criteria during abstract screening were sorted into 11 specific categories, with loss in sensitivity determined by individual category and by number of exclusion criteria endorsed. Five single reviewer algorithms that combined individual categories and multiple exclusion criteria were then tested on the derivation and validation sets, with success defined a priori as less than 5% loss of sensitivity.

Results: The 24 SRs included 930 eligible and 27390 ineligible citations. The reviews were mostly focused on pediatrics (70.8%, N=17/24), but covered various specialties. Using a single reviewer to exclude any citation led to an average loss of sensitivity of 8.6% (95%CI, 6.0-12.1%). Excluding citations with ≥2 exclusion criteria led to 1.2% average loss of sensitivity (95%CI, 0.5-3.1%). Five specific exclusion criteria performed with perfect sensitivity: conference abstract, ineligible age group, case report/series, not human research, and review article. In the derivation set, the five algorithms achieved a loss of sensitivity ranging from 0.0 to 1.9% and work-saved ranging from 14.8 to 39.1%. In the validation set, the loss of sensitivity for all 5 algorithms remained below 2.6%, with work-saved between 10.5% and 48.2%.

Conclusions: Findings suggest that targeted application of single-reviewer screening, considering both type and number of exclusion criteria, could retain sensitivity and significantly decrease workload. Further research is required to investigate the potential for combining this approach with crowdsourcing or machine learning methodologies.
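The decision rule described above can be sketched in code. This is an illustrative reconstruction from the abstract, not the authors' implementation; the criterion names and the function name are assumptions. A lone reviewer's exclusion stands when the record is ineligible for two or more reasons, or when at least one endorsed reason falls into a category the study found to be reliably assessed.

```python
# Hypothetical sketch of the single-reviewer exclusion rule inferred from the
# abstract. Criterion labels are assumed names, not the study's exact coding.

# The five exclusion categories reported to perform with perfect sensitivity.
RELIABLE_CRITERIA = {
    "conference_abstract",
    "ineligible_age_group",
    "case_report_or_series",
    "not_human_research",
    "review_article",
}


def single_reviewer_exclusion_accepted(endorsed_criteria):
    """Return True if one reviewer's 'exclude' vote can stand without a
    second screen, per the two conditions hypothesized in the abstract."""
    criteria = set(endorsed_criteria)
    if len(criteria) >= 2:
        # Condition 1: ineligible for multiple reasons.
        return True
    # Condition 2: at least one reliably assessed exclusion reason.
    return bool(criteria & RELIABLE_CRITERIA)
```

Under this sketch, a citation flagged only as "wrong_population" would still go to a second reviewer, while one flagged as a review article, or for any two reasons, would be excluded on a single assessment.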

Keywords: Citation screening; Exclusion criteria; Rapid reviews; Single-reviewer; Systematic review.


Conflict of interest statement

The authors have contributed to the design of the insightScope platform. NN, KO and JDM own shares in this platform.

Figures

Fig. 1: Flow diagram of included citations in the derivation and validation sets.

Fig. 2: Loss of sensitivity when permitting single reviewer exclusion, based on specific criteria. Error bars reflect 95%CI. The blue dotted line reflects the 1% threshold used in the algorithm development stage. Analysis is based on the systematic reviews in the derivation set only.

Fig. 3: Loss of sensitivity of algorithms employing a single reviewer approach. Loss of sensitivity is the percentage of eligible citations incorrectly excluded by the algorithm at the abstract level among all eligible citations. Error bars reflect 95%CI. Analysis is based on the systematic reviews in the derivation set (red) and the validation set (green).
