. 2021 Oct 2;21(1):142.
doi: 10.1186/s12880-021-00671-8.

The reporting quality of natural language processing studies: systematic review of studies of radiology reports


Emma M Davidson et al. BMC Med Imaging. 2021.

Abstract

Background: Automated language analysis of radiology reports using natural language processing (NLP) can provide valuable information on patients' health and disease. With its rapid development, NLP studies should have transparent methodology to allow comparison of approaches and reproducibility. This systematic review aims to summarise the characteristics and reporting quality of studies applying NLP to radiology reports.

Methods: We searched Google Scholar for studies published in English that applied NLP to radiology reports of any imaging modality between January 2015 and October 2019. At least two reviewers independently performed screening and completed data extraction. We specified 15 criteria relating to data source, datasets, ground truth, outcomes, and reproducibility for quality assessment. The primary NLP performance measures were precision, recall and F1 score.
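The primary performance measures named above can be illustrated with a minimal sketch. The function and the confusion counts below are hypothetical, chosen for illustration only; they are not taken from the review:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 score from confusion counts
    (true positives, false positives, false negatives)."""
    precision = tp / (tp + fp)  # fraction of positive predictions that are correct
    recall = tp / (tp + fn)     # fraction of true cases the system retrieves
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Hypothetical example: an NLP system flags 50 radiology reports as positive,
# 40 of them correctly, while missing 10 true cases.
p, r, f = precision_recall_f1(tp=40, fp=10, fn=10)
print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")
# → precision=0.80 recall=0.80 F1=0.80
```

Because F1 is the harmonic mean of precision and recall, reporting all three (as the review's quality criteria ask) lets readers see whether a system trades one off against the other.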

Results: Of the 4,836 records retrieved, we included 164 studies that used NLP on radiology reports. The commonest clinical applications of NLP were disease information or classification (28%) and diagnostic surveillance (27.4%). Most studies used English radiology reports (86%). Reports from mixed imaging modalities were used in 28% of the studies. Oncology (24%) was the most frequent disease area. Most studies had a dataset size > 200 (85.4%), but the proportions of studies that described their annotated, training, validation, and test sets were 67.1%, 63.4%, 45.7%, and 67.7%, respectively. About half of the studies reported precision (48.8%) and recall (53.7%). Few studies reported external validation (10.8%), data availability (8.5%), or code availability (9.1%). There was no pattern of performance associated with overall reporting quality.

Conclusions: There is a range of potential clinical applications for NLP of radiology reports in health services and research. However, we found suboptimal reporting quality that precludes comparison, reproducibility, and replication. Our results support the need for development of reporting standards specific to clinical NLP studies.

Keywords: Natural language processing; Radiology reports; Systematic review.


Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Fig. 1 PRISMA flowchart outlining the study selection process [13]

Fig. 2 Distribution of studies by publication year and (a) clinical application, (b) NLP methods

Fig. 3 Quality of reporting in (a) individual studies and (b) between 2015 and 2019. Legend: (a) Studies are arranged by the total number of qualities reported in the study, from left to right in descending order. (b) Numbers indicate the percentage of studies in each year of publication reporting the corresponding quality

Fig. 4 Precision, recall and F1 score by quality of reporting and clinical application category. Legend: NLP system performance reported as precision, recall and F1 score from included studies. Size of the bubbles represents the relative sizes of corpora in each graph. (a) Studies were categorised into high (> 5 qualities) and low (≤ 5 qualities) reporting quality, based on the median number of qualities reported as the cut-off point. Reporting of F1 score was not a quality criterion. (b) Performance stratified by clinical application

References

    1. Cai T, Giannopoulos AA, Yu S, Kelil T, Ripley B, Kumamaru KK, et al. Natural language processing technologies in radiology research and clinical applications. Radiographics. 2016;36(1):176–191. doi: 10.1148/rg.2016150080. - DOI - PMC - PubMed
    2. Vollmer S, Mateen BA, Bohner G, Király FJ, Ghani R, Jonsson P, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ. 2020;368:l6927. - PMC - PubMed
    3. Cruz Rivera S, Liu X, Chan A-W, Denniston AK, Calvert MJ, Ashrafian H, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Lancet Digit Health. 2020;2(10):e549–e560. doi: 10.1016/S2589-7500(20)30219-3. - DOI - PMC - PubMed
    4. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK, Ashrafian H, et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Lancet Digit Health. 2020;2(10):e537–e548. doi: 10.1016/S2589-7500(20)30218-1. - DOI - PMC - PubMed
    5. Bluemke DA, Moy L, Bredella MA, Ertl-Wagner BB, Fowler KJ, Goh VJ, et al. Assessing radiology research on artificial intelligence: a brief guide for authors, reviewers, and readers—from the radiology editorial board. Radiology. 2019;294(3):487–489. doi: 10.1148/radiol.2019192515. - DOI - PubMed
