Radiology artificial intelligence: a systematic review and evaluation of methods (RAISE)
- PMID: 35420305
- PMCID: PMC9668941
- DOI: 10.1007/s00330-022-08784-6
Erratum in
- Correction to: Radiology artificial intelligence: a systematic review and evaluation of methods (RAISE). Eur Radiol. 2022 Nov;32(11):8054. doi: 10.1007/s00330-022-08832-1. PMID: 35593961.
Abstract
Objective: There has been a large amount of research in the field of artificial intelligence (AI) as applied to clinical radiology. However, these studies vary in design and quality, and systematic reviews of the entire field are lacking. This systematic review aimed to identify all papers that used deep learning in radiology, to survey the literature, and to evaluate their methods. We aimed to identify the key questions being addressed in the literature and the most effective methods employed.
Methods: We followed the PRISMA guidelines and performed a systematic review of studies of AI in radiology published from 2015 to 2019. Our published protocol was prospectively registered.
Results: Our search yielded 11,083 results. Seven hundred sixty-seven full texts were reviewed, and 535 articles were included. Ninety-eight percent were retrospective cohort studies, and the median number of patients included was 460. Most studies involved MRI (37%), and neuroradiology was the most common subspecialty. Eighty-eight percent used supervised learning, and the most frequent task was segmentation (39%). Performance was compared with a state-of-the-art model in 37% of studies. The most commonly used established architecture was UNet (14%). The median performance for the most utilised evaluation metrics was a Dice score of 0.89 (range 0.49-0.99), an AUC of 0.903 (range 0.61-1.00), and an accuracy of 89.4% (range 70.2-100%). Of the 77 studies that externally validated their results and allowed direct comparison, performance decreased on average by 6% at external validation (range: 4% increase to 44% decrease).
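For readers unfamiliar with the evaluation metrics summarised above, the sketch below illustrates how Dice, AUC, and accuracy are typically computed for a binary segmentation mask and a set of classifier scores. It is not taken from any of the reviewed studies; the arrays, the 0.5 threshold, and the function names are illustrative assumptions, and NumPy and scikit-learn are assumed to be available.

```python
# Illustrative computation of the three metrics most often reported in the
# reviewed studies: Dice (segmentation), AUC and accuracy (classification).
# All inputs below are toy stand-ins, not data from the systematic review.
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def accuracy(pred_labels: np.ndarray, true_labels: np.ndarray) -> float:
    """Fraction of correctly classified cases."""
    return float((pred_labels == true_labels).mean())

# Toy 4x4 predicted mask vs. ground truth (segmentation task),
# and classifier scores vs. labels (classification task).
pred_mask  = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth_mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
scores = np.array([0.9, 0.8, 0.3, 0.2, 0.6])
labels = np.array([1,   1,   0,   0,   1])

print(f"Dice:     {dice_coefficient(pred_mask, truth_mask):.3f}")
print(f"AUC:      {roc_auc_score(labels, scores):.3f}")
print(f"Accuracy: {accuracy((scores > 0.5).astype(int), labels):.3f}")
```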
Conclusion: This systematic review has surveyed the major advances in AI as applied to clinical radiology.
Key points: • While there are many papers reporting expert-level results by using deep learning in radiology, most apply only a narrow range of techniques to a narrow selection of use cases. • The literature is dominated by retrospective cohort studies with limited external validation and a high potential for bias. • The recent advent of AI extensions to systematic reporting guidelines and prospective trial registration, along with a focus on external validation and explanations, shows potential for translating the hype surrounding AI from code to clinic.
Keywords: Artificial Intelligence; Methodology; Radiology; Systematic reviews.
© 2022. The Author(s).
Conflict of interest statement
The authors of this manuscript declare no relationships with any companies whose products or services may be related to the subject matter of the article.
Comment in
- AI in radiology: is it the time for randomized controlled trials? Eur Radiol. 2023 Jun;33(6):4223-4225. doi: 10.1007/s00330-022-09381-3. Epub 2023 Jan 4. PMID: 36597003.