Review
Nat Rev Clin Oncol. 2022 Feb;19(2):132-146. doi: 10.1038/s41571-021-00560-7. Epub 2021 Oct 18.

Predicting cancer outcomes with radiomics and artificial intelligence in radiology

Kaustav Bera et al.

Abstract

The successful use of artificial intelligence (AI) for diagnostic purposes has prompted the application of AI-based cancer imaging analysis to address other, more complex, clinical needs. In this Perspective, we discuss the next generation of challenges in clinical decision-making that AI tools can solve using radiology images, such as prognostication of outcome across multiple cancers, prediction of response to various treatment modalities, discrimination of benign treatment confounders from true progression, identification of unusual response patterns and prediction of the mutational and molecular profile of tumours. We describe the evolution of and opportunities for AI in oncology imaging, focusing on hand-crafted radiomic approaches and deep learning-derived representations, with examples of their application for decision support. We also address the challenges faced on the path to clinical adoption, including data curation and annotation, interpretability, and regulatory and reimbursement issues. We hope to demystify AI in radiology for clinicians by helping them to understand its limitations and challenges, as well as the opportunities it provides as a decision-support tool in cancer management.


Conflict of interest statement

Competing interests

N.B. is a current employee of Tempus Labs and a former employee of IBM Research, with both of which he is an inventor on several pending patents pertaining to medical image analysis. He additionally holds equity in Tempus Labs. V.V. is a consultant for Alkermes, AstraZeneca, Bristol Myers Squibb, Celgene, Foundation Medicine, Genentech, Merck, Nektar Therapeutics and Takeda, has current or pending grants from Alkermes, AstraZeneca, Bristol Myers Squibb, Genentech and Merck, is on the speakers’ bureaus of Bristol Myers Squibb, Celgene, Foundation Medicine and Novartis, and has received payment for the development of educational presentations from Bristol Myers Squibb and Foundation Medicine. A.M. holds equity in Elucid Bioimaging and Inspirata, has been or is a scientific advisory board member for Aiforia, AstraZeneca, Bristol Myers Squibb, Inspirata and Merck, serves as a consultant for Caris, Inc. and Roche Diagnostics, has sponsored research agreements with AstraZeneca, Boehringer Ingelheim, Bristol Myers Squibb and Philips, has developed a technology relating to cardiovascular imaging that has been licensed to Elucid Bioimaging, and is involved in an NIH U24 grant with PathCore and three different NIH R01 grants with Inspirata. The other authors declare no competing interests.

Figures

Fig. 1 | Workflow for AI-enabled biomarkers in radiology.
Typical protocol for developing artificial intelligence (AI) radiology biomarkers using radiomic and deep learning approaches, and their clinical applications. Both approaches can be applied in the context of cancer outcome prediction and biomarker discovery for assessment of response to treatment, prognostication and radiogenomics. DICOM, Digital Imaging and Communications in Medicine; ML, machine learning; OS, overall survival; PFS, progression-free survival; RFS, recurrence-free survival.
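The hand-crafted radiomics arm of this workflow, extracting features from curated images and training a machine-learning classifier to predict an outcome, can be sketched schematically. This is a minimal illustration on synthetic data, not the authors' pipeline; the cohort size, feature count, and logistic-regression classifier are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a curated cohort: rows are patients,
# columns are hand-crafted radiomic features (e.g. texture, shape).
n_patients, n_features = 200, 5
X = rng.normal(size=(n_patients, n_features))
true_w = np.array([1.5, -2.0, 0.0, 0.5, 0.0])  # only some features matter
y = (X @ true_w + rng.normal(scale=0.5, size=n_patients) > 0).astype(float)

# Minimal logistic-regression "ML classifier" trained by gradient descent.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(n_features)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y) / n_patients)
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

In practice the classifier would be validated on a held-out cohort with survival endpoints (OS, PFS, RFS) rather than a synthetic binary label.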
Fig. 2 | Examples of the types of radiomic feature used in oncology.
a | Grey-level co-occurrence matrix entropy in a metastatic liver lesion detected by CT. b | Shape of a glioblastoma detected on gadolinium-enhanced T1-weighted MRI. c | Kinetic measure of contrast enhancement over time in breast tissue using contrast-enhanced MRI. d | Peritumoural radiomics measuring textural heterogeneity in the lung stroma surrounding a non-small-cell carcinoma. e | Shape of breast vasculature and tumour-associated vessel network detected using contrast-enhanced MRI. f | Enhanced standardized uptake values on 2-deoxy-2-[18F]fluoro-D-glucose PET-CT scans showing increased metabolic activity in a head and neck carcinoma.
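As a concrete illustration of the texture feature in panel a, grey-level co-occurrence matrix (GLCM) entropy can be computed from a small quantized image patch. This is a minimal NumPy sketch under assumed defaults (8 grey levels, a single one-pixel horizontal offset), not the implementation used in the studies discussed.

```python
import numpy as np

def glcm_entropy(patch, levels=8, offset=(0, 1)):
    """Entropy of the grey-level co-occurrence matrix for one pixel offset.

    patch  : 2D integer array quantized to `levels` grey levels
    offset : (row, col) displacement defining co-occurring pixel pairs
    """
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = patch.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[patch[r, c], patch[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()   # normalize counts to a joint probability table
    p = p[p > 0]            # drop empty bins (log 0 is undefined)
    return -np.sum(p * np.log2(p))

# A uniform patch yields a single co-occurrence pair, so entropy is 0;
# a noisy patch spreads mass over many pairs, giving high entropy,
# which is how this feature quantifies intratumoural heterogeneity.
flat = np.zeros((16, 16), dtype=int)
noisy = np.random.default_rng(1).integers(0, 8, size=(16, 16))
print(glcm_entropy(flat), glcm_entropy(noisy))
```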
Fig. 3 | Building blocks and types of neural network commonly applied to medical imaging data.
a | Example of a convolutional neural network (CNN) model configured for prediction. Input images or volumes are passed through the CNN layers, which perform operations and translate them into a target output vector. Convolutional layers are sets of operations that transform imaging data into deep-feature representations. Each filter is passed over the image and paired with a nonlinear activation function to emphasize visual patterns of interest for a certain task. As more convolutional layers are stacked, a CNN can learn more complex visual patterns within an image. Throughout a CNN classifier, deep features are periodically aggregated through pooling operations. After processing by convolutional and pooling layers, deep-feature representations are eventually flattened into a vector. Next, fully connected layers translate these CNN-derived image features into a vector that corresponds to a target output. These models can be applied to the prediction of treatment response, prognostication, classification of tumour subtypes and biomarkers, and prediction of physiological values. b | Fully convolutional neural networks are a type of CNN comprising only convolutional layers that yield image-like outputs, such as a map of a tumour’s location. c | Fully connected networks can be trained to make predictions based on non-image data, such as radiomic features and clinical variables.
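The CNN classifier described in panel a (convolution with a nonlinear activation, pooling, flattening, then fully connected layers mapping to an output vector) can be made concrete with a single forward pass in plain NumPy. The weights here are random and untrained, and the layer sizes are arbitrary; this is a sketch of the building blocks, not a working model.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))          # toy single-channel input "scan"

# Convolutional layer: one 3x3 filter slid over the image (valid padding),
# paired with a ReLU nonlinearity to emphasize patterns of interest.
kernel = rng.normal(size=(3, 3))
conv = np.zeros((6, 6))
for r in range(6):
    for c in range(6):
        conv[r, c] = np.sum(image[r:r + 3, c:c + 3] * kernel)
activated = np.maximum(conv, 0.0)        # ReLU activation

# Pooling layer: 2x2 max pooling periodically aggregates deep features.
pooled = activated.reshape(3, 2, 3, 2).max(axis=(1, 3))

# Flatten the deep-feature map, then a fully connected layer translates it
# into a two-class output vector (e.g. responder vs non-responder).
flat = pooled.ravel()                    # shape (9,)
W = rng.normal(size=(2, 9))
logits = W @ flat
probs = np.exp(logits) / np.sum(np.exp(logits))   # softmax over classes
print(probs)
```

Stacking many such convolution/pooling stages, as the caption notes, is what lets a real CNN learn progressively more complex visual patterns.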
Fig. 4 | Different levels of annotation detail in radiomics and deep learning studies.
Defining the region of interest and level of annotation detail in radiomics and deep learning studies. TME, tumour microenvironment.
