Review. Med Phys. 2019 Jan;46(1):e1-e36. doi: 10.1002/mp.13264. Epub 2018 Nov 20.

Deep learning in medical imaging and radiation therapy

Berkman Sahiner et al. Med Phys. 2019 Jan.

Abstract

The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and the strategies that researchers have adopted to address them; and (c) identify promising avenues for the future, in terms of both applications and technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.

Keywords: computer-aided detection/characterization; deep learning; machine learning; reconstruction; segmentation; treatment.


Conflict of interest statement

MLG is a stockholder in R2/Hologic, scientific advisor, cofounder, and equity holder in Quantitative Insights, makers of QuantX, shareholder in Qview, and receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba. KD receives royalties from Hologic. RMS receives royalties from iCAD, Inc., Koninklijke Philips NV, ScanMed, LLC, PingAn, and receives research support from Ping An Insurance Company of China, Ltd., Carestream Health, Inc. and NVIDIA Corporation.

Figures

Figure 1
A CNN with two convolution layers, each followed by a pooling layer, and one fully connected layer.
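As a toy illustration of the architecture in Figure 1, the sketch below runs a forward pass through two convolution-plus-pooling stages and one fully connected layer in plain NumPy. The input size, filter counts, and random weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid multi-channel 2-D convolution (cross-correlation, as in most
    DL libraries). x: (C, H, W); kernels: (n_out, C, kh, kw)."""
    n_out, C, kh, kw = kernels.shape
    _, H, W = x.shape
    out = np.zeros((n_out, H - kh + 1, W - kw + 1))
    for o in range(n_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * kernels[o])
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over each feature map. x: (C, H, W)."""
    C, H, W = x.shape
    out = np.zeros((C, H // size, W // size))
    for i in range(H // size):
        for j in range(W // size):
            out[:, i, j] = x[:, i * size:(i + 1) * size,
                                j * size:(j + 1) * size].max(axis=(1, 2))
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((1, 28, 28))      # one-channel toy input
k1 = rng.standard_normal((4, 1, 3, 3))      # convolution layer 1: 4 filters
k2 = rng.standard_normal((8, 4, 3, 3))      # convolution layer 2: 8 filters
W_fc = rng.standard_normal((2, 8 * 5 * 5))  # fully connected layer, 2 outputs

h = max_pool(np.maximum(conv2d(img, k1), 0))   # ReLU + pool -> (4, 13, 13)
h = max_pool(np.maximum(conv2d(h, k2), 0))     # ReLU + pool -> (8, 5, 5)
logits = W_fc @ h.reshape(-1)                  # final scores, shape (2,)
```

The loops keep the mechanics visible; real DL frameworks implement the same operations with vectorized, GPU-accelerated kernels.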
Figure 2
Number of peer‐reviewed publications in radiologic medical imaging that involved DL. Peer‐reviewed publications were searched on PubMed using the criteria (“deep learning” OR “deep neural network” OR deep convolution OR deep convolutional OR convolution neural network OR “shift‐invariant artificial neural network” OR MTANN) AND (radiography OR x‐ray OR mammography OR CT OR MRI OR PET OR ultrasound OR therapy OR radiology OR MR OR mammogram OR SPECT). The search covered only the first 3 months of 2018, and the result was linearly extrapolated to the rest of 2018.
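The extrapolation described in the Figure 2 caption is simple proportional scaling; a sketch with a hypothetical hit count (not the paper's actual number):

```python
# Hypothetical Q1 hit count, for illustration only.
q1_2018_hits = 240                            # PubMed hits, Jan-Mar 2018
full_year_estimate = q1_2018_hits * 12 / 3    # scale 3 months to 12 months
```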
Figure 3
Use of a CNN as a feature extractor. (a) Each ROI is sent through AlexNet and the outputs from each layer are preprocessed to be used as sets of features for an SVM. The filtered image outputs from some of the layers can be seen in the left column. The numbers in parentheses for the center column denote the dimensionality of the outputs from each layer. The numbers in parentheses for the right column denote the length of the feature vector per ROI used as an input for the SVM after zero‐variance removal. (b) Performance in terms of area under the receiver operating characteristic curve for classifiers based on features from each layer of AlexNet in the task of distinguishing between malignant and benign breast tumors.
Figure 4
CNN‐extracted and conventional features can be combined in a number of ways, including being fed jointly to a traditional classifier such as an SVM.
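One common combination strategy from Figure 4 is early fusion: concatenate the two feature vectors per ROI and standardize before classification. A minimal sketch with synthetic feature matrices (the dimensions are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
cnn_feats = rng.standard_normal((10, 64))    # 10 ROIs x 64 CNN-extracted features
hand_feats = rng.standard_normal((10, 12))   # 10 ROIs x 12 conventional features

# Early fusion: one joint feature vector per ROI.
joint = np.concatenate([cnn_feats, hand_feats], axis=1)   # shape (10, 76)

# Standardize each dimension so neither feature source dominates
# a scale-sensitive classifier such as an SVM.
joint = (joint - joint.mean(axis=0)) / joint.std(axis=0)
```

The standardized matrix can then be passed to any traditional classifier; late fusion (combining per-source classifier scores instead of features) is the main alternative.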
Figure 5
The use of training, validation, and test sets for the design and performance evaluation of a supervised machine learning algorithm.
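The partitioning shown in Figure 5 can be sketched with the standard library; the split fractions below are conventional defaults, not values from the paper:

```python
import random

def split_dataset(samples, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle once, then partition into disjoint train/validation/test sets.
    Training fits the model, validation tunes hyperparameters and monitors
    overfitting, and the test set is used once for the final estimate."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_test = int(len(samples) * test_frac)
    n_val = int(len(samples) * val_frac)
    return (samples[n_test + n_val:],        # training set
            samples[n_test:n_test + n_val],  # validation set
            samples[:n_test])                # test set

train, val, test = split_dataset(range(100))
```

With patient-level data, the shuffle should be applied to patients rather than images, so that no patient contributes to more than one partition.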
Figure 6
A disease image categorization framework using both images and text.
Figure 7
Eight sample disease keywords and images mined from PACS.

