2018 Mar;15(3 Pt B):521-526.
doi: 10.1016/j.jacr.2017.12.027. Epub 2018 Jan 31.

Deep Learning in Radiology: Does One Size Fit All?


Bradley J Erickson et al. J Am Coll Radiol. 2018 Mar.

Abstract

Deep learning (DL) is a popular method used to perform many important tasks in radiology and medical imaging. Some forms of DL can accurately segment organs (essentially, trace their boundaries, enabling volume measurements or calculation of other properties). Other DL networks can predict important properties from regions of an image: for instance, whether something is malignant, molecular markers for the tissue in a region, or even prognostic markers. DL is easier to train than traditional machine learning methods but requires more data and much more care in analyzing results. It automatically finds the features of importance, but understanding what those features are can be a challenge. This article describes the basic concepts of DL systems, some of the traps that exist in building DL systems, and how to identify those traps.

Keywords: Deep learning; computer-aided diagnosis; machine learning.


Conflict of interest statement

The authors have no conflicts of interest related to the material discussed in this article.

Figures

Fig 1
Examples of four activation functions used in neural networks: (a) rectified linear unit (ReLU), (b) leaky ReLU, (c) sigmoid, and (d) tanh. Traditional neural networks used sigmoidal functions that simulated biological neurons, but these are less effective in current networks, likely because they do not adequately reward very strong activations.
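The four activation functions in the legend above have simple closed forms; a minimal NumPy sketch (function names and the leaky-ReLU slope of 0.01 are illustrative conventions, not taken from the article):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: zero for negative inputs, identity otherwise
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: a small slope alpha keeps a gradient for negative inputs
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    # Sigmoid: squashes any input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes any input into the range (-1, 1)
    return np.tanh(x)
```

Note that ReLU and leaky ReLU grow without bound for large positive inputs, whereas sigmoid and tanh saturate, which is consistent with the legend's point that sigmoidal functions do not reward very strong activations.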
Fig 2
Architecture of two popular networks: (a) AlexNet and (b) VGGNet. Input, input image; Conv, convolutional layer; Pool, maximum value pooling layer; Full Conn, fully connected layer; SoftMax, softmax function (also known as the normalized exponential function), which takes an input vector and maps each value into the range (0,1). Its output is the class probability: if there are 1,000 outputs in this layer, each value in the 1,000-element vector corresponds to the probability of the input image belonging to that class. The highest value(s) give the predicted class(es).
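The softmax function in the legend above can be sketched in a few lines of NumPy; subtracting the maximum before exponentiating is a standard numerical-stability trick (not mentioned in the article) that leaves the result unchanged:

```python
import numpy as np

def softmax(logits):
    # Shift by the max so np.exp never overflows; the ratio is unchanged
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    # Normalize so the outputs form a probability distribution over classes
    return exp / exp.sum()
```

The outputs always sum to 1, and the index of the largest value is the predicted class, exactly as the legend describes for a 1,000-way output layer.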
