A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets

Natalia Antropova et al. Med Phys. 2017 Oct;44(10):5162-5171. doi: 10.1002/mp.12453. Epub 2017 Aug 12.

Abstract

Background: Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often hindered by small datasets, long computation times, and the need for extensive image preprocessing.

Aims: We aim to develop a breast CADx methodology that addresses these issues by exploiting the efficiency of pretrained convolutional neural networks (CNNs) and by using pre-existing handcrafted CADx features.

Materials & methods: We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three clinical imaging modalities: dynamic contrast-enhanced MRI (DCE-MRI; 690 cases), full-field digital mammography (FFDM; 245 cases), and ultrasound (1125 cases).
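To make the pipeline concrete, a minimal Python/scikit-learn sketch of the fusion step is shown below. The synthetic feature arrays, the SVM settings, and the output-averaging fusion rule (suggested by the Figure 7 caption, which notes that the averaged output is also the fusion classifier's output) are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two feature sets (real inputs would be the
# pooled CNN features and the handcrafted CADx features per lesion).
n_train, n_test = 200, 50
X_cnn = rng.normal(size=(n_train, 1024))   # pooled CNN feature vectors
X_hc = rng.normal(size=(n_train, 30))      # handcrafted radiomic features
y = rng.integers(0, 2, size=n_train)       # benign (0) vs malignant (1)

# One SVM per feature type; probability outputs enable soft fusion.
svm_cnn = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_cnn, y)
svm_hc = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_hc, y)

# Fusion: average the two classifiers' outputs (cf. the Figure 7 caption).
X_cnn_test = rng.normal(size=(n_test, 1024))
X_hc_test = rng.normal(size=(n_test, 30))
p_fused = 0.5 * (svm_cnn.predict_proba(X_cnn_test)[:, 1]
                 + svm_hc.predict_proba(X_hc_test)[:, 1])
```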

Results: From ROC analysis, our fusion-based method demonstrates statistically significant improvements in AUC over previous breast cancer CADx methods, on all three imaging modalities, in the task of distinguishing between malignant and benign lesions (DCE-MRI: AUC = 0.89 [SE = 0.01]; FFDM: AUC = 0.86 [SE = 0.01]; ultrasound: AUC = 0.90 [SE = 0.01]).
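The paper reports AUCs from fitted binormal ROC curves; as a rough stand-in, an empirical AUC and a bootstrap standard error can be computed as below. This sketch does not reproduce the fitted-binormal analysis itself, and all data here are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)          # benign (0) / malignant (1)
scores = y_true * 0.3 + rng.random(200)        # synthetic classifier outputs

auc = roc_auc_score(y_true, scores)            # empirical AUC

# Bootstrap standard error as a simple stand-in for the model-based SE.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if np.unique(y_true[idx]).size == 2:       # AUC needs both classes
        boot.append(roc_auc_score(y_true[idx], scores[idx]))
print(f"AUC = {auc:.2f}, bootstrap SE = {np.std(boot):.3f}")
```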

Discussion/conclusion: We proposed a novel breast CADx methodology that characterizes breast lesions more effectively than existing methods. Furthermore, the proposed methodology is computationally efficient and circumvents the need for image preprocessing.

Keywords: breast cancer; deep learning; feature extraction.


Figures

Figure 1
Lesion classification pipeline based on diagnostic images. Two types of features are extracted from a medical image: (a) CNN features with a pretrained CNN and (b) handcrafted features with conventional CADx. High- and low-level features extracted by the pretrained CNN are evaluated in terms of their classification performance and preprocessing requirements. Furthermore, the classifier outputs from the pooled CNN features and from the handcrafted features are fused to evaluate the combination of the two feature types.
Figure 2
Architecture of VGG19 model. It takes in an image ROI as an input. The model comprises five blocks, each of which contains two or four convolutional layers and a max‐pooling layer. The five blocks are followed by three fully connected layers. Features are extracted from the five max‐pooling layers, average‐pooled across the channel (third) dimension, and normalized with L2 norm. The normalized features are concatenated to form the CNN feature vector. [Color figure can be viewed at wileyonlinelibrary.com]
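Assuming a standard Keras workflow (the page includes no code, and the 224 × 224 input size is our assumption), the caption's feature-extraction recipe can be sketched as:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

# Pretrained VGG19 without its fully connected head; tap the five
# max-pooling layers described in the caption.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
pool_names = [f"block{i}_pool" for i in range(1, 6)]
extractor = tf.keras.Model(
    inputs=base.input,
    outputs=[base.get_layer(n).output for n in pool_names],
)

# A dummy ROI stands in for a real lesion ROI resized to the network input.
roi = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0
feature_maps = extractor(preprocess_input(roi))

# Per the caption: average-pool each layer's output across the channel
# (third) dimension, L2-normalize, then concatenate into one vector.
parts = []
for fmap in feature_maps:
    pooled = tf.reduce_mean(fmap, axis=-1)       # average over channels
    vec = tf.reshape(pooled, [-1])               # flatten the H x W map
    parts.append(vec / (tf.norm(vec) + 1e-12))   # L2 normalization
cnn_features = tf.concat(parts, axis=0).numpy()
print(cnn_features.shape)
```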
Figure 3
(a) Examples of DCE-MRI transverse center slices with the corresponding ROIs extracted. On the left is a benign case; on the right is a malignant case. (b) ROIs extracted from the precontrast time-point (t0) and the first two postcontrast time-points (t1, t2) are input into the three color channels of VGG19. [Color figure can be viewed at wileyonlinelibrary.com]
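A minimal sketch of assembling such a three-channel input; the array sizes and the t0 → R, t1 → G, t2 → B channel ordering are our assumptions:

```python
import numpy as np

# Hypothetical 2D ROIs from the precontrast (t0) and first two
# postcontrast (t1, t2) time points of a DCE-MRI series.
roi_t0 = np.random.rand(224, 224)
roi_t1 = np.random.rand(224, 224)
roi_t2 = np.random.rand(224, 224)

# Stack the three time points into the three color channels of VGG19.
rgb_roi = np.stack([roi_t0, roi_t1, roi_t2], axis=-1)  # shape (224, 224, 3)

# Rescale to the 0-255 range of natural images before Keras preprocessing.
rgb_roi = 255.0 * (rgb_roi - rgb_roi.min()) / (rgb_roi.ptp() + 1e-12)
```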
Figure 4
Fitted binormal ROC curves comparing the predictive performance of different CNN-based classifiers. Note that because the FFDM ROIs were already of uniform dimensions, no preprocessing was performed for that dataset. [Color figure can be viewed at wileyonlinelibrary.com]
Figure 5
AUC values for the benign vs. malignant lesion discrimination tasks for the CNN-based, CADx-based, and fusion classifiers. P-values were corrected for multiple comparisons using the Bonferroni-Holm correction.
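For reference, the Bonferroni-Holm step-down correction named in the caption is available in statsmodels; the raw p-values below are hypothetical:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values from the pairwise AUC comparisons.
p_raw = np.array([0.001, 0.02, 0.04, 0.3])

# Bonferroni-Holm correction of the p-values at alpha = 0.05.
reject, p_corrected, _, _ = multipletests(p_raw, alpha=0.05, method="holm")
print(p_corrected, reject)
```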
Figure 6
Fitted binormal ROC curves comparing the performances of CNN‐based classifiers, CADx‐based classifiers, and fusion classifiers. The solid line represents the fusion classifier. The dotted line represents the CNN‐based classifier using pooled features. The dashed line represents the conventional CADx classifier using handcrafted features. [Color figure can be viewed at wileyonlinelibrary.com]
Figure 7
Bland‐Altman plots for each of the imaging modalities. The figures illustrate classifier agreement between the CNN‐based classifier and the CADx‐based classifier. The y‐axis shows the difference between the SVM outputs of the two classifiers; the x‐axis shows the averaged output of the two classifiers. Since the averaged output is also the output of the fusion classifier, these plots also help visualize potential decision boundaries between benign and malignant classifications. [Color figure can be viewed at wileyonlinelibrary.com]
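A minimal sketch of such a Bland-Altman plot with matplotlib, using synthetic classifier outputs in place of the two SVMs' real outputs:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Hypothetical outputs of the CNN-based and CADx-based classifiers
# for the same set of lesions.
out_cnn = rng.random(100)
out_cadx = np.clip(out_cnn + rng.normal(scale=0.1, size=100), 0, 1)

diff = out_cnn - out_cadx              # y-axis: classifier disagreement
mean = 0.5 * (out_cnn + out_cadx)      # x-axis: fused classifier output

plt.scatter(mean, diff, s=10)
plt.axhline(diff.mean(), linestyle="--")                     # mean bias
plt.axhline(diff.mean() + 1.96 * diff.std(), linestyle=":")  # upper limit
plt.axhline(diff.mean() - 1.96 * diff.std(), linestyle=":")  # lower limit
plt.xlabel("Average of classifier outputs (fusion output)")
plt.ylabel("Difference between classifier outputs")
plt.title("Bland-Altman plot of classifier agreement")
plt.show()
```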
Figure 8
A diagonal classifier agreement plot between the CNN‐based classifier and the conventional CADx classifier for FFDM. The x‐axis denotes the output from the CNN‐based classifier, and the y‐axis denotes the output from the conventional CADx classifier. Each point represents an ROI for which predictions were made. Points near or along the diagonal from bottom left to top right indicate high classifier agreement; points far from the diagonal indicate low agreement. ROI pictures of extreme examples of agreement/disagreement are included. [Color figure can be viewed at wileyonlinelibrary.com]
