2023 Mar 11;13(6):1067.
doi: 10.3390/diagnostics13061067.

Classification of Breast Lesions on DCE-MRI Data Using a Fine-Tuned MobileNet


Long Wang et al. Diagnostics (Basel).

Abstract

It is crucial to diagnose breast cancer early and accurately to optimize treatment. At present, most deep learning models used for breast cancer detection cannot run on mobile phones or other low-power devices. This study aimed to evaluate the ability of MobileNetV1 and MobileNetV2, along with their fine-tuned variants, to differentiate malignant from benign lesions in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).

Keywords: breast lesions; deep learning; magnetic resonance imaging; mobile convolutional neural networks.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
DTL (deep transfer learning) diagram. The output of the DTL model is the likelihood of malignancy. The data analysis process comprises three parts: the first is image network feature extraction, the second covers data training and testing, and the third is validation of the DTL model.
Figure 2
Schematic diagrams of the fine-tuning strategies for MobileNetV1 and MobileNetV2. Abbreviations used in the figure: S: strategy; train: trainable (activated) layers of the neural network; frozen: nontrainable layers of the neural network; conv: convolutional layer.
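Each fine-tuning strategy amounts to choosing a cutoff: layers before it keep their pretrained weights frozen, layers after it (including the new classifier head) are trained. A minimal, framework-agnostic sketch of that split (the layer names and the `freeze_up_to` helper are illustrative, not from the paper):

```python
# Illustrative sketch of a layer-freezing strategy: layers before the
# cutoff are frozen (weights fixed), layers at or after it are trainable.
def freeze_up_to(layers, cutoff):
    """Mark layers[:cutoff] as frozen and layers[cutoff:] as trainable."""
    plan = {}
    for i, name in enumerate(layers):
        plan[name] = "frozen" if i < cutoff else "train"
    return plan

# A toy stack standing in for a MobileNet backbone plus a new classifier head.
layers = ["conv_1", "conv_2", "conv_3", "conv_4", "classifier"]

# Strategy A: freeze the whole backbone, train only the classifier head.
plan_a = freeze_up_to(layers, 4)
# Strategy B: also unfreeze the last backbone block.
plan_b = freeze_up_to(layers, 3)
```

In a deep learning framework the same idea is expressed by toggling each layer's trainable flag before compiling the model.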
Figure 3
Hyperparameter settings. Abbreviations used in the figure: avg: average, max: maximum, pl: pooling, lr: learning rate, Adam: adaptive moment estimation, SGD: stochastic gradient descent.
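Hyperparameter choices of this kind (pooling type, learning rate, optimizer) are typically explored as a small grid. A hedged sketch of enumerating such a grid; the specific values below are placeholders, not the paper's settings:

```python
from itertools import product

# Illustrative hyperparameter grid in the spirit of Figure 3;
# the values are placeholders, not the paper's actual settings.
grid = {
    "pooling": ["avg", "max"],
    "lr": [1e-3, 1e-4],
    "optimizer": ["Adam", "SGD"],
}

# Enumerate every combination of settings for training runs.
combos = [dict(zip(grid, values)) for values in product(*grid.values())]
```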
Figure 4
Learning curves for all models with respect to the number of epochs: (a) training accuracy, (b) test accuracy, (c) training loss, and (d) test loss of the proposed models. The V1_False model attained the highest accuracy (b) and the lowest loss (d) on the test set.
Figure 5
Results of model training. Abbreviations used in the figure: Ac1: accuracy on the training set; Ac2: accuracy on the test set; loss: loss on the test set.
Figure 6
Heatmaps of V1_True. Heatmaps of activated zone boundaries in benign lesions (a–c) and malignant lesions (d–f). The activated zones of the malignant lesions show greater activation than those of the benign lesions. (a,d) original images; (b,e) heatmaps of (a,d), respectively; (c) fusion image of (a,b); (f) fusion image of (d,e).
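Class-activation heatmaps like these are commonly produced with a Grad-CAM-style method; the fusion panels are then an alpha blend of the normalized activation map onto the original image. A minimal NumPy sketch of that fusion step (the paper's exact procedure is not specified, so this is illustrative only):

```python
import numpy as np

def fuse_heatmap(image, activation, alpha=0.4):
    """Overlay a normalized activation map onto a grayscale image.

    Both inputs are 2-D arrays; the activation map is rescaled to [0, 1]
    before blending. Illustrative sketch -- not the paper's code.
    """
    act = activation - activation.min()
    rng = act.max()
    heat = act / rng if rng > 0 else np.zeros_like(act)
    # Alpha-blend: keep (1 - alpha) of the image, add alpha of the heatmap.
    return (1 - alpha) * image + alpha * heat

# Toy example: a blank image fused with a ramp-shaped activation map.
img = np.zeros((4, 4))
act = np.arange(16, dtype=float).reshape(4, 4)
fused = fuse_heatmap(img, act)
```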
Figure 7
Schematic of the fivefold cross-validation. Train = train set, test = test set, validation = validation set.
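In fivefold cross-validation, the samples are shuffled and split into five disjoint folds; each fold serves once as the held-out set while the remaining four form the training set. A minimal NumPy sketch of generating those index splits (an illustrative reimplementation, not the paper's code):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds.

    Returns a list of (train, test) index arrays; each fold is held out
    exactly once. Illustrative sketch of the standard scheme.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    splits = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        splits.append((train, test))
    return splits

splits = kfold_indices(100, k=5)
```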
Figure 8
AUC analyses of the four proposed models (V1_False, V1_True, V2_False, V2_True) based on breast DCE-MRI. It was observed that the V1_True model (with AUC = 0.74) performed significantly better than the other three models on the validation set.
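The ROC AUC reported here equals the probability that a randomly chosen malignant case receives a higher score than a randomly chosen benign case (the Mann-Whitney U interpretation). A small self-contained sketch of computing it that way (illustrative; in practice a library routine such as scikit-learn's `roc_auc_score` would be used):

```python
import numpy as np

def auc_score(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive is scored higher
    (ties count half). Illustrative reimplementation.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Pairwise comparisons between every positive and negative score.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```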
Figure 9
Confusion matrices of V1_False (a), V1_True (b), V2_False (c) and V2_True (d) on the validation set. Intuitively, we can see the different numbers of correctly versus erroneously predicted images for benign and malignant lesions in the validation set. B represents benign, and M represents malignant.
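A 2x2 confusion matrix of this kind tallies each validation case by its true class (row) and predicted class (column). A minimal sketch of the standard construction for benign (0) vs. malignant (1); the example labels are invented for illustration:

```python
import numpy as np

def confusion_matrix_2x2(y_true, y_pred):
    """2x2 confusion matrix for benign (0) vs malignant (1).

    Rows are the true class, columns the predicted class, as in the
    usual convention. Illustrative sketch, not the paper's code.
    """
    m = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

# Example: 3 benign and 3 malignant cases with one error in each class.
cm = confusion_matrix_2x2([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 0])
```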
