Tomography. 2024 Dec 12;10(12):2038-2057.
doi: 10.3390/tomography10120145.

CNN-Based Cross-Modality Fusion for Enhanced Breast Cancer Detection Using Mammography and Ultrasound


Yi-Ming Wang et al. Tomography. 2024.

Abstract

Background/Objectives: Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet standalone modalities are limited in their diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures.

Materials and Methods: Breast images were sourced from public datasets, including RSNA, PAS, and Kaggle, and categorized into malignant and benign groups. Data augmentation techniques were used to address class imbalance in the ultrasound dataset. Three models were developed: (1) pre-trained CNNs integrated with machine learning classifiers, (2) transfer learning-based CNNs, and (3) a custom-designed 17-layer CNN for direct classification. Model performance was evaluated using metrics such as accuracy and the Kappa score.

Results: The custom 17-layer CNN outperformed the other models, achieving an accuracy of 0.964 and a Kappa score of 0.927. The transfer learning model achieved moderate performance (accuracy 0.846, Kappa 0.694), while the pre-trained CNNs with machine learning classifiers yielded the lowest results (accuracy 0.780, Kappa 0.559). Cross-modality fusion proved effective in leveraging the complementary strengths of mammography and ultrasound imaging.

Conclusions: This study demonstrates the potential of cross-modality imaging and tailored CNN architectures to significantly improve diagnostic accuracy and reliability in breast cancer detection. The custom-designed model offers a practical solution for early detection, potentially reducing false positives and false negatives, and improving patient outcomes through timely and accurate diagnosis.
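The models are compared by accuracy and by Cohen's Kappa, which corrects raw accuracy for the agreement expected by chance from the label frequencies. As a minimal illustration of how these two metrics relate (pure Python; the labels below are hypothetical and this is not the authors' evaluation code):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohen_kappa(y_true, y_pred):
    """Cohen's Kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from the
    marginal label frequencies of the two raters."""
    n = len(y_true)
    p_o = accuracy(y_true, y_pred)
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    labels = set(y_true) | set(y_pred)
    p_e = sum(true_counts[c] * pred_counts[c] for c in labels) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical benign (0) / malignant (1) labels, for illustration only.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0]
print(round(accuracy(y_true, y_pred), 3))     # 0.833
print(round(cohen_kappa(y_true, y_pred), 3))  # 0.667
```

Because Kappa discounts chance agreement, it is always at or below accuracy on balanced data; a Kappa of 0.927, as reported for the custom CNN, is commonly interpreted as almost perfect agreement.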

Keywords: artificial intelligence; breast cancer; convolutional neural networks; deep learning algorithms.


Conflict of interest statement

The authors declare that there are no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figures

Figure 1. Workflow of modality fusion and data augmentation for breast cancer detection.
Figure 2. Representative examples of breast imaging modalities used in the study: mammography (RSNA) for benign (A–C) and malignant (D–F) cases, sonography-1 (Kaggle) for benign (G–I) and malignant (J–L) cases, and sonography-2 (PAS) for benign (M–O) and malignant (P–R) cases.
Figure 3. The workflow of Model 1.
Figure 4. The workflow of Model 2.
Figure 5. The workflow of Model 3.
Figure 6. ROC curves for Model 1, showing the maximum accuracy across classifiers (SVM, LR, and NB) for each pre-trained CNN.
Figure 7. ROC curves for Model 2, showing the maximum accuracy across optimizers (Adam, RMSprop, and SGDM) for each transfer learning CNN.
Figure 8. ROC curves for Model 3 (custom CNN), with optimal accuracy based on epochs and batch sizes.
Figure 9. Comparison of breast tissue textures in sonography and mammography images.
Figure 10. Examples of varying resolutions in breast sonography images.
Figure 11. Examples of varying contrast levels in mammography images.


