Comparative Study
Eur Radiol. 2019 Oct;29(10):5458-5468. doi: 10.1007/s00330-019-06118-7. Epub 2019 Mar 29.

Automatic classification of ultrasound breast lesions using a deep convolutional neural network mimicking human decision-making


Alexander Ciritsis et al. Eur Radiol. 2019 Oct.

Abstract

Objectives: To evaluate a deep convolutional neural network (dCNN) for detection, highlighting, and classification of ultrasound (US) breast lesions mimicking human decision-making according to the Breast Imaging Reporting and Data System (BI-RADS).

Methods and materials: One thousand nineteen breast ultrasound images from 582 patients (age 56.3 ± 11.5 years) were linked to the corresponding radiological report. Lesions were categorized into the following classes: no tissue, normal breast tissue, BI-RADS 2 (cysts, lymph nodes), BI-RADS 3 (non-cystic mass), and BI-RADS 4-5 (suspicious). To test the accuracy of the dCNN, one internal dataset (101 images) and one external test dataset (43 images) were evaluated by the dCNN and two independent readers. Radiological reports, histopathological results, and follow-up examinations served as reference. The performances of the dCNN and the humans were quantified in terms of classification accuracies and receiver operating characteristic (ROC) curves.

Results: In the internal test dataset, the classification accuracy of the dCNN differentiating BI-RADS 2 from BI-RADS 3-5 lesions was 87.1% (external 93.0%) compared with that of human readers with 79.2 ± 1.9% (external 95.3 ± 2.3%). For the classification of BI-RADS 2-3 versus BI-RADS 4-5, the dCNN reached a classification accuracy of 93.1% (external 95.3%), whereas the classification accuracy of humans yielded 91.6 ± 5.4% (external 94.1 ± 1.2%). The AUC on the internal dataset was 83.8 (external 96.7) for the dCNN and 84.6 ± 2.3 (external 90.9 ± 2.9) for the humans.
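The results above reduce the five-class dCNN output to binary groupings (e.g., BI-RADS 2 vs. BI-RADS 3-5) and report classification accuracy and the area under the ROC curve (AUC). A minimal sketch of how these two metrics are computed for such a binary grouping, in plain Python; the labels, scores, and 0.5 decision threshold below are illustrative assumptions, not the study's data:

```python
# Accuracy and ROC AUC for a binary BI-RADS grouping
# (1 = suspicious, BI-RADS 4-5; 0 = BI-RADS 2-3).
# Labels and scores are illustrative, not from the study.

def accuracy(y_true, y_pred):
    """Fraction of cases where the binary call matches the reference standard."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def roc_auc(y_true, y_score):
    """AUC as the probability that a positive case outranks a negative (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true  = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.10, 0.60, 0.80, 0.70, 0.20, 0.90, 0.30, 0.45]  # model output for class 1
y_pred  = [int(s >= 0.5) for s in y_score]                   # assumed 0.5 threshold

print(accuracy(y_true, y_pred))   # 0.75
print(roc_auc(y_true, y_score))   # 0.9375
```

The rank-based AUC used here is equivalent to integrating the ROC curve over all possible thresholds, which is why it can diverge from accuracy at any single fixed threshold.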

Conclusion: dCNNs may be used to mimic human decision-making in the evaluation of single US images of breast lesions according to the BI-RADS catalog. The technique reaches high accuracies and may serve for standardization of the highly observer-dependent US assessment.

Key points:
• Deep convolutional neural networks could be used to classify US breast lesions.
• The implemented dCNN with its sliding window approach reaches high accuracies in the classification of US breast lesions.
• Deep convolutional neural networks may serve for standardization in US BI-RADS classification.
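The sliding window approach mentioned in the key points applies a classifier to overlapping patches of the image, yielding a per-region class map that can highlight lesions. A minimal sketch in plain Python; the patch size, stride, and the stand-in threshold classifier are illustrative assumptions (the study uses a trained dCNN for this step):

```python
# Sketch of a sliding-window classification pass over a 2-D image.
# Patch size, stride, and the stand-in classifier are assumptions;
# the study applies a trained dCNN to each window instead.

def classify_patch(patch):
    """Stand-in for the dCNN: a simple mean-intensity threshold."""
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    return 1 if mean >= 0.5 else 0  # 1 = "lesion-like", 0 = background

def sliding_window_map(image, win=2, stride=1):
    """Classify every win x win patch; return the resulting class map."""
    h, w = len(image), len(image[0])
    return [
        [classify_patch([row[x:x + win] for row in image[y:y + win]])
         for x in range(0, w - win + 1, stride)]
        for y in range(0, h - win + 1, stride)
    ]

image = [
    [0.1, 0.2, 0.9, 0.8],
    [0.1, 0.3, 0.9, 0.9],
    [0.0, 0.1, 0.2, 0.1],
]
print(sliding_window_map(image))
```

Overlapping windows (stride smaller than the window) let every pixel be voted on by several patches, which is what allows the map to both detect and highlight a lesion rather than only label the whole image.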

Keywords: Artificial intelligence; Breast; Machine learning; Ultrasound.

