2020 Nov 4;8(1):38.
doi: 10.1007/s13755-020-00131-7. eCollection 2020 Dec.

Fusion of whole and part features for the classification of histopathological image of breast tissue


Chiranjibi Sitaula et al. Health Inf Sci Syst. .

Abstract

Purpose: Nowadays, Computer-Aided Diagnosis (CAD) models, particularly those based on deep learning, are widely used to analyze histopathological images in breast cancer diagnosis. However, due to the limited availability of such images, it is tedious to train deep learning models that require a huge amount of training data. In this paper, we propose a new deep learning-based CAD framework that can work with a smaller amount of training data.

Methods: We use pre-trained models to extract image features that can then be used with any classifier. Our proposed features are extracted by the fusion of two different types of features (foreground and background) at two levels (whole-level and part-level). Foreground and background features capture information about different structures and their layout in microscopic images of breast tissues. Similarly, part-level and whole-level features are useful for detecting interesting regions scattered across high-resolution histopathological images at the local and whole-image levels. At each level, we use VGG16 models pre-trained on the ImageNet and Places datasets to extract foreground and background features, respectively. All features are extracted from mid-level pooling layers of these models.
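The fusion described above can be sketched with NumPy, using random arrays as stand-ins for the VGG16 pooling-layer outputs (in the actual method these would come from models pre-trained on ImageNet and Places; all shapes, function names, and the layer choice p3/p4 here are illustrative assumptions, not the authors' exact configuration):

```python
import numpy as np

def pool_to_vector(fmap):
    # Global average pooling: reduce an (H, W, C) feature map to a (C,) vector.
    return fmap.mean(axis=(0, 1))

def fuse_features(whole_maps, part_maps):
    # whole_maps / part_maps: dicts keyed by (stream, pooling-layer index),
    # where stream 'fg' denotes the foreground (ImageNet) model and 'bg'
    # the background (Places) model, at mid-level pooling layers p3 and p4.
    vecs = []
    for maps in (whole_maps, part_maps):          # whole-level, then part-level
        for stream in ('fg', 'bg'):               # foreground, then background
            for layer in (3, 4):                  # mid-level pooling layers
                vecs.append(pool_to_vector(maps[(stream, layer)]))
    return np.concatenate(vecs)                   # one fused descriptor per image

# Hypothetical stand-in feature maps: in VGG16, pooling layer p3 has
# 256 channels and p4 has 512; spatial size is illustrative.
rng = np.random.default_rng(0)
def fake_maps():
    return {(s, l): rng.standard_normal((28, 28, 256 if l == 3 else 512))
            for s in ('fg', 'bg') for l in (3, 4)}

fused = fuse_features(fake_maps(), fake_maps())
print(fused.shape)  # 2 levels x 2 streams x (256 + 512) channels = (3072,)
```

The fused vector can then be fed to any off-the-shelf classifier, which is what makes the approach usable with limited training data.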

Results: We show that the proposed fused features with a Support Vector Machine (SVM) classifier produce better classification accuracy than recent methods on the BACH dataset, and that our approach is orders of magnitude faster than the best-performing recent method (EMS-Net).
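The classification step could be sketched with scikit-learn as follows (a plausible setup, not the authors' exact configuration; the kernel choice, hyperparameters, and the random stand-in data are assumptions):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical fused descriptors: one 3072-D vector per image, with
# four BACH classes (normal, benign, in situ, invasive).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3072))
y = rng.integers(0, 4, size=40)

# Standardizing features before an RBF-kernel SVM is common practice
# when the feature scales vary across pooling layers.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict(X[:5]).shape)  # class predictions for five images
```

Because only the SVM is trained (the VGG16 backbones stay frozen), training and prediction are cheap compared with fine-tuning an ensemble such as EMS-Net.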

Conclusion: We believe that our method offers a viable alternative for the diagnosis of breast cancer because of its performance and prediction time.

Keywords: Breast cancer; Computer-aided diagnosis; Deep learning; Histology; Histopathological images; Image classification.


Conflict of interest statement

We confirm that no known conflicts of interest exist.

Figures

Fig. 1
Grad-CAM visualization of five pooling layers of VGG16 for the input image at whole-level and part-level using both foreground and background features, where p1 to p5 represent the five pooling layers of VGG16 models
Fig. 2
Sampled example H&E images from four classes of the BACH dataset: a normal, b benign, c in situ, and d invasive
Fig. 3
Diagram showing the features map of the corresponding input image at whole-level and part-level using both foreground and background features, where p1 to p5 represent the five pooling layers of VGG16 models
Fig. 4
Block diagram of the proposed method, where FWj(I) and FPj(I) represent the foreground features extracted from the jth pooling layer (j ∈ {3, 4}) at whole-level and part-level, respectively. Similarly, BWj(I) and BPj(I) represent the background features extracted from the jth pooling layer at whole-level and part-level, respectively
Fig. 5
Patch-level feature extraction for the proposed method. Note that the diagram utilizes the jth pooling layer to extract the aggregated foreground (FPj(I)) and background (BPj(I)) features using both foreground (F) and background (B) information
Fig. 6
Confusion matrix of our method on the testing split of a Set 1, b Set 2, c Set 3, d Set 4, and e Set 5

