Comput Med Imaging Graph. 2021 Mar;88:101866.
doi: 10.1016/j.compmedimag.2021.101866. Epub 2021 Jan 12.

Deep Multi-Magnification Networks for multi-class breast cancer image segmentation

David Joon Ho et al. Comput Med Imaging Graph. 2021 Mar.

Abstract

Pathologic analysis of surgical excision specimens for breast carcinoma is important to evaluate the completeness of surgical excision and has implications for future treatment. This analysis is performed manually by pathologists reviewing histologic slides prepared from formalin-fixed tissue. In this paper, we present Deep Multi-Magnification Network trained by partial annotation for automated multi-class tissue segmentation by a set of patches from multiple magnifications in digitized whole slide images. Our proposed architecture with multi-encoder, multi-decoder, and multi-concatenation outperforms other single and multi-magnification-based architectures by achieving the highest mean intersection-over-union, and can be used to facilitate pathologists' assessments of breast cancer.
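The core idea of the abstract is that the network's input is not a single patch but a set of co-centered patches at several magnifications. The following is a minimal pure-Python sketch (not the authors' code; patch sizes and the pooling scheme are illustrative assumptions) of how such a multi-magnification input set can be formed: each successive patch covers twice the field of view at half the magnification, but keeps the same pixel dimensions.

```python
# Sketch (not the paper's implementation): building a multi-magnification
# patch set from one wide image region, represented as a list of rows.

def avg_pool2(img):
    """2x2 average pooling: halves each dimension, simulating one
    2x magnification drop."""
    h, w = len(img), len(img[0])
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w // 2)] for r in range(h // 2)]

def center_crop(img, size):
    """Crop a size x size window from the center of img."""
    h, w = len(img), len(img[0])
    r0, c0 = (h - size) // 2, (w - size) // 2
    return [row[c0:c0 + size] for row in img[r0:r0 + size]]

def multi_magnification_set(region, patch=4, levels=3):
    """Return `levels` co-centered patches of identical pixel size:
    each successive patch sees 2x the field of view at half the
    magnification of the previous one."""
    patches, current = [], region
    for _ in range(levels):
        patches.append(center_crop(current, patch))
        current = avg_pool2(current)  # drop to the next magnification
    return patches

# A 16x16 "region": every patch is 4x4 pixels, but the last one covers
# the full 16x16 field of view at quarter magnification.
region = [[float(r * 16 + c) for c in range(16)] for r in range(16)]
patches = multi_magnification_set(region)
```

In the real setting the patches would be read from a whole slide image at its native pyramid levels rather than synthesized by pooling, but the alignment of fields of view is the same.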

Keywords: Breast cancer; Computational pathology; Deep Multi-Magnification Network; Multi-class image segmentation; Partial annotation.


Conflict of interest statement

T.J.F. is the Chief Scientific Officer and a co-founder and equity holder of Paige.AI. M.G.H. is a consultant for Paige.AI and on the medical advisory board of Path-Presenter. D.J.H. and T.J.F. have intellectual property interests relevant to the work that is the subject of this paper. MSK has financial interests in Paige.AI and intellectual property interests relevant to the work that is the subject of this paper.

Figures

Figure 1:
Introduction of a Deep Single-Magnification Network (DSMN) and a Deep Multi-Magnification Network (DMMN) for tissue segmentation of whole slide images. (a) A DSMN looks at a patch from a single magnification from a whole slide image with limited field-of-view to generate the corresponding multi-class tissue segmentation prediction. (b) A DMMN looks at a set of patches from multiple magnifications from a whole slide image to have wider field-of-view to generate the corresponding multi-class tissue segmentation prediction. The DMMN can learn both cellular features from a higher magnification and architectural growth patterns from a lower magnification. Here, carcinoma is predicted in red, benign epithelial in blue, background in yellow, stroma in green, necrotic in gray, and adipose in orange.
Figure 2:
Block diagram of the proposed method with our Deep Multi-Magnification Network. The first step of our method is to partially annotate training whole slide images. After extracting training patches from the partial annotations and balancing the number of pixels between classes, our Deep Multi-Magnification Network is trained. The trained network is used for multi-class tissue segmentation of whole slide images.
Figure 3:
An example of partial annotation. (a) A whole slide image from breast tissue. (b) A partially annotated image where multiple tissue subtypes are annotated in distinct colors and white regions are unlabeled. (c) The partial annotation overlaid on the whole slide image. Subtype components are annotated without cropping while reducing the thickness of unlabeled regions between the subtype components. Here, carcinoma is annotated in red, benign epithelial in blue, background in yellow, stroma in green, necrotic in gray, and adipose in orange.
Figure 4:
CNN architectures for multi-class tissue segmentation of a Deep Single-Magnification Network (DSMN) in (a) utilizing a patch from a single magnification and Deep Multi-Magnification Networks (DMMNs) in (b-e) utilizing multiple patches in various magnifications. (a) U-Net [17] is used as our DSMN architecture. (b) Single-Encoder Single-Decoder (DMMN-S2) is a DMMN architecture where multiple patches are concatenated and used as an input to the U-Net architecture. (c) Multi-Encoder Single-Decoder (DMMN-MS) is a DMMN architecture having only one decoder. (d) Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S) is a DMMN architecture where feature maps from multiple magnifications are only concatenated at the final layer. (e) Our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3) is a DMMN architecture where feature maps are concatenated during intermediate layers to enrich feature maps in the decoder of the highest magnification.
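The distinguishing step of the proposed DMMN-M3, as described in the Figure 4 caption, is that feature maps from lower-magnification decoders are merged into the highest-magnification decoder at intermediate layers. A hypothetical pure-Python sketch of that merging step (the crop ratio, nearest-neighbor upsampling, and single-channel feature maps are simplifying assumptions, not the paper's exact operations): a lower-magnification feature map is center-cropped to the footprint of the high-magnification patch, upsampled back to matching resolution, and concatenated channel-wise.

```python
# Sketch (hypothetical): the crop-upsample-concatenate step behind
# "multi-concatenation" in a multi-magnification network.

def crop_and_upsample(fmap, scale):
    """Center-crop a square feature map (list of rows) to 1/scale of
    its size, then nearest-neighbor upsample back to the original
    size, so it aligns spatially with the higher-magnification map."""
    h = len(fmap)
    crop = h // scale
    r0 = (h - crop) // 2
    cropped = [row[r0:r0 + crop] for row in fmap[r0:r0 + crop]]
    up_rows = []
    for row in cropped:
        wide = [v for v in row for _ in range(scale)]  # widen columns
        up_rows.extend([wide] * scale)                 # repeat rows
    return up_rows

def concatenate_channels(maps):
    """Channel-wise concatenation of spatially aligned feature maps:
    each pixel becomes the list of values from every magnification."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[[m[r][c] for m in maps] for c in range(w)] for r in range(h)]

high = [[1.0] * 4 for _ in range(4)]                    # highest magnification
low = [[float(r) for _ in range(4)] for r in range(4)]  # lower magnification
aligned = crop_and_upsample(low, 2)   # 2x2 center region, upsampled to 4x4
fused = concatenate_channels([high, aligned])  # two channels per pixel
```

In the actual architectures these operations act on multi-channel convolutional feature maps, and DMMN-M3 applies the concatenation at several intermediate decoder layers rather than once.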
Figure 5:
Class balancing using elastic deformation in the training breast dataset.
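Figure 5 refers to balancing the number of pixels between classes before training. A plausible sketch of the bookkeeping behind such balancing (the replication rule is an assumption for illustration, not the authors' exact scheme): count annotated pixels per class, then generate additional augmented copies (e.g. via elastic deformation) of patches containing rare classes until pixel counts are roughly even.

```python
# Sketch (assumed mechanics): pixel counting and augmentation factors
# for class balancing of partially annotated training masks.
from collections import Counter

def class_pixel_counts(masks):
    """Count labeled pixels per class over all annotation masks
    (each mask is a list of rows of integer labels; 0 = unlabeled)."""
    counts = Counter()
    for mask in masks:
        for row in mask:
            for label in row:
                if label != 0:
                    counts[label] += 1
    return counts

def augmentation_factors(counts):
    """For each class, how many (e.g. elastically deformed) copies of
    its patches would bring its pixel count up to the count of the
    most frequent class."""
    top = max(counts.values())
    return {cls: max(1, round(top / n)) for cls, n in counts.items()}

masks = [[[1, 1, 2], [1, 1, 0]], [[1, 3, 0], [1, 1, 1]]]
counts = class_pixel_counts(masks)
factors = augmentation_factors(counts)  # rare classes get extra copies
```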
Figure 6:
Multi-class tissue segmentation predictions of a whole slide image from Dataset-I using two Deep Single-Magnification Networks (DSMNs), SegNet [16] and U-Net [17], and four Deep Multi-Magnification Networks (DMMNs), Single-Encoder Single-Decoder (DMMN-S2), Multi-Encoder Single-Decoder (DMMN-MS), Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S), and our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3).
Figure 7:
Multi-class tissue segmentation predictions of invasive ductal carcinoma (IDC) in red from Dataset-I using two Deep Single-Magnification Networks (DSMNs), SegNet [16] and U-Net [17], and four Deep Multi-Magnification Networks (DMMNs), Single-Encoder Single-Decoder (DMMN-S2), Multi-Encoder Single-Decoder (DMMN-MS), Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S), and our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3).
Figure 8:
Multi-class tissue segmentation predictions of a whole slide image from Dataset-I using two Deep Single-Magnification Networks (DSMNs), SegNet [16] and U-Net [17], and four Deep Multi-Magnification Networks (DMMNs), Single-Encoder Single-Decoder (DMMN-S2), Multi-Encoder Single-Decoder (DMMN-MS), Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S), and our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3).
Figure 9:
Multi-class tissue segmentation predictions of benign epithelial in blue from Dataset-I using two Deep Single-Magnification Networks (DSMNs), SegNet [16] and U-Net [17], and four Deep Multi-Magnification Networks (DMMNs), Single-Encoder Single-Decoder (DMMN-S2), Multi-Encoder Single-Decoder (DMMN-MS), Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S), and our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3).
Figure 10:
Multi-class tissue segmentation predictions of a whole slide image from Dataset-II using two Deep Single-Magnification Networks (DSMNs), SegNet [16] and U-Net [17], and four Deep Multi-Magnification Networks (DMMNs), Single-Encoder Single-Decoder (DMMN-S2), Multi-Encoder Single-Decoder (DMMN-MS), Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S), and our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3).
Figure 11:
Multi-class tissue segmentation predictions of ductal carcinoma in situ (DCIS) in red from Dataset-II using two Deep Single-Magnification Networks (DSMNs), SegNet [16] and U-Net [17], and four Deep Multi-Magnification Networks (DMMNs), Single-Encoder Single-Decoder (DMMN-S2), Multi-Encoder Single-Decoder (DMMN-MS), Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S), and our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3).
Figure 12:
Confusion matrices evaluating carcinoma, benign epithelial, stroma, necrotic, adipose, and background segmentation on Dataset-I based on two Deep Single-Magnification Networks (DSMNs), SegNet [16] and U-Net [17], and four Deep Multi-Magnification Networks (DMMNs), Single-Encoder Single-Decoder (DMMN-S2), Multi-Encoder Single-Decoder (DMMN-MS), Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S), and our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3).
Figure 13:
Confusion matrices evaluating carcinoma, benign epithelial, and stroma segmentation on Dataset-II based on two Deep Single-Magnification Networks (DSMNs), SegNet [16] and U-Net [17], and four Deep Multi-Magnification Networks (DMMNs), Single-Encoder Single-Decoder (DMMN-S2), Multi-Encoder Single-Decoder (DMMN-MS), Multi-Encoder Multi-Decoder Single-Concatenation (DMMN-M2S), and our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3). Necrotic, adipose, and background are excluded from the confusion matrices on Dataset-II due to the lack of pixels being evaluated.
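The confusion matrices in Figures 12 and 13 underpin the paper's headline metric, mean intersection-over-union (mIoU). A short sketch of how per-class IoU and mIoU are derived from a confusion matrix (the matrix values below are made up for illustration):

```python
# Sketch: per-class IoU and mean IoU from a confusion matrix, where
# conf[i][j] = number of pixels of true class i predicted as class j.

def iou_from_confusion(conf):
    """IoU_i = TP_i / (TP_i + FP_i + FN_i) for each class i."""
    n = len(conf)
    ious = []
    for i in range(n):
        tp = conf[i][i]
        fn = sum(conf[i]) - tp                       # class-i pixels missed
        fp = sum(conf[j][i] for j in range(n)) - tp  # others predicted as i
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return ious

conf = [[50, 5, 5],    # illustrative 3-class confusion matrix
        [10, 80, 10],
        [0, 5, 35]]
ious = iou_from_confusion(conf)
miou = sum(ious) / len(ious)
```

Because IoU penalizes both false positives and false negatives per class, the mean over classes is a stricter summary than overall pixel accuracy, which is why it is the comparison metric between the DSMN and DMMN architectures.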
Figure 14:
Multi-class tissue segmentation predictions of well-differentiated carcinomas in red from Dataset-II using our proposed Multi-Encoder Multi-Decoder Multi-Concatenation (DMMN-M3).

References

    1. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A, Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries, CA: A Cancer Journal for Clinicians 68 (6) (2018) 394–424. - PubMed
    2. DeSantis CE, Ma J, Gaudet MM, Newman LA, Miller KD, Sauer AG, Jemal A, Siegel RL, Breast cancer statistics, 2019, CA: A Cancer Journal for Clinicians 69 (6) (2019) 438–451. - PubMed
    3. Moo T-A, Choi L, Culpepper C, Olcese C, Heerdt A, Sclafani L, King TA, Reiner AS, Patil S, Brogi E, Morrow M, Zee KJV, Impact of margin assessment method on positive margin rate and total volume excised, Annals of Surgical Oncology 21 (1) (2014) 86–92. - PMC - PubMed
    4. Gage I, Schnitt SJ, Nixon AJ, Silver B, Recht A, Troyan SL, Eberlein T, Love SM, Gelman R, Harris JR, Connolly JL, Pathologic margin involvement and the risk of recurrence in patients treated with breast-conserving therapy, Cancer 78 (9) (1996) 1921–1928. - PubMed
    5. Fuchs TJ, Buhmann JM, Computational pathology: Challenges and promises for tissue analysis, Computerized Medical Imaging and Graphics 35 (7) (2011) 515–530. - PubMed
