Biomed Opt Express. 2022 Dec 5;14(1):18-36. doi: 10.1364/BOE.463839. eCollection 2023 Jan 1.

Super-resolution and segmentation deep learning for breast cancer histopathology image analysis

Aniwat Juhong et al. Biomed Opt Express.

Abstract

Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images, but such images are typically very large, making them inconvenient to manage, to transfer across a computer network, or to store on limited computer storage. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for both super-resolution enhancement of low-resolution images and characterization of cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The network provides a large enhancement in image quality: the peak signal-to-noise ratio and structural similarity of our results are over 30 dB and 0.93, respectively, superior to the results obtained with both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning. The jointly trained models' results are progressively improved and promising. We anticipate these custom CNNs can help resolve the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes in resource-constrained remote settings.
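As a hedged illustration of how the reported figures of merit could be computed, the sketch below evaluates PSNR and SSIM for an image pair and IoU and Dice for a segmentation pair using scikit-image and NumPy; array shapes, data ranges, and function names are assumptions for illustration, not the authors' evaluation code.

```python
# Minimal sketch of the metrics named in the abstract (PSNR, SSIM, IoU, Dice).
# Assumes 8-bit RGB NumPy arrays for the image pair and binary NumPy masks for
# the segmentation pair; all names here are illustrative.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(reference, generated):
    """PSNR (dB) and SSIM between a ground-truth image and a generated image."""
    psnr = peak_signal_noise_ratio(reference, generated, data_range=255)
    ssim = structural_similarity(reference, generated, data_range=255,
                                 channel_axis=-1)  # multichannel (RGB) SSIM
    return psnr, ssim

def overlap_scores(gt_mask, pred_mask, eps=1e-7):
    """Intersection over Union and Dice similarity for binary masks."""
    gt = gt_mask.astype(bool)
    pred = pred_mask.astype(bool)
    inter = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (gt.sum() + pred.sum() + eps)
    return iou, dice
```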

Conflict of interest statement

The authors declare no conflicts of interest related to this article.

Figures

Fig. 1.
The workflow of super-resolution and segmentation deep learning. (a) Fresh breast tumor tissues. (b) The corresponding H&E stained tissue slides. (c) A commercial microscope (Nikon Eclipse Ci) for capturing the H&E stained tissue slide images. (d) High-resolution images acquired by the microscope. (e) Simulated low-resolution images. (f) Training of the SRGAN-ResNeXt network. (g) An unseen low-resolution image. (h) The generator model from SRGAN-ResNeXt. (i) The generated high-resolution image. (j) The Inception U-net model for segmentation. (k) The segmented H&E image.
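A minimal sketch of the inference path in panels (g)-(k), assuming the trained generator and segmentation networks are available as saved Keras models; the file names, preprocessing, and thresholding below are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: a low-resolution H&E patch is upscaled by the trained
# SRGAN-ResNeXt generator, then the enhanced patch is segmented by the
# Inception U-net. Model and image file names are assumptions.
import numpy as np
from tensorflow import keras
from skimage import io

generator = keras.models.load_model("srgan_resnext_generator.h5", compile=False)
segmenter = keras.models.load_model("inception_unet.h5", compile=False)

lr_patch = io.imread("low_res_patch.png").astype(np.float32) / 255.0
sr_patch = generator.predict(lr_patch[np.newaxis, ...])[0]   # super-resolved patch
mask = segmenter.predict(sr_patch[np.newaxis, ...])[0]       # per-pixel nuclei probabilities
io.imsave("segmented.png", (mask[..., 0] > 0.5).astype(np.uint8) * 255)
```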
Fig. 2.
Super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt). (a) Generator model. (b) Discriminator model. (c) The combined (adversarial) model used for training the generator.
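For readers unfamiliar with the combined model in panel (c), the sketch below shows the standard way such an adversarial model is assembled in Keras: the discriminator is frozen and the generator is trained through the stacked generator-discriminator graph. Input size, losses, loss weights, and optimizer settings are placeholders, not the values used in the paper.

```python
# Minimal Keras sketch of an adversarial (combined) model like Fig. 2(c).
from tensorflow import keras

def build_adversarial(generator, discriminator, lr_shape=(64, 64, 3)):
    discriminator.trainable = False              # only the generator updates here
    lr_input = keras.Input(shape=lr_shape)       # low-resolution input
    sr_output = generator(lr_input)              # super-resolved image
    validity = discriminator(sr_output)          # real/fake score
    adversarial = keras.Model(lr_input, [sr_output, validity])
    adversarial.compile(
        loss=["mse", "binary_crossentropy"],     # content loss + adversarial loss
        loss_weights=[1.0, 1e-3],                # illustrative weighting
        optimizer=keras.optimizers.Adam(1e-4),
    )
    return adversarial
```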
Fig. 3.
Dataset preparation for training SRGAN-ResNeXt: images cropped with a 50% overlapping area. (a) Large field-of-view H&E image. (b) Small patches of the large image in (a) with a 50% overlapping area.
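A small sketch of this patch extraction, assuming square patches and a stride of half the patch size (the 50% overlapping area); the patch size is an illustrative assumption.

```python
# Tile a large H&E image into square patches with 50% overlap.
import numpy as np

def crop_with_overlap(image, patch=128):
    stride = patch // 2                      # 50% overlapping area
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)
```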
Fig. 4.
Inception U-net architecture for H&E image segmentation. Each blue box corresponds to a multi-channel feature map; the value above each box indicates the number of channels.
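A hedged sketch of an Inception-style block of the kind that replaces the plain double convolution of a standard U-net: parallel 1x1, 3x3, 5x5, and pooled branches concatenated into one multi-channel feature map. The branch layout and filter counts are assumptions, not the exact values printed over the boxes in the figure.

```python
# Inception-style block: parallel branches concatenated along the channel axis.
from tensorflow.keras import layers

def inception_block(x, filters):
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(b3)
    b5 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(b5)
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(filters, 1, padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])  # multi-channel feature map
```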
Fig. 5.
Jointly trained SRGAN-ResNeXt Model and Inception U-net Model. (a) The assembled models for the jointly trained generator (JTG) Model. (b) The assembled models for the jointly trained Inception U-net (JTIU) Model.
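A minimal sketch of how such a joint model could be assembled, assuming the individually trained generator and Inception U-net are loaded as pre-trained Keras models and fine-tuned end to end; file names, input size, losses, and learning rate are illustrative assumptions.

```python
# Chain the pre-trained generator and U-net: low-res -> super-resolved -> mask.
from tensorflow import keras

generator = keras.models.load_model("srgan_resnext_generator.h5", compile=False)
unet = keras.models.load_model("inception_unet.h5", compile=False)

lr_input = keras.Input(shape=(64, 64, 3))   # assumed low-resolution input size
sr_image = generator(lr_input)
seg_mask = unet(sr_image)

joint = keras.Model(lr_input, [sr_image, seg_mask])
joint.compile(
    loss=["mse", "binary_crossentropy"],     # reconstruction + segmentation terms
    optimizer=keras.optimizers.Adam(1e-5),   # small rate for fine-tuning
)
```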
Fig. 6.
The whole slide image (WSI) of a breast tumor H&E slide and the results of our deep learning models. (a1, b1, and c1) High-resolution images of the WSI from different areas. (a2, b2, and c2) The corresponding low-resolution images. (a3, b3, and c3) The reconstructed high-resolution images using our deep learning model (SRGAN-ResNeXt). (a4, b4, and c4) The nuclei segmentations corresponding to (a3, b3, and c3), obtained with the Inception U-net model.
Fig. 7.
H&E image segmentation of the low-resolution image and the enhanced-resolution image. (a1-a2) The low-resolution image and its segmentation (output of the Inception U-net). (b1-b2) The enhanced-resolution image (output of the SRGAN-ResNeXt) and its segmentation (output of the Inception U-net). (c1-c2) The ground-truth high-resolution image and its segmentation. (g) Ground truth preparation for both the high-resolution image and the segmented image.
Fig. 8.
Comparison of the results of our ResNeXt-based deep-learning model against bicubic interpolation of the low-resolution image, SRGAN, SRGAN-Transformer, and SRGAN-DenseNet. (a) The original ground-truth image. (b) Bicubic interpolation of the low-resolution image. (c) The SRGAN result. (d) The SRGAN-Transformer result. (e) The SRGAN-DenseNet result. (f) Our model's result. (g1-g6) Enlarged images of the red boxes in (a-f), respectively. (h1-h6) Enlarged images of the yellow boxes in (a-f), respectively.
Fig. 9.
Comparison of results between the traditional U-net and the Inception U-net using H&E images and ground truth from the dataset [33]. (a) An H&E image with a low density of nuclei. (b) An H&E image with a high density of nuclei. The results from both models are color-coded such that green denotes false-negative, yellow denotes true-positive, and red denotes false-positive pixels.
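A short sketch of this color coding, assuming binary ground-truth and predicted masks as NumPy arrays; the function name is hypothetical.

```python
# Per-pixel comparison of a predicted mask against ground truth:
# green = false negative, yellow = true positive, red = false positive.
import numpy as np

def error_overlay(gt_mask, pred_mask):
    gt = gt_mask.astype(bool)
    pred = pred_mask.astype(bool)
    overlay = np.zeros(gt.shape + (3,), dtype=np.uint8)
    overlay[gt & ~pred] = (0, 255, 0)      # false negative -> green
    overlay[gt & pred] = (255, 255, 0)     # true positive  -> yellow
    overlay[~gt & pred] = (255, 0, 0)      # false positive -> red
    return overlay
```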
Fig. 10.
The improvement of the SRGAN-ResNeXt and Inception U-net models after joint training. (a) Low-resolution input image. (b1-b2) Results from the individually trained ResNeXt generator and Inception U-net models. (c1-c2) Results from the jointly trained models. (d1-d2) High-resolution and segmentation ground-truth images.

References

    1. Litjens G., Sánchez C. I., Timofeeva N., Hermsen M., Nagtegaal I., Kovacs I., Hulsbergen-Van De Kaa C., Bult P., Van Ginneken B., Van Der Laak J., “Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis,” Sci. Rep. 6(1), 26286–11 (2016). doi:10.1038/srep26286
    2. Mendez A. J., Tahoces P. G., Lado M. J., Souto M., Vidal J. J., “Computer-aided diagnosis: Automatic detection of malignant masses in digitized mammograms,” Med. Phys. 25(6), 957–964 (1998). doi:10.1118/1.598274
    3. Bogoch I. I., Koydemir H. C., Tseng D., Ephraim R. K., Duah E., Tee J., Andrews J. R., Ozcan A., “Evaluation of a mobile phone-based microscope for screening of Schistosoma haematobium infection in rural Ghana,” Am. J. Trop. Med. Hyg. 96(6), 1468–1471 (2017). doi:10.4269/ajtmh.16-0912
    4. Petti C. A., Polage C. R., Quinn T. C., Ronald A. R., Sande M. A., “Laboratory medicine in Africa: a barrier to effective health care,” Clin. Infect. Dis. 42(3), 377–382 (2006). doi:10.1086/499363
    5. Colley D. G., Bustinduy A. L., Secor W. E., King C. H., “Human schistosomiasis,” Lancet 383(9936), 2253–2264 (2014). doi:10.1016/S0140-6736(13)61949-2
