Sci Rep. 2024 Jan 28;14(1):2335. doi: 10.1038/s41598-024-52833-7.

Dual contrastive learning based image-to-image translation of unstained skin tissue into virtually stained H&E images

Muhammad Zeeshan Asaf et al.

Abstract

Staining is a crucial step in histopathology that prepares tissue sections for microscopic examination. Hematoxylin and eosin (H&E) staining, also known as basic or routine staining, is used in 80% of histopathology slides worldwide. To enhance the histopathology workflow, recent research has focused on integrating generative artificial intelligence and deep learning models. These models have the potential to improve staining accuracy, reduce staining time, and minimize the use of hazardous chemicals, making histopathology a safer and more efficient field. In this study, we introduce a novel three-stage, dual contrastive learning-based, image-to-image generative (DCLGAN) model for virtually applying an "H&E stain" to unstained skin tissue images. The proposed model uses a learning setting comprising two pairs of generators and discriminators. By employing contrastive learning, our model maximizes the mutual information between traditional H&E-stained and virtually stained H&E patches. Our dataset consists of pairs of unstained and H&E-stained images, scanned with a brightfield microscope at 20× magnification, providing a comprehensive set of training and testing images for evaluating the efficacy of the proposed model. Two metrics, Fréchet Inception Distance (FID) and Kernel Inception Distance (KID), were used to quantitatively evaluate the virtually stained slides. Our analysis revealed that the average FID score between virtually stained and H&E-stained images (80.47) was considerably lower than that between unstained and virtually stained slides (342.01) or between unstained and H&E-stained slides (320.4), indicating the similarity between virtual and H&E stains. Similarly, the mean KID score between H&E-stained and virtually stained images (0.022) was significantly lower than the mean KID score between unstained and H&E-stained (0.28) or unstained and virtually stained (0.31) images. In addition, a group of experienced dermatopathologists evaluated traditional and virtually stained images and demonstrated an average agreement of 78.8% and 90.2% for paired and single virtually stained image evaluations, respectively. Our study demonstrates that the proposed three-stage dual contrastive learning-based image-to-image generative model is effective in generating virtually stained images, as indicated by the quantitative metrics and grader evaluations. Moreover, our findings suggest that GAN models have the potential to replace traditional H&E staining, reducing both turnaround time and environmental impact. This study highlights the promise of virtual staining as a viable alternative to traditional staining techniques in histopathology.
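
For readers unfamiliar with the FID metric cited above, the sketch below shows how it is conventionally computed from InceptionV3 feature vectors of two image sets. This is a generic illustration, not the authors' code; the function name and the assumption that pooled Inception features have already been extracted are ours.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """FID between two sets of Inception features, each of shape [n_images, dim].

    Fits a Gaussian to each feature set and returns
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2}).
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)  # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

A lower FID indicates that the two feature distributions are closer, which is why the reported score of 80.47 between virtually stained and H&E-stained images signals high similarity.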

Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Traditional staining workflow (upper) vs. virtual staining workflow (lower). The chemical staining process is replaced with deep learning-based virtual staining.
Figure 2
Skin tissue dataset. Row 1 shows three whole slide tissue sample pairs (unstained and stained with H&E). Rows 2–5 show patch pairs (unstained and stained with H&E).
Figure 3
Overview of the proposed virtual H&E staining workflow showing preprocessing, training, and inference stages.
Figure 4
The DCLGAN architecture learns two mappings, G1: A → B and G2: B → A. The encoder halves of G1 and G2 are denoted G1enc and G2enc, respectively; G1enc together with the projection head HA produces the embedding for domain A, while G2enc with HB produces the embedding for domain B.
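
As an illustration of how the two generator-discriminator pairs in Figure 4 might be trained jointly, the following PyTorch sketch shows one optimization step with least-squares adversarial losses and a patchwise contrastive term. All module and function names (G1, G2, D_a, D_b, nce_loss) are placeholders under our assumptions; the published DCLGAN objective also includes terms (e.g., identity and similarity losses) omitted here for brevity.

```python
import torch
import torch.nn.functional as F

def dclgan_step(real_a, real_b, G1, G2, D_a, D_b, opt_g, opt_d,
                nce_loss, lambda_nce: float = 1.0):
    """One schematic DCLGAN training step (illustrative, not the paper's exact code).

    G1 maps domain A (unstained) -> B (H&E); G2 maps B -> A.
    """
    fake_b = G1(real_a)  # virtually stained output
    fake_a = G2(real_b)  # virtually de-stained output

    # Generator update: fool both discriminators + patchwise NCE terms.
    opt_g.zero_grad()
    pred_fb, pred_fa = D_b(fake_b), D_a(fake_a)
    loss_g = (F.mse_loss(pred_fb, torch.ones_like(pred_fb))
              + F.mse_loss(pred_fa, torch.ones_like(pred_fa))
              + lambda_nce * (nce_loss(real_a, fake_b) + nce_loss(real_b, fake_a)))
    loss_g.backward()
    opt_g.step()

    # Discriminator update: real patches -> 1, generated (detached) patches -> 0.
    opt_d.zero_grad()
    loss_d = 0.0
    for D, real, fake in ((D_b, real_b, fake_b), (D_a, real_a, fake_a)):
        pred_real, pred_fake = D(real), D(fake.detach())
        loss_d = loss_d + (F.mse_loss(pred_real, torch.ones_like(pred_real))
                           + F.mse_loss(pred_fake, torch.zeros_like(pred_fake)))
    loss_d.backward()
    opt_d.step()
    return loss_g.item(), loss_d.item()
```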
Figure 5
Patchwise contrastive learning maximizes mutual information between input and output patches, enabling one-sided translation in unpaired settings. A multilayer patchwise contrastive loss pulls each output patch toward its corresponding input patch (positive example v+) and away from other random patches (negative examples v−).
Figure 6
The Patch Noise Contrastive Estimation (PatchNCE) loss encourages each generated image patch to resemble its corresponding real input patch (shown in blue) while pushing it away from unrelated patches (shown in red).
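
A minimal sketch of the PatchNCE idea behind Figures 5 and 6: embeddings of output patches are pulled toward the embedding of the input patch at the same location (v+) and pushed away from embeddings of other patches (v−) via a temperature-scaled cross-entropy. The tensor shapes and the temperature value are our assumptions.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_out: torch.Tensor, feat_in: torch.Tensor,
                   tau: float = 0.07) -> torch.Tensor:
    """Patchwise noise-contrastive estimation loss (illustrative sketch).

    feat_out, feat_in: [num_patches, dim] embeddings of output and input patches
    sampled at the same spatial locations. Patch i of the output should match
    patch i of the input (positive, v+) and repel all other patches (negatives, v-).
    """
    q = F.normalize(feat_out, dim=1)
    k = F.normalize(feat_in, dim=1)
    logits = q @ k.t() / tau                            # [N, N] similarity matrix
    targets = torch.arange(q.size(0), device=q.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)
```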
Figure 7
The top image demonstrates the patchy appearance of DCLGAN's raw output, with visible tiling artifacts. The bottom image shows the result after overlapping and blending patches: a smooth, seamless image free of artifacts.
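
The caption reports that overlapping and blending patches removes the tiling artifacts, but the exact blending scheme is not spelled out here. The sketch below illustrates one common approach under our assumptions: weighting each patch with a Hann window and normalizing by the accumulated weights, so borders contribute little and overlaps average out.

```python
import numpy as np

def blend_patches(patches, coords, out_shape, patch_size):
    """Stitch overlapping output patches into one image (illustrative sketch).

    patches: list of [patch_size, patch_size, 3] model outputs
    coords:  list of (row, col) top-left positions on the full image
    Assumes the patch grid covers every pixel of the output at least once.
    """
    canvas = np.zeros(out_shape, dtype=np.float64)
    weight = np.zeros(out_shape[:2], dtype=np.float64)
    # Separable raised-cosine (Hann) window, kept strictly positive.
    w1d = np.hanning(patch_size) + 1e-3
    w2d = np.outer(w1d, w1d)
    for patch, (r, c) in zip(patches, coords):
        canvas[r:r + patch_size, c:c + patch_size] += patch * w2d[..., None]
        weight[r:r + patch_size, c:c + patch_size] += w2d
    return canvas / weight[..., None]  # normalize by accumulated weights
```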
Figure 8
The web application interface for dermatopathologists to evaluate virtual and histologically stained tissues.
Figure 9
Comparative results of different virtual staining methods: the input unstained tissue, the ground-truth H&E stain, and the virtual stains produced by the CycleGAN, CUT GAN, and proposed DCLGAN models.
Figure 10
Unstained, H&E-stained, and virtually stained patches, along with the corresponding average FID and KID scores.
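
To complement the FID sketch above, this is how the KID scores in Figure 10 are conventionally computed: an unbiased maximum mean discrepancy (MMD) estimate with the standard cubic polynomial kernel over Inception features. The function name and input shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def kernel_inception_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Unbiased KID estimate between two feature sets (illustrative sketch).

    Uses the polynomial kernel k(x, y) = (x.y / d + 1)^3 on Inception
    features of shape [n, d].
    """
    d = feats_a.shape[1]
    k_aa = (feats_a @ feats_a.T / d + 1.0) ** 3
    k_bb = (feats_b @ feats_b.T / d + 1.0) ** 3
    k_ab = (feats_a @ feats_b.T / d + 1.0) ** 3
    m, n = len(feats_a), len(feats_b)
    # Unbiased MMD^2: exclude the diagonal of the within-set kernel matrices.
    term_a = (k_aa.sum() - np.trace(k_aa)) / (m * (m - 1))
    term_b = (k_bb.sum() - np.trace(k_bb)) / (n * (n - 1))
    return float(term_a + term_b - 2.0 * k_ab.mean())
```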
Figure 11
Grading of single virtually stained images on seven features: color, resolution, sharpness, contrast, brightness, uniformity of tissue illumination, and absence of artifacts; each feature was rated Adequate (blue) or Inadequate (pink).
Figure 12
Comparison of assessments by three graders of the similarity and quality of H&E-stained and virtually stained images. The evaluators compared each image pair side by side and rated agreement, disagreement, or neutrality for each feature.
