Sci Rep. 2024 Dec 28;14(1):31366. doi: 10.1038/s41598-024-82839-0.

Synthetic polarization-sensitive optical coherence tomography using contrastive unpaired translation


Thanh Dat Le et al. Sci Rep. 2024.

Abstract

Polarization-sensitive optical coherence tomography (PS-OCT) measures the polarization state of backscattered light from tissues and provides valuable insights into the birefringence properties of biological tissues. In this study, contrastive unpaired translation (CUT) was used to generate a synthetic PS-OCT image from a single OCT image. This approach addresses the challenges of methods that require extensive labeled datasets and rely only on pixel-wise correlations, which make it difficult to efficiently reproduce the periodic patterns observed in PS-OCT images. The CUT model captures birefringence patterns by leveraging patch-wise correlations from unpaired data, allowing it to learn the underlying structural features of biological tissues responsible for birefringence. To demonstrate the performance of the proposed approach, three generative models (Pix2pix, CycleGAN, and CUT) were compared on an in vivo dataset of injured mouse tendons over a six-week healing period. CUT outperformed Pix2pix and CycleGAN, producing high-fidelity synthetic PS-OCT images that closely matched the original PS-OCT images. Pearson correlation and two-way ANOVA tests confirmed the superior performance of CUT over the comparison models (p < 0.0001). Additionally, a ResNet-152 classification model was used to assess tissue damage and achieved an accuracy of up to 90.13% compared with the original PS-OCT images. This research demonstrates that CUT is superior to conventional methods for generating high-quality synthetic PS-OCT images, offering improvements in efficiency and image fidelity in most scenarios.
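
The abstract does not spell out CUT's patch-wise contrastive objective, so the following is a minimal PatchNCE-style sketch in PyTorch under common assumptions (patch features drawn from a shared encoder, cosine similarity, a temperature of 0.07). It only illustrates how same-location patches act as positives and other locations as negatives; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_fake, feat_real, temperature=0.07):
    """Illustrative PatchNCE-style InfoNCE loss over patch features.

    Each patch of the generated PS-OCT image (query) should match the feature
    of the input OCT patch at the same spatial location (positive) and differ
    from patches at other locations (negatives).

    feat_fake, feat_real: (num_patches, feat_dim) tensors from a shared encoder.
    The temperature value is an assumption, not taken from the paper.
    """
    q = F.normalize(feat_fake, dim=1)   # queries from the generated image
    k = F.normalize(feat_real, dim=1)   # keys from the input OCT image
    logits = q @ k.t() / temperature    # (num_patches, num_patches) cosine similarities
    targets = torch.arange(q.size(0), device=q.device)  # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

# Toy usage with random features standing in for encoder outputs.
fake_feats = torch.randn(256, 128)
real_feats = torch.randn(256, 128)
print(patch_nce_loss(fake_feats, real_feats).item())
```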


Conflict of interest statement

Declarations. Ethics statement: The authors confirm that all methods are based on relevant guidelines and regulations and that the research was conducted in accordance with the ARRIVE guidelines (https://arriveguidelines.org). Competing interests: The authors declare no competing interests.

Figures

Fig. 1
Synthetic PS-OCT images of the injured mouse tendon fibrous structure during the healing process. (a) Original OCT images. Synthetic PS-OCT from (c) Pix2pix, (e) CycleGAN, and (g) CUT. Difference maps between the (b) original PS-OCT and (d) Pix2pix, (f) CycleGAN, and (h) CUT. Tendon fibrous structure healing process at the end of (i) week 0, (ii) week 2, (iii) week 4, and (iv) week 6. (k-n) Quantitative phase-difference mean values for the five regions of interest (ROIs, purple dotted boxes). Blue arrows: original data flow; yellow arrows: training data flow; green arrow: pre-trained GAN pathway. Scale bar: 1 mm.
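
Fig. 1(k-n) reports mean phase-difference values within rectangular ROIs. The short NumPy sketch below shows that kind of ROI averaging; the array shape and ROI coordinates are illustrative placeholders, not values from the paper.

```python
import numpy as np

def roi_means(phase_map, rois):
    """Average a 2-D phase-difference map over rectangular ROIs.

    phase_map: 2-D array of phase-difference values.
    rois: list of (row_start, row_end, col_start, col_end) boxes.
    """
    return [float(phase_map[r0:r1, c0:c1].mean()) for r0, r1, c0, c1 in rois]

# Illustrative example with a random map and five made-up ROI boxes.
phase_map = np.random.rand(512, 1024)
rois = [(100, 160, 200 + 150 * i, 260 + 150 * i) for i in range(5)]
print(roi_means(phase_map, rois))
```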
Fig. 2
Similarity evaluations of the repetition frequencies in the Fourier domains of the (a) original PS-OCT compared to those of the (b) synthetic images obtained using Pix2pix, CycleGAN, and CUT based on the (i-iv) number of epochs (ns: not significant, ****: 0.0001 < p < 0.001).
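
The specific Fourier-domain similarity metric behind Fig. 2 is not described on this page. As one plausible reading, the sketch below compares the log-magnitude spectra of an original and a synthetic PS-OCT B-scan with a Pearson correlation, so that shared repetition (birefringence banding) frequencies raise the score. All names and the random test data are illustrative.

```python
import numpy as np

def magnitude_spectrum(img):
    """Centered log-magnitude spectrum of a 2-D image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(f))

def spectral_correlation(img_a, img_b):
    """Pearson correlation between the magnitude spectra of two images,
    used here as a proxy for how well periodic content is reproduced."""
    a = magnitude_spectrum(img_a).ravel()
    b = magnitude_spectrum(img_b).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Illustrative comparison of an "original" and a "synthetic" B-scan (random data).
original = np.random.rand(256, 512)
synthetic = np.random.rand(256, 512)
print(spectral_correlation(original, synthetic))
```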
Fig. 3
Similarity evaluations in the spatial domain based on the (a) structural similarity index measure (SSIM) and (b) Fréchet inception distance (FID) (ns: not significant, ****: 0.0001 < p < 0.001).
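
For the spatial-domain metrics in Fig. 3, a minimal sketch using scikit-image's `structural_similarity` and torchmetrics' `FrechetInceptionDistance` is given below. It assumes 8-bit grayscale B-scans replicated to three channels, uses a small Inception feature layer so the toy example stays numerically stable, and is not the authors' evaluation code.

```python
import numpy as np
import torch
from skimage.metrics import structural_similarity
from torchmetrics.image.fid import FrechetInceptionDistance

# --- SSIM between one original and one synthetic PS-OCT B-scan (toy data) ---
original = (np.random.rand(256, 512) * 255).astype(np.uint8)
synthetic = (np.random.rand(256, 512) * 255).astype(np.uint8)
ssim_value = structural_similarity(original, synthetic, data_range=255)
print(f"SSIM: {ssim_value:.3f}")

# --- FID between sets of original and synthetic images (toy data) ---
# feature=64 (smallest Inception layer) keeps this small example stable;
# 2048 is the usual choice when full image sets are available.
fid = FrechetInceptionDistance(feature=64)
real_imgs = torch.randint(0, 256, (64, 3, 128, 128), dtype=torch.uint8)
fake_imgs = torch.randint(0, 256, (64, 3, 128, 128), dtype=torch.uint8)
fid.update(real_imgs, real=True)
fid.update(fake_imgs, real=False)
print(f"FID: {fid.compute().item():.2f}")
```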
Fig. 4
Classification of damage levels based on training ResNet-152 on the original and synthetic PS-OCT images generated using Pix2pix, CycleGAN, and CUT for (a) week 0, (b) week 2, (c) week 4, and (d) week 6. Class 1: almost all of the collagen in the image is aligned in parallel; Class 2: more than 50% of the collagen is aligned; Class 3: less than 50% of the collagen is aligned; Class 4: the direction of collagen alignment is difficult to observe clearly (ns: not significant, ***: 0.001 < p < 0.01, ****: 0.0001 < p < 0.001).
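
A minimal sketch of a ResNet-152 classifier for the four damage classes above, via torchvision, follows; the input size, optimizer, and dummy batch are assumptions, and this is a single illustrative training step rather than the paper's training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Four damage classes as defined in the caption (degree of collagen alignment).
NUM_CLASSES = 4

# Build ResNet-152 and replace the final classifier head.
# (weights=models.ResNet152_Weights.IMAGENET1K_V1 would load ImageNet pretraining.)
model = models.resnet152(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 3-channel PS-OCT crops;
# real training would iterate over a labeled DataLoader.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"toy loss: {loss.item():.3f}")
```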
Fig. 5
Architectures of the GAN models: (a) training and (b) test workflows for Pix2pix, CycleGAN, and CUT; (c) model performance comparisons with (i-ii) generator/discriminator losses. G: generator, D: discriminator.
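
Fig. 5(c) tracks generator/discriminator losses during training. The sketch below shows a single alternating G/D update with a least-squares adversarial objective on tiny placeholder networks (real Pix2pix/CycleGAN/CUT generators and discriminators are far larger, and CUT adds the patch-wise contrastive term sketched earlier); it is meant only to illustrate how those loss curves arise.

```python
import torch
import torch.nn as nn

# Placeholder networks: a tiny convolutional generator (OCT -> synthetic PS-OCT)
# and patch discriminator; real implementations use ResNet/U-Net generators.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 4, stride=2, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
mse = nn.MSELoss()  # least-squares GAN objective

oct_batch = torch.randn(4, 1, 64, 64)    # unpaired input OCT B-scans (toy data)
psoct_batch = torch.randn(4, 1, 64, 64)  # unpaired real PS-OCT B-scans (toy data)

# Discriminator update: real PS-OCT -> 1, generated PS-OCT -> 0.
fake = G(oct_batch).detach()
d_real, d_fake = D(psoct_batch), D(fake)
d_loss = mse(d_real, torch.ones_like(d_real)) + mse(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator update: fool the discriminator (contrastive/cycle terms omitted).
pred_fake = D(G(oct_batch))
g_loss = mse(pred_fake, torch.ones_like(pred_fake))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
print(f"toy D loss {d_loss.item():.3f}, G loss {g_loss.item():.3f}")
```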
Fig. 6
(a) Classification labeling. (b) ResNet-152 architecture.
