Biomed Opt Express. 2022 Dec 7;14(1):65-80. doi: 10.1364/BOE.476737. eCollection 2023 Jan 1.

Deep learning-based high-speed, large-field, and high-resolution multiphoton imaging


Zewei Zhao et al. Biomed Opt Express.

Abstract

Multiphoton microscopy is a formidable tool for the pathological analysis of tumors, but the physical limitations of imaging systems and the low efficiency inherent in nonlinear optical processes have prevented high imaging speed and high resolution from being achieved simultaneously. We demonstrate a self-alignment dual-attention-guided residual-in-residual generative adversarial network trained on a variety of multiphoton images. The network enhances image contrast and spatial resolution, suppresses noise and scanning fringe artifacts, and relaxes the trade-off among field of view, image quality, and imaging speed. The network may be integrated into commercial microscopes for large-scale, high-resolution, and low-photobleaching studies of tumor environments.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Fig. 1.
Multiphoton microscopy. a Commercial multiphoton microscope with a fast galvo-resonant scanning system and a slow dual-axis galvo scanning system. b Composition of low-quality input and high-quality target images (superimposed SHG and TPEF channels) in the datasets. SL: scan lens; TL: tube lens; DM: dichroic mirror; CL: collection lens; OB: objective.
Fig. 2.
Deep learning network architecture. a Overall network architecture, including the IW, SAPCD, and RRDAB modules, convolution layers, the skip connection, the up-sampling operation, and the discriminator. b Image registration framework. Image warping based on ORB feature extraction is used to construct the training dataset, and the self-alignment pyramid, cascading, and deformable convolution (SAPCD) module is embedded in the generator. c RRDAB reconstruction network. Cascaded RRDAB modules are used for image reconstruction, with densely connected DABs for feature communication; each attention block consists of two SE modules. DConv: deformable convolution; L: level; LR: low-resolution; HR: high-resolution; DAB: dense attention block; IW: image warp; FA: feature alignment; GT: ground truth.
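The paper registers input/target image pairs (via ORB feature matching and warping) before training. As a minimal stand-in for that registration step, a translation-only alignment can be sketched with phase correlation in plain NumPy; the function names and the circular warp below are illustrative, not taken from the paper:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer shift (dy, dx) that, applied to `mov` with
    np.roll, best aligns it with `ref` (phase of the cross-power spectrum)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (wrap-around convention of the FFT)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def warp_translation(img, dy, dx):
    """Apply the estimated shift (circular warp, for simplicity)."""
    return np.roll(img, (dy, dx), axis=(0, 1))
```

For the sub-pixel, locally varying misalignments the paper targets, feature-based (ORB) or deformable-convolution alignment is needed; this sketch only illustrates the registration idea.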
Fig. 3.
Deep learning-enhanced multiphoton images with different scanners. a-c Input, result, and target (GT) large images, respectively. ROI1: superimposed channel; ROI2: TPEF channel; ROI3: SHG channel. d Intensity profiles along the solid line in ROI3. e PSNR of input and output images for n = 10 large images; error bars indicate mean ± standard deviation (SD). f Comparison of network inference and GT acquisition times. Scale-bar values are marked in a.
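The PSNR metric used in Fig. 3e can be computed directly from an output tile and its ground truth; a minimal NumPy sketch (the paper does not specify its exact implementation or data range):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```

Higher PSNR of the network output relative to the low-quality input reflects the contrast and noise improvements reported in the figure.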
Fig. 4.
Deep learning enables large-field and high-resolution multiphoton imaging. a-f SHG- and TPEF-superimposed channels of input, result, and target (GT) large images. g Comparison of network inference and GT acquisition times for n = 15 large images. h Curve fitting of intensity profiles along the solid lines in the ROIs of a-c. i NIQE and PIQE of input and output images for n = 200 image tiles; error bars show mean ± standard deviation (SD). GT: ground truth.
Fig. 5.
Deep learning enables low-photobleaching imaging. a-c Low-power-excitation input and network output for the SHG (cyan), TPEF (magenta), and superimposed channels. d-f ROIs of different sizes in a-c. g, h Intensity profiles along the solid lines in d and e. i, j ROIs of background areas in f and the corresponding FFT results. N: noise; S: signal; SFA: scanning fringe artifacts; FFT: fast Fourier transform.
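Periodic scanning fringe artifacts appear as discrete off-center peaks in the Fourier spectrum of a background region, which is how panels i-j visualize them. A minimal NumPy sketch of that analysis (the synthetic fringe pattern below is illustrative, not the paper's data):

```python
import numpy as np

def fringe_peak_frequency(background_tile):
    """Return (cy, cx, magnitude) of the dominant nonzero spatial frequency
    in a background tile; periodic scanning fringes appear as such peaks."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(background_tile)))
    h, w = spectrum.shape
    spectrum[h // 2, w // 2] = 0.0  # suppress the DC component
    cy, cx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    return cy - h // 2, cx - w // 2, spectrum[cy, cx]

# Synthetic background: weak noise plus a horizontal fringe every 8 rows
rng = np.random.default_rng(0)
tile = 0.05 * rng.standard_normal((64, 64))
tile += 0.5 * np.sin(2 * np.pi * np.arange(64)[:, None] / 8.0)
fy, fx, mag = fringe_peak_frequency(tile)  # peak at |fy| = 64/8 cycles, fx = 0
```

Suppression of the artifact by the network would correspond to this peak disappearing from the output tile's spectrum.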
Fig. 6.
Evaluation of the IW module on image reconstruction. a Schematic of image misalignment and correction (cyan channel: SHG; magenta channel: TPEF). b Results of training with and without registration, and comparison of RRDAB and RCAN.
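The dense attention blocks of Fig. 2 are built from SE (squeeze-and-excitation) modules, i.e. channel attention. A minimal NumPy sketch for a (C, H, W) feature map; the weight shapes and reduction ratio here are illustrative, not taken from the paper:

```python
import numpy as np

def se_attention(features, w1, w2):
    """Squeeze-and-excitation channel attention on a (C, H, W) feature map.
    w1: (C, C//r) and w2: (C//r, C) are the two dense-layer weight matrices."""
    squeezed = features.mean(axis=(1, 2))            # global average pool -> (C,)
    hidden = np.maximum(squeezed @ w1, 0.0)          # ReLU bottleneck -> (C//r,)
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid gate -> (C,)
    return features * scale[:, None, None]           # reweight each channel

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((8, 2))   # reduction ratio r = 4
w2 = rng.standard_normal((2, 8))
out = se_attention(feat, w1, w2)
```

Because the sigmoid gate lies in (0, 1), each channel is attenuated by a learned scalar, letting the network emphasize informative channels during reconstruction.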

