Deep learning for in vivo near-infrared imaging

Zhuoran Ma et al. Proc Natl Acad Sci U S A. 2021 Jan 5;118(1):e2021446118. doi: 10.1073/pnas.2021446118.

Abstract

Detecting fluorescence in the second near-infrared window (NIR-II) up to ∼1,700 nm has emerged as a novel in vivo imaging modality with high spatial and temporal resolution through millimeter tissue depths. Imaging in the NIR-IIb window (1,500-1,700 nm) is the most effective one-photon approach to suppressing light scattering and maximizing imaging penetration depth, but relies on nanoparticle probes such as PbS/CdS containing toxic elements. On the other hand, imaging the NIR-I (700-1,000 nm) or NIR-IIa window (1,000-1,300 nm) can be done using biocompatible small-molecule fluorescent probes including US Food and Drug Administration-approved dyes such as indocyanine green (ICG), but has a caveat of suboptimal imaging quality due to light scattering. It is highly desired to achieve the performance of NIR-IIb imaging using molecular probes approved for human use. Here, we trained artificial neural networks to transform a fluorescence image in the shorter-wavelength NIR window of 900-1,300 nm (NIR-I/IIa) into an image resembling an NIR-IIb image. With deep-learning translation, in vivo lymph node imaging with ICG achieved an unprecedented signal-to-background ratio of >100. Using preclinical fluorophores such as IRDye-800, translation of ∼900-nm NIR molecular imaging of PD-L1 or EGFR greatly enhanced the tumor-to-normal tissue ratio from ∼5 to ∼20 and improved tumor margin localization. Further, deep learning greatly improved in vivo noninvasive NIR-II light-sheet microscopy (LSM) in resolution and signal/background. NIR imaging equipped with deep learning could facilitate basic biomedical research and empower clinical diagnostics and imaging-guided surgery.

Keywords: deep learning; near-infrared imaging; second near-infrared window.


Conflict of interest statement

The authors declare no competing interest.

Figures

Fig. 1.
CycleGAN-based NIR-IIa–to–NIR-IIb image transfer. (A) Comparison of NIR-IIa and NIR-IIb images. A BALB/c mouse was injected with p-FE and P3-QDs at the same time and excited by an 808-nm laser. A 1,000-nm long-pass filter and a 1,200-nm short-pass filter were used to collect the NIR-IIa image, and a 1,500-nm long-pass filter was used to collect the NIR-IIb image. (Scale bar, 5 mm.) (B) Cross‐sectional intensity profiles of the same area (labeled in A) imaged in the NIR-IIa and NIR-IIb windows. (C) Training process of the CycleGAN model. An NIR-IIa image was randomly selected from the training set and processed by the generator GA to obtain a generated NIR-IIb image, which was used as input for another generator GB to reconstruct the original NIR-IIa image. A discriminator DB was trained to tell whether an NIR-IIb image was real or generated. A cycle consistency loss (Lcyc) was defined to ensure meaningful image-to-image translation. The overall loss is a weighted sum of the adversarial losses (Ladv) and the cycle consistency loss (L(GA,GB,DA,DB) = Ladv(GA,DB) + Ladv(GB,DA) + λLcyc(GA,GB)).
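The combined CycleGAN objective described in the caption can be sketched numerically. The following is a minimal NumPy sketch, not the authors' implementation: it assumes a least-squares adversarial form, treats the generators and discriminators as plain callables on arrays, and uses `lam` as the cycle-consistency weight λ.

```python
import numpy as np

def l1(a, b):
    """Mean absolute difference between two images."""
    return float(np.mean(np.abs(a - b)))

def cyclegan_loss(x_a, x_b, G_A, G_B, D_A, D_B, lam=10.0):
    """Total CycleGAN loss: two adversarial terms plus a weighted
    cycle-consistency term, mirroring the caption's formula.
    A least-squares GAN form is assumed for Ladv."""
    fake_b = G_A(x_a)  # NIR-IIa -> generated NIR-IIb
    fake_a = G_B(x_b)  # NIR-IIb -> generated NIR-IIa
    # Generators try to make the discriminators output 1 on fakes.
    adv_ab = float(np.mean((D_B(fake_b) - 1.0) ** 2))
    adv_ba = float(np.mean((D_A(fake_a) - 1.0) ** 2))
    # Cycle consistency: translating there and back recovers the input.
    cyc = l1(G_B(fake_b), x_a) + l1(G_A(fake_a), x_b)
    return adv_ab + adv_ba + lam * cyc
```

With identity generators and discriminators that always output 1, both the adversarial and cycle terms vanish and the loss is exactly 0, which is a quick sanity check of the formula's structure.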
Fig. 2.
Wide-field fluorescence imaging with GAN. (A) Examples of real NIR-IIa images and generated images. (Scale bar, 1 cm.) (B) In vivo fluorescence imaging of a BALB/c mouse injected with p-FE and P3-QDs. The NIR-IIa image was processed by the generator GA to obtain the contrast-enhanced image. (Scale bar, 5 mm.) (C) In vivo fluorescence imaging of a BALB/c mouse injected with ICG and QDs and the images generated by the U-Net generator. (Scale bar, 1 cm.) (Fig. 2C: reproduced with permission from ref. .) (D) Cross‐sectional intensity profiles of the same vessel (labeled in B) in the NIR-IIa, NIR-IIb, and generated images. (E) Normalized fluorescence intensity of the lines shown in C. Fluorescence intensity in D and E was normalized by the maximum intensity on the line. (F) A BALB/c mouse was injected with IR783@BSA-GSH complex and imaged in the NIR-I window using a CRi Maestro in vivo imaging system with an exposure time of 100 ms at 5 min post injection (42). The trained generator GA was used to transform the NIR-I image to a high-resolution image. (Scale bar, 1 cm.)
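The line-profile normalization and the signal-to-background / tumor-to-normal ratios quantified across the figures follow a simple recipe. This is an illustrative sketch under assumed conventions (max-normalization of a 1D profile; ratio of mean intensity inside a region-of-interest mask to mean intensity outside it), not the authors' analysis code.

```python
import numpy as np

def normalized_profile(line):
    """Normalize a cross-sectional intensity profile by its maximum,
    as done for the line plots in the figure."""
    line = np.asarray(line, dtype=float)
    return line / line.max()

def signal_to_background(line, signal_mask):
    """Ratio of mean intensity inside a region of interest (e.g. a
    vessel or tumor) to the mean intensity outside it. The mask-based
    definition is an assumption for illustration."""
    line = np.asarray(line, dtype=float)
    mask = np.asarray(signal_mask, dtype=bool)
    return float(line[mask].mean() / line[~mask].mean())
```

For example, a profile of [1, 1, 10, 10, 1, 1] with the two central pixels marked as signal gives a signal-to-background ratio of 10.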
Fig. 3.
Molecular imaging with CycleGAN. (A) Conjugation of IRDye800-NHS to Cetuximab. (B and C) Nude mice (n = 3) with SCC-1 tumors were injected with IR800CW-Cetuximab. The NIR-I (B, 900–1,000 nm) and NIR-IIa (C, >1,100 nm) images were taken at 24 h post injection. The trained U-Net generator was used to process the original images. (Scale bar, 1 cm.) (D) High-resolution NIR-I (900–1,000 nm) imaging of an SCC-1 tumor at 24 h after the injection of IR800CW-Cetuximab. (Scale bar, 5 mm.) (E) Tumor-to-normal tissue signal ratio of the real and generated images in the NIR-I and NIR-IIa windows. (F) Fluorescence intensity of the lines shown in D.
Fig. 4.
Pix2pix-based NIR-IIa–to–NIR-IIb LSM image processing. (A) pix2pix model used for training. A pair of NIR-IIa and NIR-IIb LSM images was selected from the training set. The NIR-IIa image was processed by the generator GA to obtain a generated NIR-IIb image. The real or generated IIb image was concatenated with the real IIa image and used as input to the discriminator DB. The overall loss is a weighted sum of the adversarial loss (Ladv) and the L1 distance between the real and generated IIb images (L(GA,DB) = Ladv(GA,DB) + λLL1(GA)). (B) LSM images at different depths from the training set (9). (Scale bar, 200 µm.)
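The pix2pix generator objective in the caption, adversarial loss on the concatenated (input, target) pair plus a weighted L1 term, can be sketched as below. This is a minimal NumPy sketch under assumptions, not the authors' code: a least-squares adversarial form and `lam=100.0` (the L1 weight used in the original pix2pix paper) are assumed, and `G_A`/`D_B` are plain callables.

```python
import numpy as np

def pix2pix_gen_loss(x_a, x_b, G_A, D_B, lam=100.0):
    """Generator objective for a pix2pix model: adversarial loss on the
    (input, generated) pair plus a weighted L1 distance between the
    generated and real target images."""
    fake_b = G_A(x_a)
    # The conditional discriminator sees the input image concatenated
    # with the (real or generated) target, as in the caption.
    pair = np.concatenate([x_a, fake_b], axis=-1)
    adv = float(np.mean((D_B(pair) - 1.0) ** 2))  # least-squares GAN form
    l1_term = float(np.mean(np.abs(fake_b - x_b)))
    return adv + lam * l1_term
```

Unlike CycleGAN, pix2pix requires pixel-aligned image pairs, which the depth-matched NIR-IIa/NIR-IIb LSM stacks provide, so a direct L1 penalty against the real target replaces the cycle-consistency term.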
Fig. 5.
LSM image processing by pix2pix GAN. (A) LSM images at different depths. (Scale bar, 200 µm.) (B and C) Comparison of FWHM (B) and SBR (C) at various depths. The error bars in B and C represent the SD of five measurements at each depth.

References

    1. Welsher K., et al., A route to brightly fluorescent carbon nanotubes for near-infrared imaging in mice. Nat. Nanotechnol. 4, 773–780 (2009). - PMC - PubMed
    2. Hong G., et al., In vivo fluorescence imaging with Ag2S quantum dots in the second near-infrared region. Angew. Chem. Int. Ed. Engl. 51, 9818–9821 (2012). - PubMed
    3. Hong G., et al., Ultrafast fluorescence imaging in vivo with conjugated polymer fluorophores in the second near-infrared window. Nat. Commun. 5, 4206 (2014). - PubMed
    4. Antaris A. L., et al., A small-molecule dye for NIR-II imaging. Nat. Mater. 15, 235–242 (2016). - PubMed
    5. Cosco E. D., et al., Flavylium polymethine fluorophores for near- and shortwave infrared imaging. Angew. Chem. Int. Ed. Engl. 56, 13126–13129 (2017). - PubMed
