Sci Adv. 2025 Sep 12;11(37):eadr6687.
doi: 10.1126/sciadv.adr6687. Epub 2025 Sep 12.

DeepInMiniscope: Deep learning-powered physics-informed integrated miniscope

Feng Tian et al. Sci Adv.

Abstract

Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV). Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and an efficient physics-informed deep learning model that markedly reduces computational demand. Parts of the 3D object can be individually reconstructed and combined. Our deep learning algorithm can reconstruct object volumes over 4 millimeters by 6 millimeters by 0.6 millimeters. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with compact device footprint.


Figures

Fig. 1. Overview of DeepInMiniscope.
(A) Schematic of DeepInMiniscope. The raw measurement from the miniaturized integrated microscope is sent to a deep neural network, which reconstructs the object in 3D. (B) Application of DeepInMiniscope in neural activity imaging in mouse brain in vivo. Left: A 3D volume in the visual cortex highlighting the reconstructed active neurons (shown as maximum projection of temporal activity). Right: The normalized extracted temporal activity of representative neurons. (C to L) Application of DeepInMiniscope in fluorescence imaging of thin fluorescent samples and the 2D reconstructions. From left to right column, the samples are lens tissue, square grid patterns (12-μm line width), long line patterns (20-μm line width), polygons and letters, C. elegans embryos, and C. elegans body. (C) Raw measurements by DeepInMiniscope. (D) Reconstruction using a list-based RL algorithm. (E) Reconstruction using a multi-local-FOV ADMM-Net. (F) Ground truth obtained by a benchtop microscope. (G) Reconstruction using a single-FOV Hadamard-Net. (H) Reconstruction using a single-FOV ADMM-Net. (I) Reconstruction using a multi-global-FOV Hadamard-Net without PSF initialization. (J) Reconstruction using a multi-global-FOV Hadamard-Net with PSF initialization. (K) Reconstruction using a multi-global-FOV Wiener-Net without PSF initialization. (L) Reconstruction using a multi-global-FOV Wiener-Net with PSF initialization. Scale bars, 300 μm (B) and 500 μm [(C) and (D)].
Fig. 2. Assembly of DeepInMiniscope and the microlens array.
(A) Exploded view of DeepInMiniscope, illustrating the 3D-printed housing, the stack of optical filters with the microlens array fabricated on top, and the CMOS camera on the PCB. (B) Side view of DeepInMiniscope. (C) Assembled device, showing the imaging window with the microlens array. (D) Illumination intensity distribution across the FOV at the sample plane with two fiber illumination channels, simulated by ray tracing followed by convolution with a Gaussian kernel with a size of 100 μm by 100 μm. The raw results had discontinuity artifacts due to the limited number of rays, which the convolution suppresses. (E) Structure of a doublet lens unit. (F) Fabricated microlens array observed under an optical microscope. (G) Normalized peak intensity of the PSF from a point source. Red/blue trace: simulated result from a doublet/singlet lens unit optimized for imaging quality within a 500-μm object height. Dots: experimental measurements of the doublet lens unit from a 4-μm-diameter point source. The effective imaging area of a lens unit is bounded where the peak intensity of the PSF drops below 80% of the maximum value. (H) Experimentally measured image of a 4-μm-diameter point source. (I) Number of subimages obtained from the microlens array at each object location, assuming the effective FOV of each lens unit is 500 μm in radius. The microlens units are marked with black circles. Scale bars, 500 μm (D) and 300 μm (F). A.U., arbitrary units.
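The post-processing step in (D), smoothing a ray-traced intensity map to suppress ray-count discontinuities, can be sketched as a Gaussian convolution. Treating the quoted 100 μm kernel size as the Gaussian's FWHM and assuming a 10-μm sample-plane pixel pitch are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_illumination(ray_counts, pixel_um=10.0, kernel_um=100.0):
    """Suppress discontinuity artifacts in a ray-traced intensity map by
    Gaussian convolution; kernel_um is treated as the kernel FWHM."""
    sigma_px = kernel_um / pixel_um / 2.355  # FWHM -> sigma, in pixels
    smoothed = gaussian_filter(ray_counts.astype(float), sigma=sigma_px)
    return smoothed / smoothed.max()         # normalize peak intensity to 1

# Synthetic ray-count map: Poisson noise mimics the ray-count discontinuities
rng = np.random.default_rng(0)
raw = rng.poisson(lam=50, size=(200, 300)).astype(float)
smooth = smooth_illumination(raw)
```

The kernel width in pixels follows directly from the physical kernel size divided by the pixel pitch; any real simulation would substitute its own grid spacing.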
Fig. 3. Architecture of ADMM-Net, illustrated in the single-FOV configuration for reconstructing a single 3D volume.
The raw measurement is preprocessed by a 2D CNN that denoises it and suppresses its background. The preprocessed image is then sent to ADMM-Net, which contains multiple stages. Each stage contains a deconvolution module (stage 0) or convolution module (subsequent stages) to update the reconstructed image X̂, a CNN denoiser to update the regularized primal variable Ẑ, and a mathematical layer to calculate the dual variable Û. L_CL is a closed-loop loss, defined as the SSIM loss between the estimated image formed from the reconstruction and the denoised, background-suppressed raw measurement. L_3D-view is a loss defined as the sum of the SSIM losses between the projected xy, yz, and xz views of the reconstructed volume and the corresponding ground-truth views.
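The per-stage X̂/Ẑ/Û updates described above follow the classical ADMM pattern. A toy NumPy sketch of one unrolled stage is below; the frequency-domain x-update and the Gaussian-blur stand-in for the learned CNN denoiser are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def admm_stage(y, H, x, z, u, rho=1.0):
    """One unrolled ADMM stage, sketched in the Fourier domain.
    x-update: regularized deconvolution (closed form, diagonal in frequency);
    z-update: denoising step, with a Gaussian blur standing in for the
    learned CNN denoiser; u-update: dual ascent on the constraint x = z."""
    num = np.conj(H) * np.fft.fft2(y) + rho * np.fft.fft2(z - u)
    den = np.abs(H) ** 2 + rho
    x = np.real(np.fft.ifft2(num / den))   # argmin_x ||Hx - y||^2 + rho||x - z + u||^2
    z = gaussian_filter(x + u, sigma=1.0)  # placeholder for the CNN denoiser
    u = u + x - z                          # dual-variable update
    return x, z, u

# Toy run: delta PSF (identity forward model), three unrolled stages
y = np.zeros((32, 32)); y[16, 16] = 1.0
psf = np.zeros((32, 32)); psf[0, 0] = 1.0  # delta at origin -> H == 1 everywhere
H = np.fft.fft2(psf)
x = z = u = np.zeros_like(y)
for _ in range(3):
    x, z, u = admm_stage(y, H, x, z, u, rho=0.1)
```

In the unrolled network, each stage owns its own learned denoiser weights and penalty ρ, while the update order (x, then z, then u) matches the caption's deconvolution module, CNN denoiser, and mathematical layer.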
Fig. 4. 3D reconstruction of fluorescent beads distributed in a 3D volume.
(A) Fluorescent bead phantom (5-μm-diameter fluorescent beads distributed in optically clear polymer) imaged by a benchtop microscope with a 2× objective lens (left, xy view) and a 10× objective lens (right; xy, yz, and xz views; 13 axial planes, each separated by 50 μm). The 10× image (right) is a zoomed-in view of the region inside the orange dashed box in the 2× image (left). (B) 3D reconstruction results within a 600-μm depth range by the multi-local-FOV ADMM-Net, in xy, xz, and yz views. The green/orange dashed box in the xy view (left) corresponds to the FOV of the 2×/10× objective lens, respectively, in (A). Right: Zoomed-in view of the region inside the orange dashed box in the left image. (C) Same as (B) but with the list-based RL algorithm. (D) Histograms of the axial FWHM of individual beads reconstructed by the multi-local-FOV ADMM-Net (left) and the list-based RL algorithm (right). (E to H) 3D reconstruction of 12-μm-diameter fluorescent beads distributed in a 3D scattering volume with a mean free path (MFP) of (E) 50 μm, (F) 100 μm, (G) 250 μm, and (H) 500 μm. The optically clear polymer is mixed with 12-μm-diameter fluorescent beads and 1.18-μm-diameter nonfluorescent beads, whose concentration controls the mean free path. All reconstructions span a 4.2 mm by 5.8 mm by 600 μm volume. Left/middle: Reconstruction by the multi-local-FOV ADMM-Net/list-based RL algorithm, in xy, xz, and yz views. Right: Reference image captured by a benchtop microscope with a 2× objective lens, and a photograph showing the scattering phantom slide on top of a resolution target. Scale bars, 500 μm [(A) to (C), left], 200 μm [(A) to (C), right], and 500 μm [(E) to (H)].
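The axial FWHM statistics in (D) require measuring each reconstructed bead's width along z. A minimal FWHM estimator over a sampled 1D profile, using linear interpolation at the half-maximum crossings, might look like this (illustrative only; the paper's measurement pipeline is not specified here):

```python
import numpy as np

def fwhm(profile, step_um):
    """FWHM of a 1D intensity profile in physical units, using linear
    interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.nonzero(p >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the crossing positions on each side of the peak
    lo = left if left == 0 else (left - 1) + (half - p[left - 1]) / (p[left] - p[left - 1])
    hi = right if right == len(p) - 1 else right + (p[right] - half) / (p[right] - p[right + 1])
    return (hi - lo) * step_um

# Example: 13 axial planes with 50-um spacing (as in the phantom measurement);
# a Gaussian axial profile with sigma = 100 um has true FWHM ~= 235.5 um
z = np.arange(13) * 50.0
profile = np.exp(-((z - 300.0) ** 2) / (2 * 100.0 ** 2))
width = fwhm(profile, step_um=50.0)
```

With 50-μm plane spacing, linear interpolation recovers the width of a smooth profile to within a few micrometers; coarser sampling would call for a Gaussian fit instead.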
Fig. 5. In vivo imaging of hydra labeled by GFP.
(A to D) Image and reconstruction of two hydras, both having endodermal cells (the inner layer of epithelial cells) labeled with GFP. The hydras were housed in a thin chamber between the microscope slide and the cover slip. (A) A raw image frame from DeepInMiniscope. (B) 2D reconstruction of (A) by the multi-local-FOV ADMM-Net. (C) 2D reconstruction of (A) by the list-based RL algorithm. (D) 2D reconstruction of (A) by the single-FOV Hadamard-Net. (E) Reference image captured by a benchtop microscope with a 2× objective lens. The white arrow indicates the thin (~20 μm) tentacles. (F to H) 3D reconstruction of a hydra at three different frames over a 2-mm axial range, by the list-based RL algorithm. The color bar indicates the reconstruction depth. The hydra was housed in a well with a depth of 2 mm, providing space for its 3D movement. Scale bars, 500 μm [(A) to (H)].
Fig. 6. In vivo calcium imaging of neural activity in mouse visual cortex, transfected with GCaMP6f.
(A) Experimental setup. The mouse was head-fixed on a treadmill, with DeepInMiniscope mounted on top of the headplate. Excitation light was delivered through the dual fiber channels. (B) A single raw image frame. (C) SD-DLM mask: the time-series SD of the difference-to-local-mean (DLM) of the raw video, followed by LoG filtering. The SD-DLM mask highlights pixels with strong temporal dynamics. (D) A single frame of the DLM video (i.e., the raw video processed by the DLM operation) weighted by the SD-DLM mask. (E) 3D reconstruction of the image volume (1.5 mm by 2 mm by 600 μm) by the multi-local-FOV ADMM-Net, in xy, yz, and xz views, showing the spatiotemporal correlation map of the reconstructed video. The multi-local-FOV ADMM-Net processed the SD-DLM-mask-weighted DLM video frames individually. The reconstructed time-series volume was then projected into a 3D volume showing the spatiotemporal correlation among adjacent voxels. This 3D volume was further processed by an iterative clustering algorithm (16) to highlight individual neurons (shown in red). The dashed line indicates the brain surface. (F and G) Histograms of the (F) lateral FWHM and (G) axial FWHM of all clusters found in the 3D volume of the spatiotemporal correlation map. (H to J) Three individual axial planes from (E), at depths of (H) 50 μm, (I) 150 μm, and (J) 300 μm. Individual neurons are shown in red. (K to M) Representative normalized temporal activity traces of the extracted neurons in the (K) 50-μm, (L) 150-μm, and (M) 300-μm planes. Black: activity traces of neurons extracted directly from the video reconstructed by the multi-local-FOV ADMM-Net. Red: activity traces of neurons extracted through CNMF-E (35) from the reconstructed video. Scale bars, 500 μm [(B) to (E) and (H)].
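The SD-DLM mask in (C) chains three standard operations: per-frame difference-to-local-mean, a temporal standard deviation across frames, and a LoG filter. A minimal sketch is below; the local-mean window size and LoG sigma are illustrative choices, not the paper's values.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_laplace

def sd_dlm_mask(video, local_size=15, log_sigma=2.0):
    """SD-DLM sketch: DLM = frame minus its local mean; SD = temporal
    standard deviation of the DLM stack; mask = negated LoG of the SD
    map, clipped so blob-like dynamic pixels come out positive."""
    dlm = np.stack([f - uniform_filter(f, size=local_size) for f in video])
    sd = dlm.std(axis=0)                             # per-pixel temporal SD
    mask = np.clip(-gaussian_laplace(sd, sigma=log_sigma), 0.0, None)
    return dlm, mask

# Synthetic check: one flickering pixel on a static background
video = np.ones((20, 32, 32))
video[::2, 16, 16] += 5.0
dlm, mask = sd_dlm_mask(video)
```

On this synthetic input, only the flickering pixel has appreciable temporal SD, so the mask concentrates there, which is exactly the behavior used in (D) to weight the DLM frames before reconstruction.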
Algorithm 1. Establishing the two sets of lists (voxel-pixel mapping and pixel-voxel mapping).
Algorithm 2. List-based RL (Richardson-Lucy).
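The two algorithms pair naturally: Algorithm 1's precomputed lists turn the forward and back-projections inside Algorithm 2's Richardson-Lucy updates into sparse lookups instead of dense matrix products. A toy sketch, where `vox2pix[v]` is a hypothetical list of (pixel index, weight) pairs standing in for the voxel-pixel mapping (the data layout is assumed, not the paper's):

```python
import numpy as np

def list_based_rl(meas, vox2pix, n_voxels, n_iters=200):
    """Richardson-Lucy deconvolution driven by a sparse voxel-to-pixel
    list. Each iteration: forward-project the estimate, compare with the
    measurement, back-project the ratio, and update multiplicatively."""
    x = np.ones(n_voxels)
    # sensitivity (column sums of the forward operator), for normalization
    sens = np.array([sum(w for _, w in entries) for entries in vox2pix])
    for _ in range(n_iters):
        y_est = np.zeros_like(meas)
        for v, entries in enumerate(vox2pix):        # forward projection
            for p, w in entries:
                y_est[p] += w * x[v]
        ratio = meas / np.maximum(y_est, 1e-12)
        for v, entries in enumerate(vox2pix):        # back-projection + update
            back = sum(w * ratio[p] for p, w in entries)
            x[v] *= back / max(sens[v], 1e-12)
    return x

# Toy system: 2 voxels observed by 3 pixels; measurement from true x = [2, 4]
vox2pix = [[(0, 1.0), (1, 0.5)], [(1, 0.5), (2, 1.0)]]
meas = np.array([2.0, 3.0, 4.0])
x = list_based_rl(meas, vox2pix, n_voxels=2)
```

Because the microlens PSFs are spatially sparse, each voxel touches only a short list of pixels, so the per-iteration cost scales with the number of nonzero voxel-pixel pairs rather than with the full measurement size.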

References

    1. Ghosh K. K., Burns L. D., Cocker E. D., Nimmerjahn A., Ziv Y., El Gamal A., Schnitzer M. J., Miniaturized integration of a fluorescence microscope. Nat. Methods 8, 871–878 (2011).
    2. Aharoni D., Hoogland T. M., Circuit investigations with open-source miniaturized microscopes: Past, present and future. Front. Cell. Neurosci. 13, 141 (2019).
    3. de Groot A., van den Boom B. J. G., van Genderen R. M., Coppens J., van Veldhuijzen J., Bos J., Hoedemaker H., Negrello M., Willuhn I., de Zeeuw C. I., Hoogland T. M., NINscope, a versatile miniscope for multi-region circuit investigations. eLife 9, e49987 (2020).
    4. Qin Z., Chen C., He S., Wang Y., Tam K. F., Ip N. Y., Qu J. Y., Adaptive optics two-photon endomicroscopy enables deep-brain imaging at synaptic resolution over large volumes. Sci. Adv. 6, eabc6521 (2020).
    5. Yanny K., Antipa N., Liberti W., Dehaeck S., Monakhova K., Liu F. L., Shen K., Ng R., Waller L., Miniscope3D: Optimized single-shot miniature 3D fluorescence microscopy. Light Sci. Appl. 9, 171 (2020).
