Biomed Opt Express. 2021 Jul 22;12(8):5214-5226. doi: 10.1364/BOE.427099. eCollection 2021 Aug 1.

Deep learning-based autofocus method enhances image quality in light-sheet fluorescence microscopy

Chen Li et al. Biomed Opt Express. 2021.

Abstract

Light-sheet fluorescence microscopy (LSFM) is a minimally invasive, high-throughput imaging technique ideal for capturing large volumes of tissue with sub-cellular resolution. A fundamental requirement for LSFM is a seamless overlap between the light-sheet, which excites a selective plane in the specimen, and the focal plane of the objective lens. However, spatial heterogeneity in the refractive index of the specimen often violates this requirement when imaging deep in the tissue. To address this issue, autofocus methods are commonly used to refocus the objective-lens focal plane onto the light-sheet. Yet, autofocus techniques are slow, since they require capturing a stack of images, and tend to fail in the presence of the spherical aberrations that dominate volume imaging. To address these issues, we present a deep learning-based autofocus framework that can estimate the position of the objective-lens focal plane relative to the light-sheet from two defocused images. This approach outperforms the best traditional autofocus method on small image patches and matches its performance on large ones. When the trained network is integrated with a custom-built LSFM, a certainty measure is used to further refine the network's prediction. The network's performance is demonstrated in real time on cleared, genetically labeled mouse forebrain and pig cochlea samples. Our study provides a framework that could improve light-sheet microscopy and its application toward imaging large 3D specimens with high spatial resolution.
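
To make the classification framing concrete, the following minimal sketch (Python/NumPy) shows how a network output over N = 13 defocus bins with Δb = 6 µm spacing, as described in Fig. 2 below, maps to a defocus estimate. Treating the peak class probability as the certainty measure is an assumption here; the paper's exact certainty definition is not given in this excerpt.

    import numpy as np

    N_BINS = 13        # number of defocus classes (from the paper)
    BIN_SIZE_UM = 6.0  # bin width Δb = 6 µm (from the paper)
    # Bin centers: -36, -30, ..., 0, ..., 30, 36 µm
    BIN_CENTERS = (np.arange(N_BINS) - N_BINS // 2) * BIN_SIZE_UM

    def defocus_from_probs(probs):
        """Map a softmax output over N_BINS classes to (Δz in µm, certainty).

        Certainty is taken here as the peak class probability; this is an
        assumption, not necessarily the paper's exact measure.
        """
        k = int(np.argmax(probs))
        return float(BIN_CENTERS[k]), float(probs[k])

    # Example: a distribution peaked at the central (in-focus) bin.
    probs = np.full(N_BINS, 0.02)
    probs[N_BINS // 2] = 1.0 - 0.02 * (N_BINS - 1)
    dz, cert = defocus_from_probs(probs)
    print(f"predicted defocus: {dz:+.0f} µm (certainty {cert:.2f})")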

Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Fig. 1.
A schematic of the light-sheet excitation-detection module and the proposed deep neural network-based autofocus workflow. (a) An illustration of the geometry of light-sheet fluorescence microscopy (LSFM) and the drift in the relative position of the light-sheet and the focal plane of the detection objective (Δz) when imaging deep into the tissue. (b) Fluorescence images of in-focus (Δz = 0 µm) and out-of-focus (Δz = 20 µm) neurons (left) and hair cells (right). The images were captured from a whole mouse brain and a pig cochlea that were tissue cleared for 3D volume imaging. The red boxes mark the locations of the zoom-in images at the bottom. The degradation in the quality of the out-of-focus images can be observed. (c) Overview of the integration of the deep learning-based autofocus method with a custom-built LSFM. During image acquisition, two defocused images are collected and sent to a classification network to estimate the defocus level. The border color of each individual patch in the right image indicates the predicted defocus distance. In the color bar, red and purple represent the extreme cases, in which the defocus distance is −36 µm and 36 µm, respectively. The borders' dominant color is green, which indicates that this image is in focus.
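
The closed-loop use in (c) can be sketched as follows. Here capture_image, move_detection_objective, and predict_defocus are hypothetical placeholders for the microscope control and the trained network, and the sign convention of the correction is an assumption; this is an illustrative sketch, not the authors' control code.

    DELTA_S_UM = 6.0  # spacing between the two defocused images (from the paper)
    CERT_MIN = 0.35   # certainty threshold used at acquisition time (Fig. 4)

    def autofocus_step(capture_image, move_detection_objective, predict_defocus):
        """One autofocus iteration: acquire two defocused images Δs apart,
        estimate Δz with the network, and refocus if the prediction is
        sufficiently certain."""
        img_a = capture_image()                # image at the current focal plane
        move_detection_objective(DELTA_S_UM)   # shift the objective by Δs
        img_b = capture_image()                # second defocused image
        move_detection_objective(-DELTA_S_UM)  # return to the starting position
        dz_um, certainty = predict_defocus(img_a, img_b)
        if certainty > CERT_MIN:
            # Sign convention is an assumption: move opposite to the estimated
            # defocus to bring the focal plane back onto the light-sheet.
            move_detection_objective(-dz_um)
        return dz_um, certainty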
Fig. 2.
The training pipeline and the structure of the network. (a) Representative defocus stacks captured from tissue cleared whole mouse brains (first column) or intact cochleae (second column). In each stack, the distance between slices was 2 µm. From top to bottom we show representative images (Δz = −32, −16, 0, 16, 32 µm). Spherical aberrations lead to an asymmetrical point spread function (PSF) for defocused images above (Δz > 0) and below (Δz < 0) the focal plane. The network uses this PSF asymmetry to estimate whether Δz is positive or negative. In the training process, two defocused images with a constant distance Δs between them (e.g., Δs = 6 µm) and a known Δz are randomly selected from the stacks. The images are then randomly cropped into smaller image patches (128 × 128), and these patches are fed into the network. (b) The architecture of the network. The output of the network is a probability distribution function over N = 13 different values of Δz with a constant bin size (Δb) of 6 µm. The value for N was determined empirically.
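
A minimal sketch of the sampling in (a), assuming a 2-channel network input and that the Δz labels fall on the 13 bin centers (both assumptions; the exact implementation is not given in this excerpt):

    import numpy as np

    SLICE_SPACING_UM = 2.0  # distance between slices in a stack (from the paper)
    DELTA_S_UM = 6.0        # spacing between the paired images (from the paper)
    BIN_SIZE_UM = 6.0       # class bin width Δb (from the paper)
    N_BINS = 13
    PATCH = 128             # patch size (from the paper)

    rng = np.random.default_rng(0)

    def sample_training_pair(stack, focus_index):
        """stack: (n_slices, H, W) defocus stack; focus_index: in-focus slice.
        Returns a (2, 128, 128) input and its class label (bin index)."""
        step = int(DELTA_S_UM / SLICE_SPACING_UM)  # Δs in slice units (= 3)
        # Draw a label k and its defocus Δz on a bin center within ±36 µm.
        k = int(rng.integers(N_BINS))
        dz_um = (k - N_BINS // 2) * BIN_SIZE_UM
        i = focus_index + int(dz_um / SLICE_SPACING_UM)
        img_a, img_b = stack[i], stack[i + step]   # I_Δz and I_{Δz+Δs}
        # Identical random 128 x 128 crop from both images.
        y0 = int(rng.integers(stack.shape[1] - PATCH + 1))
        x0 = int(rng.integers(stack.shape[2] - PATCH + 1))
        crop = (slice(y0, y0 + PATCH), slice(x0, x0 + PATCH))
        return np.stack([img_a[crop], img_b[crop]]), k

    # Example with a synthetic 45-slice stack (±36 µm plus Δs headroom).
    stack = rng.random((45, 256, 256), dtype=np.float32)
    x, label = sample_training_pair(stack, focus_index=22)
    print(x.shape, label)  # (2, 128, 128) and a bin index in [0, 12]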
Fig. 3.
Training configurations that influence the classification accuracy. (a) The graphs show the training loss and classification accuracy as a function of the number of epochs. For comparison, the network is trained with one, two, or three defocused images provided as input (N = 13, Δs = 6 µm). The graphs show that two (I_Δz and I_{Δz+6 µm}) and three (I_{Δz−6 µm}, I_Δz, and I_{Δz+6 µm}) defocused images yield higher classification accuracy than a single defocused image (I_Δz). (b) Confusion matrices for different numbers of defocused images provided to the network as input. Training with only one defocused image shows inferior performance. (c) Training loss and classification accuracy as a function of the number of epochs using two defocused images as input, but with variable spacing (Δs) between the images. The highest classification accuracy corresponds to Δs values of 6 and 10 µm.
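
In implementation terms, feeding one, two, or three defocused images to the network amounts to changing the channel count of its first convolutional layer. The stand-in model below (PyTorch) illustrates this; it is not the paper's architecture, which Fig. 2(b) only shows schematically.

    import torch
    import torch.nn as nn

    N_BINS = 13  # output classes over Δz (from the paper)

    def make_classifier(n_inputs):
        """Small CNN taking n_inputs defocused 128 x 128 images as channels."""
        return nn.Sequential(
            nn.Conv2d(n_inputs, 32, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(), nn.AdaptiveAvgPool2d(1),  # global average pooling
            nn.Flatten(),
            nn.Linear(128, N_BINS),              # logits over the 13 Δz bins
        )

    model = make_classifier(n_inputs=2)          # two defocused images as input
    x = torch.randn(4, 2, 128, 128)              # batch of patch pairs
    probs = model(x).softmax(dim=1)              # per-class probabilities
    print(probs.shape)                           # torch.Size([4, 13])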
Fig. 4.
Performance evaluation. (a) Performance comparison between the deep neural network (DNN) and traditional autofocus measures across 420 test cases. While only two defocused images are provided to the DNN, the traditional autofocus methods receive 13 images as input. In both cases, the spacing between two consecutive images is 6 µm. On a single image patch with a size of ∼83 × 83 µm², the DNN outperforms traditional autofocus measures, while on larger image patches (250 × 250 µm²) the DNN and DCTS achieve comparable results. Note that for the larger image patch the DNN performs its calculation on nine (83 × 83 µm²) patches, and results with certainty (cert) above 0.35 are averaged to achieve the final prediction. (b) Representative examples of defocus level prediction (Δz_predict) by the DNN on the test dataset (single patch). Each box shows an individual and independent image patch, and the color of the border indicates the Δz_predict value. If the certainty of an image patch is lower than 0.35, the colored border is deleted and this patch is discarded.
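
The large-patch aggregation described in (a) can be sketched as follows; predict_patch is a hypothetical stand-in for the trained network applied to one 83 × 83 µm² patch.

    import numpy as np

    CERT_THRESHOLD = 0.35  # certainty cutoff (from the paper)

    def aggregate_defocus(patches, predict_patch):
        """Average the Δz predictions of the nine sub-patches whose certainty
        exceeds the threshold; returns None if every patch is discarded."""
        kept = [dz for dz, cert in map(predict_patch, patches)
                if cert > CERT_THRESHOLD]
        return float(np.mean(kept)) if kept else None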
Fig. 5.
Real-time perturbation experiments in light-sheet fluorescence microscopy. (a1 and b1) The in-focus (Δz = 0) images of neurons and hair cells, respectively. (a2 and b2) Images showing the same field of view as in a1 and b1 after the objective lens is displaced by 30 µm and −30 µm, respectively. (a3 and b3) Images of the same field of view after the objective is moved according to the network defocus evaluation, as shown in a4 and b4. The improved image quality in a3 and b3 indicates that the network can estimate the defocus level and adjust the detection focal plane to improve image quality. In a and b, the white boxes mark the locations of the zoom-in images, and the color-coded line profiles in a4 and b4 represent image intensities along the dashed lines in a and b.
Fig. 6.
Real-time perturbation experiments on an unseen tissue type. (a1 and b1) The in-focus auto-fluorescence images of tissue cleared mouse lung samples, which are highly scattering. These samples exhibit a different morphology than the brain and the cochlea, and the network was not trained on such samples. (a2 and b2) Images showing the same field of view as in a1 and b1 after the objective lens is displaced by −30 µm and −20 µm, respectively. (a3 and b3) Images of the same field of view after the objective is moved according to the network correction, as shown in a4 and b4. The improved image quality in a3 and b3 indicates that the network can correctly estimate the defocus level and adjust the detection focal plane to improve image quality. Although further refinement might be required, the network can still generalize to unseen tissue types. Note that in tissue cleared lung samples the auto-fluorescence is easily photo-bleached, which makes them especially suitable for autofocus methods that require as few defocused images as possible.
