Nat Methods. 2014 Jun;11(6):645-8. doi: 10.1038/nmeth.2929. Epub 2014 Apr 20.

Efficient Bayesian-based multiview deconvolution


Stephan Preibisch et al. Nat Methods. 2014 Jun.

Abstract

Light-sheet fluorescence microscopy is able to image large specimens with high resolution by capturing the samples from multiple angles. Multiview deconvolution can substantially improve the resolution and contrast of the images, but its application has been limited owing to the large size of the data sets. Here we present a Bayesian-based derivation of multiview deconvolution that drastically improves the convergence time, and we provide a fast implementation using graphics hardware.


Conflict of interest statement

COMPETING FINANCIAL INTERESTS

The authors declare no competing financial interests.

Figures

Fig. 1. Principles and performance
(a) Basic layout of a light-sheet microscope capable of multiview acquisitions. (b) Illustration of ‘virtual’ views. A photon detected at a certain location in a view was emitted by a fluorophore in the sample; the PSF assigns to every location in the underlying image a probability of having emitted that photon. In turn, the PSF of any other view assigns to each of its own locations the probability of detecting a photon from the same fluorophore. (c) Example of an entire virtual view computed from observed view 1 and the knowledge of PSF 1 and PSF 2. (d) Convergence time of the different Bayesian-based methods. We used a known ground-truth image (Supplementary Fig. 5) and let all variations converge until they reached precisely the same quality. The increase in computation time of the combined methods (black) with the number of views reflects the additional computational effort required to perform one update of the deconvolved image as views are added (Supplementary Fig. 4). (e) Convergence times for the same ground-truth image of our Bayesian-based methods compared to those of other optimized multiview deconvolution algorithms. The difference in computation time between the Java implementations and the IDL implementations of OSEM and SGP results in part from nonoptimized IDL code. (f) Corresponding number of iterations for our algorithm and other optimized multiview deconvolution algorithms.
Fig. 2. Deconvolution of simulated 3D multiview data
(a) Left, 3D rendering of a computer-generated volume resembling a biological specimen. The red outlines mark the wedge removed from the volume to show the content inside. Right, sections through the generated volume in the lateral direction (as seen by the SPIM camera, top) and along the rotation axis (bottom). (b) Same slices as in a with illumination attenuation applied (left), convolved with a PSF of a SPIM microscope (center) and simulated using a Poisson process (right). The bottom right panel shows the unscaled simulated light-sheet sectioning data along the rotation axis. (c) Slices from views 1 and 3 of the seven views generated from a by applying the processes pictured in b and rescaling to isotropic resolution. These seven volumes are the input to the fusion and deconvolution algorithms quantified in d and visualized in e. (d) Cross-correlation of deconvolved and ground-truth data as a function of the number of iterations for MAPG and our algorithm with and without regularization (reg). The inset compares the computation (comp.) time. (Both algorithms were implemented in Java to support partially overlapping data sets; Supplementary Fig. 10.) (e) Slices equivalent to c after content-based fusion (first column), MAPG deconvolution (second column), our approach without regularization (third column) and with regularization (fourth column; Tikhonov regularization parameter λ = 0.004). (f) Areas marked by boxes in a,c,e at higher magnification. Note the pronounced artificial ring patterns in MAPG.
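The simulation pipeline described in (b) — illumination attenuation, convolution with a view PSF, then Poisson shot noise — can be sketched as follows. The exponential attenuation model, the anisotropic Gaussian stand-in for the SPIM PSF, and all parameter values here are placeholders for illustration, not the values used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_view(ground_truth, sigma=(1.0, 3.0), attenuation=0.01,
                  photons=100.0, rng=None):
    """Simulate one light-sheet view of a 2D ground-truth image:
    exponential attenuation along the illumination depth axis (axis 0),
    blur with an anisotropic Gaussian PSF, then Poisson photon counting."""
    rng = rng or np.random.default_rng(0)
    depth = np.arange(ground_truth.shape[0])[:, None]
    attenuated = ground_truth * np.exp(-attenuation * depth)
    blurred = gaussian_filter(attenuated, sigma=sigma)
    return rng.poisson(blurred * photons).astype(float)
```

Generating several such views with the sample rotated between them (and the PSF anisotropy fixed to the camera axis) yields registered multiview input of the kind quantified in d and e.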
Fig. 3. Application to biological data
(a) Comparison of reconstruction results using content-based fusion (top row) and multiview deconvolution (bottom row) on a four-cell–stage C. elegans embryo expressing a PH domain–GFP fusion marking the membranes. Dotted lines mark plots shown in b; white arrowheads mark PSFs of a fluorescent bead before and after deconvolution. (b) Line plot through the volume along the rotation axis (yz, contrast locally normalized). This orientation typically shows the lowest resolution of a fused data set in light-sheet acquisitions, as all input views are oriented axially (Supplementary Fig. 11). SNR is substantially enhanced; arrowheads mark points illustrating increased resolution. (c,d) Cut planes through a blastoderm-stage Drosophila embryo expressing His-YFP in all cells. (e) Magnified view of parts of the Drosophila embryo. The left panel is a view in lateral orientation of one of the input views; the right panel shows a view along the rotation axis characterized by the lowest resolution. (f,g) Comparison of deconvolution and input data of a fixed L1 C. elegans larva expressing LMN-1–GFP (green) and stained with Hoechst (magenta). (f) Single slice through the deconvolved data set; arrowheads mark four locations of transversal cuts shown below. The cuts compare two orthogonal input views (0°, 90°) with the deconvolved data. No input view offers high resolution in this orientation approximately along the rotation axis. (g) The left box in the first row shows a random slice of a view in axial orientation (worst resolution). The second row shows a view in lateral orientation (best resolution). The third row shows the corresponding deconvolved image. The right boxes each show a slice through the nervous system. The alignment of the C. elegans L1 data set was refined using nuclear positions (Online Methods). The C. elegans embryo (a,b) and the Drosophila embryo (d,e) are each one time point of a time series (none of the other time points is used in this paper). The C. elegans L1 larva (f,g) is an individual acquisition of one fixed sample.
