IEEE Signal Process Mag. 2022 Mar;39(2):28-44.
doi: 10.1109/msp.2021.3119273. Epub 2022 Feb 24.

Unsupervised Deep Learning Methods for Biological Image Reconstruction and Enhancement: An overview from a signal processing perspective


Mehmet Akçakaya et al. IEEE Signal Process Mag. 2022 Mar.

Abstract

Recently, deep learning approaches have become the main research frontier for biological image reconstruction and enhancement problems thanks to their high performance, along with their ultra-fast inference times. However, due to the difficulty of obtaining matched reference data for supervised learning, there has been increasing interest in unsupervised learning approaches that do not need paired reference data. In particular, self-supervised learning and generative models have been successfully used for various biological imaging applications. In this paper, we overview these approaches from a coherent perspective in the context of classical inverse problems, and discuss their applications to biological imaging, including electron, fluorescence and deconvolution microscopy, optical diffraction tomography and functional neuroimaging.

Keywords: Deep learning; biological imaging; image reconstruction; unsupervised learning.


Figures

Fig. 1.
Overview of self-supervised learning for denoising. Black pixels denote masked-out locations in the images, while 1_J is the indicator function on the indices specified by the index set J.
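The masking idea in the caption can be sketched in a few lines of NumPy. This is an illustrative sketch in the spirit of Noise2Self, not the authors' implementation; the mean-replacement strategy for masked pixels and the `mask_frac` parameter are assumptions.

```python
import numpy as np

def masked_denoising_loss(noisy, denoiser, mask_frac=0.1, rng=None):
    """Self-supervised denoising loss: mask out a random index set J,
    fill the masked pixels with a surrogate value, denoise the masked
    image, and score the prediction only on J (the indicator 1_J)."""
    rng = np.random.default_rng(rng)
    J = rng.random(noisy.shape) < mask_frac       # random index set J
    masked = noisy.copy()
    masked[J] = noisy.mean()                      # crude fill-in for masked-out pixels
    pred = denoiser(masked)
    return np.mean((pred[J] - noisy[J]) ** 2)     # loss restricted to J
```

Because the network never sees the true values at J, it cannot simply copy the noise there, which is what makes training on single noisy images possible.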
Fig. 2.
Overview of the self-supervised learning methods for image reconstruction using hold-out masking. Black pixels denote masked-out locations in the measurements, and DC denotes the data-consistency units of the unrolled network.
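A minimal sketch of hold-out masking, assuming the acquired measurement locations are given as a 1-D array of indices: one disjoint subset feeds the data-consistency units, the other is held out to compute the training loss. The function name and the `holdout_frac` value are illustrative, not taken from any specific SSDU implementation.

```python
import numpy as np

def holdout_split(sampled_idx, holdout_frac=0.3, rng=None):
    """Split acquired measurement indices into a set Theta (used inside
    the unrolled network's data-consistency units) and a disjoint
    held-out set Lambda (used only in the loss)."""
    rng = np.random.default_rng(rng)
    sampled_idx = np.asarray(sampled_idx)
    held_out = rng.random(len(sampled_idx)) < holdout_frac
    return sampled_idx[~held_out], sampled_idx[held_out]  # (Theta, Lambda)
```

Keeping the two sets disjoint is the key design choice: the network is never rewarded for trivially reproducing measurements it was given.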
Fig. 3.
Denoising results from the fluorescence microscopy datasets Fluo-N2DH-GOWT1 and Fluo-C2DL-MSC using the traditional denoising method BM3D and the self-supervised learning method Noise2Self (N2S). We note that supervised deep learning is not applicable here, as these datasets contain only single noisy images.
Fig. 4.
Reconstruction results from an fMRI application [6] using the conventional split-slice GRAPPA technique and the self-supervised multi-mask SSDU method [14]. (a) Split-slice GRAPPA exhibits residual artifacts in the mid-brain (yellow arrows); multi-mask SSDU alleviates these, along with visible noise reduction. (b) Temporal SNR (tSNR) maps show a substantial gain with the self-supervised deep learning approach, particularly in subcortical areas and in cortex farther from the receiver coils. (c) Phase maps for the two reconstructions show strong agreement, with multi-mask SSDU retaining more voxels above the coherence threshold.
Fig. 5.
Geometric view of deep generative models. A fixed distribution ζ in Z is pushed forward to μθ in X by the network Gθ, so that the mapped distribution μθ approaches the real distribution μ. In a VAE, Gθ acts as a decoder that generates samples, while Fϕ acts as an encoder that additionally constrains ζϕ to be as close to ζ as possible. From this geometric view, auto-encoding generative models (e.g., VAE) and GAN-based generative models can be seen as variants of this single illustration.
Fig. 6.
VAE architecture. Fϕ encodes x and combines it with a random sample u to produce the latent vector z; Gθ decodes z to produce the reconstruction x̂. Here u is sampled from a standard normal distribution for the reparameterization trick. (a) VAE. (b) spatial-VAE [19], which disentangles translation/rotation features from other semantics. (c) DIVNOISING [20], which enables supervised/unsupervised training of a denoising generative model by leveraging the noise model pNM(y|x).
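The reparameterization trick named in the caption can be sketched directly: drawing z = μ + σ·u with u ~ N(0, I) moves the randomness into u, so gradients can flow through μ and log σ². This is a generic NumPy sketch; the helper name and the log-variance parameterization are common conventions, not details from the paper.

```python
import numpy as np

def reparameterize(mu, log_var, rng=None):
    """Reparameterization trick: z = mu + sigma * u, u ~ N(0, I),
    with sigma = exp(0.5 * log_var)."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(np.shape(mu))   # randomness isolated in u
    return mu + np.exp(0.5 * log_var) * u
```

As the variance shrinks, z collapses onto μ, which is why the encoder's mean output alone determines the latent code in the deterministic limit.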
Fig. 7.
Illustration of GAN-based methods for biological image reconstruction. (a) GAN, (b) pix2pix [21], (c) AmbientGAN [22], (d) cryoGAN [23]. Here x and y denote data in the image domain and the measurement domain, respectively; G and D denote the generator and discriminator, respectively. H defines the function family of the forward measurement process, parameterized by φ. Networks and variables marked in blue have learnable parameters optimized with gradient descent.
Fig. 8.
Geometric view of cycleGAN. (Y, ν) is mapped to (X, μ) by Gθ, while Hφ does the opposite. The two mappings, i.e., the generators, are optimized by simultaneously minimizing d(μ, μθ) and d(ν, νφ).
Fig. 9.
Network architecture of cycleGAN. Gθ: Y → X and Hφ: X → Y are the generators responsible for inter-domain mapping. DX and DY are the discriminators, which construct LGAN. The GAN loss is optimized simultaneously with Lcycle.
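The cycle-consistency term Lcycle can be sketched as follows, using the caption's notation Gθ: Y → X and Hφ: X → Y. The L1 penalty is the choice made in the original cycleGAN paper; treating it as the norm here is an assumption, and the function name is illustrative.

```python
import numpy as np

def cycle_consistency(x, y, G, H):
    """L_cycle = ||G(H(x)) - x||_1 + ||H(G(y)) - y||_1:
    mapping a sample to the other domain and back should recover it."""
    return np.mean(np.abs(G(H(x)) - x)) + np.mean(np.abs(H(G(y)) - y))
```

This term is what allows training with unpaired data: without it, the GAN losses alone only match distributions and place no constraint tying an individual y to its reconstruction G(y).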
Fig. 10.
ProjectionGAN for the reconstruction of ODT [31]. (a) Conventional Rytov reconstruction via Fourier binning, (b) Gerchberg-Papoulis (GP) algorithm, (c) model-based iterative method using total variation (TV), and (d) reconstruction via projectionGAN. Artifacts, including elongation along the optical axis, are visible in the xz and yz cut views of (a) and (c). The result in (b) is contaminated with residual noise in the xz and yz planes. The result in (d) shows a high-resolution reconstruction without such artifacts, along with boosted RI values.

References

    1. Jing L and Tian Y, “Self-supervised visual feature learning with deep neural networks: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
    2. Krull A, Buchholz T-O, and Jug F, “Noise2Void - learning denoising from single noisy images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 2129–2137.
    3. Batson J and Royer L, “Noise2Self: blind denoising by self-supervision,” in Proceedings of the International Conference on Machine Learning, 2019, pp. 524–533.
    4. Yaman B, Hosseini SAH, Moeller S, Ellermann J, Ugurbil K, and Akcakaya M, “Self-supervised learning of physics-guided reconstruction neural networks without fully-sampled reference data,” Magnetic Resonance in Medicine, vol. 84, no. 6, pp. 3172–3191, Dec 2020.
    5. Buchholz T-O, Krull A, Shahidi R, Pigino G, Jékely G, and Jug F, “Content-aware image restoration for electron microscopy,” Methods in Cell Biology, vol. 152, pp. 277–289, 2019.
