Label2label: training a neural network to selectively restore cellular structures in fluorescence microscopy

Lisa Sophie Kölln et al. J Cell Sci. 2022 Feb 1;135(3):jcs258994. doi: 10.1242/jcs.258994. Epub 2022 Feb 10.

Abstract

Immunofluorescence microscopy is routinely used to visualise the spatial distribution of proteins that dictates their cellular function. However, unspecific antibody binding often results in high cytosolic background signals, decreasing the image contrast of a target structure. Recently, convolutional neural networks (CNNs) were successfully employed for image restoration in immunofluorescence microscopy, but current methods cannot correct for these background signals. We report a new method that trains a CNN to reduce unspecific signals in immunofluorescence images; we name this method label2label (L2L). In L2L, a CNN is trained with image pairs of two non-identical labels that target the same cellular structure. We show that, after L2L training, a network predicts images with significantly increased contrast of a target structure, which is further improved by implementing a multiscale structural similarity loss function. Our results suggest that sample differences in the training data decrease the hallucination effects that are observed with other methods. We further assess the performance of a cycle generative adversarial network, and show that a CNN can be trained to separate structures in superposed immunofluorescence images of two targets.
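For readers who want a concrete picture of the training setup, the following is a minimal sketch in PyTorch. It assumes co-registered image pairs of two labels of the same structure (e.g. one label as input, the other as benchmark), a placeholder CNN, and an "L1 plus 3-scale SSIM" loss in the spirit of the L3S-SSIM loss referenced in the figures below; the architecture, SSIM window, and loss weighting are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal L2L training sketch (PyTorch). Assumptions: co-registered image
# pairs of two labels of the same structure, intensities scaled to [0, 1];
# the small CNN and the loss weighting are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Stand-in restoration network; not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM with a uniform 11x11 window (Gaussian in the original)."""
    mu_x = F.avg_pool2d(x, 11, 1, 5)
    mu_y = F.avg_pool2d(y, 11, 1, 5)
    var_x = F.avg_pool2d(x * x, 11, 1, 5) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 11, 1, 5) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 11, 1, 5) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def l3s_ssim_loss(pred, target, scales=3):
    """Hypothetical 'L1 + 3-scale SSIM' loss; equal weights are an assumption."""
    loss = F.l1_loss(pred, target)
    for _ in range(scales):
        loss = loss + (1.0 - ssim(pred, target)) / scales
        pred, target = F.avg_pool2d(pred, 2), F.avg_pool2d(target, 2)
    return loss

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# Dummy tensors stand in for (input label, benchmark label) image crops.
pairs = [(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)) for _ in range(4)]
for label_a, label_b in pairs:
    pred = model(label_a)
    loss = l3s_ssim_loss(pred, label_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In actual use, `pairs` would be replaced by a data loader over matched crops of the two labelled channels.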

Keywords: Antibody labelling; Cellular structures; Content-aware image restoration; Convolutional neural networks; Fluorescence microscopy; Noise2noise.


Conflict of interest statement

Competing interests The authors declare no competing or financial interests.

Figures

Fig. 1.
Qualitative L2L and N2N results for images of actin. (A) Confocal image pair of a fixed HeLa cell that was dual labelled with the anti-β-actin antibody AC-15 and a phalloidin stain; this pair was excluded from the CNN training. Scale bar: 20 µm. (B) Reconstructed image of AC-15 by a CNN after L2L training with images of AC-15/phalloidin as training input/benchmark, using an L3S-SSIM loss function. (C) Original and processed images of AC-15 for two ROIs (6 µm×6 µm). From left to right: raw image, restored images after N2N and L2L training with an L1 and L3S-SSIM loss function, respectively, and a 20-frame average. (D) The corresponding image of phalloidin, and the RMS map between the raw image of AC-15 and the network's prediction after L2L training.
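The RMS map in panel D can be read as a local root-mean-square of the residual between the raw image and the network prediction. A short sketch under that assumption follows; the sliding-window definition and the 11-px window size are illustrative choices, not taken from the paper.

```python
# Sketch of a local RMS difference map (cf. Fig. 1D), assuming a
# sliding-window RMS of (raw - prediction); the 11-px window is illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def rms_map(raw, pred, window=11):
    """Per-pixel RMS of the residual over a local square window."""
    sq_residual = (raw.astype(float) - pred.astype(float)) ** 2
    return np.sqrt(uniform_filter(sq_residual, size=window))
```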
Fig. 2.
Loss function-dependent L2L and N2N results for images of the microtubule network and caveolae. (A-D) Confocal images of MeT5A cells that were dual labelled with the anti-tubulin antibodies DM1A (A) and YOL1/34 (B), and STED images of MeT5A cells that were dual labelled with the anti-CAVIN-1 antibody D1P6W (C) and the anti-CAV1 antibody 4H312 (D). (A,C) From left to right: raw image of a representative training input, reconstructed images after N2N and L2L training with an L1 or L3S-SSIM loss function, and a corresponding 20-frame average or high-resolution STED image. (B,D) Representative training benchmarks for L2L training are displayed. Images shown were excluded from the network training. Scale bars: 1 µm (A,B); 200 nm (C,D).
Fig. 3.
Network architecture-dependent L2L results for images of PXN. Confocal images of a HeLa cell dual labelled with the two anti-PXN antibodies 5H11 and Y113, which were used as training input and benchmark for L2L training. (A) From left to right: raw image of 5H11 and the images restored by a CNN and a CycleGAN after L2L training with paired and unpaired images, respectively. (B) The corresponding image of Y113. Scale bar: 20 µm. (C) Training results for two ROIs (6 µm×6 µm). From left to right: input (5H11), the images restored by a CNN after N2N and L2L training with an L1 loss function, the image restored by a CycleGAN, and a 20-frame average. (D) Corresponding benchmark images (Y113) for L2L training and their predictions by a CNN after L2L training as outlined above. A CNN that was trained with L2L data in-paints focal adhesions (white arrowheads) and reduces cytosolic protein signal (see ROI 2) for both the training input and benchmark. Images shown were excluded from the network training.
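Panel A compares a CNN trained on paired images with a CycleGAN trained on unpaired ones. The defining ingredient of the unpaired setting is a cycle-consistency term that couples two generators; the sketch below shows only that term, with the generators and the weight `lam` as placeholders rather than the authors' configuration.

```python
# Schematic cycle-consistency loss for unpaired training (cf. CycleGAN);
# g_ab and g_ba are the two generators, lam is an illustrative weight.
import torch.nn.functional as F

def cycle_consistency(g_ab, g_ba, real_a, real_b, lam=10.0):
    """A -> B -> A and B -> A -> B should reproduce the originals."""
    rec_a = g_ba(g_ab(real_a))  # translate label A to B, then back
    rec_b = g_ab(g_ba(real_b))  # translate label B to A, then back
    return lam * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))
```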
Fig. 4.
Qualitative results after training a CNN to separate cellular structures in superposed images of a nuclear stain and an antibody against a plasma membrane protein. (A) Training input and benchmark images of a MeT5A cell that was dual labelled with the nuclear stain SYTOX Green and an anti-CD44 antibody, and corresponding reconstructions after training a CNN with an L3S-SSIM loss function. Scale bar: 10 µm. The image pairs were obtained via sequential imaging by changing the excitation wavelength. (B) Qualitative result for an ROI (5 µm×5 µm). Prediction success depends on the degree of superposition of the two labels. Structures appear slightly blurry in the restorations compared with the benchmark, but image noise and jitter are reduced. The images shown were excluded from the training.
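The separation task can be framed as a one-channel-to-two-channel mapping: the superposed image is the input and the two single-label images form the target. The paper acquired its pairs by sequential imaging, so the summation in the sketch below is only a stand-in for that acquisition.

```python
# Hypothetical data arrangement for the separation task (cf. Fig. 4):
# the summed image stands in for the experimentally superposed acquisition.
import torch

def make_separation_pair(nuclear_img, membrane_img):
    """nuclear_img, membrane_img: (H, W) tensors of the two labels."""
    superposed = nuclear_img + membrane_img            # 1-channel input
    target = torch.stack([nuclear_img, membrane_img])  # 2-channel target
    return superposed.unsqueeze(0), target             # (1, H, W), (2, H, W)
```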
Fig. 5.
Repeated cross validation for L2L training. The mean relative change (input/benchmark versus restoration/benchmark) of the NRMSE and 5S-SSIM index after L2L training with image pairs of different cellular structures. Boxes represent 25th and 75th percentiles with median; whiskers represent standard deviations. The image pairs for the trainings were generated from cells that were dual labelled for the actin cytoskeleton (Ntot=68), tubulin (Ntot=51), caveolae (Ntot=60) or PXN (Ntot=77); raw image pairs were randomly selected from each total dataset for the cross validation. Each data point is the mean value of an eightfold (actin and PXN) or tenfold (tubulin and caveolae) cross validation, which was repeated for small image pair numbers.
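The plotted quantity is the relative change in error when the restoration replaces the input, both measured against the benchmark. A sketch of the NRMSE half of that comparison follows, assuming range normalisation; the paper's exact normalisation is not stated here.

```python
# Sketch of NRMSE and its relative change after restoration (cf. Fig. 5).
# Range normalisation is an assumption; the paper's choice may differ.
import numpy as np

def nrmse(ref, img):
    """RMSE between img and a reference, normalised by the reference range."""
    rmse = np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2))
    return rmse / (ref.max() - ref.min())

def relative_change(benchmark, raw_input, restoration):
    """Negative values: the restoration is closer to the benchmark than the input."""
    before = nrmse(benchmark, raw_input)
    after = nrmse(benchmark, restoration)
    return (after - before) / before
```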
