Review

Live-cell imaging in the deep learning era

Joanna W Pylvänäinen et al. Curr Opin Cell Biol. 2023 Dec;85:102271. doi: 10.1016/j.ceb.2023.102271. Epub 2023 Oct 27.

Abstract

Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (e.g., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past few years, the development of bioimage analysis tools, including deep learning, has been changing how we perform live imaging. Here we briefly cover important computational methods that aid live imaging by carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.


Conflict of interest statement

Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Figure 1. Live-cell imaging: main challenges and computational solutions.
(a) Live fluorescence imaging presents unique challenges: it requires a careful balance between managing the sample's sensitivity to light and ensuring the spatial, temporal, and spectral resolution needed to accurately observe the intended biological phenomena. After data acquisition, researchers need to select the most effective methods to derive biological insights from their videos, with strategies ranging from manual analysis to turn-key solutions or custom-developed analysis pipelines. Each approach has strengths and limitations, particularly in throughput, speed, and accuracy. These trade-offs, illustrated here as spider plots, underscore the compromises required when acquiring and analyzing live imaging data. (b) Computational tools designed to handle live cell imaging datasets fall primarily into two categories: (i) tools that improve live cell imaging data and mitigate phototoxicity, and (ii) tools that facilitate data extraction and analysis. The former category includes methods for drift correction, denoising, resolution enhancement, and artificial labeling. The latter encompasses segmentation, object detection, and tracking tools, followed by time series analysis. Integrating these tools into microscope acquisition software to autonomously control acquisition parameters paves the way for self-driving microscopes. The tool categories are displayed in no particular order, as their use depends on the dataset and the analysis needs. The central arrow illustrates that self-driving microscopes can dynamically use these approaches to control microscope acquisition parameters.
Figure 2. Deep learning and video analysis.
(a) The DL pipeline. A DL model must first be trained on a training dataset. This step is generally time-consuming, taking hours to weeks depending on the size of the training dataset. Once trained, the model can be applied directly to other images to generate predictions. This second step is generally much faster (seconds to minutes). (b) Types of training datasets. In supervised training, a collection of representative input images, each paired with its anticipated result (i.e., the ground truth), is given to the DNN. Here, the training dataset consists of matching pairs of noisy and high signal-to-noise ratio images. Alternative training strategies include unsupervised training, where the model is trained with inputs and outputs that are not necessarily from the same field of view, and self-supervised training, where paired datasets are generated solely from the input images. (c) DL and data dimensions. Live cell imaging datasets can have multiple dimensions. Given that DL tools for bioimage analysis are typically designed to handle up to three dimensions, applying them to videos requires different strategies depending on how many dimensions the data contain; the simplest is to apply a 2D model frame by frame, as sketched below. Here, a 2D model is one that can process 2D data, and a 3D model is one that can process 3D data. The microscopy images displayed in all panels show breast cancer cells labeled with a silicon rhodamine DNA dye to visualize the nuclei, imaged using a spinning disk confocal microscope.
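The frame-by-frame strategy from panel (c) amounts to a simple loop over the time axis. Below is a minimal Python/NumPy sketch, assuming a (T, Y, X) image stack; model_2d is a hypothetical placeholder for any trained 2D network, not a tool from the paper.

    import numpy as np

    def model_2d(frame):
        # Hypothetical placeholder: stands in for any trained 2D network
        # (e.g., a denoiser) mapping a (Y, X) frame to a processed frame.
        return frame

    def apply_2d_model_to_video(stack, model=model_2d):
        """Apply a 2D model independently to each time point of a (T, Y, X) stack."""
        return np.stack([model(frame) for frame in stack], axis=0)

    # Usage: processed = apply_2d_model_to_video(video)  # video: (T, Y, X) ndarray

The same pattern extends to higher-dimensional data, e.g., looping a 3D model over the time axis of a (T, Z, Y, X) stack.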
Figure 3. Examples of computational tools that can improve live cell imaging movies.
This figure illustrates the power and versatility of computational tools in enhancing the quality, resolution, and content of various types of microscopy images. (a) Time projection of drifting live images of nuclei, captured with a widefield microscope and corrected using Fast4DReg [10]. The color gradient, transitioning from purple (first frame) to white (last frame), denotes temporal progression. Scale bar: 50 μm. (b) A moving cancer cell in the mouse lung vasculature, imaged with an Airyscan confocal microscope, displayed as a maximum-intensity projection. Channel misalignment was corrected using Fast4DReg [10]. Scale bar: 10 μm. (c) Noisy images of nuclei, acquired using a spinning disk confocal microscope, were denoised using a CARE 2D model ([17], as described in Ref. [9]). Scale bar: 50 μm. (d) Breast cancer cells labeled with Lifeact-RFP were imaged live using 3D SIM. Images were restored using a CARE 3D model ([17], as described in Ref. [19]). Scale bar: 5 μm. (e) Cells labeled with Lifeact were imaged using a widefield microscope [45]. The increase in image resolution was achieved using the DFCAN deep learning network (as described in Ref. [33]). Scale bar: 5 μm. (f) This illustration showcases how a DL network such as CAFI can increase the temporal resolution of a live cell imaging dataset through smart frame interpolation [39]. (g) Brightfield microscopy was used to image migrating breast cancer cells, and the nuclei image was digitally generated from the brightfield image using a Pix2pix model [46]. Scale bar: 100 μm. (h) Breast cancer cells labeled with Lifeact-RFP were imaged using a spinning disk confocal microscope. The nuclei image was digitally generated from the Lifeact image using a Pix2pix model ([46], as described in Ref. [19]). Scale bar: 100 μm.
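Drift correction as in panel (a) is commonly implemented by registering every frame to a reference via phase cross-correlation. The following is a minimal scikit-image sketch of that general approach, assuming a (T, Y, X) NumPy stack; it is illustrative only and is not the Fast4DReg implementation.

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def correct_drift(stack, ref_frame=0, upsample_factor=10):
        """Register each (Y, X) frame of a (T, Y, X) stack to a reference frame
        using subpixel phase cross-correlation."""
        ref = stack[ref_frame]
        corrected = np.empty_like(stack)
        for t, frame in enumerate(stack):
            # Estimated (dy, dx) shift needed to align this frame with the reference
            shift_yx, _, _ = phase_cross_correlation(
                ref, frame, upsample_factor=upsample_factor)
            corrected[t] = nd_shift(frame, shift_yx)  # move frame onto the reference
        return corrected

For channel alignment as in panel (b), the same estimation step can be run between channels instead of between time points, applying the resulting shift to one channel.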
Figure 4. Extracting temporal information from live imaging data.
(a) Widefield fluorescence microscopy was used to image breast cancer cells expressing a GFP-tagged ERK reporter (dataset described in Ref. [48]). The cytoplasm was segmented using a custom Cellpose model [83], and cell movements were tracked with Cellpose in TrackMate [48]. Changes in cell area over time were plotted using PlotTwist [70]. Scale bar: 50 μm. (b) Lifeact-RFP-expressing cancer cells were recorded using a spinning disk confocal microscope. Dynamic changes are visualized in a single image using a time projection (purple to white) and a kymograph along a defined line (see the sketch after this caption). Scale bar: 50 μm. (c) Cancer cell spheroids were imaged at low resolution using an incubator microscope. After segmentation and tracking, a data-driven time-series analysis of cell shape, size, and movement enabled classification of the spheroids' phenotypic states and visualization of the phenotypic space (figure panel adapted from Ref. [66]; only the font and image sizes were changed with respect to the original figure). (d) Self-driving microscopy provides real-time feedback during image acquisition. The acquired data are analyzed on the fly, enabling microscope settings and acquisition parameters to be adjusted to optimize data collection.
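The time projection and kymograph in panel (b) reduce to simple array operations. A minimal NumPy sketch, assuming a (T, Y, X) stack and a horizontal line of interest at a given image row (the function names are ours, for illustration):

    import numpy as np

    def time_projection(stack):
        """Maximum-intensity projection over time: collapses (T, Y, X) to (Y, X)."""
        return stack.max(axis=0)

    def kymograph(stack, row):
        """Kymograph along a horizontal line: stacks the chosen image row from
        every frame, giving a (T, X) image of position versus time."""
        return stack[:, row, :]

    # Usage: kymo = kymograph(video, row=128)  # video: (T, Y, X) ndarray

In the kymograph, a structure moving along the line appears as a tilted streak whose slope gives its velocity; lines that are not axis-aligned require sampling intensities along the line in each frame instead of taking a single row.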

References

    1. Icha J, Weber M, Waters JC, Norden C. Phototoxicity in live fluorescence microscopy, and how to avoid it. Bioessays. 2017;39:1700003. - PubMed
    2. Schmidt R, Weihs T, Wurm CA, Jansen I, Rehman J, Sahl SJ, Hell SW. MINFLUX nanometer-scale 3D imaging and microsecond-range tracking on a common fluorescence microscope. Nat Commun. 2021;12:1478. doi: 10.1038/s41467-021-21652-z. - DOI - PMC - PubMed
    3. Castello M, Tortarolo G, Buttafava M, Deguchi T, Villa F, Koho S, Pesce L, Oneto M, Pelicci S, Lanzanó L, et al. A robust and versatile platform for image scanning microscopy enabling super-resolution FLIM. Nat Methods. 2019;16:175–178. - PubMed
    4. Zhao Y, Zhang M, Zhang W, Zhou Y, Chen L, Liu Q, Wang P, Chen R, Duan X, Chen F, et al. Isotropic super-resolution light-sheet microscopy of dynamic intracellular structures at subsecond timescales. Nat Methods. 2022;19:359–369. - PubMed
    5. Daetwyler S, Fiolka RP. Light-sheets and smart microscopy, an exciting future is dawning. Commun Biol. 2023;6:1–11. doi: 10.1038/s42003-023-04857-4. - DOI - PMC - PubMed
