Comput Biol Med. 2021 Jul;134:104501. doi: 10.1016/j.compbiomed.2021.104501. Epub 2021 May 31.

ASIST: Annotation-free synthetic instance segmentation and tracking by adversarial simulations


Quan Liu et al. Comput Biol Med. 2021 Jul.

Abstract

Background: The quantitative analysis of microscope videos often requires instance segmentation and tracking of cellular and subcellular objects. The traditional pipeline consists of two stages: (1) performing instance segmentation of each frame, and (2) associating objects frame by frame. Recently, pixel-embedding-based deep learning has performed these two steps simultaneously as a single-stage holistic solution. Pixel-embedding-based learning forces pixels from the same object toward similar feature representations while maximizing the difference between feature representations of different objects. However, such deep learning methods require annotations that are consistent not only spatially (for segmentation) but also temporally (for tracking). In computer vision, producing training data with consistent segmentation and tracking annotations is resource intensive, and the burden is multiplied in microscopy imaging by (1) dense objects (e.g., overlapping or touching) and (2) high dynamics (e.g., irregular motion and mitosis). Adversarial simulation has successfully alleviated the lack of such annotations in dynamic scenes in computer vision, for example by using simulated environments (e.g., computer games) to train real-world self-driving systems.
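For readers unfamiliar with pixel-embedding learning, the following is a minimal sketch of one common discriminative "pull-push" formulation of such a loss; the function name, margin values, and tensor shapes are illustrative assumptions, not taken from the paper.

import torch

def pixel_embedding_loss(embeddings, labels, delta_pull=0.5, delta_push=1.5):
    # embeddings: (D, H, W) per-pixel feature vectors; labels: (H, W) instance
    # ids with 0 as background. Margins are illustrative, not from the paper.
    ids = labels.unique()
    ids = ids[ids != 0]
    means, pull = [], embeddings.new_zeros(())
    for i in ids:
        mask = labels == i
        e = embeddings[:, mask]                      # (D, N_i) pixels of instance i
        mu = e.mean(dim=1)                           # mean embedding of the instance
        means.append(mu)
        # Pull term: draw each pixel toward its own instance mean.
        dist = (e - mu[:, None]).norm(dim=0)
        pull = pull + torch.clamp(dist - delta_pull, min=0).pow(2).mean()
    pull = pull / max(len(ids), 1)
    # Push term: keep the means of different instances apart.
    push = embeddings.new_zeros(())
    if len(means) > 1:
        M = torch.stack(means)                       # (K, D) instance means
        gaps = torch.cdist(M, M)                     # pairwise mean-to-mean distances
        off_diag = ~torch.eye(len(means), dtype=torch.bool, device=gaps.device)
        push = torch.clamp(2 * delta_push - gaps[off_diag], min=0).pow(2).mean()
    return pull + push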

Methods: In this paper, we propose an annotation-free synthetic instance segmentation and tracking (ASIST) method that combines adversarial simulation with single-stage pixel-embedding-based learning.

Contribution: The contribution of this paper is three-fold: (1) the proposed method aggregates adversarial simulations and single-stage pixel-embedding-based deep learning; (2) the method is assessed on both cellular (i.e., HeLa cell) and subcellular (i.e., microvilli) objects; and (3) to the best of our knowledge, this is the first study to explore annotation-free instance segmentation and tracking for microscope videos.

Results: The ASIST method achieved an important step forward when compared with fully supervised approaches: ASIST shows 7%-11% higher segmentation, detection, and tracking performance on microvilli relative to fully supervised methods, and comparable performance on HeLa cell videos.

Keywords: Annotation free; Cellular; Segmentation; Subcellular; Tracking.


Figures

Fig. 1.
The upper panel shows the existing pixel-embedding-based single-stage instance segmentation and tracking method, which is trained on real microscope video with manual annotations. The lower panel presents our proposed annotation-free ASIST method, trained on synthesized data and annotations from adversarial simulations.
Fig. 2.
Real and synthetic videos of HeLa cells and microvilli, characterized by three aspects: shape, appearance, and dynamics. The "shape" is defined as the underlying shape of the manual annotations. The "appearance" is defined by the varied appearances of objects. The "dynamics" indicates the motion of cellular and subcellular objects.
Fig. 3.
This figure shows the proposed ASIST method. First, CycleGAN-based image-annotation synthesis is trained using real microscope images and simulated annotations. Second, synthesized microscope videos are generated from simulated annotation videos. Last, an embedding-based instance segmentation and tracking algorithm is trained using the synthetic training data. For HeLa cell videos, a new annotation refinement step is introduced to capture the larger shape variations.
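As a rough illustration of the first step, a CycleGAN objective couples two generators through adversarial and cycle-consistency terms. The sketch below assumes least-squares GAN losses and uses illustrative network names (G_ann2img, G_img2ann, etc.) that do not come from the paper.

import torch
import torch.nn.functional as F

def cyclegan_generator_loss(G_ann2img, G_img2ann, D_img, D_ann,
                            real_img, real_ann, lam=10.0):
    # Translate in both directions across the unpaired domains.
    fake_img = G_ann2img(real_ann)   # simulated annotation -> synthetic microscope image
    fake_ann = G_img2ann(real_img)   # real microscope image -> pseudo-annotation
    # Least-squares adversarial terms: each generator tries to fool its discriminator.
    pred_img, pred_ann = D_img(fake_img), D_ann(fake_ann)
    adv = (F.mse_loss(pred_img, torch.ones_like(pred_img))
           + F.mse_loss(pred_ann, torch.ones_like(pred_ann)))
    # Cycle-consistency: translating back should reconstruct the original input.
    cyc = (F.l1_loss(G_img2ann(fake_img), real_ann)
           + F.l1_loss(G_ann2img(fake_ann), real_img))
    return adv + lam * cyc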
Fig. 4.
The left panel shows real microscope videos as well as manual annotations. The right panel presents our synthetic videos and simulated annotations.
Fig. 5.
The upper panel shows the CycleGAN trained on real images and simulated annotations with Gaussian blurring. The lower panel shows the CycleGAN trained on the same data without Gaussian blurring. Generator B is used to generate synthetic videos with larger shape variations from circle representations, while Generator A* generates sharp segmentations for annotation registration.
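The blurring step itself is simple; a minimal sketch, assuming SciPy and an illustrative sigma value, might look like this:

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_annotation(ann, sigma=2.0):
    # Soften a binary simulated annotation before CycleGAN training so the
    # generator is free to hallucinate larger shape variation.
    # sigma is an illustrative choice, not taken from the paper.
    return gaussian_filter(ann.astype(np.float32), sigma=sigma)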
Fig. 6.
This figure shows the workflow of the annotation refinement approach. The simulated circle annotations are fed into Generator B to synthesize cell images. We used Generator A* from Fig. 5 to generate sharp binary masks from the synthetic images. Then, we registered the simulated circle annotations to the binary masks to match the shape of the cells in the synthetic images. Last, an annotation cleaning step was introduced to delete inconsistent annotations between the deformed instance object masks and the binary masks.
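One plausible reading of the cleaning step is an overlap test: drop any deformed instance whose coverage by the sharp binary mask falls below a threshold. The sketch below follows that reading; the function name and threshold value are assumptions, not from the paper.

import numpy as np

def clean_annotations(instance_labels, binary_mask, min_overlap=0.5):
    # instance_labels: (H, W) integer ids, 0 = background.
    # binary_mask: (H, W) bool mask produced by Generator A*.
    # min_overlap is an illustrative threshold, not taken from the paper.
    cleaned = instance_labels.copy()
    for i in np.unique(instance_labels):
        if i == 0:
            continue
        inst = instance_labels == i
        covered = np.logical_and(inst, binary_mask).sum() / inst.sum()
        if covered < min_overlap:
            cleaned[inst] = 0   # delete the inconsistent instance
    return cleaned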
Fig. 7.
This figure shows the instance segmentation and tracking results on the real microvilli testing video.
Fig. 8.
This figure shows the instance segmentation and tracking results on the real HeLa cell testing video.
