Cell segmentation and tracking using CNN-based distance predictions and a graph-based matching strategy

Tim Scherr et al. PLoS One. 2020 Dec 8;15(12):e0243219. doi: 10.1371/journal.pone.0243219. eCollection 2020.
Abstract

The accurate segmentation and tracking of cells in microscopy image sequences is an important task in biomedical research, e.g., for studying the development of tissues, organs or entire organisms. However, the segmentation of touching cells in images with a low signal-to-noise ratio is still a challenging problem. In this paper, we present a method for the segmentation of touching cells in microscopy images. By using a novel representation of cell borders, inspired by distance maps, our method can exploit not only touching but also close cells in the training process. Furthermore, this representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in, or absent from, the training data. For the prediction of the proposed neighbor distances, an adapted U-Net convolutional neural network (CNN) with two decoder paths is used. In addition, we adapt a graph-based cell tracking algorithm to evaluate our proposed method on the task of cell tracking. The adapted tracking algorithm incorporates a movement estimate into the cost function to re-link tracks with missing segmentation masks over a short sequence of frames. Our combined tracking-by-detection method proved its potential in the IEEE ISBI 2020 Cell Tracking Challenge (http://celltrackingchallenge.net/), where, as team KIT-Sch-GE, we achieved multiple top-three rankings, including two top performances, using a single segmentation model for the diverse data sets.
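As a rough illustration of such a movement-estimated linking cost, consider the following Python sketch. It is not the authors' cost function: the helper names (estimate_velocity, link_cost), the extrapolation window, and the use of plain centroid distances are assumptions.

```python
import numpy as np

def estimate_velocity(positions, window=3):
    """Hypothetical helper: mean displacement over the last `window`
    centroids of a track (rows of an (n, d) array)."""
    pos = np.asarray(positions, dtype=float)
    if len(pos) < 2:
        return np.zeros(pos.shape[-1])
    return np.diff(pos[-window:], axis=0).mean(axis=0)

def link_cost(positions, candidate, frame_gap):
    """Cost of linking a track to a candidate detection `frame_gap`
    frames after its last mask: the distance between the extrapolated
    track position and the candidate centroid. Low costs across short
    gaps allow tracks with missing segmentation masks to be re-linked."""
    last = np.asarray(positions[-1], dtype=float)
    predicted = last + frame_gap * estimate_velocity(positions)
    return float(np.linalg.norm(predicted - np.asarray(candidate)))
```

With this, a track whose mask is missing for one or two frames still receives a small cost for the detection closest to its extrapolated position, instead of being terminated.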


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Training data representations for the training of deep learning models.
Image (a) and ground truth (b) show a crop of the simulated Cell Tracking Challenge data set Fluo-N2DH-SIM+ [7, 9]. Generated boundaries (c) and borders (d) can be used to split touching cells. However, many training data sets contain only a few touching cells, resulting in few training samples for borders and boundaries between cells. The combination of cell distances (e) with neighbor distances (f) aims to solve this problem, since models can also learn from close cells.
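As an illustration of the cell distance representation in panel (e), the following sketch computes, for each labeled cell, the normalized Euclidean distance to the cell border. It is a minimal example assuming per-cell normalization to [0, 1]; it is not the authors' reference implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def cell_distances(instance_mask):
    """Sketch of a cell distance label as in panel (e): for each
    labeled cell (0 = background), the Euclidean distance to the
    nearest non-cell pixel, normalized per cell (assumption)."""
    out = np.zeros(instance_mask.shape, dtype=float)
    for label in np.unique(instance_mask):
        if label == 0:
            continue
        cell = instance_mask == label
        dist = distance_transform_edt(cell)  # distance to the cell border
        out[cell] = dist[cell] / dist.max()  # high in the cell center
    return out
```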
Fig 2
Fig 2. Overview of the proposed segmentation method using distance predictions (adapted from [13]).
The CNN consists of a single encoder that is connected to both decoder paths. The network is trained to predict cell distances and neighbor distances that are used for the watershed-based post-processing. The input image shows a crop of the Cell Tracking Challenge data set Fluo-N2DH-GOWT1 [7, 9].
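The dual-decoder design can be sketched in PyTorch as follows. Depth, channel widths, and the plain convolution blocks are illustrative assumptions; this is not the authors' network definition, only a minimal single-encoder/two-decoder U-Net of the kind the caption describes.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class DualDecoderUNet(nn.Module):
    """One shared encoder, two independent decoder paths: one head
    regresses cell distances, the other neighbor distances."""
    def __init__(self, ch=(1, 32, 64, 128)):  # illustrative widths
        super().__init__()
        self.enc = nn.ModuleList(block(a, b) for a, b in zip(ch, ch[1:]))
        self.pool = nn.MaxPool2d(2)
        self.dec_cell = self._decoder(ch)
        self.dec_neigh = self._decoder(ch)
        self.head_cell = nn.Conv2d(ch[1], 1, 1)
        self.head_neigh = nn.Conv2d(ch[1], 1, 1)

    @staticmethod
    def _decoder(ch):
        ups, blocks = nn.ModuleList(), nn.ModuleList()
        for a, b in zip(ch[:0:-1], ch[-2:0:-1]):  # e.g. 128->64, 64->32
            ups.append(nn.ConvTranspose2d(a, b, 2, stride=2))
            blocks.append(block(2 * b, b))  # 2*b after skip concatenation
        return nn.ModuleList([ups, blocks])

    def _decode(self, dec, x, skips):
        ups, blocks = dec
        for up, blk, skip in zip(ups, blocks, reversed(skips)):
            x = blk(torch.cat([up(x), skip], dim=1))
        return x

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < len(self.enc) - 1:  # no pooling after the bottleneck
                skips.append(x)
                x = self.pool(x)
        return (self.head_cell(self._decode(self.dec_cell, x, skips)),
                self.head_neigh(self._decode(self.dec_neigh, x, skips)))
```

A forward pass on, e.g., a (1, 1, 64, 64) tensor then yields one cell distance map and one neighbor distance map at input resolution; sharing the encoder forces both regression tasks to rely on the same learned features.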
Fig 3
Fig 3. Main steps of the neighbor distance creation.
After the automated selection of a cell (a), indicated in red, the selected cell and the background are converted to foreground (white in b), while the other cells are converted to background (black in b). Then, the distance transform is calculated (c), cut to the cell region, and normalized (d). After inversion (e), these steps are repeated for the remaining cells (f). Finally, a grayscale closing is applied to the neighbor distances (g), which are then scaled (h). Shown is a crop of the Broad Bioimage Benchmark Collection data set BBBC039v1 [26].
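The steps (a)-(h) translate into the following Python sketch, assuming SciPy's distance transform and grayscale closing; the closing size and the power used for scaling are placeholder values, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, grey_closing

def neighbor_distances(instance_mask, closing_size=3, scale_power=3):
    """Sketch of the neighbor distance label following Fig 3:
    per cell, the distance to the nearest *other* cell is computed,
    cut to the cell region, normalized, and inverted; a grayscale
    closing and a scaling finish the label."""
    out = np.zeros(instance_mask.shape, dtype=float)
    for label in np.unique(instance_mask):
        if label == 0:
            continue
        cell = instance_mask == label
        # (b) other cells become background, everything else foreground
        others = (instance_mask != 0) & ~cell
        if not others.any():
            continue  # no neighbors: neighbor distance stays 0
        # (c) distance to the nearest other cell
        dist = distance_transform_edt(~others)
        # (d) cut to the cell region and normalize
        d = dist[cell]
        d = d / d.max() if d.max() > 0 else d
        # (e) invert: pixels close to neighbors get high values
        out[cell] = 1.0 - d
    # (g) grayscale closing, (h) scaling to emphasize high values
    out = grey_closing(out, size=(closing_size, closing_size))
    return out ** scale_power
```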
Fig 4
Fig 4. Robustness of training data representations to annotation inconsistencies.
Small changes in the ground truth, simulated with morphological erosions and dilations, result in different boundaries and borders (first and second row). The difference images between the first and second rows show that the changes in the distance labels are smoother. Shown is a crop of the Cell Tracking Challenge data set Fluo-N2DH-SIM+ [7, 9].
Fig 5
Fig 5. Overview of the watershed post-processing for segmentation.
The post-processing consists of threshold-based seed extraction and mask creation, followed by a watershed. The predictions show a 2D crop of the Cell Tracking Challenge data set Fluo-N3DL-TRIC [7, 9].
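A minimal version of this post-processing, assuming scikit-image's watershed and placeholder thresholds (the paper tunes its own), could look as follows.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def postprocess(cell_dist, neigh_dist, t_mask=0.1, t_seed=0.5):
    """Threshold-based seed extraction, mask creation, and watershed,
    as in Fig 5. `t_mask` and `t_seed` are assumed values. Seeds are
    regions where the cell distance is high and the neighbor distance
    is low, i.e., cell centers away from touching regions."""
    mask = cell_dist > t_mask
    seeds = (cell_dist - neigh_dist) > t_seed
    markers, _ = ndimage.label(seeds)
    # Flood from the seeds over the inverted cell distance map,
    # restricted to the foreground mask.
    return watershed(-cell_dist, markers=markers, mask=mask)
```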
Fig 6
Fig 6. Graph construction steps, illustrated for four segmented objects.
Edges added in a construction step are black; edges added in previous steps are gray. The gray nodes (O) correspond to segmented objects. The segmented objects from {t − Δt, …, t} are the last matched objects of all active tracks, whereas the segmented objects at t + 1 are not yet matched to tracks. The blue node models the appearance of objects (A), the red node the disappearance of objects (D), and the green nodes split events (S). A split event node (S) is added for each pair of objects at t + 1; it therefore has exactly two outgoing edges but can have several ingoing edges from object nodes (O). Source (S−) and sink (S+) nodes are added for the formulation as a coupled minimum-cost flow problem.
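Structurally, the construction can be sketched with networkx as below. The sketch only builds the graph; solving the coupled minimum-cost flow problem is a separate step, and all node labels, helper names, and the appearance/disappearance costs are illustrative assumptions.

```python
import itertools
import networkx as nx

def build_matching_graph(active_tracks, detections, link_cost):
    """Structural sketch of the tracking graph in Fig 6.
    `active_tracks` and `detections` are hashable object ids,
    `link_cost(track, det)` scores a link (hypothetical helper)."""
    g = nx.DiGraph()
    for trk in active_tracks:
        g.add_edge("source", ("O", trk))           # S- -> last matched object
        g.add_edge(("O", trk), "D", weight=1.0)    # disappearance node D
        for det in detections:
            g.add_edge(("O", trk), ("O'", det), weight=link_cost(trk, det))
    for det in detections:
        g.add_edge("A", ("O'", det), weight=1.0)   # appearance node A
        g.add_edge(("O'", det), "sink")            # objects at t+1 -> S+
    # One split node per pair of detections at t+1: several ingoing
    # edges from object nodes, exactly two outgoing edges.
    for d1, d2 in itertools.combinations(detections, 2):
        split = ("S", d1, d2)
        g.add_edge(split, ("O'", d1))
        g.add_edge(split, ("O'", d2))
        for trk in active_tracks:
            g.add_edge(("O", trk), split,
                       weight=0.5 * (link_cost(trk, d1) + link_cost(trk, d2)))
    return g
```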
Fig 7
Fig 7. Cell Tracking Challenge data set structure.
Since no ground truths are publicly available for the challenge sets, the two provided training sets need to be split into a set used for training and a set for evaluation.
Fig 8
Fig 8. Segmentation results on the BF-C2DL-HSC test set.
Shown are raw predictions and segmentations of a 140 px×140 px test image crop (a-g, best OPCSB models). For multi-channel outputs, channels are color-coded (cell/seed class: white, boundary/border/touching class: red, gap class: blue). The plot at the bottom shows the evaluation on the test set (h).
Fig 9
Fig 9. Segmentation results on the Fluo-N3DH-CE test set.
Shown are raw predictions and segmentations of a 140 px×140 px test image crop (a-g, best OPCSB models). For multi-channel outputs, channels are color-coded (cell/seed class: white, boundary/border/touching class: red, gap class: blue). The plot at the bottom shows the evaluation on the test set (h). Note: this is a 3D data set, and the erroneous merging of cells can result from any of the slices in which a cell appears.
Fig 10
Fig 10. Segmentation results on the Fluo-N2DL-HeLa test set.
Shown are raw predictions and segmentations of a 140 px×140 px test image crop (a-g, best OPCSB models). For multi-channel outputs, channels are color-coded (cell/seed class: white, boundary/border/touching class: red, gap class: blue). The plot at the bottom shows the evaluation on the test set (h).
Fig 11
Fig 11. Segmentation results on the BF-C2DL-MuSC test set.
Shown are raw predictions and segmentations of a 360 px×360 px test image crop (a-g, best OPCSB models). For multi-channel outputs, channels are color-coded (cell/seed class: white, boundary/border/touching class: red, gap class: blue). The plot at the bottom shows the evaluation on the test set (h).
Fig 12
Fig 12. Segmentation result of the Fluo-N3DH-CE challenge data.
The maximum intensity projection of the raw data (left) and of the segmentation (right) show that cells can be segmented well even on this challenging data set. S1 Video shows a video of a tracked developing embryo.
Fig 13
Fig 13. Tracking results on the Fluo-N2DL-HeLa challenge data.
The first raw image is overlaid with the tracks starting in the first frame. For better visibility, tracks starting in later frames are excluded. S2 Video shows a video of the tracked cells.


References

    1. Chhetri RK, Amat F, Wan Y, Höckendorf B, Lemon WC, Keller PJ. Whole-animal functional and developmental imaging with isotropic spatial resolution. Nat Methods. 2015;12:1171–1178. doi:10.1038/nmeth.3632
    2. Kobitski AY, Otte JC, Takamiya M, Schäfer B, Mertes J, Stegmaier J, et al. An ensemble-averaged, cell density-based digital model of zebrafish embryo development derived from light-sheet microscopy data with single-cell resolution. Sci Rep. 2015;5(8601):1–10. doi:10.1038/srep08601
    3. Khairy K, Keller PJ. Reconstructing embryonic development. Genesis. 2011;49:488–513. doi:10.1002/dvg.20698
    4. Caicedo JC, Goodman A, Karhohs KW, Cimini BA, Ackerman J, Haghighi M, et al. Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl. Nat Methods. 2019;16:1247–1253. doi:10.1038/s41592-019-0612-7
    5. Schott B, Traub M, Schlagenhauf C, Takamiya M, Antritter T, Bartschat A, et al. EmbryoMiner: a new framework for interactive knowledge discovery in large-scale cell tracking data of developing embryos. PLOS Comput Biol. 2018;14(4):1–18. doi:10.1371/journal.pcbi.1006128
