Sci Rep. 2023 Jul 7;13(1):11021. doi: 10.1038/s41598-023-38213-7.

WormSwin: Instance segmentation of C. elegans using vision transformer

Maurice Deserno et al. Sci Rep. 2023.

Abstract

The ability to extract the motion of a single organism from large-scale video recordings provides a means for the quantitative study of its behavior, both individual and collective. This task is particularly difficult for organisms that interact with one another, overlap, and occlude parts of their bodies in the recording. Here we propose WormSwin, an approach to extract single-animal postures of Caenorhabditis elegans (C. elegans) from recordings of many organisms in a single microscope well. Based on a transformer neural network architecture, our method segments individual worms across a range of videos and images generated in different labs. Our solution achieves an accuracy of 0.990 average precision (AP50) and comparable results on the benchmark image dataset BBBC010. Finally, it can segment the challenging overlapping postures of mating worms with an accuracy sufficient to track the organisms with a simple tracking heuristic. An accurate and efficient method for C. elegans segmentation opens up new opportunities for studying behaviors previously inaccessible due to the difficulty of extracting individual worms from video frames.
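
The abstract refers to a simple tracking heuristic but does not describe it in this excerpt. The sketch below is an illustrative assumption only: one common heuristic of this kind links per-frame segmentation masks by greedy intersection-over-union (IoU) matching. The function names and the IoU threshold are hypothetical and not taken from the paper.

    # Illustrative sketch only: greedy IoU matching of worm masks across frames.
    # Not the authors' implementation; names and threshold are assumptions.
    import numpy as np

    def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
        """Intersection-over-union of two boolean masks."""
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union > 0 else 0.0

    def link_frames(prev_masks, curr_masks, iou_thresh=0.5):
        """Greedily match current-frame masks to previous-frame masks by IoU.

        Returns a dict mapping indices in curr_masks to indices in prev_masks;
        unmatched detections would start new tracks.
        """
        matches, used_prev = {}, set()
        # Score all pairs, then assign the highest-IoU pairs first.
        pairs = sorted(
            ((mask_iou(p, c), i, j)
             for i, p in enumerate(prev_masks)
             for j, c in enumerate(curr_masks)),
            reverse=True,
        )
        for iou, i, j in pairs:
            if iou < iou_thresh:
                break
            if i in used_prev or j in matches:
                continue
            matches[j] = i
            used_prev.add(i)
        return matches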

Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1. Example images from the datasets used in this study: (a) synthetic dataset example with added ring, (b) synthetic dataset example without ring, (c) BBBC010 dataset example with mostly alive C. elegans, (d) BBBC010 dataset patch with mostly dead C. elegans, (e) mating dataset with petri-dish ring, (f) zoomed-in mating dataset patch with many overlaps.
Figure 2. Network architecture based on the Swin-L backbone and HTC. Batch norm (BN) layers in HTC are replaced by group norm (GN) + weight standardization (WS). Bounding box heads are changed from the original Shared2FC architecture to Shared4Conv1FC.
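
The Figure 2 description maps naturally onto an MMDetection-style model configuration. The fragment below is a hedged sketch under that assumption: the component names (HybridTaskCascade, SwinTransformer, Shared4Conv1FCBBoxHead, ConvWS/GN) follow MMDetection conventions, the Swin-L hyperparameters are the standard published values, and all other settings (mask heads, datasets, training schedule) are omitted because they are not given in this excerpt.

    # Sketch of an MMDetection-style model config matching the Figure 2 description.
    # Values are illustrative assumptions, not the authors' exact settings.
    norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)  # group norm
    conv_cfg = dict(type='ConvWS')                                  # weight-standardized convs

    model = dict(
        type='HybridTaskCascade',            # HTC detector
        backbone=dict(
            type='SwinTransformer',          # Swin-L variant
            embed_dims=192,
            depths=[2, 2, 18, 2],
            num_heads=[6, 12, 24, 48],
        ),
        neck=dict(type='FPN',
                  in_channels=[192, 384, 768, 1536],
                  out_channels=256,
                  num_outs=5),
        roi_head=dict(
            type='HybridTaskCascadeRoIHead',
            # Shared4Conv1FC bounding-box head in each cascade stage,
            # replacing the default Shared2FC head.
            bbox_head=[
                dict(type='Shared4Conv1FCBBoxHead',
                     conv_cfg=conv_cfg,
                     norm_cfg=norm_cfg,
                     num_classes=1)
                for _ in range(3)
            ],
            # mask heads, semantic branch, etc. omitted in this sketch
        ),
    )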
Figure 3. Example from the CSB-1 dataset (box and mask colors are selected randomly): (a) ground truth annotations, (b) predicted bounding boxes and masks, (c) TP (green), FP and FN (red) pixels.
Figure 4. Results on the mating dataset (box and mask colors are selected randomly): (a,c,e,g) segmentation results, (b,d,f,h) TP (green), FP and FN (red) pixels.
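
Figures 3 and 4 color pixels green where the prediction agrees with the ground truth (TP) and red where they disagree (FP or FN). A minimal sketch of how such an error map can be produced from two binary masks (illustrative only; the function name is hypothetical):

    import numpy as np

    def error_map(gt: np.ndarray, pred: np.ndarray) -> np.ndarray:
        """Color pixels green where gt and pred agree (TP) and red where they
        disagree (FP or FN). Both inputs are boolean masks of equal shape."""
        h, w = gt.shape
        rgb = np.zeros((h, w, 3), dtype=np.uint8)
        tp = np.logical_and(gt, pred)      # predicted worm pixel that is truly worm
        errors = np.logical_xor(gt, pred)  # pixel present in exactly one of the masks
        rgb[tp] = (0, 255, 0)
        rgb[errors] = (255, 0, 0)
        return rgb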
Figure 5. Example of tracked C. elegans.
