eNeuro. 2024 Aug 29;11(8):ENEURO.0304-23.2024.
doi: 10.1523/ENEURO.0304-23.2024. Print 2024 Aug.

KineWheel-DeepLabCut Automated Paw Annotation Using Alternating Stroboscopic UV and White Light Illumination

Björn Albrecht et al. eNeuro. 2024.

Abstract

Uncovering the relationships between neural circuits, behavior, and neural dysfunction may require rodent pose tracking. While open-source toolkits such as DeepLabCut have revolutionized markerless pose estimation using deep neural networks, the training process still requires human intervention for annotating key points of interest in video data. To further reduce human labor for neural network training, we developed a method that automatically generates annotated image datasets of rodent paw placement in a laboratory setting. It uses fluorescent markers that are invisible under white light but become visible under UV illumination. Through stroboscopic alternating illumination, adjacent video frames captured at 720 Hz are illuminated by either UV or white light. After color filtering the UV-exposed video frames, the UV markings are identified and the paw locations are deterministically mapped. This paw information is then transferred to automatically annotate paw positions in the next white light-exposed frame, which is later used for training the neural network. We demonstrate the effectiveness of our method using a KineWheel-DeepLabCut setup for markerless tracking of the four paws of a harness-fixed mouse running on top of a transparent wheel with an internal mirror. Our automated approach, released as open source, achieves high-quality position annotations and significantly reduces the need for human involvement in neural network training, paving the way for more efficient and streamlined rodent pose tracking in neuroscience research.


Figures

Figure 1.
KineWheel experimental setup. A, Top view with the wheel, the mouse attachment above, and the red tube level with the wheel surface. From within the transparent wheel, two rows of UV lights illuminate the mouse from underneath. Additional lights were placed above and below the camera on the left. B, Camera view of the KineWheel with the mirror inside the top half of the wheel. The surrounding white walls contributed to even illumination and blocked stray laboratory light. C, Control circuit with Arduino Nano and connectors for the power supply, the four LED modules, and the camera trigger.
Figure 2.
Image sequence illuminated with UV (left) or white light (right). Left, Mouse under UV light (colors enhanced for illustrative purposes). Right, Mouse under white light. Bottom, Consecutive original video sequence of alternating UV and white light illumination.
Figure 3.
System diagram of the main components and connections. The Windows PC runs KWA-Controller, which interfaces with the Arduino Nano, and pylon Viewer for Basler camera control. The camera is connected via USB 3.0 and the Arduino via USB 2.0.
Figure 4.
Euclidean distances between the ground truth key points and key points predicted by neural network models trained on machine-annotated images (pink circles) or human-annotated images (blue circles). Each point represents 1 of 25 randomly selected test images, with the x-axis indicating the test image number (0–24) and the y-axis representing the Euclidean distance in pixels. Two horizontal lines represent each model's mean Euclidean distance across all test images.

