SmartDetector: Automatic and vision-based approach to point-light display generation for human action perception
- PMID: 39138735
- DOI: 10.3758/s13428-024-02478-1
Abstract
Over the past four decades, point-light displays (PLDs) have been used in psychology and psychophysics as a valuable means of probing human perceptual skills. Leveraging their inherent kinematic information and controllable display parameters, researchers have applied the technique to examine mechanisms involved in learning and rehabilitation. However, classical PLD generation methods (e.g., motion capture) are difficult to apply to behavior analysis in real-world settings, such as patient care or sports activities. There is therefore a demand for automated, affordable tools that can generate PLDs efficiently and under real-world conditions for psychological research. In this paper, we propose SmartDetector, a new artificial intelligence (AI)-based tool for automatic PLD creation from RGB videos. To evaluate human perceptual skills for processing PLDs built with SmartDetector, 126 participants were randomly assigned to recognition, discrimination, or detection tasks. Results demonstrated that, irrespective of the task, PLDs generated by SmartDetector yielded accuracy and response times comparable to those reported in the literature. Moreover, to enhance usability and broaden accessibility, we developed an intuitive web interface for the method, making it available to a wider audience; the resulting application is available at https://plavimop.prd.fr/index.php/en/automatic-creation-pld. SmartDetector opens interesting possibilities for using PLDs in research and makes them more accessible for nonacademic applications.
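The abstract does not describe SmartDetector's internal pipeline, but the general approach it names, vision-based pose estimation on ordinary RGB video followed by rendering the estimated joints as moving dots on a uniform background, can be sketched with an off-the-shelf detector. The snippet below is an illustrative sketch only, not the authors' implementation; it assumes MediaPipe Pose (the BlazePose model) as the estimator, a visibility threshold of 0.5, and hypothetical input/output file names.

```python
# Minimal sketch: turn an RGB video into a point-light display by running
# an off-the-shelf pose estimator frame by frame and drawing the detected
# joints as white dots on a black background.
import cv2
import numpy as np
import mediapipe as mp

IN_PATH, OUT_PATH = "action.mp4", "action_pld.mp4"  # hypothetical file names

cap = cv2.VideoCapture(IN_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter(OUT_PATH, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

with mp.solutions.pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        canvas = np.zeros((h, w, 3), dtype=np.uint8)  # black background
        if result.pose_landmarks:
            for lm in result.pose_landmarks.landmark:
                if lm.visibility > 0.5:  # skip occluded/low-confidence joints
                    cv2.circle(canvas, (int(lm.x * w), int(lm.y * h)),
                               4, (255, 255, 255), -1)
        out.write(canvas)

cap.release()
out.release()
```

Dot radius, joint subset, and the visibility cutoff are free display parameters in such a pipeline; the published tool exposes its own parameterization through the web interface linked above.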
Keywords: Artificial intelligence; Biological movements; Deep learning; Perception; Point-light display; Psychology; Scrambled movements.
© 2024. The Psychonomic Society, Inc.