Climbing Technique Evaluation by Means of Skeleton Video Stream Analysis

Raul Beltrán Beltrán et al. Sensors (Basel). 2023 Oct 1;23(19):8216.
doi: 10.3390/s23198216.

Abstract

Due to the growing interest in climbing, increasing importance has been given to research in the field of non-invasive, camera-based motion analysis. While existing work uses invasive technologies such as wearables or modified walls and holds, or focuses on competitive sports, we present, for the first time, a system that uses video analysis to automatically recognize six movement errors that are typical for novices with limited climbing experience. Climbing a complete route consists of three repetitive climbing phases. A characteristic joint arrangement may therefore be detected as an error in one climbing phase, while the exact same arrangement may not be considered an error in another. For this reason, we introduce a finite state machine that determines the current phase and checks for the errors that commonly occur in that phase. The transition between the phases depends on which joints are being used. To capture joint movements, we use a fourth-generation iPad Pro with LiDAR to record climbing sequences, in which we convert the climber's 2-D skeleton, provided by Apple's Vision framework, into 3-D joints using the LiDAR depth information. We then introduce a method that derives whether a joint is moving or not, which determines the current phase. Finally, the 3-D joints are analyzed with respect to defined characteristic joint arrangements to identify possible motion errors. To present the feedback to the climber, we imitate a virtual mentor by means of an iPad application that creates an analysis immediately after the climber has finished the route, pointing out the detected errors and giving suggestions for improvement. Quantitative tests with three experienced climbers, who climbed reference routes both without errors and with intentional errors, resulted in precision-recall curves that evaluate the error-detection performance. The results demonstrate that, while the number of false positives is still in an acceptable range, the number of detected errors is sufficient to provide climbing novices with adequate suggestions for improvement. Moreover, our study reveals limitations that mainly originate from incorrect joint localizations caused by the LiDAR sensor range. As human pose estimation becomes increasingly reliable and sensor capabilities advance, these limitations will have a decreasing impact on our system's performance.
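
As a rough illustration of the 2-D-to-3-D conversion described above (a sketch, not the authors' code), the following Swift snippet runs Apple's Vision body-pose request on a color frame and lifts each detected joint into camera space by sampling an aligned LiDAR depth map and back-projecting through a pinhole camera model. The depth-map format, the confidence threshold, and the source of the camera intrinsics are assumptions made for illustration.

import Vision
import CoreVideo
import simd

// Camera intrinsics (fx, fy, cx, cy) in pixels; assumed to come from the
// capture session (e.g., ARKit/AVFoundation), not taken from the paper.
struct CameraIntrinsics {
    let fx: Float
    let fy: Float
    let cx: Float
    let cy: Float
}

// Detects the 2-D body pose with Vision and lifts each joint to camera-space
// 3-D using a depth map assumed to be a kCVPixelFormatType_DepthFloat32
// buffer aligned with the color frame (e.g., LiDAR depth).
func lift3DJoints(colorFrame: CVPixelBuffer,
                  depthMap: CVPixelBuffer,
                  intrinsics: CameraIntrinsics) throws
    -> [VNHumanBodyPoseObservation.JointName: SIMD3<Float>] {

    // 1. Run the body-pose request on the color frame.
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: colorFrame, options: [:])
    try handler.perform([request])
    guard let body = request.results?.first else { return [:] }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [:] }

    var joints: [VNHumanBodyPoseObservation.JointName: SIMD3<Float>] = [:]
    // The 0.3 confidence cutoff is an illustrative assumption.
    for (name, point) in try body.recognizedPoints(.all) where point.confidence > 0.3 {
        // Vision returns normalized coordinates with a lower-left origin;
        // scale them to the depth map's resolution and flip the y axis to
        // obtain top-left pixel coordinates (u, v).
        let p = VNImagePointForNormalizedPoint(point.location, width, height)
        let u = min(max(Int(p.x), 0), width - 1)
        let v = min(max(height - 1 - Int(p.y), 0), height - 1)

        // 2. Sample the metric depth (in meters) at the joint's pixel.
        let row = base.advanced(by: v * rowBytes).assumingMemoryBound(to: Float32.self)
        let z = row[u]
        guard z.isFinite, z > 0 else { continue }

        // 3. Back-project through the pinhole model to camera-space 3-D.
        let x = (Float(u) - intrinsics.cx) * z / intrinsics.fx
        let y = (Float(v) - intrinsics.cy) * z / intrinsics.fy
        joints[name] = SIMD3<Float>(x, y, z)
    }
    return joints
}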

Keywords: climbing motion analysis; human pose estimation; key point detection; sports and computer science; video analysis.
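
The finite state machine for the climbing phases might look roughly like the following Swift sketch, assuming the three phases named in the application feedback of Figure 9 (reaching, stabilization, preparation); the concrete transition conditions are illustrative assumptions rather than the paper's exact rules, beyond the stated principle that transitions depend on which joints are being used.

// The three repetitive climbing phases used by the state machine
// (phase names taken from the application feedback in Figure 9).
enum ClimbingPhase {
    case reaching        // a hand moves toward the next hold
    case stabilization   // the climber settles on the newly grabbed hold
    case preparation     // feet and hips reposition before the next reach
}

// Simplified per-frame joint-motion summary; in the paper, joint motion is
// derived from the 3-D skeleton stream.
struct JointMotion {
    let handMoving: Bool
    let footMoving: Bool
}

// Phase transition driven by which joints are moving. The conditions below
// are illustrative assumptions, not the paper's exact rules.
func nextPhase(after current: ClimbingPhase, given motion: JointMotion) -> ClimbingPhase {
    switch current {
    case .reaching:
        // When the reaching hand comes to rest, the climber stabilizes.
        return motion.handMoving ? .reaching : .stabilization
    case .stabilization:
        // Foot movement indicates repositioning for the next move.
        return motion.footMoving ? .preparation : .stabilization
    case .preparation:
        // A hand starting to move again opens the next reaching phase.
        return motion.handMoving ? .reaching : .preparation
    }
}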

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Proposed climbing phase transition state diagram.
Figure 2
Relevant body joints in climbing analysis. Image source [5].
Figure 3
Extraction of the climber’s pose in a video frame. (a) Skeleton provided by Vision and calculation of the climber’s CoM, the latter highlighted by a red point. (b) Skeleton projection onto the point cloud to assign a depth component to each skeleton joint.
Figure 4
Correlation of two different recordings C and C on the same climbing route.
Figure 5
nth-standard-deviation graph with a threshold of 50% of its maximum peak Ok. Here, n=2 and Ok=1580 mm/s.
Figure 6
Angles and distances used in the error detection, with holding hand (Hh) and supporting hand (Hs). (a) Measured angles for the elbow (φ) and shoulder (ϑ) in the decoupling and shoulder-relaxing errors, respectively. (b) Knee-to-ankle horizontal distance (dknee) in the weight-shift error, and minimum time (thand) for the supporting hand in the reaching-hand-supports error. (c) Hip-to-wall depth distance (dhip) in relation to a reference climber for the hip-close-to-the-wall detection, and feet motion frames (JM14,11) for the both-feet-set error.
Figure 7
Schematic of the overlap between detection range frames and ground truth for a given climbing error. Here, IoU>0.5 is used as the threshold to distinguish TP from FN (see the sketch after this figure list).
Figure 8
Precision–recall curves for the six error evaluations.
Figure 9
Application feedback for novice climbers. Different errors are presented per climbing phase, together with a summary of the total errors and hints to improve the next attempt. Image source [29]. (a) Errors in the reaching phase; (b) error in the stabilization phase; (c) error in the preparation phase; (d) climbing error summary.
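
The IoU-based matching of Figure 7 can be sketched in Swift as follows; the frame-interval representation and helper names are assumptions, while the IoU > 0.5 decision rule is taken from the figure caption. Detections that match no ground-truth interval above this threshold would then count as false positives when building the precision-recall curves of Figure 8.

// A detected or annotated error as a closed range of video frames.
struct FrameInterval {
    let start: Int   // first frame index (inclusive)
    let end: Int     // last frame index (inclusive)
    var length: Int { max(0, end - start + 1) }
}

// Temporal intersection-over-union between two frame intervals.
func iou(_ a: FrameInterval, _ b: FrameInterval) -> Double {
    let intersection = max(0, min(a.end, b.end) - max(a.start, b.start) + 1)
    let union = a.length + b.length - intersection
    return union > 0 ? Double(intersection) / Double(union) : 0
}

// A detection counts as a true positive only if it overlaps the ground-truth
// interval with IoU > 0.5; otherwise the annotated error is a false negative.
func isTruePositive(detection: FrameInterval, groundTruth: FrameInterval) -> Bool {
    return iou(detection, groundTruth) > 0.5
}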

References

    1. Ekaireb S., Ali Khan M., Pathuri P., Haresh Bhatia P., Sharma R., Manjunath-Murkal N. Computer Vision Based Indoor Rock Climbing Analysis. 2022. [(accessed on 25 April 2012)]. Available online: https://kastner.ucsd.edu/ryan/wp-content/uploads/sites/5/2022/06/admin/r....
    2. Orth D., Kerr G., Davids K., Seifert L. Analysis of Relations between Spatiotemporal Movement Regulation and Performance of Discrete Actions Reveals Functionality in Skilled Climbing. Front. Psychol. 2017;8:1744. doi: 10.3389/fpsyg.2017.01744. - DOI - PMC - PubMed
    3. Breen M., Reed T., Nishitani Y., Jones M., Breen H.M., Breen M.S. Wearable and Non-Invasive Sensors for Rock Climbing Applications: Science-Based Training and Performance Optimization. Sensors. 2023;23:5080. doi: 10.3390/s23115080. - DOI - PMC - PubMed
    4. Winter S. Klettern & Bouldern: Kletter- und Sicherungstechnik für Einsteiger [Climbing & Bouldering: Climbing and Belaying Technique for Beginners]. Rother Bergverlag; Bavaria, Germany: 2012. pp. 90–91.
    5. Apple Inc. Vision Framework—Apply Computer Vision Algorithms to Perform a Variety of Tasks on Input Images and Video. 2023. [(accessed on 25 April 2012)]. Available online: https://developer.apple.com/documentation/vision.
