Sensors (Basel). 2020 Mar 30;20(7):1903. doi: 10.3390/s20071903.

Deep Learning-Based Upper Limb Functional Assessment Using a Single Kinect v2 Sensor



Ye Ma et al. Sensors (Basel). 2020.

Abstract

We develop a deep learning refined kinematic model for accurately assessing upper limb joint angles using a single Kinect v2 sensor. We train a long short-term memory (LSTM) recurrent neural network in a supervised learning architecture to compensate for the systematic error of the Kinect kinematic model, taking a marker-based three-dimensional motion capture system (3DMC) as the gold standard. Experiments were conducted on a series of upper limb functional tasks: hand to the contralateral shoulder, hand to mouth or drinking, combing hair, and hand to back pocket. Our deep learning-based model significantly improves the performance of a single Kinect v2 sensor for all investigated upper limb joint angles across all functional tasks. With a single Kinect v2 sensor, our deep learning-based model measures shoulder and elbow flexion/extension waveforms with mean coefficients of multiple correlation (CMCs) >0.93 for all tasks, and shoulder adduction/abduction and internal/external rotation waveforms with mean CMCs >0.8 for most tasks. The mean deviations of the angles at the point of target achieved and of the range of motion are under 5° for all investigated joint angles during all functional tasks. Compared with the 3DMC, our system is easier to operate and requires less laboratory space.
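As an illustration of the approach described above, the sketch below shows how an LSTM could be trained to map Kinect-derived joint-angle waveforms to 3DMC-quality angles. This is not the authors' released code; the framework (PyTorch), layer sizes, tensor shapes, and data loader are all assumptions.

# Minimal sketch, assuming PyTorch and four joint-angle channels per frame;
# illustrative only, not the authors' implementation.
import torch
import torch.nn as nn

class KinectAngleRefiner(nn.Module):
    def __init__(self, n_angles=4, hidden_size=64, num_layers=2):
        super().__init__()
        # Input per frame: the four Kinect joint angles (shoulder flexion/extension,
        # adduction/abduction, internal/external rotation, elbow flexion/extension).
        self.lstm = nn.LSTM(n_angles, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, n_angles)

    def forward(self, x):                     # x: (batch, frames, n_angles)
        h, _ = self.lstm(x)
        return self.head(h)                   # refined angles, same shape as x

def train(model, loader, epochs=50, lr=1e-3):
    # Supervised training: Kinect angles as input, synchronized 3DMC angles as target.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for kinect_angles, mocap_angles in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(kinect_angles), mocap_angles)
            loss.backward()
            optimizer.step()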

Keywords: Kinect; deep learning; kinematics; recurrent neural network; upper limb functional assessment.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
The architecture of our deep learning refined kinematic model for Kinect v2.
Figure 2
The kinematic models of the Kinect v2 system and the 3D motion capture system.
Figure 3
Illustration of the Kinect skeleton joints and the anatomical coordinate system.
Figure 4
Marker set for the 3DMC system. Left: Arrangement of the UWA upper limb marker set. Right: A participant with the attached markers.
Figure 5
Architecture of our LSTM neural network for upper limb kinematics refinement.
Figure 6
Four upper limb functional tasks evaluated in our study. Left: Hand to the contralateral shoulder. Middle-left: Hand to mouth or drinking. Middle-right: Combing hair. Right: Hand to back pocket.
Figure 7
The leave-one-subject-out cross-validation (LOOCV) protocol.
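The LOOCV protocol of Figure 7 can be sketched as follows; the array shapes, subject and task counts, and synthetic data are assumptions used only to make the example self-contained.

# Sketch of leave-one-subject-out cross-validation (LOOCV) as outlined in Figure 7.
# Shapes and subject/task counts are illustrative assumptions, not the study's data.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_subjects, n_tasks, n_frames, n_angles = 10, 4, 100, 4
X = rng.normal(size=(n_subjects * n_tasks, n_frames, n_angles))  # Kinect angle waveforms
y = rng.normal(size=(n_subjects * n_tasks, n_frames, n_angles))  # 3DMC reference waveforms
subjects = np.repeat(np.arange(n_subjects), n_tasks)             # one subject ID per trial

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, groups=subjects)):
    # Train the refinement network on all subjects but one, then evaluate on the
    # held-out subject's trials; only the index bookkeeping is shown here.
    print(f"fold {fold}: {len(train_idx)} training trials, {len(test_idx)} test trials")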
Figure 8
Joint angles during the hand to contralateral shoulder task calculated via the kinematic model for 3DMC (orange solid line), our deep learning refined kinematic model for Kinect (green dashed line), and the kinematic model for Kinect (blue solid line). The joint angles include shoulder flexion (+)/extension (−), shoulder adduction (+)/abduction (−), shoulder internal rotation (+)/external rotation (−), and elbow flexion (+)/extension (−).
Figure 9
Joint angles during the hand to mouth task calculated via the kinematic model for 3DMC (orange solid line), our deep learning refined kinematic model for Kinect (green dashed line) and the kinematic model for Kinect (blue solid line). The joint angles include shoulder flexion (+)/extension (−), shoulder adduction (+)/abduction (−), shoulder internal rotation (+)/external rotation (−), and elbow flexion (+)/extension (−).
Figure 10
Joint angles during the combing hair task calculated via the kinematic model for 3DMC (orange solid line), our deep learning refined kinematic model for Kinect (green dashed line) and the kinematic model for Kinect (blue solid line). The joint angles include shoulder flexion (+)/extension (−), shoulder adduction (+)/abduction (−), shoulder internal rotation (+)/external rotation (−), and elbow flexion (+)/extension (−).
Figure 11
Joint angles during the hand to back pocket task calculated via the kinematic model for 3DMC (orange solid line), our deep learning refined kinematic model for Kinect (green dashed line) and the kinematic model for Kinect (blue solid line). The joint angles include shoulder flexion (+)/extension (−), shoulder adduction (+)/abduction (−), shoulder internal rotation (+)/external rotation (−), and elbow flexion (+)/extension (−).
Figure 12
Bland-Altman plots with 95% limits of agreement for joint kinematic parameters during the hand to contralateral shoulder task. The X axis shows the mean of the two systems' angle measurements and the Y axis shows the difference between them. The red (middle) line marks the mean difference, and the two dashed lines mark the upper and lower limits of agreement. The upper four rows show the angles at the point of target achieved (PTA) and the lower four rows show the range of motion (ROM) values. Plots in the left column compare our deep learning refined kinematic model Φ^ for Kinect against the UWA kinematic model Γ for the 3DMC; plots in the right column compare the kinematic model Φ for Kinect against the UWA kinematic model Γ for the 3DMC.
Figure 13
Bland-Altman plots with 95% limits of agreement for joint kinematic parameters during the hand to mouth task. The X axis shows the mean of the two systems' angle measurements and the Y axis shows the difference between them. The red (middle) line marks the mean difference, and the two dashed lines mark the upper and lower limits of agreement. The upper four rows show the angles at the point of target achieved (PTA) and the lower four rows show the range of motion (ROM) values. Plots in the left column compare our deep learning refined kinematic model Φ^ for Kinect against the UWA kinematic model Γ for the 3DMC; plots in the right column compare the kinematic model Φ for Kinect against the UWA kinematic model Γ for the 3DMC.
Figure 14
Bland-Altman plots with 95% limits of agreement for joint kinematic parameters during the combing hair task. The X axis shows the mean of the two systems' angle measurements and the Y axis shows the difference between them. The red (middle) line marks the mean difference, and the two dashed lines mark the upper and lower limits of agreement. The upper four rows show the angles at the point of target achieved (PTA) and the lower four rows show the range of motion (ROM) values. Plots in the left column compare our deep learning refined kinematic model Φ^ for Kinect against the UWA kinematic model Γ for the 3DMC; plots in the right column compare the kinematic model Φ for Kinect against the UWA kinematic model Γ for the 3DMC.
Figure 15
Bland-Altman plots with 95% limits of agreement for joint kinematic parameters during the hand to back pocket task. The X axis shows the mean of the two systems' angle measurements and the Y axis shows the difference between them. The red (middle) line marks the mean difference, and the two dashed lines mark the upper and lower limits of agreement. The upper four rows show the angles at the point of target achieved (PTA) and the lower four rows show the range of motion (ROM) values. Plots in the left column compare our deep learning refined kinematic model Φ^ for Kinect against the UWA kinematic model Γ for the 3DMC; plots in the right column compare the kinematic model Φ for Kinect against the UWA kinematic model Γ for the 3DMC.
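The Bland-Altman analysis behind Figures 12-15 reduces to the bias (mean difference) and the 95% limits of agreement (bias ± 1.96 standard deviations of the differences). A minimal sketch follows, with simulated angles standing in for the real Kinect and 3DMC measurements.

# Bland-Altman bias and 95% limits of agreement, as plotted in Figures 12-15.
# The simulated angles below are placeholders, not the study's measurements.
import numpy as np

rng = np.random.default_rng(0)
angles_3dmc = rng.uniform(20.0, 120.0, size=50)              # reference angles (deg)
angles_kinect = angles_3dmc + rng.normal(1.0, 3.0, size=50)  # refined Kinect angles (deg)

means = (angles_kinect + angles_3dmc) / 2.0   # x axis: mean of the two systems
diffs = angles_kinect - angles_3dmc           # y axis: difference between the systems
bias = diffs.mean()
sd = diffs.std(ddof=1)
loa_lower, loa_upper = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f} deg, 95% limits of agreement = [{loa_lower:.2f}, {loa_upper:.2f}] deg")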

