Sensors (Basel). 2023 Mar 10;23(6):2998. doi: 10.3390/s23062998

Learning-Based Motion-Intention Prediction for End-Point Control of Upper-Limb-Assistive Robots

Sibo Yang et al. Sensors (Basel). 2023.

Abstract

The lack of intuitive and active human-robot interaction makes upper-limb-assistive devices difficult to use. In this paper, we propose a novel learning-based controller that intuitively uses onset motion to predict the desired end-point position for an assistive robot. A multi-modal sensing system comprising inertial measurement units (IMUs), electromyographic (EMG) sensors, and mechanomyographic (MMG) sensors was implemented. This system was used to acquire kinematic and physiological signals during reaching and placing tasks performed by five healthy subjects. The onset motion data of each motion trial were extracted as input to traditional regression models and deep learning models for training and testing. The models predict the position of the hand in planar space, which serves as the reference position for a low-level position controller. The results show that the IMU sensors combined with the proposed prediction model are sufficient for motion intention detection, providing nearly the same prediction performance as adding EMG or MMG. Additionally, recurrent neural network (RNN)-based models can predict target positions over a short onset time window for reaching motions and are suitable for predicting targets over a longer horizon for placing tasks. The detailed analysis in this study can improve the usability of assistive/rehabilitation robots.
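As a hypothetical sketch of the "traditional regression models" baseline described in the abstract, the following fits a ridge regression that maps an onset-motion feature vector to a planar (x, y) end-point. All shapes, feature counts, and the synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 200 motion trials, 24 onset-window features (e.g. IMU-derived).
n_trials, n_features = 200, 24
X = rng.normal(size=(n_trials, n_features))                # onset feature vectors
W_true = rng.normal(size=(n_features, 2))
Y = X @ W_true + 0.05 * rng.normal(size=(n_trials, 2))     # target (x, y) end-points

# Closed-form ridge regression: W = (X^T X + lam I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

Y_hat = X @ W                                              # predicted end-points
rmse = np.sqrt(np.mean((Y_hat - Y) ** 2))
print(f"training RMSE: {rmse:.3f} (arbitrary units)")
```

The predicted (x, y) would then serve as the reference point for a low-level position controller, as the paper's pipeline describes; the RNN/LSTM models it favors would replace the linear map while keeping the same input/output interface.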

Keywords: human–robot interaction; machine learning; motion intention detection; sensory fusion; upper limb assistive robots; wearable sensors.


Conflict of interest statement

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figures

Figure 1
Overview of the proposed learning-based controller. The onset motion data are acquired from different sensory interfaces. The input features are then computed for four onset time windows of motion, and the processed motion samples are given as input to multiple learning-based intention prediction models for end-point prediction. Finally, the predicted target end-point position is given as a reference point to the low-level controller that moves the robot.
Figure 2
Experimental setup: the subject sat in front of the experimental table and moved the cup according to the position number indicated on the laptop screen.
Figure 3
Schematic of the experimental table: each red position is a circle of 4 cm radius. Twelve magnetic stickers are placed at different locations.
Figure 4
Data acquisition flow: the IMUs and the EMG and MMG sensors were placed on the participant's upper limb. The EMG and IMU data were transmitted wirelessly, and the MMG data were captured via a wired connection. A DAQ (Quanser PIDe) connected to a PC acquires the data.
Figure 5
The input features are calculated over the whole onset-movement time window. The motion data of each subject are then reshaped into a single vector.
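The flattening step in this caption can be sketched as follows. The window length and channel count are assumptions for illustration only; the paper does not state them here.

```python
import numpy as np

# Illustrative: an onset window of multi-channel sensor samples
# (e.g. IMU + EMG + MMG channels) is reshaped into one flat input
# vector per motion trial, as Figure 5 describes.
window_len, n_channels = 50, 12
onset_window = np.random.default_rng(1).normal(size=(window_len, n_channels))

feature_vector = onset_window.reshape(-1)   # one flat vector per trial
print(feature_vector.shape)
```

A recurrent model (e.g. the paper's LSTM) would instead consume the window as a (time, channels) sequence; flattening is only needed for the fixed-size-input regression baselines.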
Figure 6
The set margin of radial error for the object's location. The largest range, 8 cm, was chosen to match the palm-to-wrist length of 99% of adults [42].
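The radius-error margin in this figure implies a simple accuracy metric: a prediction counts as correct if its planar distance to the true end-point falls within the chosen margin (up to 8 cm). The helper below is a hypothetical sketch of that metric; the sample coordinates and intermediate margins are made up for illustration.

```python
import numpy as np

def accuracy_within_radius(pred, true, radius_cm):
    """Fraction of trials whose planar Euclidean error is <= radius_cm."""
    err = np.linalg.norm(pred - true, axis=1)
    return float(np.mean(err <= radius_cm))

# Three illustrative trials (cm): errors of about 1.4, 6.5, and 0.5 cm.
true = np.array([[10.0, 20.0], [30.0, 5.0], [15.0, 15.0]])
pred = np.array([[11.0, 21.0], [36.5, 5.0], [15.5, 15.0]])

for r in (2.0, 4.0, 8.0):
    print(f"radius {r:.0f} cm: accuracy {accuracy_within_radius(pred, true, r):.2f}")
```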
Figure 7
Prediction accuracy of the "reaching" task for different combinations of input features across all subjects. The analysis mainly targets the LSTM model. Blue bars show results using only kinematic input features (labeled IMU), orange bars show results using only physiological features (labeled EMG + MMG), and green bars show results using the combined input features (labeled all).
Figure 8
Prediction accuracy of the "placing" task for different combinations of input features across all subjects. The LSTM model is the main model analyzed. Blue bars show results using only kinematic input features (labeled IMU), orange bars show results using only physiological features (labeled EMG + MMG), and green bars show results using the combined input features (labeled all).

References

    1. Mayo N.E., Wood-Dauphinee S., Côte R., Durcan L., Carlton J. Activity, participation, and quality of life 6 months poststroke. Arch. Phys. Med. Rehabil. 2002;83:1035–1042. doi: 10.1053/apmr.2002.33984.
    2. Bos R.A., Haarman C.J., Stortelder T., Nizamis K., Herder J.L., Stienen A.H., Plettenburg D.H. A structured overview of trends and technologies used in dynamic hand orthoses. J. Neuroeng. Rehabil. 2016;13:1–25. doi: 10.1186/s12984-016-0168-z.
    3. Perry J.C., Rosen J., Burns S. Upper-limb powered exoskeleton design. IEEE/ASME Trans. Mechatron. 2007;12:408–417. doi: 10.1109/TMECH.2007.901934.
    4. Johannes M.S., Bigelow J.D., Burck J.M., Harshbarger S.D., Kozlowski M.V., Van Doren T. An overview of the developmental process for the modular prosthetic limb. Johns Hopkins APL Tech. Dig. 2011;30:207–216.
    5. Fiorini L., De Mul M., Fabbricotti I., Limosani R., Vitanza A., D'Onofrio G., Tsui M., Sancarlo D., Giuliani F., Greco A., et al. Assistive robots to improve the independent living of older persons: Results from a needs study. Disabil. Rehabil. Assist. Technol. 2021;16:92–102. doi: 10.1080/17483107.2019.1642392.
