An investigation of multimodal EMG-EEG fusion strategies for upper-limb gesture classification
- PMID: 40480249
- DOI: 10.1088/1741-2552/ade1f9
Abstract
Objective. Upper-limb gesture identification is an important problem in the advancement of robotic prostheses. Prevailing research into classifying electromyographic (EMG) muscular data or electroencephalographic (EEG) brain data for this purpose is often limited in methodological rigour, the extent to which generalisation is demonstrated, and the granularity of the gestures classified. This work evaluates three architectures for multimodal fusion of EMG & EEG data in gesture classification, including a novel Hierarchical strategy, in both subject-specific and subject-independent settings.
Approach. We propose an unbiased methodology for designing classifiers centred on Automated Machine Learning through Combined Algorithm Selection & Hyperparameter Optimisation (CASH), the first application of this technique to the biosignal domain. Using CASH, we introduce an end-to-end pipeline for data handling, algorithm development, modelling, and fair comparison, addressing established weaknesses in the biosignal literature.
Main results. EMG-EEG fusion is shown to provide significantly higher subject-independent accuracy in same-hand multi-gesture classification than an equivalent EMG classifier. Our CASH-based design methodology produces a more accurate subject-specific classifier design than that recommended by the literature. Our novel Hierarchical ensemble of classical models outperforms a domain-standard CNN architecture. We achieve a subject-independent EEG multiclass accuracy competitive with many subject-specific approaches used for similar, or more easily separable, problems.
Significance. To our knowledge, this is the first work to establish a systematic framework for automatic, unbiased design and testing of fusion architectures in the context of multimodal biosignal classification.
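To make the CASH idea concrete: CASH treats the choice of learning algorithm and the choice of that algorithm's hyperparameters as a single joint search space. The sketch below is a minimal, hypothetical illustration using scikit-learn, not the authors' pipeline; the feature matrix, candidate algorithms, and search grids are all stand-ins for whatever windowed EMG/EEG features and model families a real study would use.

```python
# Minimal CASH sketch (illustrative, not the paper's pipeline):
# jointly search over algorithm choice AND hyperparameters.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))        # toy stand-in for biosignal features
y = rng.integers(0, 4, size=200)      # four gesture classes

pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# Each dict fixes one algorithm and searches its hyperparameters;
# the union of the dicts is the combined (algorithm, config) space.
search_space = [
    {"clf": [SVC()], "clf__C": [0.1, 1.0, 10.0]},
    {"clf": [RandomForestClassifier(random_state=0)],
     "clf__n_estimators": [50, 200]},
]

cash = GridSearchCV(pipe, search_space, cv=3).fit(X, y)
print(type(cash.best_estimator_.named_steps["clf"]).__name__)
```

In practice a CASH system (e.g. auto-sklearn) replaces the grid with Bayesian optimisation over a much larger space, but the selected object is the same: a single best (algorithm, hyperparameter) pair chosen under cross-validation, which is what makes the comparison between candidate designs unbiased.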
We demonstrate a robust end-to-end modelling pipeline for biosignal classification problems which, if adopted in future research, can help address the risk of bias common in multimodal BCI studies, enabling more reliable and rigorous comparison of proposed classifiers than is usual in the domain. We apply the approach to a more complex task than is typical of EMG-EEG fusion research, surpassing literature-recommended designs and verifying the efficacy of a novel Hierarchical fusion architecture.
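One common reading of a hierarchical fusion strategy is decision-level stacking: a base classifier is trained per modality, and a meta-learner is trained on the base classifiers' class-probability outputs. The sketch below illustrates that general pattern on synthetic data; it is an assumption for illustration only and does not reproduce the paper's actual Hierarchical architecture, features, or models.

```python
# Hedged sketch of hierarchical (decision-level) EMG-EEG fusion:
# one base classifier per modality, then a meta-learner on their
# class probabilities. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, n_classes = 300, 4
X_emg = rng.normal(size=(n, 8))      # toy EMG feature windows
X_eeg = rng.normal(size=(n, 32))     # toy EEG feature windows
y = rng.integers(0, n_classes, size=n)

tr, te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Level 1: independent per-modality classifiers.
emg_clf = RandomForestClassifier(random_state=0).fit(X_emg[tr], y[tr])
eeg_clf = RandomForestClassifier(random_state=0).fit(X_eeg[tr], y[tr])

# Level 2: fuse modality-level probabilities into meta-features.
meta_tr = np.hstack([emg_clf.predict_proba(X_emg[tr]),
                     eeg_clf.predict_proba(X_eeg[tr])])
meta_te = np.hstack([emg_clf.predict_proba(X_emg[te]),
                     eeg_clf.predict_proba(X_eeg[te])])

meta = LogisticRegression(max_iter=1000).fit(meta_tr, y[tr])
preds = meta.predict(meta_te)
print(preds.shape)
```

Note that a production-grade stack would build the meta-training features from out-of-fold base predictions (e.g. via `cross_val_predict`) to avoid leaking the base models' training fit into the meta-learner; the shortcut above keeps the sketch short.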
Keywords: automated machine learning; biosignal fusion; brain-computer-interface; multimodal gesture classification.
Creative Commons Attribution license.
Similar articles
- A Novel Bilateral Data Fusion Approach for EMG-Driven Deep Learning in Post-Stroke Paretic Gesture Recognition. Sensors (Basel). 2025 Jun 11;25(12):3664. doi: 10.3390/s25123664. PMID: 40573553. Free PMC article.
- Sign Language Recognition Using the Electromyographic Signal: A Systematic Literature Review. Sensors (Basel). 2023 Oct 9;23(19):8343. doi: 10.3390/s23198343. PMID: 37837173. Free PMC article.
- Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19. Cochrane Database Syst Rev. 2022 May 20;5(5):CD013665. doi: 10.1002/14651858.CD013665.pub3. PMID: 35593186. Free PMC article.
- Gesture recognition for hearing impaired people using an ensemble of deep learning models with improving beluga whale optimization-based hyperparameter tuning. Sci Rep. 2025 Jul 1;15(1):21441. doi: 10.1038/s41598-025-06680-9. PMID: 40596240. Free PMC article.
- MyoPose: position-limb-robust neuromechanical features for enhanced hand gesture recognition in colocated sEMG-pFMG armbands. J Neural Eng. 2025 Aug 14;22(4). doi: 10.1088/1741-2552/adf888. PMID: 40769169.