Sensors (Basel). 2025 Jul 5;25(13):4200. doi: 10.3390/s25134200.

Multi-Scale Attention Fusion Gesture-Recognition Algorithm Based on Strain Sensors


Zhiqiang Zhang et al. Sensors (Basel). 2025.

Abstract

Surface electromyography (sEMG) signals are commonly employed for dynamic-gesture recognition. However, their robustness is often compromised by individual variability and inconsistencies in sensor placement, limiting their reliability in complex, unconstrained scenarios. In contrast, strain-gauge signals offer greater environmental adaptability by stably capturing joint deformation. To address the multi-channel, temporal, and amplitude-varying nature of strain signals, this paper proposes a lightweight hybrid attention network, termed MACLiteNet. The network integrates a local temporal modeling branch, a multi-scale fusion module, and a channel reconstruction mechanism to jointly capture local dynamic transitions and inter-channel structural correlations. Evaluations on both a self-collected strain-gauge dataset and the public sEMG benchmark NinaPro DB1 show that MACLiteNet achieves recognition accuracies of 99.71% and 98.45%, respectively, with only 0.22M parameters and a computational cost as low as 0.10 GFLOPs. These results indicate that the proposed method offers superior accuracy, efficiency, and cross-modal generalization, providing a promising basis for efficient and reliable strain-driven interactive systems.
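The local temporal modeling branch described above is built around the Causal Depthwise Separable Convolution block shown in Figure 5. As an illustrative sketch only (not the authors' implementation; the function name, kernel shapes, and plain-NumPy formulation are assumptions), a causal depthwise stage followed by a pointwise channel-mixing stage over a multi-channel strain sequence might look like:

```python
import numpy as np

def causal_depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Causal depthwise separable 1-D convolution (illustrative sketch).

    x          : (T, C) array, T time steps of a C-channel strain signal
    dw_kernels : (C, K) array, one depthwise kernel per channel
    pw_weights : (C, C_out) pointwise (1x1) weights mixing channels
    """
    T, C = x.shape
    K = dw_kernels.shape[1]
    # Causal left-padding: the output at time t only sees x[t-K+1 .. t],
    # never future samples.
    xp = np.vstack([np.zeros((K - 1, C)), x])
    dw = np.empty((T, C))
    for c in range(C):            # depthwise: each channel filtered independently
        for t in range(T):
            dw[t, c] = xp[t:t + K, c] @ dw_kernels[c, ::-1]
    # Pointwise stage mixes information across channels.
    return dw @ pw_weights
```

Splitting the convolution into a per-channel depthwise pass and a 1x1 pointwise pass is what keeps the parameter count small (roughly C*K + C*C_out weights instead of C*C_out*K), consistent with the abstract's emphasis on a lightweight, 0.22M-parameter design.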

Keywords: cross-modal recognition; dynamic-gesture recognition; hybrid attention mechanism; multi-scale feature fusion; strain sensors.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1. Structural diagram of the multi-channel strain signal acquisition and processing system.
Figure 2. Schematic diagram of the strain-gauge sensor layout.
Figure 3. Classification and execution diagram for the dynamic gestures. Each gesture involves a continuous transition from an initial neutral state to the final hand posture. For gestures with intermediate transitional postures, arrows indicate the direction of change between steps.
Figure 4. Overall architecture of the proposed lightweight multi-branch hybrid attention network, MACLiteNet.
Figure 5. Structure of the Causal Depthwise Separable Convolution block.
Figure 6. Structure of the MSF module.
Figure 7. Structure of the DPCA module.
Figure 8. Average confusion matrices over five-fold cross-validation under different network configurations: (a) Baseline; (b) Baseline + LTMB; (c) Baseline + MSF; (d) Baseline + DPCA; (e) Baseline + MSF + DPCA; and (f) Proposed.
Figure 9. Training process comparison for the proposed model on different modalities: (a) training accuracy and loss curves on the self-constructed strain-gauge dataset; (b) training accuracy and loss curves on the public sEMG dataset (NinaPro DB1).
Figure 10. Performance comparison of different models on key classification metrics.

