IEEE J Transl Eng Health Med. 2022 May 25;10:2700316.
doi: 10.1109/JTEHM.2022.3177710. eCollection 2022.

Deep CNN-LSTM With Self-Attention Model for Human Activity Recognition Using Wearable Sensor

Mst Alema Khatun et al. IEEE J Transl Eng Health Med.

Abstract

Human Activity Recognition (HAR) systems are devised for continuously observing human behavior, primarily in the fields of environmental compatibility, sports injury detection, senior care, rehabilitation, entertainment, and surveillance in intelligent home settings. Inertial sensors, e.g., accelerometers, linear acceleration sensors, and gyroscopes, are frequently employed for this purpose and are now compacted into smart devices such as smartphones. Since the use of smartphones is so widespread nowadays, activity data acquisition for HAR systems is a pressing need. In this article, we have collected smartphone sensor-based raw data, namely H-Activity, using an Android-OS-based application for the accelerometer, gyroscope, and linear acceleration. Furthermore, a hybrid deep learning model is proposed, coupling a convolutional neural network and a long short-term memory network (CNN-LSTM), empowered by a self-attention mechanism to enhance the predictive capabilities of the system. In addition to our collected dataset (H-Activity), the model has been evaluated on benchmark datasets, e.g., MHEALTH and UCI-HAR, to demonstrate its comparative performance. When compared to other models, the proposed model achieves an accuracy of 99.93% on our collected H-Activity data, and 98.76% and 93.11% on data from the MHEALTH and UCI-HAR databases respectively, indicating its efficacy in recognizing human activities. We hope that our developed model could be applicable in clinical settings and that the collected data could be useful for further research.
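The abstract's distinguishing component is the self-attention step applied on top of the CNN-LSTM features. As a rough, hypothetical illustration (not the authors' code; all shapes, names, and the scaled dot-product formulation are assumptions), the attention over a sequence of LSTM hidden states could be sketched in NumPy as:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(h):
    """Scaled dot-product self-attention over a sequence of hidden states.

    h: (T, d) array, e.g. T LSTM outputs computed from CNN feature maps.
    Returns an attended sequence of the same shape, where each time step
    is a weighted combination of all time steps.
    """
    d = h.shape[-1]
    scores = h @ h.T / np.sqrt(d)        # (T, T) pairwise similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ h                   # attention-weighted hidden states

rng = np.random.default_rng(0)
h = rng.standard_normal((128, 64))       # e.g. 128 time steps, 64 LSTM units
out = self_attention(h)
print(out.shape)                         # (128, 64)
```

In a full HAR pipeline, the attended sequence would then be pooled and passed to a dense softmax classifier over the activity labels.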

Keywords: LSTM; Sensors; accelerometers; attention; gyroscopes; smartphones.


Figures

FIGURE 1.
The schematic diagram of our proposed workflow. Raw data are firstly acquired from sensors. After preprocessing, segments of data are extracted (known as Segmentation) and a classifier is designed. Fine tuning is used to adjust the hyper-parameters. The classifier is then trained and evaluated using those features (known as Classification).
FIGURE 2.
Inertial sensor placement in various datasets (a) H-Activity (b) MHEALTH (c) UCI-HAR.
FIGURE 3.
Representation of (a) user interface of the Android-based data collection application, ‘Sensors Data Collector’ before and after the data collection (b) inertial sensors in the smartphones and the direction of the accelerometer, gyroscope, and linear acceleration.
FIGURE 4.
The proposed Deep CNN-LSTM with self-attention model architecture for human activity recognition.
FIGURE 5.
Graphical representation of training (a) accuracy, (b) loss, (c) F1-score, and (d) AUC for models M1, M2, M3, and M4, respectively.
FIGURE 6.
Graphical representation of validation (a) accuracy, (b) loss, (c) F1-score, and (d) AUC for models M1, M2, M3, and M4, respectively.
FIGURE 7.
Influence of the optimizer on model performance during training.
FIGURE 8.
Performance comparison of the different models (LSTM, CNN-LSTM with self-attention, LSTM-CNN, and parallel CNN-LSTM).

References

    1. Abdel-Salam R., Mostafa R., and Hadhood M., Human Activity Recognition Using Wearable Sensors: Review, Challenges, Evaluation Benchmark. Singapore: Springer, Feb. 2021, pp. 1–15.
    1. Torres-Huitzil C. and Alvarez-Landero A., Accelerometer-Based Human Activity Recognition in Smartphones for Healthcare Services. Cham, Switzerland: Springer, 2015, pp. 147–169, doi: 10.1007/978-3-319-12817-7_7. - DOI
    1. Wang J., Chen Y., Hao S., Peng X., and Hu L., “Deep learning for sensor-based activity recognition: A survey,” Pattern Recognit. Lett., vol. 119, pp. 3–11, Mar. 2019, doi: 10.1016/j.patrec.2018.02.010. - DOI
    1. Wong C. K., Mentis H. M., and Kuber R., “The bit doesn’t fit: Evaluation of a commercial activity-tracker at slower walking speeds,” Gait Posture, vol. 59, pp. 177–181, Jan. 2018, doi: 10.1016/j.gaitpost.2017.10.010. - DOI - PubMed
    1. Murad A. and Pyun J.-Y., “Deep recurrent neural networks for human activity recognition,” Sensors, vol. 17, no. 11, p. 2556, Nov. 2017. [Online]. Available: https://www.mdpi.com/1424-8220/17/11/2556 - PMC - PubMed
