IoT-driven smart assistive communication system for the hearing impaired with hybrid deep learning models for sign language recognition
- PMID: 39979401
- PMCID: PMC11842577
- DOI: 10.1038/s41598-025-89975-1
Abstract
Deaf and hard-of-hearing people rely on sign language recognition (SLR) to communicate. Sign language (SL) is vital for hard-of-hearing and deaf individuals: it uses varied hand gestures to express letters, words, and sentences, bridging the communication gap between individuals with hearing loss and other persons and making it easier for them to convey their feelings. The Internet of Things (IoT) can help persons with disabilities sustain a good quality of life and enable them to participate in economic and social life. Recent advances in machine learning (ML) and computer vision (CV) have made it possible to detect and interpret SL gestures. This study presents a Smart Assistive Communication System for the Hearing-Impaired using Sign Language Recognition with Hybrid Deep Learning (SACHI-SLRHDL) methodology in IoT. The SACHI-SLRHDL technique aims to assist people with hearing impairments through an intelligent solution. In the first stage, the SACHI-SLRHDL technique applies bilateral filtering (BF) for image pre-processing, improving the quality of the captured images by reducing noise while preserving edges. An improved MobileNetV3 model is then employed for feature extraction. Next, a convolutional neural network with a bidirectional gated recurrent unit and attention (CNN-BiGRU-A) classifier performs the SLR task. Finally, the attraction-repulsion optimization algorithm (AROA) optimally tunes the hyperparameter values of the CNN-BiGRU-A model, yielding better classification performance. To demonstrate the effectiveness of the SACHI-SLRHDL method, a comprehensive experimental analysis was performed on an Indian SL dataset. The experimental validation of the SACHI-SLRHDL method showed a superior accuracy of 99.19% over existing techniques.
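The abstract outlines a four-stage pipeline: bilateral filtering for denoising, MobileNetV3 feature extraction, a CNN-BiGRU-attention classifier, and AROA-based hyperparameter tuning. A minimal Python/TensorFlow sketch of the first three stages is given below. The input resolution, layer widths, 35-class label set, and the stock (rather than the paper's "improved") MobileNetV3 backbone are illustrative assumptions, and the AROA tuning step is replaced by a fixed Adam configuration, since the paper's algorithmic details are not reproduced here.

```python
# Hedged sketch of the SACHI-SLRHDL pipeline stages named in the abstract.
# Layer sizes, image resolution, and the class count are assumptions, not
# values taken from the paper.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 35   # assumption: e.g. ISL digits + letters; the dataset may differ
IMG_SIZE = 224     # MobileNetV3's default input resolution

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Bilateral filtering: reduce noise while preserving gesture edges."""
    filtered = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    resized = cv2.resize(filtered, (IMG_SIZE, IMG_SIZE))
    return resized.astype(np.float32)

def build_model() -> tf.keras.Model:
    # Feature extraction with a pretrained MobileNetV3 backbone; the Keras
    # MobileNetV3 models include input rescaling by default.
    backbone = tf.keras.applications.MobileNetV3Small(
        input_shape=(IMG_SIZE, IMG_SIZE, 3), include_top=False, weights="imagenet")
    inputs = layers.Input((IMG_SIZE, IMG_SIZE, 3))
    x = backbone(inputs)                              # (H', W', C) feature map
    x = layers.Reshape((-1, x.shape[-1]))(x)          # spatial grid -> sequence
    x = layers.Conv1D(128, 3, padding="same", activation="relu")(x)   # CNN stage
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)  # BiGRU stage
    # Simple additive attention: score each timestep, softmax over time,
    # then take the attention-weighted sum of the BiGRU outputs.
    scores = layers.Dense(1, activation="tanh")(x)
    weights = layers.Softmax(axis=1)(scores)
    x = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_model()
# The paper tunes hyperparameters with AROA; a fixed Adam setup stands in here.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```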
Keywords: Communication systems; Hearing-impaired people; Hybrid deep learning; MobileNetV3; Sign language recognition.
© 2025. The Author(s).
Conflict of interest statement
Declarations. Competing interests: The authors declare no competing interests.