Sci Rep. 2025 Feb 20;15(1):6192. doi: 10.1038/s41598-025-89975-1.

IoT-driven smart assistive communication system for the hearing impaired with hybrid deep learning models for sign language recognition


Mashael Maashi et al.

Abstract

Sign language (SL) is vital for deaf and hard-of-hearing individuals to communicate: it uses varied hand gestures to express letters, words, or sentences, bridging the communication gap between people with hearing loss and others and making it easier for them to convey their feelings. The Internet of Things (IoT) can help persons with disabilities sustain a good quality of life and participate in economic and social life. Recent advances in machine learning (ML) and computer vision (CV) have made SL gesture detection and interpretation feasible. This study presents a Smart Assistive Communication System for the Hearing-Impaired using Sign Language Recognition with Hybrid Deep Learning (SACHI-SLRHDL) methodology in IoT. The SACHI-SLRHDL technique aims to assist people with hearing impairments through an intelligent solution. In the first stage, the SACHI-SLRHDL technique applies bilateral filtering (BF) for image pre-processing, which improves the quality of the captured images by reducing noise while preserving edges. An improved MobileNetV3 model is then employed for feature extraction. Next, a convolutional neural network with a bidirectional gated recurrent unit and attention (CNN-BiGRU-A) classifier performs the SLR process. Finally, the attraction-repulsion optimization algorithm (AROA) optimally tunes the hyperparameters of the CNN-BiGRU-A method, yielding better classification performance. To demonstrate the effectiveness of the SACHI-SLRHDL method, a comprehensive experimental analysis is performed on an Indian SL dataset. The experimental validation of the SACHI-SLRHDL method showed a superior accuracy of 99.19% over existing techniques.
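To make the pipeline stages concrete, the following is a minimal sketch of the stage order described above, assuming 224x224 RGB gesture images and a hypothetical class count (NUM_CLASSES). A stock Keras MobileNetV3 backbone stands in for the paper's improved variant, the attention pooling is a generic additive design, and the AROA hyperparameter tuner is omitted; this illustrates the architecture's flow, not the authors' exact implementation.

# Sketch of the SACHI-SLRHDL stage order: BF pre-processing,
# MobileNetV3 feature extraction, CNN-BiGRU-A classification.
# NUM_CLASSES and all layer sizes are illustrative assumptions.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 36  # hypothetical, e.g. Indian SL digits plus letters

def preprocess(image: np.ndarray) -> np.ndarray:
    """Bilateral filtering: denoise while preserving gesture edges."""
    return cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)

def build_model() -> Model:
    # Feature extraction with a (stock, not "improved") MobileNetV3 backbone.
    backbone = tf.keras.applications.MobileNetV3Small(
        include_top=False, input_shape=(224, 224, 3), weights="imagenet")
    x = backbone.output                                    # (7, 7, C) feature map
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)  # CNN head
    x = layers.Reshape((49, 128))(x)       # treat the 7x7 grid as a 49-step sequence
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)  # BiGRU
    # Additive attention pooling over the sequence (one plausible "A" block).
    scores = layers.Dense(1, activation="tanh")(x)
    weights = layers.Softmax(axis=1)(scores)
    x = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return Model(backbone.input, out)

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

Treating the backbone's spatial grid as a sequence is one plausible way to feed image features to a BiGRU; the paper's exact CNN-BiGRU-A wiring, and the AROA-selected hyperparameters, may differ.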

Keywords: Communication systems; Hearing impaired people; Hybrid deep learning; MobileNetV3; Sign Language Recognition.


Conflict of interest statement

Declarations. Competing interests: The authors declare no competing interests.

Figures

Fig. 1. Overall flow of the SACHI-SLRHDL model.
Fig. 2. Structure of BF model.
Fig. 3. MobileNetV3 architecture.
Fig. 4. Structure of CNN-BiGRU-A method.
Fig. 5. Structure of AROA method.
Fig. 6. Sample images.
Fig. 7. Confusion matrices: (a,c) TRAPS of 80% and 70%; (b,d) TESPS of 20% and 30%.
Fig. 8. Average of SACHI-SLRHDL approach under 80%TRAPS and 20%TESPS.
Fig. 9. Average of SACHI-SLRHDL approach under 70%TRAPS and 30%TESPS.
Fig. 10. Accuracy curve of SACHI-SLRHDL approach under 80%TRAPS and 20%TESPS.
Fig. 11. Loss curve of SACHI-SLRHDL approach under 80%TRAPS and 20%TESPS.
Fig. 12. PR curve of SACHI-SLRHDL approach under 80%TRAPS and 20%TESPS.
Fig. 13. ROC curve of SACHI-SLRHDL approach under 80%TRAPS and 20%TESPS.
Fig. 14. Comparative outcome of SACHI-SLRHDL technique with recent models.
Fig. 15. CT evaluation of the SACHI-SLRHDL technique with existing methods.

