Empowering emotional intelligence through deep learning techniques
- PMID: 41339679
- PMCID: PMC12764474
- DOI: 10.1038/s41598-025-29073-4
Abstract
We propose that employing an ensemble of deep learning models can enhance the recognition of and adaptive response to human emotions, outperforming the use of a single model. Our study introduces a multimodal emotional intelligence system that blends CNNs for facial emotion detection, BERT for text mood analysis, RNNs for tracking emotions over time, and GANs for creating emotion-specific content. We built these models with TensorFlow, Keras, and PyTorch, and trained them on Kaggle datasets, including FER-2013 for facial expressions and labeled text data for sentiment tasks. Our experiments show strong results: CNNs reach about 80% accuracy in recognizing facial emotions, BERT achieves about 92% accuracy in text sentiment, RNNs reach around 89% for sequential emotion tracking, and GANs produce personalized, age-related content that is judged contextually appropriate in over 90% of test cases. These findings support the idea that a combined model architecture can yield more accurate and adaptable emotional responses than simpler approaches. The framework could be useful in areas such as healthcare, customer service, education, and digital well-being, helping to create AI systems that are more empathetic and user-focused.
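To make the facial-expression branch concrete, the sketch below shows a minimal Keras CNN for FER-2013-style input (48x48 grayscale faces, seven emotion classes). The layer counts, filter sizes, and hyperparameters are illustrative assumptions for this kind of model; the abstract does not specify the authors' exact architecture.

# Minimal sketch of a FER-2013-style facial-emotion CNN in Keras.
# Layer counts, filter sizes, and hyperparameters are illustrative
# assumptions, not the paper's published configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # FER-2013 labels: angry, disgust, fear, happy, sad, surprise, neutral

def build_emotion_cnn():
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),  # 48x48 grayscale face crops
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # regularization; FER-2013 is small (~36k images)
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_emotion_cnn()
model.summary()

The dropout layer reflects a common design choice for FER-2013: the dataset is relatively small, so compact CNNs of this kind overfit easily without regularization.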
Keywords: Bidirectional Encoder Representations from Transformers (BERT); Convolutional Neural Networks (CNN); Deep learning; Emotional intelligence; Facial emotion recognition; Generative Adversarial Networks (GANs); Human–computer interaction; Multimodal emotion recognition; Recurrent Neural Networks (RNN); Sentiment analysis.
© 2025. The Author(s).
Conflict of interest statement
Declarations. Competing interests: The authors declare no competing interests. Consent for publication: All individuals featured in the submitted images provided written consent for their use in this publication. The authors confirm that all methods were carried out in accordance with relevant guidelines and regulations, and that all experimental protocols were approved by VIT-AP University. Informed consent for participation was obtained from all subjects during the original data collection. Informed consent for publication of the data and images was also obtained from all subjects and/or their legal guardians.