Multi-Input Speech Emotion Recognition Model Using Mel Spectrogram and GeMAPS

Itsuki Toyoshima et al. Sensors (Basel). 2023 Feb 3;23(3):1743. doi: 10.3390/s23031743.
Abstract

The existing research on emotion recognition commonly uses mel spectrogram (MelSpec) and the Geneva minimalistic acoustic parameter set (GeMAPS) as acoustic parameters to learn audio features. MelSpec can represent the time-series variations of each frequency but cannot manage multiple types of audio features. On the other hand, GeMAPS can handle multiple audio features but fails to provide information on their time-series variations. Thus, this study proposes a speech emotion recognition model based on a multi-input deep neural network that simultaneously learns these two audio features. The proposed model comprises three parts: one that learns MelSpec in image format, one that learns GeMAPS in vector format, and one that integrates them to predict the emotion. Additionally, a focal loss function is introduced to address the problem of imbalanced data among the emotion classes. The recognition experiments yield weighted and unweighted accuracies of 0.6657 and 0.6149, respectively, which are higher than or comparable to those of existing state-of-the-art methods. Overall, the proposed model significantly improves the recognition accuracy of the emotion "happiness", which has been difficult to identify in previous studies owing to limited data. Therefore, the proposed model can effectively recognize emotions from speech and can be applied for practical purposes with further development.
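
As a rough illustration of the approach described in the abstract, the sketch below shows a multi-input network with a CNN branch for the mel spectrogram, an MLP branch for the GeMAPS feature vector, and a fusion head trained with a focal loss. This is a minimal PyTorch sketch under assumed layer sizes, an assumed 88-dimensional acoustic feature vector, and an assumed four-class output; it is not the authors' exact architecture.

# Minimal multi-input sketch: CNN branch (MelSpec image) + MLP branch (GeMAPS vector),
# fused and trained with a focal loss. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FocalLoss(nn.Module):
    """Focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t), which down-weights easy examples."""
    def __init__(self, gamma: float = 2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        log_p = F.log_softmax(logits, dim=-1)
        log_p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
        p_t = log_p_t.exp()
        return (-((1.0 - p_t) ** self.gamma) * log_p_t).mean()


class MultiInputSER(nn.Module):
    def __init__(self, n_acoustic: int = 88, n_classes: int = 4):
        super().__init__()
        # Branch 1: CNN over the mel spectrogram, treated as a 1-channel image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Branch 2: MLP over the GeMAPS functional feature vector.
        self.mlp = nn.Sequential(nn.Linear(n_acoustic, 64), nn.ReLU())
        # Fusion head: concatenate both embeddings and classify the emotion.
        self.head = nn.Linear(32 + 64, n_classes)

    def forward(self, melspec, gemaps):
        return self.head(torch.cat([self.cnn(melspec), self.mlp(gemaps)], dim=1))


if __name__ == "__main__":
    model = MultiInputSER()
    criterion = FocalLoss(gamma=2.0)
    mel = torch.randn(8, 1, 128, 300)   # batch of mel spectrograms (mel bins x frames)
    gem = torch.randn(8, 88)            # batch of acoustic feature vectors
    labels = torch.randint(0, 4, (8,))  # emotion class labels
    loss = criterion(model(mel, gem), labels)
    loss.backward()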

Keywords: GeMAPS; focal loss function; mel spectrogram; multi-input deep neural network; speech emotion recognition.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Architecture of the proposed model.
Figure 2. Confusion matrices obtained in the four experimental settings.

References

    1. Kolakowska A., Szwoch W., Szwoch M. A Review of Emotion Recognition Methods Based on Data Acquired via Smartphone Sensors. Sensors. 2020;20:6367. doi: 10.3390/s20216367. - DOI - PMC - PubMed
    1. Fahad S., Ranjan A., Yadav J., Deepak A. A survey of speech emotion recognition in natural environment. Digit. Signal Process. 2021;110:102951. doi: 10.1016/j.dsp.2020.102951. - DOI
    1. Zhuang J., Guan Y., Nagayoshi H., Muramatu K., Nagayoshi H., Watanuki K., Tanaka E. Real-time emotion recognition system with multiple physiological signals. J. Adv. Mech. Des. Syst. Manuf. 2019;13:JAMDSM0075. doi: 10.1299/jamdsm.2019jamdsm0075. - DOI
    1. Wei L., Wei-Long Z., Bao-Liang L. Neural Information Processing: ICONIP 2016. Volume 9948. Springer; Cham, Switzerland: 2016. Emotion recognition using multimodal deep learning; pp. 521–529. Lecture Notes in Computer Science.
    1. Alsharekh M.F. Facial Emotion Recognition in Verbal Communication Based on Deep Learning. Sensors. 2022;22:6105. doi: 10.3390/s22166105. - DOI - PMC - PubMed
