Behav Res Methods. 2024 Aug;56(5):5103-5115.
doi: 10.3758/s13428-023-02249-4. Epub 2023 Oct 11.

The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities

Celina I von Eiff et al. Behav Res Methods. 2024 Aug.

Abstract

We describe JAVMEPS, an audiovisual (AV) database of emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistically induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords. Crucially, JAVMEPS contains (C) precisely time-synchronized congruent and incongruent AV (and corresponding auditory-only) stimuli with two emotions (anger, surprise), (C1) with original intensity (ten speakers, four pseudowords) and (C2) with graded AV congruence (implemented via five voice morph levels, from caricatures to anti-caricatures; eight speakers, two pseudowords). We collected classification data for Stimulus Set A from 22 normal-hearing listeners and four cochlear implant (CI) users, for two pseudowords, in auditory-only, visual-only, and AV conditions. Normal-hearing individuals showed good classification performance (McorrAV = .59 to .92), with classification rates in the auditory-only condition ≥ .38 correct (surprise: .67, anger: .51). Despite compromised vocal emotion perception, CI users performed above the chance level of .14 for auditory-only stimuli, with best rates for surprise (.31) and anger (.30). We anticipate that JAVMEPS will become a useful open resource for researchers studying auditory emotion perception, especially when adaptive testing or calibration of task difficulty is desirable. With its time-synchronized congruent and incongruent stimuli, JAVMEPS can also help fill a gap in research on dynamic audiovisual integration in emotion perception, via behavioral or neurophysiological recordings.
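As a rough illustration, the factorial composition of Stimulus Set A described above (12 speakers × 4 pseudowords × 7 emotion categories × 3 presentation conditions) can be enumerated as follows. This is a hypothetical sketch only: the speaker and pseudoword labels are placeholders, not the database's actual identifiers or file layout.

```python
# Hypothetical enumeration of JAVMEPS Stimulus Set A.
# All labels below are illustrative placeholders, not the actual database identifiers.
from itertools import product

speakers = [f"spk{i:02d}" for i in range(1, 13)]          # 12 speakers
pseudowords = ["pw1", "pw2", "pw3", "pw4"]                # 4 bisyllabic pseudowords (placeholder names)
emotions = ["happy", "fearful", "angry", "sad",
            "disgusted", "surprised", "neutral"]          # 6 basic emotions + neutral
conditions = ["auditory_only", "visual_only", "audiovisual"]

# One stimulus file per cell of the factorial design
set_a = list(product(speakers, pseudowords, emotions, conditions))
print(len(set_a))  # 12 * 4 * 7 * 3 = 1008 files in Set A
```

The remaining files (Sets B, C1, and C2, with morphed voice intensities and congruent/incongruent AV pairings) follow analogous factorial structures and bring the total to 2256.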

Keywords: Adaptive testing; Audiovisual integration; Cochlear implant; Emotion; Emotion induction; Stimulus database; Voice morphing.


Conflict of interest statement

The authors report no conflicts of interest, financial or otherwise.

Figures

Fig. 1. Still-image frame examples of the seven emotions contained in JAVMEPS.

Fig. 2. Classification performance of normal-hearing individuals and CI users for the seven emotions contained in JAVMEPS, separately for auditory-only, visual-only, and AV stimuli. Note the different scaling across plots, in the interest of visibility.
