Sci Data. 2025 May 15;12(1):797.
doi: 10.1038/s41597-025-05159-6.

Lights, Camera, Emotion: REELMO's 1060 Hours of Affective Reports to Explore Emotions in Naturalistic Contexts


Erika Sampaolo et al. Sci Data.

Abstract

Emotions are central to human experience, yet their complexity and context-dependent nature challenge traditional laboratory studies. We present REELMO (REal-time EmotionaL responses to MOvies), a novel dataset bridging controlled experiments and naturalistic affective experiences. REELMO includes 1,060 hours of moment-by-moment emotional reports across 20 affective states collected during the viewing of 60 full-length movies, along with additional measures of personality traits, empathy, movie synopses, and overall liking from 161 participants. It also features fMRI data from 20 volunteers recorded while watching the full-length movie Jojo Rabbit. Complemented by visual and acoustic features as well as semantic content derived from deep-learning models, REELMO provides a comprehensive platform for advancing emotion research. Its high temporal resolution, rich annotations, and integration with fMRI data enable investigations into the interplay between sensory information, narrative structures, and contextual factors in shaping emotional experiences, as well as the study of affective chronometry, mixed-valence states, psychological trait influences, and machine learning applications in affective (neuro)science.


Conflict of interest statement

Competing interests: The authors declare no competing interests.

Figures

Fig. 1
Structure of the behavioral dataset. (a) Overview of the general directory structure, organized into folders corresponding to each of the 60 full-length movies. (b) Detailed contents of each movie folder, including computational features and semantic annotations derived from the films.
Fig. 2
Structure of the fMRI dataset. (a) Overview of the general directory structure. (b) Detailed contents of the “jojo-rabbit” folder, including the “fmri” subfolder. (c) Organization of subject-specific directories, showing functional and anatomical data along with derivatives.
Fig. 3
Panel (a) shows Jojo Rabbit emotion annotations at the individual level, with each emotion-by-time matrix representing one participant’s report. Color indicates emotion intensity, and dashed lines mark timepoints where the movie was split into shorter runs to reduce fatigue. Panel (b) presents group-level annotations, obtained by binarizing individual data (emotion present/absent) and summing across participants. The resulting matrix was scaled by its global maximum for comparability. Panel (c) displays the frequency of each reported emotion during the movie, derived from the group matrix and scaled by total occurrences. Panel (d) shows pairwise dissimilarity between all movies, based on cosine distances computed from movie-by-emotion matrices. Brighter colors reflect greater emotional divergence. Hierarchical clustering (average linkage) and the gap statistic identified the optimal number of clusters. Panel (e) summarizes key features of each cluster, including representative movies, dominant emotions, and common genres. Movie title size reflects distance from the cluster centroid; emotion and genre size indicate average frequency within the cluster. Panel (f) illustrates emotion co-occurrence across movies. Cosine dissimilarities between emotion categories were computed and clustered, revealing a valence-based structure: positive (blue) and negative (red) emotions formed distinct clusters, with finer sub-groupings such as (1) anguish–agitation–fear, (2) sadness–compassion, (3) anger–contempt, (4) confusion–disgust–uneasiness, and (5) joy–tenderness–amazement.
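The group-level pipeline described above (binarize individual reports, sum across participants, scale by the global maximum, then compare movie-by-emotion profiles with cosine distances and average-linkage hierarchical clustering) can be sketched as follows. This is a minimal illustration on random stand-in arrays, not the authors' code; the array shapes and the binarization threshold are assumptions for demonstration only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

# Hypothetical individual reports: participants x emotions x timepoints,
# with intensities in [0, 1] (stand-ins for the real REELMO annotations).
n_subjects, n_emotions, n_timepoints = 5, 20, 100
reports = rng.random((n_subjects, n_emotions, n_timepoints))

# Group-level annotation: binarize each report (emotion present/absent;
# the 0.5 threshold here is arbitrary), sum across participants, and
# scale the resulting matrix by its global maximum.
group = (reports > 0.5).sum(axis=0).astype(float)
group /= group.max()

# Frequency of each emotion, scaled by total occurrences.
freq = group.sum(axis=1)
freq /= freq.sum()

# Movie-by-emotion matrices (random stand-ins for the 60 movies) and
# pairwise cosine distances between movies.
movies = rng.random((60, n_emotions))
dissim = squareform(pdist(movies, metric="cosine"))

# Hierarchical clustering with average linkage; the cluster count would
# be chosen with the gap statistic in the actual analysis (fixed here).
Z = linkage(pdist(movies, metric="cosine"), method="average")
labels = fcluster(Z, t=4, criterion="maxclust")
```

The same cosine-distance-plus-clustering step, applied to emotion-by-movie profiles instead of movie-by-emotion profiles, yields the co-occurrence structure shown in panel (f).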
Fig. 4
Panel (a) shows the unthresholded intersubject correlation (ISC) map computed during the viewing of Jojo Rabbit, with strongest synchronization in auditory and visual areas, and additional effects in transmodal regions such as the precuneus, parietal lobules, and medial prefrontal cortex. Panel (b) presents results from the affective encoding analysis: group-level emotion ratings (n = 47) were used to predict fMRI activity in an independent sample (n = 20). Color intensity reflects R2 values (thresholded at 0.04), with notable peaks in emotion-related regions such as the ventromedial prefrontal cortex (R2 = 0.093), amygdala (R2 = 0.050), and subgenual anterior cingulate cortex (R2 = 0.071). Panel (c) shows results from meta-analytic decoding of this map, based on correlations with 50 topic maps from Neurosynth (LDA-based, GPT-4o-labeled). Top associated topics include “theory of mind and social cognition” (r = 0.216), “social interaction, empathy, and moral cognition” (r = 0.174), and “emotion processing and affective regulation” (r = 0.115). To validate sensory encoding, we repeated the analysis using acoustic and visual features. Panel (d) shows encoding of sound energy (R2 > 0.04), with peaks in auditory regions such as the transverse temporal gyrus (R2 = 0.145) and area 55b (R2 = 0.070). Corresponding decoding results (panel e) highlight topics like “auditory and speech perception” (r = 0.617) and “language processing and reading” (r = 0.340). Panel (f) presents encoding of visual features (R2 > 0.01), with highest values in visual and multisensory areas including the calcarine sulcus (R2 = 0.025), fusiform gyrus (R2 = 0.014), parahippocampal gyrus (R2 = 0.025), and posterior superior temporal sulcus (R2 = 0.015). Meta-analytic decoding (panel g) confirms relevance to topics such as “semantic representations and object knowledge” (r = 0.439), “multisensory integration and sensory modalities” (r = 0.402), and “motion perception and visual processing” (r = 0.254).
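The intersubject correlation map in panel (a) can be illustrated with a small sketch. The caption does not specify which ISC variant was used, so this shows one common formulation (leave-one-out: each subject's time series is correlated with the mean of all remaining subjects) on synthetic data built from a shared signal plus subject-specific noise; all names and parameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-region data: subjects x timepoints, composed of a
# shared stimulus-driven signal plus idiosyncratic noise.
n_subjects, n_timepoints = 20, 300
shared = rng.standard_normal(n_timepoints)
data = shared + 0.5 * rng.standard_normal((n_subjects, n_timepoints))

def leave_one_out_isc(ts):
    """Correlate each subject's time series with the mean time series
    of all remaining subjects (one common ISC formulation)."""
    n = ts.shape[0]
    isc = np.empty(n)
    for i in range(n):
        others = np.delete(ts, i, axis=0).mean(axis=0)
        isc[i] = np.corrcoef(ts[i], others)[0, 1]
    return isc

isc = leave_one_out_isc(data)
```

In whole-brain practice this computation is repeated per voxel or parcel; regions dominated by the shared signal (here, by construction, all of them) show high ISC, which is why auditory and visual cortices synchronize most strongly during movie viewing.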
