Philos Trans R Soc Lond B Biol Sci. 2024 Aug 26;379(1908):20230245.
doi: 10.1098/rstb.2023.0245. Epub 2024 Jul 15.

Interaction between the prefrontal and visual cortices supports subjective fear


Vincent Taschereau-Dumouchel et al. Philos Trans R Soc Lond B Biol Sci.

Abstract

It has been reported that threatening and non-threatening visual stimuli can be distinguished based on the multi-voxel patterns of haemodynamic activity in the human ventral visual stream. Do these findings mean that there may be evolutionarily hardwired mechanisms within early perception, for the fast and automatic detection of threat, and maybe even for the generation of the subjective experience of fear? In this human neuroimaging study, we presented participants ('fear' group: N = 30; 'no fear' group: N = 30) with 2700 images of animals that could trigger subjective fear or not as a function of the individual's idiosyncratic 'fear profiles' (i.e. fear ratings of animals reported by a given participant). We provide evidence that the ventral visual stream may represent affectively neutral visual features that are statistically associated with fear ratings of participants, without representing the subjective experience of fear itself. More specifically, we show that patterns of haemodynamic activity predictive of a specific 'fear profile' can be observed in the ventral visual stream whether a participant reports being afraid of the stimuli or not. Further, we found that the multivariate information synchronization between ventral visual areas and prefrontal regions distinguished participants who reported being subjectively afraid of the stimuli from those who did not. Together, these findings support the view that the subjective experience of fear may depend on the relevant visual information triggering implicit metacognitive mechanisms in the prefrontal cortex. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.
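For readers unfamiliar with multi-voxel pattern analysis, the following minimal sketch (Python with scikit-learn, on randomly generated stand-in data; all shapes and names are hypothetical, not the study's actual pipeline) illustrates the kind of cross-validated decoding of threatening versus non-threatening categories described above.

    # Minimal sketch of cross-validated multi-voxel pattern decoding.
    # Data are random stand-ins; on real beta maps the question is
    # whether accuracy exceeds the 0.5 chance level.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 300, 500                    # hypothetical ROI size
    X = rng.standard_normal((n_trials, n_voxels))    # one voxel pattern per trial
    y = rng.integers(0, 2, n_trials)                 # 1 = threatening, 0 = not

    clf = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(clf, X, y, cv=5).mean()   # cross-validated accuracy
    print(f'decoding accuracy: {accuracy:.3f}')          # ~0.5 on random data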

Keywords: amygdala; artificial neural networks; fear; prefrontal cortex; subjective experience.


Conflict of interest statement

We declare we have no competing interests.

Figures

Figure 1.
(a) Animal categories included in the fMRI experiment (see §2d for a complete list). (b) Participants were presented with a series of 3600 images of animals and human-made objects, each lasting 0.98 s. They were asked to pay attention to the image category and report any category change (e.g. from ‘cat’ to ‘cockroach’, as shown in the figure) with a button press. (c) Participants reporting high fear of some animals in the dataset presented a unique ‘fear profile’. Those profiles were decoded using (1) the participant's own brain data (‘fear’ participant) or (2) the brain data of other participants who were also presented with the same images (‘no fear’ participants). The decoding of the fear profile of each participant in the ‘fear’ group was compared with the mean decoding of that specific fear profile in the 30 participants of the ‘no fear’ group. ROI, region of interest.
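A hedged sketch of the comparison in panel (c), using made-up decoding scores: each ‘fear’ participant's own-profile score is compared against the mean score obtained when that same profile is decoded from the 30 ‘no fear’ participants. The paired t-test below is illustrative only; the study's actual statistics are given in its Methods.

    # Own-profile decoding in the 'fear' group versus the mean decoding of
    # the same profile across the 'no fear' group (all scores hypothetical).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    fear_scores = rng.normal(0.62, 0.05, 30)           # one score per 'fear' participant
    no_fear_scores = rng.normal(0.60, 0.05, (30, 30))  # each profile decoded in 30 others
    baseline = no_fear_scores.mean(axis=1)             # mean decoding per fear profile

    t, p = stats.ttest_rel(fear_scores, baseline)      # illustrative paired comparison
    print(f't = {t:.2f}, p = {p:.3f}')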
Figure 2.
Prediction of the fear profiles in participants with (‘fear’ group) and without (‘no fear’ group) subjective fear of the animals. (a–e) Generally, the fine-grained spatial patterns of haemodynamic activity in the entire ventral visual stream (VT) and within all four subregions (occipital cortex, Occ; fusiform gyrus, Fus; inferotemporal cortex, IT; and middle temporal cortex, MT) can distinguish, better than chance, between images of threatening and non-threatening animal categories (p-values are computed with respect to the permutation of all categories; see §4a for statistical information). This is shown by comparing mean decoding performance, within each group, against two random distributions of group means obtained by decoding (1) randomly permuted category labels (light blue) and (2) category labels randomly permuted within high- and low-fear ratings independently (dark blue; see §3a for more details). (f) No group differences were observed, indicating that above-chance decoding is obtained regardless of whether the participants reported being subjectively afraid of the typically threatening animal categories. This dissociation between subjective fear and stimulus threat was possible because some ‘threatening’ animals (e.g. cockroaches) were frightening to some but not all participants. Violin shapes represent density; dots represent individual participants (‘fear’ group) or group means (‘no fear’ group). The central dot represents the mean, and the error bars' edges the first and third quartiles. (g) ROIs based on the Brainnetome atlas and displayed using pySurfer (https://github.com/nipy/PySurfer/).
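The permutation baseline in panels (a–e) can be sketched as follows (Python with scikit-learn; data, shapes and the number of permutations are placeholders). The second null distribution, permuting labels within high- and low-fear ratings, would be built the same way with a stratified shuffle.

    # Observed group decoding score versus a null distribution built by
    # shuffling category labels and re-running the decoder.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def decoding_score(X, y):
        return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 300))      # hypothetical multi-voxel patterns
    y = rng.integers(0, 2, 200)              # threatening (1) vs non-threatening (0)

    observed = decoding_score(X, y)
    null = np.array([decoding_score(X, rng.permutation(y)) for _ in range(100)])
    p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)   # permutation p-value
    print(f'p = {p_value:.3f}')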
Figure 3.
(a) Fear profiles of participants can be predicted from the activity generated by the 2700 images in an artificial (deep) neural network: CLIP (a vision ‘transformer’). By fear profile we mean the self-reported subjective fear scores over all the animal categories for an individual participant. Based on the pattern of activity in the ‘latent space’ of the artificial neural network over many stimuli, we tried to predict these fear profiles for each participant. The r² coefficient measures how well activity from the ‘latent space’ of the network (see main text for more details) predicts the fear profile over different animal categories. These results indicate that CLIP performs far better than chance (see main text for statistics). (b) Synthetic images generated using the decoders of the fear profiles of four participants (based on the CLIP embeddings). To understand the nature of the representations within these networks that allowed the above results, we used an optimization procedure and StableUnCLIP (see https://huggingface.co/docs/diffusers/api/pipelines/stable_unclip) to generate synthetic images that represent the ‘prototypical’ content of some participants' fear profiles. These synthetic images do not necessarily resemble animals but include visual features of some of the most feared animals in the participants' profiles (from left to right: bee, worm, caterpillar and spider). Based on our own subjective inspection, the synthetic images do not necessarily appear fear-inducing.
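A minimal sketch of the prediction in panel (a), assuming one embedding per animal category; the ridge regression and all shapes below are stand-ins, not the authors' exact readout of the CLIP latent space.

    # Predict one participant's fear ratings over animal categories from
    # network embeddings; cross-validated r2 quantifies the fit.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.metrics import r2_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(3)
    n_categories, emb_dim = 90, 512                   # hypothetical sizes
    E = rng.standard_normal((n_categories, emb_dim))  # one embedding per category
    profile = rng.uniform(0.0, 1.0, n_categories)     # self-reported fear ratings

    pred = cross_val_predict(RidgeCV(), E, profile, cv=5)
    print(f'r2 = {r2_score(profile, pred):.3f}')      # ~0 for random embeddings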
Figure 4.
Difference in information synchronization between ventral visual regions and other brain areas, between participants with and without subjective fear of ‘threatening’ stimuli. Colour codes represent the t-values of the between-group differences in a measure of information synchronization. The measure captures how well the multivoxel pattern in a seed region (Occ, Fus, IT; same labels as in figure 2), specifically the degree to which it distinguishes threatening from non-threatening stimuli, can be predicted from the multivoxel pattern in another ‘target’ region (para-hippocampal area, ParHip; amygdala, Amyg; hippocampus, Hipp; orbitofrontal cortex, OFC; ventromedial prefrontal cortex, vmPFC; medial prefrontal cortex, mPFC; ventrolateral prefrontal cortex, vlPFC; insula, Ins; dorsolateral prefrontal cortex, dlPFC). What is plotted is not the absolute value of information synchronization, but the difference in these values between participants who reported being afraid of the relevant threatening stimuli and participants who did not. Pathways that differ significantly between the two groups after Bonferroni correction are marked with asterisks (*) (see §4c for statistical details). In other words, these information synchronization pathways distinguished between different levels of self-reported subjective fear (across participants), while the physical stimuli (including images of both typically threatening and non-threatening animal categories) were held constant. mPFC/ACC, medial prefrontal cortex/anterior cingulate cortex. The image of the ROIs was generated based on the Brainnetome atlas using pySurfer (https://github.com/nipy/PySurfer/).
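The flavour of such an informational-connectivity measure can be sketched as follows. This is an assumption-laden illustration (trial-wise decision values from a seed ROI, predicted from a target ROI's patterns), not the study's exact computation, which is described in §4c.

    # Trial-wise 'threat evidence' in a seed ROI, predicted from a target
    # ROI's multi-voxel patterns; the cross-validated fit stands in for an
    # information-synchronization score to be compared across groups.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, RidgeCV
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(4)
    n_trials = 240
    seed = rng.standard_normal((n_trials, 300))    # e.g. Fus patterns (hypothetical)
    target = rng.standard_normal((n_trials, 200))  # e.g. vmPFC patterns (hypothetical)
    y = rng.integers(0, 2, n_trials)               # threat labels

    # how strongly each trial's seed pattern signals 'threatening'
    evidence = cross_val_predict(LogisticRegression(max_iter=1000), seed, y,
                                 cv=5, method='decision_function')

    pred = cross_val_predict(RidgeCV(), target, evidence, cv=5)
    sync = np.corrcoef(evidence, pred)[0, 1]       # ~0 for unrelated random data
    print(f'sync = {sync:.3f}')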
