Randomized Controlled Trial

Top-down control of visual responses to fear by the amygdala

Nicholas Furl et al. J Neurosci. 2013 Oct 30;33(44):17435-43. doi: 10.1523/JNEUROSCI.2992-13.2013.

Abstract

The visual cortex is sensitive to emotional stimuli. This sensitivity is typically assumed to arise when the amygdala modulates visual cortex via backwards connections. Using human fMRI, we compared dynamic causal connectivity models of sensitivity to fearful faces. This model comparison tested whether the amygdala modulates distinct cortical areas, depending on dynamic or static face presentation. The ventral temporal fusiform face area showed sensitivity to fearful expressions in static faces. However, for dynamic faces, we found fear sensitivity in dorsal motion-sensitive areas within hMT+/V5 and superior temporal sulcus. The model with the greatest evidence included connections modulated by dynamic and static fear from the amygdala to dorsal and ventral temporal areas, respectively. According to this functional architecture, the amygdala could enhance encoding of fearful expression movements from video and the form of fearful expressions from static images. The amygdala may therefore optimize visual encoding of socially charged and salient information.

Figures

Figure 1.
Motion sensitivity to facial stimuli. a, Group-level statistical parametric map for the 13 participants used in ROI and connectivity analyses. Voxels showing significant effects from the localizer runs at p < 0.001 (uncorrected) are projected on an inflated cortical surface of the right hemisphere in MNI space. Green represents voxels sensitive to facial motion; red represents face-selective voxels; yellow represents their overlap. Motion sensitivity to faces without face selectivity is visible in V5f, whereas motion sensitivity to faces and face selectivity overlap in the STS. b, Face selectivity and motion sensitivity to faces in a representative participant. c, Voxels from localizer run data showing face selectivity in bilateral amygdala at p < 0.001 uncorrected. d, Voxels from main experiment run data showing significant differences between all faces and Fourier-scrambled patterns in bilateral amygdala at p < 0.005 uncorrected.
Figure 2.
Group-level ROI analysis for localizer runs. Mean responses to dynamic and static faces, objects, and random-dot patterns are shown as follows: a, the right OFA; b, the right FFA; c, the right V5f; d, the face-selective area in the STS; e, the amygdala. Error bars indicate SEM. All graphs represent the 13 participants who manifested every ROI.
Figure 3.
Group-level ROI analysis for main experiment runs. a, Mean responses to dynamic and static disgust, happy, and fearful facial expressions in the right OFA. Responses are assessed relative to Fourier-scrambled pattern baseline. b, Mean responses in the right FFA. c, Mean responses in the right V5f. d, Mean responses in the face-selective area in the STS. e, Mean responses in the amygdala. *p < 0.05, enhanced responses to fearful expressions for either dynamic or static expressions. Error bars indicate SEM. All graphs represent the 13 participants who manifested every ROI.
Figure 4.
Connectivity analysis results. a, The optimal model. This model was evaluated for the 13 participants with all ROIs. Exogenous inputs (of dynamic or static faces) are indicated. Gray arrows indicate endogenous connections; green arrows indicate connections modulated by static fear; blue arrows indicate connections modulated by dynamic fear. b, Relative log-evidences for two model family comparisons. The posterior probability for the family with the highest evidence is numbered above the bar for the most likely family. “Amy input” tests for evidence favoring an exogenous input to the amygdala. “Full connectivity” tests for evidence favoring models with full endogenous connectivity versus sparse models. c, Model family comparisons testing modulation of dynamic fear on different possible connections projecting to V5f and STS. d, Model family comparisons testing modulation of static fear on different possible connections projecting to FFA. amy, Amygdala.
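The family comparisons in Figure 4b report relative log-evidences together with a posterior probability for the winning family. Under a flat prior over families, that posterior probability is simply a softmax of the log-evidences. A minimal sketch of this computation (the log-evidence values here are hypothetical, not taken from the paper):

```python
import numpy as np

def posterior_model_probs(log_evidences):
    """Posterior probabilities over models or model families from their
    log-evidences, assuming a flat prior (i.e., a softmax of log-evidences)."""
    le = np.asarray(log_evidences, dtype=float)
    le -= le.max()                 # subtract max for numerical stability
    p = np.exp(le)
    return p / p.sum()

# Hypothetical: family 2 has a relative log-evidence of 3 over family 1
probs = posterior_model_probs([0.0, 3.0])   # family 2 wins with p > 0.95
```

A relative log-evidence of 3 (a Bayes factor of about 20) is the conventional threshold for "strong" evidence, which is why posterior probabilities above ~0.95 are typically reported above the winning bar.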
Figure 5.
Postscanning behavioral results for faces shown in main experiment runs. a, Motion intensity ratings for facial videos and Fourier phase-scrambled videos of disgust, happy, and fearful expressions. b, Mean emotional intensity ratings of veridical dynamic and static disgust, happy, and fearful expressions. c, d′ classification performance. d, Reaction times (ms) for correct expression classifications. Error bars indicate SEM. All graphs represent the 12 participants who manifested every ROI and had behavioral data.
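Figure 5c reports expression classification performance as the sensitivity index d′, computed from hit and false-alarm rates as d′ = z(HR) − z(FAR). A minimal sketch of that computation (the trial counts below are hypothetical, and the log-linear correction is one common choice for handling extreme rates, not necessarily the one used in the paper):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Applies a log-linear correction (add 0.5 to each cell count) so that
    perfect rates of 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf   # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one expression category
dp = d_prime(hits=18, misses=2, false_alarms=4, correct_rejections=16)
```

With equal hit and false-alarm rates d′ is zero (chance performance); values around 1–2 indicate reliable discrimination.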
