J Neurosci. 2023 May 17;43(20):3733-3742. doi: 10.1523/JNEUROSCI.1874-22.2023. Epub 2023 Apr 14.

Conceptual Associations Generate Sensory Predictions

Chuyao Yan et al.

Abstract

A crucial ability of the human brain is to learn and exploit probabilistic associations between stimuli to facilitate perception and behavior by predicting future events. Although studies have shown how perceptual relationships are used to predict sensory inputs, relational knowledge is often held between concepts rather than percepts (e.g., we learn to associate cats with dogs, rather than specific images of cats and dogs). Here, we asked whether and how sensory responses to visual input are modulated by predictions derived from conceptual associations. To this end, we repeatedly exposed participants of both sexes to arbitrary word–word pairs (e.g., car–dog), creating an expectation of the second word, conditional on the occurrence of the first. In a subsequent session, we exposed participants to novel word–picture pairs while measuring fMRI BOLD responses. All word–picture pairs were equally likely, but half of the pairs conformed to the previously formed conceptual (word–word) associations, whereas the other half violated them. Results showed suppressed sensory responses throughout the ventral visual stream, including early visual cortex, to pictures that corresponded to the previously expected words compared with unexpected words. This suggests that the learned conceptual associations were used to generate sensory predictions that modulated processing of the picture stimuli. Moreover, these modulations were tuning specific, selectively suppressing neural populations tuned toward the expected input. Combined, our results suggest that recently acquired conceptual priors generalize across domains and are used by the sensory brain to generate category-specific predictions, facilitating processing of expected visual input.

SIGNIFICANCE STATEMENT: Perceptual predictions play a crucial role in facilitating perception and the integration of sensory information. However, little is known about whether and how the brain uses more abstract, conceptual priors to form sensory predictions. In our preregistered study, we show that priors derived from recently acquired arbitrary conceptual associations result in category-specific predictions that modulate perceptual processing throughout the ventral visual hierarchy, including early visual cortex. These results suggest that the predictive brain uses prior knowledge across various domains to modulate perception, thereby extending our understanding of the extensive role predictions play in perception.

Keywords: conceptual associations; expectation suppression; perception; predictive processing.


Figures

Figure 1.
Experimental paradigm. A, A trial of the learning session. Two object words were presented sequentially for 500 ms each, without an interstimulus interval (ISI). The first word probabilistically predicted the second word. Each trial ended with a 1–3 s intertrial interval (ITI). B, A trial of the generalization session. As in the learning session, two objects were presented, but the trailing object word was replaced by a corresponding object image. Crucially, during this session the trailing object images were not predictable given the leading word. The leading word and trailing image were presented sequentially for 500 ms each, without ISI, followed by a 3–15 s ITI. C, The transitional probability matrix of the learning session, determining the associations between word pairs. L1 to L8 represent leading words and T1 to T8 represent trailing words. Green labels indicate that the word refers to a living object, whereas red indicates a nonliving object. Blue and brown cells denote expected and unexpected word pairs, respectively. The number inside each cell indicates the number of trials in the corresponding condition per run. D, The transitional probability matrix of the generalization session. The matrix was identical to that of the learning session except for three changes. First, T1–T8 represent trailing images instead of words. Moreover, a no-go condition was added in which the trailing images were of the same object as the leading words. Finally, the leading words were no longer predictive of the specific trailing stimulus; instead, each leading word was followed by one of two equiprobable trailing images, one of which depicted the previously expected object category. Thus, the blue cells represent the object images corresponding to the (previously) expected words, whereas the brown cells represent object images that correspond to the unexpected words.
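As an illustration of the design in C, the following minimal Python sketch builds a transitional probability matrix in which each leading word strongly predicts one trailing word. The matrix size matches the paradigm (8 leading, 8 trailing items), but p_expected is a placeholder, not the probability used in the study.

import numpy as np

# Illustrative transitional probability matrix for the learning session (Fig. 1C).
# p_expected is hypothetical; the actual per-cell trial counts appear in the figure.
n_items = 8          # leading words L1..L8, trailing words T1..T8
p_expected = 0.75    # placeholder probability of the expected trailing word

# Unexpected trailing words share the remaining probability mass equally
tp = np.full((n_items, n_items), (1 - p_expected) / (n_items - 1))
np.fill_diagonal(tp, p_expected)  # pair each Li with its expected Ti

assert np.allclose(tp.sum(axis=1), 1.0)  # each row is a probability distribution
print(tp.round(3))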
Figure 2.
Word–word associations aid in the classification of category congruency for word–word pairs. A, Behavioral benefits of prediction (expectation) for the word pairs indicate the learning of word associations during the learning session. Responses to expected trailing words were significantly more accurate (top) and faster (bottom) compared with unexpected words that required the same (Unexpected-S) or a different response (Unexpected-D). B, Development of behavioral benefits of prediction during the learning session. Responses to expected trailing words were more accurate (top) and faster (bottom) compared with unexpected words that required the same (Unexpected-S) or a different response (Unexpected-D) across all learning blocks, demonstrating rapid learning. Error bars indicate within-subject SE.
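The error bars throughout these figures are within-subject SEs. The legends do not specify the exact normalization used; a common choice is the Cousineau (2005) method with the Morey (2008) correction, sketched below in Python as an assumption, not as the authors' code.

import numpy as np

def within_subject_se(data):
    """Within-subject SE per condition (Cousineau, 2005; Morey, 2008 correction).

    data: array of shape (n_subjects, n_conditions).
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    # Remove between-subject variability: center each subject on the grand mean
    normed = data - data.mean(axis=1, keepdims=True) + data.mean()
    # Morey correction compensates for the variance lost in the normalization
    return normed.std(axis=0, ddof=1) / np.sqrt(n) * np.sqrt(k / (k - 1))

# Hypothetical example: 30 subjects x 3 conditions of reaction times (s)
rng = np.random.default_rng(0)
rts = rng.normal(loc=[0.45, 0.50, 0.52], scale=0.05, size=(30, 3))
print(within_subject_se(rts))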
Figure 3.
Word–word associations facilitate behavioral responses to corresponding word–image pairs. A, Behavioral performance for the category classification task during the generalization (MRI) session. Leading words were not predictive of the trailing images in the generalization session; therefore, any behavioral benefits of prediction must have been derived from the word–word associations learned during the learning session. Responses were highly accurate (left) and did not differ between expectation conditions. RTs (right) were significantly faster to expected trailing object images compared with unexpected object images, indicating generalization of the associations from the word–word pairs to the word–image pairs. B, RTs to expected (blue) and unexpected (brown) object images for early runs (Run 1+2) and late runs (Run 3+4). Error bars indicate within-subject SE; **p < 0.01.
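An RT effect like the one in A is a within-subject contrast. The sketch below shows one standard way to test it, a paired t test on subject-level mean RTs; the data are simulated placeholders, and this is not the authors' analysis pipeline.

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject mean RTs (s) for expected vs unexpected images
rng = np.random.default_rng(1)
n_subjects = 30
rt_unexpected = rng.normal(loc=0.52, scale=0.06, size=n_subjects)
rt_expected = rt_unexpected - rng.normal(loc=0.02, scale=0.02, size=n_subjects)

# Paired t test: is the expected-minus-unexpected RT difference reliably negative?
t, p = ttest_rel(rt_expected, rt_unexpected)
print(f"t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")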
Figure 4.
Expectation suppression across the ventral visual stream. A, Three anatomical masks in the ventral visual pathway: EVC (top), object-selective LOC (middle), and VTC (bottom). These anatomical masks were further constrained per participant using independent localizer data (see above, Materials and Methods, ROI definition). B, Averaged BOLD responses (parameter estimates) to expected (blue) and unexpected (brown) object images within EVC, LOC, and VTC. In all three ROIs, BOLD responses were significantly suppressed to expected compared with unexpected object images. Error bars indicate within-subject SE; **p < 0.01; p values were adjusted for three comparisons (ROIs) using FDR correction. C, Expectation suppression revealed by whole-brain analysis. Color represents the parameter estimates for the contrast expected minus unexpected, displayed on the MNI-152 template brain. Blue clusters represent decreased activity for expected compared with unexpected object images. Opacity indicates the z statistic of the contrast. Black contours outline statistically significant clusters (Gaussian random field cluster corrected). Significant clusters were observed in the ventral visual stream, including EVC, LOC, and VTC.
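The FDR adjustment for the three ROI comparisons presumably follows the standard Benjamini-Hochberg procedure. Below is a self-contained Python sketch of that procedure as a generic illustration, not the authors' code.

import numpy as np

def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Scale each sorted p-value by m / rank
    scaled = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downward
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

# Hypothetical uncorrected p-values for the three ROIs (EVC, LOC, VTC)
print(fdr_bh([0.004, 0.009, 0.020]))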
Figure 5.
Expectation suppression during early and late runs. Averaged BOLD responses to expected (blue) and unexpected (brown) object images for early runs (Run 1+2) and late runs (Run 3+4) within EVC (left), LOC (middle), and VTC (right). Across all three ROIs, expectation suppression did not significantly extinguish over time (runs). Error bars indicate within-subject SE; *p < 0.05, **p < 0.01, ***p < 0.001 (FDR corrected).
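Whether suppression extinguishes over runs is a 2x2 within-subject question (expectation x time). One simple way to test such an interaction is a paired t test on the difference of differences, sketched below with simulated placeholder data; the authors' actual model may differ.

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject BOLD parameter estimates (a.u.), 30 subjects
rng = np.random.default_rng(2)
n = 30
exp_early, unexp_early = rng.normal(1.0, 0.3, n), rng.normal(1.2, 0.3, n)
exp_late, unexp_late = rng.normal(1.0, 0.3, n), rng.normal(1.2, 0.3, n)

# Expectation suppression per time bin, then test whether it changes over runs
supp_early = unexp_early - exp_early   # runs 1+2
supp_late = unexp_late - exp_late      # runs 3+4
t, p = ttest_rel(supp_early, supp_late)
print(f"interaction: t({n - 1}) = {t:.2f}, p = {p:.3f}")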
Figure 6.
Expectation suppression only for preferred object stimuli. BOLD responses to expected (blue) and unexpected (brown) object images for preferred and nonpreferred stimuli within EVC (left), LOC (middle), and VTC (right). In all three ROIs, BOLD responses were suppressed to expected object images exclusively when the object category was preferred. BOLD responses did not differ between expected and unexpected images for nonpreferred object images. Error bars indicate within-subject SE; *p < 0.05, ***p < 0.001 (FDR corrected); BF10 < 1/3 denotes evidence for the absence of a difference.
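The BF10 < 1/3 criterion marks comparisons where the data favor the null. One common way to obtain BF10 for a paired contrast is the JZS Bayes factor of Rouder et al. (2009); the Python sketch below implements that formula as an illustration, assuming the default Cauchy prior scale, and is not the authors' analysis code.

import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=0.707):
    """JZS Bayes factor (Rouder et al., 2009) for a one-sample or paired t test.

    t: observed t statistic; n: number of subjects; r: Cauchy prior scale.
    BF10 < 1/3 is conventionally read as moderate evidence for the null.
    """
    df = n - 1
    # Marginal likelihood under H0 (effect size delta = 0)
    m0 = (1 + t**2 / df) ** (-n / 2)
    # Marginal likelihood under H1, integrating over the prior on g
    def integrand(g):
        return ((1 + n * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n * g * r**2) * df)) ** (-n / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0

# Hypothetical nonsignificant contrast: t = 0.4 with 30 subjects
print(jzs_bf10(t=0.4, n=30))  # well below 1/3, i.e., evidence for no difference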

