Sci Rep. 2019 Jan 18;9(1):239.
doi: 10.1038/s41598-018-37429-2.

Temporal dynamics of access to amodal representations of category-level conceptual information


Elisa Leonardelli et al. Sci Rep.

Abstract

Categories describe semantic divisions between classes of objects, and category-based models are widely used to investigate the conceptual system. One critical issue in this endeavour is isolating conceptual from perceptual contributions to category differences. An unambiguous way to address this confound is to combine multiple input modalities. To this end, we showed participants person/place stimuli in two input modalities: names and pictures. Using multivariate methods, we searched for category-sensitive neural patterns shared across input modalities and therefore independent of perceptual properties. The millisecond temporal resolution of magnetoencephalography (MEG) allowed us to consider the precise timing of conceptual access and, by comparing latencies between the two modalities ("time generalization"), to examine how processing latency depends on the input modality. Our results identified category-sensitive conceptual representations common to both modalities at three stages, and showed that conceptual access for words was delayed by about 90 ms with respect to pictures. We also show that for pictures, the first conceptual pattern of activity (shared between words and pictures) occurs as early as 110 ms. Collectively, our results indicate that conceptual access at the category level is a multistage process and that modality-specific delays in access determine when these representations are activated.


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Category-specific patterns of response result from comparing within- and between-category correlations. Crucially, in the cross-modal analysis, within- and between-category correlations are calculated on subsets of data across modalities. In the results shown in (a, b, c-left), tasks were analyzed separately and then averaged before statistical testing. (a) Person/place-specific information is robustly present for both modalities (all p-values < 0.001, Monte Carlo cluster-corrected): names from 230 ms to 610 ms, pictures from 100 ms (earliest statistical point) to 750 ms. (b) Quantifying person/place-related information across modalities revealed an early, a middle and a late cluster (all p-values < 0.005, Monte Carlo cluster-corrected; orange contours, initial threshold p = 0.05; black, p = 0.005). Significant category-sensitive conceptual representations lie off the diagonal for names, with a mean delay of 90 ms with respect to pictures. (c) Left: searchlight MVPA revealed the most informative sensors for each temporal cluster. Right: no significant differences were evident between tasks (deep/shallow) in any of these clusters.
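The Monte Carlo cluster correction named in the caption is not spelled out in this excerpt. A minimal sketch of the standard sign-flip, cluster-based permutation test over a one-dimensional time course is given below; the input shape (subjects × time points of per-subject effect values), `thresh`, and `n_perm` are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def cluster_permutation_p(diff, n_perm=1000, thresh=2.0, seed=0):
    """Sign-flip Monte Carlo cluster test for a subjects x time effect.

    diff : (n_subjects, n_times) array of per-subject effect values
           (e.g. within- minus between-category correlations).
    Returns the cluster-level p-value of the largest suprathreshold
    cluster of the group t-statistic.
    """
    rng = np.random.default_rng(seed)
    n_sub = diff.shape[0]

    def tstat(x):
        # one-sample t-statistic at each time point
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n_sub))

    def max_cluster_mass(t):
        # largest summed t-value over contiguous points above threshold
        best = run = 0.0
        for v in t:
            run = run + v if v > thresh else 0.0
            best = max(best, run)
        return best

    observed = max_cluster_mass(tstat(diff))
    # null distribution: randomly flip the sign of each subject's data
    null = np.array([
        max_cluster_mass(tstat(diff * rng.choice([-1.0, 1.0], (n_sub, 1))))
        for _ in range(n_perm)
    ])
    return float((null >= observed).mean())
```

Because the maximum cluster mass is taken over the whole time axis on every permutation, the resulting p-value is corrected for multiple comparisons across time.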
Figure 2
(a) In each trial a famous place or a famous person is presented: as names during the first two runs, then as pictures. Before every block of 20 trials, participants were instructed on the task they had to perform (deep or shallow semantic task); the tasks were presented in a randomly interleaved manner. (b) Schematic of the MVPA analysis approach, repeated for each subject, to create the persons-vs-places measure generalized across modalities and time. (1) For each condition and modality, the pattern evoked across sensors is extracted at each time point. (2) Category-selective patterns common across modalities are measured by comparing correlations within category but across modalities (c1/c2) with correlations across categories and across modalities (c3/c4). (3) When generalizing across time and modalities, each time point of one modality is compared with every time point of the other modality: the output is a matrix in which each axis represents the time of one modality. When timeN = timeP (diagonal), simultaneous data points for names and pictures are compared. If timeN > timeP, data points of the name modality are compared with data points occurring earlier in the picture modality, meaning that in this quadrant names are delayed with respect to pictures; for timeN < timeP, the opposite holds. Image of Big Ben: https://www.pexels.com/photo/historical-ferris-wheel-tower-church-2212/. Image of Hillary Clinton: credit to the US Department of State on Visualhunt, https://visualhunt.com/author/9cf212.
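Steps (2) and (3) of the caption can be sketched in a few lines of numpy. This is an illustrative reconstruction under stated assumptions, not the authors' code: array shapes, trial-averaging into one pattern per category, and the use of Pearson correlation per (timeN, timeP) pair are all assumptions made for the sketch.

```python
import numpy as np

def crossmodal_generalization(names, pictures, labels_n, labels_p):
    """Within- minus between-category cross-modal correlation,
    generalized over every (time_names, time_pictures) pair.

    names    : (trials, sensors, tN) name-modality data
    pictures : (trials, sensors, tP) picture-modality data
    labels_* : per-trial category codes (0 = person, 1 = place)
    Returns a (tN, tP) time-generalization matrix.
    """
    tN, tP = names.shape[2], pictures.shape[2]
    # average trials into one sensor pattern per category: (2, sensors, t)
    pat_n = np.stack([names[labels_n == c].mean(0) for c in (0, 1)])
    pat_p = np.stack([pictures[labels_p == c].mean(0) for c in (0, 1)])
    out = np.zeros((tN, tP))
    for i in range(tN):
        for j in range(tP):
            # 4 x 4 correlation matrix over the four sensor patterns:
            # rows 0,1 = name categories; rows 2,3 = picture categories
            r = np.corrcoef(np.vstack([pat_n[:, :, i], pat_p[:, :, j]]))
            within = (r[0, 2] + r[1, 3]) / 2   # same category, across modality (c1/c2)
            between = (r[0, 3] + r[1, 2]) / 2  # different category, across modality (c3/c4)
            out[i, j] = within - between
    return out
```

Positive values off the diagonal at timeN > timeP would correspond to the caption's case of name-modality patterns matching earlier picture-modality patterns, i.e. delayed access for names.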

