MEG Evidence That Modality-Independent Conceptual Representations Contain Semantic and Visual Features

Julien Dirani et al. J Neurosci. 2024 Jul 3;44(27):e0326242024.
doi: 10.1523/JNEUROSCI.0326-24.2024.

Abstract

The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be purely semantic, or they could also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants, 15 females, 10 males). We then compared these representations to models representing semantic, sensory, and orthographic features. Results show that modality-independent representations correlate with both semantic and visual representations. There was no evidence that these results were due to picture-specific visual features or orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.
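
To make the cross-condition decoding step concrete, the sketch below illustrates word-to-picture temporal generalization under assumed inputs; it is not the authors' code. It swaps in a scikit-learn logistic-regression classifier for the paper's neural network classifiers, and the variable names (X_words, X_pictures, y_words, y_pictures) are hypothetical placeholders for the MEG epochs and basic-level concept labels.

    # Minimal Python sketch: train on word epochs at each timepoint, test on
    # picture epochs at every timepoint, yielding an accuracy matrix over
    # (n_train_times, n_test_times).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def cross_condition_generalization(X_words, y_words, X_pictures, y_pictures):
        # X_* have shape (n_trials, n_sensors, n_times); y_* hold concept labels.
        n_train_times = X_words.shape[2]
        n_test_times = X_pictures.shape[2]
        scores = np.zeros((n_train_times, n_test_times))
        for t_train in range(n_train_times):
            clf = make_pipeline(StandardScaler(),
                                LogisticRegression(max_iter=1000))
            clf.fit(X_words[:, :, t_train], y_words)          # train on words
            for t_test in range(n_test_times):
                scores[t_train, t_test] = clf.score(
                    X_pictures[:, :, t_test], y_pictures)     # test on pictures
        return scores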

Keywords: MEG; concepts; lexical; modality; semantic; visual.

Conflict of interest statement

The authors declare no competing financial interests.

Figures

Figure 1.
Analysis pipeline. (1) Cross-condition decoding in which neural network classifiers trained on one modality were tested on the other modality, for all pairs of timepoints. This allowed us to map the clusters of timepoints (t_train, t_test) where modality-independent representations of basic-level concepts were activated. (2) Classifiers with successful cross-condition generalization were assumed to have learned latent, modality-independent representations of the semantic space. We investigated the content of these representations using RSA, comparing them to three hypothesis spaces. We also compared them to the ResNet embeddings of the picture stimuli and the orthographic features of the words, to test whether the representations shared between pictures and words merely resulted from an automatic reactivation of stimulus-specific representations.
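
The RSA comparison in step (2) can be illustrated with the short sketch below; it is a simplified stand-in, not the published pipeline. The inputs `latent` (concept-by-dimension activations taken from a classifier's latent layer) and `hypothesis_features` (a concept-by-feature matrix for one hypothesis space) are assumed names, and the choice of a correlation-distance RDM compared by Spearman correlation is a common convention rather than a detail confirmed by the abstract.

    # Python sketch: build two representational dissimilarity matrices (RDMs)
    # and correlate them.
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rsa_correlation(latent, hypothesis_features):
        # Each input is (n_concepts, n_dims); rows must follow the same concept order.
        rdm_latent = pdist(latent, metric="correlation")              # 1 - Pearson r
        rdm_hypothesis = pdist(hypothesis_features, metric="correlation")
        rho, p = spearmanr(rdm_latent, rdm_hypothesis)                # rank correlation of RDMs
        return rho, p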
Figure 2.
The activation timing of modality-independent representations of basic-level concepts. A,B, Accuracy scores at each timepoint for classifiers trained and tested within each modality. The shaded regions indicate timepoints where classifier accuracy was above chance at the group level. C, Cross-condition decoding results where models trained on the MEG data from the words were tested on MEG data from the pictures for all pairs of timepoints (t_word, t_picture). The contour plot indicates the cluster of timepoints with accuracy scores significantly above chance. Modality-independent representations were active at ∼250 ms and sustained until ∼600 ms after stimulus onset. The part of the cluster that is off-diagonal indicates that representations that were active earlier in the pictures (∼100–300 ms) were delayed in the words (∼400–600 ms). Models trained on MEG data from the pictures and tested on the words did not significantly surpass chance-level accuracy.
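
The "cluster of timepoints with accuracy scores significantly above chance" in panel C is the kind of result a group-level cluster-based permutation test produces. The sketch below shows one common way to run such a test with MNE-Python; whether the authors used this exact routine, and the true chance level, are assumptions, and `scores_all` is a hypothetical array of per-subject generalization matrices.

    # Python sketch: cluster-based permutation test on word-to-picture
    # decoding accuracies.
    from mne.stats import permutation_cluster_1samp_test

    def above_chance_clusters(scores_all, chance, n_permutations=1024):
        # scores_all: (n_subjects, n_train_times, n_test_times) accuracies.
        # One-sided test of accuracy - chance > 0, clustered over the
        # (t_train, t_test) grid.
        t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
            scores_all - chance, n_permutations=n_permutations, tail=1)
        return t_obs, clusters, cluster_pv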
Figure 3.
RSA results investigating the content of modality-independent representations. For each pair of timepoints (t_word, t_picture) where modality-independent representations were identified (see cluster result in Fig. 2), we investigated their content using RSA. The contour plots indicate clusters of timepoints where modality-independent representations significantly correlated with the corresponding hypothesis. The gray area represents points outside the cluster of modality-independent representations and contains no data. A,B, Modality-independent representations significantly correlated with the semantic-features and visual-features hypotheses. Semantic features showed a widespread correlation over most of the cluster, while the visual-features correlation was qualitatively constrained to the part of the cluster around the diagonal. C, Modality-independent representations did not correlate with auditory features. D,E, Modality-independent representations did not correlate with ResNet embeddings or orthographic features, suggesting that the representations shared between pictures and words did not merely result from an automatic reactivation of stimulus-specific representations.

