Review
Trends Cogn Sci. 2015 Nov;19(11):677-687. doi: 10.1016/j.tics.2015.08.008.

Understanding What We See: How We Derive Meaning From Vision

Alex Clarke et al.

Abstract

Recognising objects goes beyond vision, and requires models that incorporate different aspects of meaning. Most models focus on superordinate categories (e.g., animals, tools) which do not capture the richness of conceptual knowledge. We argue that object recognition must be seen as a dynamic process of transformation from low-level visual input through categorical organisation to specific conceptual representations. Cognitive models based on large normative datasets are well-suited to capture statistical regularities within and between concepts, providing both category structure and basic-level individuation. We highlight recent research showing how such models capture important properties of the ventral visual pathway. This research demonstrates that significant advances in understanding conceptual representations can be made by shifting the focus from studying superordinate categories to basic-level concepts.

Keywords: Concepts; category; fusiform gyrus; perirhinal cortex; semantics; ventral visual pathway.


Figures

Figure 1
Regions Supporting Conceptual Processing in the Anterior and Posterior Ventral Visual Pathway. Different subregions of the anterior temporal lobe are shown: the middle temporal gyrus (MTG) and inferior temporal gyrus (ITG) lie relatively laterally, the fusiform occupies a ventral position, and the perirhinal (PRC) and entorhinal (ERC) cortices lie more medially in the anterior medial temporal cortex (reprinted from [43]).
Figure 2
The Nature of Category-Specific Deficits. (A) Drawings by patient SE of common living and nonliving objects, showing a clear absence of distinctive feature information for living things and a preservation of detail for nonliving things. Nonliving objects (top left to bottom right): helicopter, chisel, anchor, windmill, bus. Living objects: crocodile, zebra, duck, penguin, camel. Reproduced with permission from Taylor and Francis. (B) MRI scan from patient SE showing extensive damage in the right anterior temporal lobe (ATL; image shown in radiological convention, previously unpublished).
Figure 3
Conceptual Structure Effects in the Ventral Visual Pathway. (A) Conceptual structure statistics modulate activity in both the posterior and anterior-medial temporal lobe, based on different feature-based statistics. Activity increases in the lateral posterior fusiform for objects with relatively more shared features, and in the medial posterior fusiform for objects with relatively fewer shared features. Bilateral anteromedial temporal cortex (AMTC) activity increases for concepts that are semantically more confusable (reproduced with permission from MIT Press). (B) Increasing damage to the perirhinal cortex (PRC) results in poorer performance for naming semantically more confusable objects. This is shown by first correlating the naming accuracy of each patient with a conceptual structure measure of the ease of conceptual individuation; this correlation is then related to the degree of damage to the PRC (crosses denote left hemisphere damage; circles denote right hemisphere damage) (reprinted from [43]). (C) Pattern similarity in bilateral PRC is related to conceptual similarity based on semantic features. Semantic similarity can be defined by the overlap of semantic features between concepts, such that concepts both cluster by superordinate category and show within-category variability. Testing the relationship between semantic feature similarity and pattern similarity in the brain shows that bilateral PRC similarity patterns likewise cluster by superordinate category and, crucially, show within-category differentiation aligned to conceptual similarity (reprinted with permission from the Society for Neuroscience). (D) The timecourse of superordinate category and basic-level concept information shown with magnetoencephalography (MEG). Using multiple linear regression, we can learn a mapping between the recorded MEG data and the visual and semantic measures for different objects.
After establishing how well this model explains the observed neural data, we asked how accurately it could predict MEG data for new objects. This showed that the superordinate category of an object can be successfully predicted before the basic-level concept (after accounting for the influence of visual statistics) (reprinted with permission from Oxford University Press).
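The encoding-model logic described in panel (D) can be illustrated with a minimal sketch: fit a linear mapping from per-object feature measures to sensor patterns on a training set, then predict sensor patterns for held-out objects. The data here are simulated, and the object counts, feature dimensions, and sensor counts are arbitrary illustrative choices, not the authors' actual pipeline or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's data): each object has
# a vector of visual/semantic measures, and we observe one MEG sensor
# pattern per object at a given time point.
n_objects, n_features, n_sensors = 60, 8, 32
X = rng.normal(size=(n_objects, n_features))       # feature measures per object
W_true = rng.normal(size=(n_features, n_sensors))  # unknown feature-to-sensor map
Y = X @ W_true + 0.1 * rng.normal(size=(n_objects, n_sensors))  # simulated MEG

# Fit the linear mapping by least squares on a training set of objects.
train, test = np.arange(50), np.arange(50, 60)
W_hat, *_ = np.linalg.lstsq(X[train], Y[train], rcond=None)

# Predict MEG patterns for held-out objects, and score each prediction by
# correlating it with the observed sensor pattern for that object.
Y_pred = X[test] @ W_hat
r = [np.corrcoef(Y_pred[i], Y[test][i])[0, 1] for i in range(len(test))]
```

In the real analysis, the same held-out predictions can be compared across candidate objects: if a predicted pattern matches observed patterns from the correct superordinate category earlier in time than it distinguishes the specific basic-level concept, category information precedes concept information.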

References

    1. Jolicoeur P. Pictures and names: making the connection. Cogn. Psychol. 1984;16:243–275. - PubMed
    2. Rosch E. Basic objects in natural categories. Cogn. Psychol. 1976;8:382–439.
    3. DiCarlo J.J. How does the brain solve visual object recognition? Neuron. 2012;73:415–434. - PMC - PubMed
    4. Kay K. Identifying natural images from human brain activity. Nature. 2008;452:352–356. - PMC - PubMed
    5. Krizhevsky A. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems Vol. 25. MIT Press; 2012.
