Review

Predicting attentional allocation in real-world environments: The need to investigate crossmodal semantic guidance

Kira Wegner-Clemens et al. Wiley Interdiscip Rev Cogn Sci. 2024 May-Jun;15(3):e1675. doi: 10.1002/wcs.1675. Epub 2024 Jan 19.

Abstract

Real-world environments are multisensory, meaningful, and highly complex. To parse these environments efficiently, a subset of this information must be selected both within and across modalities. However, the bulk of attention research has been conducted within sensory modalities, with a particular focus on vision. Visual attention research has made great strides, with over a century of research methodically identifying the underlying mechanisms that allow us to select critical visual information. Spatial attention, attention to features, and object-based attention have all been studied extensively. More recently, research has established semantics (meaning) as a key component of attentional allocation in real-world scenes, with the meaning of an item or environment affecting visual attentional selection. However, a full understanding of how semantic information modulates real-world attention requires studying more than vision in isolation. The world provides semantic information across all senses, but with this extra information comes greater complexity. Here, we summarize research on visual attention (including semantic-based visual attention) and crossmodal attention, and argue for the importance of studying crossmodal semantic guidance of attention. This article is categorized under: Psychology > Attention; Psychology > Perception and Psychophysics.

Keywords: attention; attentional prioritization; crossmodal attention; semantics.
