Predicting attentional allocation in real-world environments: The need to investigate crossmodal semantic guidance
- PMID: 38243393
- DOI: 10.1002/wcs.1675
Abstract
Real-world environments are multisensory, meaningful, and highly complex. To parse these environments efficiently, a subset of this information must be selected both within and across modalities. However, the bulk of attention research has been conducted within single sensory modalities, with a particular focus on vision. Visual attention research has made great strides, with over a century of work methodically identifying the mechanisms that allow us to select critical visual information. Spatial attention, feature-based attention, and object-based attention have all been studied extensively. More recently, research has established semantics (meaning) as a key component of attentional allocation in real-world scenes, with the meaning of an item or environment affecting visual attentional selection. However, a full understanding of how semantic information modulates real-world attention requires studying more than vision in isolation. The world provides semantic information across all senses, but with this extra information comes greater complexity. Here, we summarize research on visual attention (including semantic-based visual attention) and crossmodal attention, and argue for the importance of studying crossmodal semantic guidance of attention. This article is categorized under: Psychology > Attention; Psychology > Perception and Psychophysics.
Keywords: attention; attentional prioritization; crossmodal attention; semantics.
© 2024 Wiley Periodicals LLC.