How do humans group non-rigid objects in multiple object tracking?: Evidence from grouping by self-rotation
- PMID: 34921401
- DOI: 10.1111/bjop.12547
Abstract
Previous studies on perceptual grouping have found that people can use spatiotemporal and featural information to group spatially separated rigid objects into a unit while tracking moving objects. However, few studies have tested the role of objects' self-motion information in perceptual grouping, although it is highly relevant to motion perception in three-dimensional space. In natural environments, objects often translate and rotate at the same time. An object's self-rotation severely disrupts its rigidity and topology, creates conflicting motion signals, and produces crowding effects. This study therefore examined the specific role of self-rotation information in grouping spatially separated non-rigid objects, using a modified multiple object tracking (MOT) paradigm with self-rotating objects. Experiment 1 found that people could use self-rotation information to group spatially separated non-rigid objects, even though this information was detrimental to attentive tracking and irrelevant to the task requirements, and they seemed to use it strategically rather than automatically. Experiment 2 provided stronger evidence that this grouping advantage came from self-rotation per se rather than from surface-level cues arising from self-rotation (e.g. similar 2D motion signals and common shapes). Experiment 3 changed the stimuli to more natural 3D cubes to strengthen the impression of self-rotation and again found that self-rotation improved grouping. Finally, Experiment 4 demonstrated that grouping by self-rotation and grouping by changing shape were statistically comparable but additive, suggesting that they are two distinct sources of object information. Thus, grouping by self-rotation mainly benefited from perceptual differences in motion flow fields rather than in deformation.
Overall, this study is the first to identify self-motion as a new feature that people can use to group objects in dynamic scenes, and it sheds light on debates about what entities/units we group and what kinds of information about a target we process while tracking objects.
Keywords: additivity; common fate; grouping; multiple object tracking; non-rigid; self-rotation.
© 2021 The British Psychological Society.