How do humans group non-rigid objects in multiple object tracking?: Evidence from grouping by self-rotation

Luming Hu et al. Br J Psychol. 2022 Aug;113(3):653-676. doi: 10.1111/bjop.12547. Epub 2021 Dec 17.

Abstract

Previous studies on perceptual grouping found that people can use spatiotemporal and featural information to group spatially separated rigid objects into a unit while tracking moving objects. However, few studies have tested the role of objects' self-motion information in perceptual grouping, even though such information is highly relevant to motion perception in three-dimensional space. In natural environments, objects typically translate and rotate at the same time. Self-rotation disrupts objects' rigidity and topology, creates conflicting motion signals, and produces crowding effects. This study therefore examined the specific role of self-rotation information in grouping spatially separated non-rigid objects, using a modified multiple object tracking (MOT) paradigm with self-rotating objects. Experiment 1 found that people could use self-rotation information to group spatially separated non-rigid objects, even though this information was deleterious for attentive tracking and irrelevant to the task requirements, and they appeared to use it strategically rather than automatically. Experiment 2 provided stronger evidence that this grouping advantage came from self-rotation per se rather than from surface-level cues arising from self-rotation (e.g. similar 2D motion signals and common shapes). Experiment 3 changed the stimuli to more natural 3D cubes to strengthen the impression of self-rotation and again found that self-rotation improved grouping. Finally, Experiment 4 demonstrated that grouping by self-rotation and grouping by changing shape were statistically comparable but additive, suggesting that they are two distinct sources of object information. Thus, grouping by self-rotation mainly benefited from perceptual differences in motion flow fields rather than in deformation. Overall, this study is the first to identify self-motion as a feature that people can use to group objects in dynamic scenes, and it sheds light on debates about what entities/units we group and what kinds of information about a target we process while tracking objects.

Keywords: additivity; common fate; grouping; multiple object tracking; non-rigid; self-rotation.

