J Vis. 2024 Mar 1;24(3):2. doi: 10.1167/jov.24.3.2.

Human estimates of descending objects' motion are more accurate than those of ascending objects regardless of gravity information


Takashi Hirata et al. J Vis.

Abstract

Humans can accurately estimate and track object motion, even when the object accelerates. Research shows that humans estimate and track descending (falling) objects more accurately than ascending (rising) ones. However, previous studies presented ascending and descending targets while observers sat upright, so the gravitational axis and the longitudinal body axis were aligned. It is therefore unclear whether this advantage depends on congruence between the direction of target motion and gravity, or between the direction of target motion and the longitudinal body axis. Two experiments were conducted to dissociate these possibilities. In Experiment 1, participants estimated the arrival time of targets moving upward or downward along the longitudinal body axis while in the upright posture (target-motion and gravitational axes congruent) or the supine posture (axes incongruent). In Experiment 2, smooth pursuit eye movements were assessed while participants tracked the same targets in the same postures. Both arrival time estimation and smooth pursuit performance were consistently more accurate for downward than for upward target motion, irrespective of posture. These findings suggest that everyday visual experience of objects moving toward the observer's feet may underlie the ability to accurately estimate and track descending motion.


Figures

Figure 1.
Experimental design. (A) Experimental setup and postures. The purple and green lines represent the Earth's gravitational axis and the longitudinal body axis, respectively. The magenta line indicates the axis of the visual target stimulus. (B) VR scenes depicting upward motion (left) and downward motion (right), with the target trajectory indicated by the magenta line. (C) Target acceleration patterns in Experiment 1. The x axis represents presentation time for the targets in the upward- and downward-motion scenes; the y axis represents target position (top) and target velocity (bottom). The blue, green, and red traces correspond to 1 G, 0 G, and −1 G, respectively. (D) Experimental schedule in Experiments 1 and 2.
Figure 2.
TD for the upward and downward motion of targets in upright and supine postures under the 1 G, 0 G, and −1 G conditions. The red and blue bars represent the mean TDs for the upward- and downward-motion conditions, respectively. Within each of the 1 G, 0 G, and −1 G panels, the left and right sides depict the upright and supine conditions, respectively. Error bars indicate one standard error.
Figure 3.
Vertical eye movement while a participant tracked the downward motion of a 1 G target in the upright posture. The horizontal axis represents the time after the beginning of target motion, and the vertical axis represents position (°). The black line shows the position of the 1 G target stimulus. The blue portions of the eye trace show smooth pursuit eye movements (SPEMs), and the gray portions show saccades detected by the desaccading algorithm.
Figure 4.
SPEM position difference under each experimental condition. The average SPEM position difference during target motion, with enlarged data between 1.09 and 1.19 seconds (a, b, c, d). The horizontal axis represents the time after the beginning of target motion, and the vertical axis represents the position difference (°). The red and blue lines represent the mean SPEM position difference for the upward- and downward-motion conditions, respectively. Shading indicates standard errors. (A) The 1 G upright condition. (B) The 1 G supine condition. (C) The 0 G upright condition. (D) The 0 G supine condition.
Figure 5.
SPEM position difference for the upward and downward motion of targets in upright and supine postures. The average SPEM position difference for the upward (red) and downward (blue) motion of targets under 1 G and 0 G conditions. The left side of each 1 G and 0 G figure shows the upright condition, and the right side shows the supine condition. Error bars indicate one standard error.
