eNeuro. 2024 Sep 4;11(9):ENEURO.0357-23.2024.
doi: 10.1523/ENEURO.0357-23.2024. Print 2024 Sep.

Different Sensory Information Is Used for State Estimation when Stationary or Moving

Aaron L Wong et al. eNeuro.

Abstract

The accurate estimation of limb state is necessary for movement planning and execution. While state estimation requires both feedforward and feedback information, we focus here on the latter. Prior literature has shown that integrating visual and proprioceptive feedback improves estimates of static limb position. However, differences in visual and proprioceptive feedback delays suggest that multisensory integration could be disadvantageous when the limb is moving. We formalized this hypothesis by modeling feedback-based state estimation using the long-standing maximum likelihood estimation model of multisensory integration, which we updated to account for sensory delays. Our model predicted that the benefit of multisensory integration was largely lost when the limb was passively moving. We tested this hypothesis in a series of experiments in human subjects that compared the degree of interference created by discrepant visual or proprioceptive feedback when estimating limb position either statically at the end of the movement or dynamically at movement midpoint. In the static case, we observed significant interference: discrepant feedback in one modality systematically biased sensory estimates based on the other modality. However, no interference was seen in the dynamic case: participants could ignore sensory feedback from one modality and accurately reproduce the motion indicated by the other modality. Together, these findings suggest that the sensory feedback used to compute a state estimate differs depending on whether the limb is stationary or moving. While the former may tend toward multimodal integration, the latter is more likely to be based on feedback from a single sensory modality.

Keywords: feedback delays; multisensory integration; proprioception; state estimation; vision.


Conflict of interest statement

The authors declare no competing financial interests.

Figures

Figure 1.
Schematic of the paradigm for Experiment 1. A, The static estimation task had participants reproduce a stationary position in the task space. The target position was either cued visually with a circle appearing on the display (V trial), or it was cued proprioceptively as the position of the index fingertip at the endpoint of a random passive movement (P trial). The task began with a baseline block of unimodal (V and P) trials, followed by a block of both unimodal and bimodal (VP) trials. In all trials, the visual target was located 6 cm further from the participant than the proprioceptive target. In VP trials, participants were presented with both visual and proprioceptive cues. They were informed of the offset between cues and were instructed (prior to the trial) as to which sensory modality they should attend and report. Report accuracy feedback was provided only on unimodal V and P trials. B, The dynamic estimation task had participants reproduce a curved movement trajectory in the task space. The trajectory was cued either visually with the movement of a circular cursor (V trial), or it was cued proprioceptively as the path traveled by the index fingertip during passive movement of the arm (P trial). The task began with a baseline block of unimodal V and P trials, which was followed by a test block that included both unimodal and bimodal (VP) trials. In all trials, the visual and proprioceptive trajectories comprised opposing curves offset by 6 cm at the midpoint. As in the static estimation task, participants were informed of the offset and were instructed prior to VP trials as to which sensory modality they should attend and report. Midpoint accuracy feedback was provided only on unimodal trials.
Figure 2.
Model results. A, Updated MLE model accounting for limb movement and sensory delays. If the limb moves with speed ν, then at time t we have delayed visual (μτv, red dashed line) and proprioceptive (μτp, blue dashed line) feedback of the limb from some time in the past rather than at its current position (red and blue solid lines). This leads to an erroneous bimodal state estimate (μB, purple dashed line) that is offset from the ideal bimodal estimate (μ̂B, purple solid line) by some error (purple arrow). B, Simulation results suggest that as movement speed (ν) increases, the probability of obtaining an accurate bimodal sensory estimate decreases; in such cases, it may be better to use a unimodal state estimate. However, this also depends on the relative uncertainties of the two sensory modalities (top panel) and the relative time delay between the two modalities (bottom panel). C, Observed data from Experiment 1 were used to estimate the bimodal variance via the MLE combination of the observed unimodal sensory variances. For the static estimation task (left), no relationship was observed between the bimodal variance and the observed bias. In contrast, in the dynamic estimation task (right), there was a clear correlation across individuals. Consistent with the model, this suggests that individuals with greater bimodal variance in the dynamic estimation task likely relied on an integrated state estimate and were more susceptible to discrepant information from the other sensory modality.
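The logic of the delay-augmented MLE model in panel A can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the delay and variance values below are hypothetical placeholders, and the model assumes constant limb speed so that each delayed cue lags the true position by speed × delay.

```python
# Sketch of the delay-augmented MLE model described in Figure 2A.
# All parameter values are illustrative, not taken from the paper.

def mle_weights(var_v, var_p):
    """Standard MLE cue weights: each cue is weighted by the
    other cue's variance (more reliable cue gets more weight)."""
    w_v = var_p / (var_v + var_p)
    w_p = var_v / (var_v + var_p)
    return w_v, w_p

def bimodal_estimate_error(speed, tau_v, tau_p, var_v, var_p):
    """Offset of the delayed bimodal estimate from the limb's true
    position (the purple arrow in Fig. 2A).

    With the limb moving at constant speed, each delayed cue lags the
    current position by speed * delay, so the integrated estimate lags
    by the variance-weighted mix of the two lags. When speed is zero
    (static case), the error vanishes and integration is purely
    beneficial.
    """
    w_v, w_p = mle_weights(var_v, var_p)
    return speed * (w_v * tau_v + w_p * tau_p)

# Hypothetical numbers: vision delayed ~90 ms, proprioception ~30 ms,
# proprioception four times as variable as vision.
err_static = bimodal_estimate_error(speed=0.0, tau_v=0.09, tau_p=0.03,
                                    var_v=1.0, var_p=4.0)
err_moving = bimodal_estimate_error(speed=0.5, tau_v=0.09, tau_p=0.03,
                                    var_v=1.0, var_p=4.0)
print(err_static, err_moving)  # error grows linearly with limb speed
```

The key qualitative prediction of panel B falls out directly: the bimodal estimate's positional error scales with movement speed, so the usual variance-reduction benefit of integration can be outweighed during movement, favoring a unimodal estimate.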
Figure 3.
Results from the static and dynamic estimation tasks of Experiment 1. A, Data from a single participant in the static estimation task performing baseline unimodal trials (left panel) and bimodal trials (right panel). Thin lines reflect individual trials, and thick lines reflect the average movement in each condition. The cued endpoint location is indicated by a black x. B, The average reported endpoint position for each participant (thin lines) is shown along with the group mean (thick lines) for visual (V and VPV, red) and proprioceptive (P and VPP, blue) trials. C, Data from the same participant depicted in panel A performing unimodal (left) and bimodal (right) trials of the dynamic estimation task. D, Group data for the dynamic task akin to panel B. All units are in meters.
Figure 4.
Report biases in Experiment 1. A, Across individuals, there was no relationship between the biases observed when reporting visual and proprioceptive cues in either the static (left panel) or dynamic (right panel) estimation tasks. B, Across the static and dynamic estimation tasks, there were no relationships between the biases observed on bimodal trials in response to the visual cue (left panel) or in response to the proprioceptive cue (right panel).
Figure 5.
Histograms of average individual shift in report error for unimodal trials in Experiment 1. In the static estimation task (left panel), participants exhibited a shift in report error on unimodal trials in the context of conflicting bimodal sensory cues relative to the baseline. In contrast, unimodal trials in the dynamic estimation task (right panel) exhibited no such shift when bimodal trials were introduced.
Figure 6.
Report errors in Experiments 2 and 3. A, In Experiment 2, participants completed a static estimation task in which the sensory cues were displaced orthogonal to the primary direction of movement. Data from a single subject performing unimodal V and P trials (left panel) and bimodal VPV and VPP trials (right panel) are shown. Thick lines reflect the average movement for each condition across trials. The cued endpoint location is indicated by a black x. B, The average reported endpoint positions for each participant in Experiment 2 (thin lines) are shown along with the group mean (thick lines), for visual (V and VPV, red) and proprioceptive (P and VPP, blue) trials. Overall, participants exhibited a bias in report error on bimodal trials relative to unimodal trials when reporting the location of both visual and proprioceptive cues. C, In Experiment 3, participants completed a dynamic estimation task in which the sensory cues were displaced on the same side of the midline, causing movement paths to be curved in the same direction. Single-subject data are shown for unimodal (left panel) and bimodal trials (right panel). The cued midpoint location is indicated by a black x. D, Average reported midpoints for each participant (thin lines) are shown along with the group mean (thick lines). Participants did not exhibit any significant bias in report error on bimodal trials relative to unimodal trials when responding to either the visual or proprioceptive cues. All units are in meters.
