Comparative Study

PLoS Comput Biol. 2018 Jul 27;14(7):e1006110. doi: 10.1371/journal.pcbi.1006110. eCollection 2018 Jul.

Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception

Luigi Acerbi et al. PLoS Comput Biol.

Abstract

The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework for Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause-attribution task and an implicit heading-discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, our findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. Experiment layout.
A: Subjects were presented with visual (svis) and vestibular (svest) headings either in the same direction (C = 1) or in different directions (C = 2). In different sessions, subjects were asked to judge whether the stimuli had the same cause (‘unity judgment’, explicit causal inference) or whether the vestibular heading was to the left or right of straight forward (‘inertial discrimination’, implicit causal inference). B: Distribution of stimuli used in the task. The mean stimulus direction was drawn from a discrete uniform distribution (−25°, −20°, −15°,…,25°). In 20% of the trials, svis = svest (‘same’ trials, C = 1); in the other 80% (‘different’ trials, C = 2), the disparity was drawn from a discrete uniform distribution (±5°, ±10°, ±20°, ±40°), which led to a correlated pattern of heading directions svis and svest. Visual cue reliability cvis was also drawn randomly on each trial (high, medium, or low).
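The stimulus distribution in panel B can be sketched as a simple sampling procedure. This is an illustrative reconstruction, not the authors' code; in particular, splitting the drawn disparity symmetrically around the mean direction is our assumption, and all names are ours.

```python
import numpy as np

def sample_trial(rng):
    """Sample one trial from the Fig 1B stimulus distribution (sketch).

    Assumptions (ours): the disparity is split symmetrically around the
    mean direction, and visual reliability is drawn uniformly.
    """
    s_mean = rng.choice(np.arange(-25, 30, 5))        # mean heading, deg
    c_vis = rng.choice(["high", "medium", "low"])     # visual reliability
    if rng.random() < 0.2:                            # 'same' trials, C = 1
        s_vis = s_vest = float(s_mean)
    else:                                             # 'different' trials, C = 2
        disparity = rng.choice([5, 10, 20, 40]) * rng.choice([-1, 1])
        s_vis = s_mean + disparity / 2
        s_vest = s_mean - disparity / 2
    return s_vis, s_vest, c_vis
```

Sampling many trials from this sketch reproduces the stated marginals: roughly 20% zero-disparity trials, with the remaining disparities in {±5°, ±10°, ±20°, ±40°}.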
Fig 2. Observer models.
A: Observer models consist of three model factors: Causal inference strategy, Shape of sensory noise, and Type of prior over stimuli (see text). B: Graphical representation of the observer model. In the left panel (C = 1), the visual (svis) and vestibular (svest) heading directions have a single, common cause. In the right panel (C = 2), svis and svest have separate sources, although not necessarily statistically independent ones. The observer has access to noisy sensory measurements xvis, xvest, and knows the visual reliability level of the trial cvis. The observer is asked either to infer the causal structure (unity judgment, explicit causal inference) or to report whether the vestibular stimulus is rightward of straight ahead (inertial discrimination, implicit causal inference). Model factors affect different stages of the observer model: the strategy used to combine the two causal scenarios; the type of prior over stimuli pprior(svis, svest|C); and the shape of the sensory noise distributions p(xvis|svis, cvis) and p(xvest|svest) (which equally affects both how noisy measurements are generated and the observer’s beliefs about that noise). C: Example decision boundaries for the Bay-X-E model (for the three reliability levels), and for the Fix model, for a representative observer. The observer reports ‘unity’ when the noisy measurements xvis, xvest fall within the boundaries. Note that the Bayesian decision boundaries expand with larger noise. Nonlinearities are due to the interaction between the eccentricity-dependence of the noise and the prior (wiggles are due to the discrete empirical prior).
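The common-cause posterior at the heart of the Bayesian strategy can be sketched in the simplest setting: independent Gaussian measurement noise and a Gaussian prior centered at straight ahead, as in the classic causal inference model of Körding et al. (2007). The paper's actual models also allow eccentricity-dependent noise and empirical priors, which are omitted here; parameter defaults are illustrative.

```python
import numpy as np

def posterior_common_cause(x_vis, x_vest, sigma_vis, sigma_vest,
                           sigma_p=45.0, p_common=0.5):
    """p(C = 1 | x_vis, x_vest) under Gaussian noise and a Gaussian prior.

    Minimal sketch of Bayesian causal inference; sigma_p and p_common
    values are placeholders, not the paper's fitted parameters.
    """
    v1, v2, vp = sigma_vis**2, sigma_vest**2, sigma_p**2
    # Likelihood of the measurements under C = 1 (one shared source)
    var1 = v1 * v2 + v1 * vp + v2 * vp
    like_c1 = np.exp(-0.5 * ((x_vis - x_vest)**2 * vp
                             + x_vis**2 * v2 + x_vest**2 * v1) / var1) \
              / (2 * np.pi * np.sqrt(var1))
    # Likelihood under C = 2 (two independent sources)
    var_vis, var_vest = v1 + vp, v2 + vp
    like_c2 = np.exp(-0.5 * (x_vis**2 / var_vis + x_vest**2 / var_vest)) \
              / (2 * np.pi * np.sqrt(var_vis * var_vest))
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
```

Reporting ‘unity’ whenever this posterior exceeds 0.5 yields decision boundaries in (xvis, xvest) space of the kind shown in panel C: large measurement disparity pushes the posterior toward C = 2.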
Fig 3. Explicit causal inference.
Results of the explicit causal inference (unity judgment) task. A: Proportion of ‘unity’ responses, as a function of stimulus disparity (difference between vestibular and visual heading direction), and for different levels of visual cue reliability. Bars are ±1 SEM across subjects. Unity judgments are modulated by stimulus disparity and visual cue reliability. B: Protected exceedance probability φ˜ and estimated posterior frequency (mean ± SD) of distinct model components for each model factor. Each factor also displays the Bayesian omnibus risk (BOR). C: Model fits of several models of interest (see text for details). Shaded areas are ±1 SEM of model predictions across subjects. Numbers on top right of each panel report the absolute goodness of fit.
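The protected exceedance probability φ˜ reported in panel B combines the ordinary exceedance probability with the Bayesian omnibus risk (Rigoux et al., 2014). A minimal sketch, assuming the Dirichlet posterior over model frequencies and the BOR have already been obtained from random-effects group-level Bayesian model selection:

```python
import numpy as np

def protected_exceedance_prob(alpha, bor, n_samples=100_000, seed=0):
    """Protected exceedance probability from a Dirichlet posterior.

    alpha : Dirichlet parameters of the posterior over model frequencies.
    bor   : Bayesian omnibus risk, i.e. the posterior probability that
            all models are equally frequent in the population.
    """
    rng = np.random.default_rng(seed)
    samples = rng.dirichlet(alpha, size=n_samples)
    # Exceedance probability: P(model k is the most frequent model)
    winners = np.argmax(samples, axis=1)
    phi = np.bincount(winners, minlength=len(alpha)) / n_samples
    # Protection: under the null (equal frequencies) each model 'wins' 1/K
    return (1.0 - bor) * phi + bor / len(alpha)
```

By construction the protected probabilities still sum to one, and a high BOR shrinks all of them toward chance level 1/K, which is why each factor's BOR is displayed alongside φ˜ in the figure.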
Fig 4. Implicit causal inference.
Results of the implicit causal inference (left/right inertial discrimination) task. A: Vestibular bias as a function of co-presented visual heading direction svis, at different levels of visual reliability. Bars are ±1 SEM across subjects. The inset shows a cartoon of how the vestibular bias is computed as minus the point of subjective equality of the psychometric curves of left/right responses (L/R PSE) for vestibular stimuli svest, for a representative subject and for a fixed value of svis. The vestibular bias is strongly modulated by svis and its reliability. B: Protected exceedance probability φ˜ and estimated posterior frequency (mean ± SD) of distinct model components for each model factor. Each factor also displays the Bayesian omnibus risk (BOR). C: Model fits of several models of interest (see text for details). Shaded areas are ±1 SEM of model predictions across subjects. Numbers on top right of each panel report the absolute goodness of fit.
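The bias-from-PSE computation in the panel A inset can be sketched as a maximum-likelihood psychometric fit to the left/right responses. For illustration we use a logistic psychometric function and a simple grid search; the function name, grids, and curve shape are our assumptions, not the paper's analysis pipeline.

```python
import numpy as np

def vestibular_bias(s_vest, resp_right):
    """Vestibular bias = minus the PSE of a fitted psychometric curve.

    s_vest     : array of vestibular headings (deg) for a fixed s_vis.
    resp_right : array of 0/1 'rightward' responses.
    Sketch: logistic curve, grid-search maximum likelihood.
    """
    mu_grid = np.linspace(-20.0, 20.0, 201)   # candidate PSEs
    sigma_grid = np.linspace(0.5, 10.0, 40)   # candidate slopes
    best_nll, best_mu = np.inf, 0.0
    for mu in mu_grid:
        for sigma in sigma_grid:
            p = 1.0 / (1.0 + np.exp(-(s_vest - mu) / sigma))
            p = np.clip(p, 1e-9, 1 - 1e-9)
            nll = -np.sum(resp_right * np.log(p)
                          + (1 - resp_right) * np.log(1 - p))
            if nll < best_nll:
                best_nll, best_mu = nll, mu
    return -best_mu  # bias = -PSE, as in the Fig 4A inset
```

A positive visual heading that attracts vestibular judgments shifts the PSE rightward, so the bias defined this way is negative, matching the sign convention of the inset.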
Fig 5. Posteriors over model parameters.
Each panel shows the marginal posterior distributions over a single parameter for each subject and task. Each line is an individual subject’s posterior (thick line: interquartile range; light line: 95% credible interval); different colors correspond to different tasks. For each subject and task, posteriors are marginalized over models according to their posterior probability (see Methods). For each parameter we report the across-tasks compatibility probability Cp, that is, the posterior probability that subjects were best described by the assumption that parameter values were the same across separate tasks, above and beyond chance. For the parameters in the first two rows, compatibility is computed across all three tasks, whereas for the last row compatibility only includes the bisensory tasks (bisensory inertial discrimination and unity judgment), as these parameters are irrelevant for the unisensory task.
Fig 6. Joint fits.
Results of the joint fits across tasks. A: Protected exceedance probability φ˜ and estimated posterior frequency (mean ± SD) of distinct model components for each model factor. Each factor also displays the Bayesian omnibus risk (BOR). B: Joint model fits of the explicit causal inference (unity judgment) task, for different models of interest. Each panel shows the proportion of ‘unity’ responses, as a function of stimulus disparity and for different levels of visual reliability. Bars are ±1 SEM of data across subjects. Shaded areas are ±1 SEM of model predictions across subjects. Numbers on top right of each panel report the absolute goodness of fit across all tasks. C: Joint model fits of the implicit causal inference task, for the same models of panel B. Panels show vestibular bias as a function of co-presented visual heading direction svis, and for different levels of visual reliability. Bars are ±1 SEM of data across subjects. Shaded areas are ±1 SEM of model predictions across subjects.
Fig 7. Sensitivity analysis of factorial model comparison.
Protected exceedance probability φ˜ of distinct model components for each model factor in the joint fits. Each panel also shows the estimated posterior frequency (mean ± SD) of distinct model components, and the Bayesian omnibus risk (BOR). Each row represents a variant of the factorial comparison. 1st row: Main analysis (as per Fig 6A). 2nd row: Uses marginal likelihood as model comparison metric. 3rd row: Uses hyperprior α0 = 1 for the frequencies over models in the population (instead of a flat prior over model factors). 4th row: Uses ‘probability matching’ strategy for the Bayesian causal inference model (replacing model averaging). 5th row: Includes probability matching as a sub-factor of the Bayesian causal inference family (in addition to model averaging).

