PLoS Comput Biol. 2014 Jan;10(1):e1003441.
doi: 10.1371/journal.pcbi.1003441. Epub 2014 Jan 23.

VBA: a probabilistic treatment of nonlinear models for neurobiological and behavioural data


Jean Daunizeau et al. PLoS Comput Biol. 2014 Jan.

Abstract

This work is part of an ongoing effort toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear, which makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work presents a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization.
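The three problems can be sketched in a few lines. The toolbox itself is MATLAB code; the following is a minimal Python illustration (all function names and parameter values are hypothetical) of problems (i) and (ii): simulate data from a nonlinear observation model, then recover its parameters by maximizing a Gaussian likelihood times a Gaussian prior (the mode at which a Laplace step would place its Gaussian posterior approximation).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical nonlinear observation model: y = g(theta, u) + noise.
def g(theta, u):
    return theta[0] * np.tanh(theta[1] * u)

rng = np.random.default_rng(0)
u = np.linspace(-2, 2, 50)                 # experimental design (inputs)
theta_true = np.array([1.5, 0.8])
sigma = 0.05                               # observation noise std
y = g(theta_true, u) + sigma * rng.standard_normal(u.size)  # (i) simulation

# (ii) parameter estimation: Gaussian likelihood + standard-normal prior
# yields a maximum-a-posteriori problem.
def neg_log_post(theta):
    return (np.sum((y - g(theta, u)) ** 2) / (2 * sigma**2)
            + 0.5 * np.sum(theta**2))

theta_map = minimize(neg_log_post, x0=np.array([1.0, 1.0])).x
```

With low observation noise the posterior mode lands close to the simulated parameters, which is the sanity check one would run before trusting the model on real data.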


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Figure 1
Figure 1. The experimental cycle.
The experimental cycle summarizes the interaction between modelling, experimental work and statistical data analysis. One starts with new competing hypotheses about a system of interest. These are then embodied in a set of candidate models that are to be compared with each other given empirical data. One then designs an experiment that is maximally discriminative with respect to the candidate models. Data acquisition and analysis then proceed, the conclusion of which serves to generate a new set of competing hypotheses, and so on. Adapted from .
Figure 2
Figure 2. The mean-field/Laplace approximation.
The variational Bayesian approach furnishes an approximation to the marginal posterior densities of subsets of unknown model parameters formula image. Here, the 2D landscape depicts a (true) joint posterior density formula image and the two black lines are the subsequent marginal posterior densities of formula image and formula image, respectively. The mean-field approximation basically describes the joint posterior density as the product of the two marginal densities (black profiles). In turn, stochastic dependencies between parameter subsets are replaced by deterministic dependencies between their posterior sufficient statistics. The Laplace approximation further assumes that the marginal densities can be described by Gaussian densities (red profiles).
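The key simplification of the mean-field step can be made concrete with a toy posterior. In this sketch (illustrative values only, not the toolbox's code) a correlated 2D Gaussian "joint posterior" is replaced by the product of its two marginals: the marginal means and variances survive, but the posterior correlation between the two parameters is discarded.

```python
import numpy as np

# True joint posterior: correlated 2D Gaussian.
rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])

# Mean-field approximation q(t1)q(t2): product of the marginals,
# i.e. same diagonal, zero off-diagonal.
cov_mf = np.diag(np.diag(cov))

rng = np.random.default_rng(1)
joint = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
mf = rng.multivariate_normal([0.0, 0.0], cov_mf, size=100_000)

corr_joint = np.corrcoef(joint.T)[0, 1]    # close to 0.8
corr_mf = np.corrcoef(mf.T)[0, 1]          # close to 0.0
```

The Laplace step then additionally forces each marginal to be Gaussian; in this toy example the marginals already are, so the factorisation is the only approximation at work.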
Figure 3
Figure 3. Selection error rate and the Laplace-Chernoff risk.
The (univariate) prior predictive densities of two generative models formula image (blue) and formula image (green) are plotted as a function of data y, given an arbitrary design u. The dashed grey line shows the marginal predictive density formula image that captures the probabilistic prediction of the whole comparison set formula image. The area under the curve (red) measures the model selection error rate formula image, which depends upon the discriminability between the two prior predictive densities formula image and formula image. This is precisely what the Laplace-Chernoff risk formula image is a measure of. Adapted from .
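The selection error rate of the figure is easy to evaluate numerically for a toy comparison. With two equiprobable models, selecting the model with the highest prior predictive density errs with probability 0.5 · ∫ min(p(y|m1), p(y|m2)) dy. The Gaussian predictive densities below are assumed for illustration only:

```python
import numpy as np

# Grid over data space and two toy (Gaussian) prior predictive densities.
y = np.linspace(-10.0, 12.0, 20001)

def gauss(y, mu, sd):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

p1 = gauss(y, 0.0, 1.0)    # p(y|m1)
p2 = gauss(y, 2.0, 1.0)    # p(y|m2)

# Selection error rate: half the overlap of the two predictive densities.
dy = y[1] - y[0]
err = 0.5 * np.sum(np.minimum(p1, p2)) * dy
```

For unit-variance Gaussians whose means are d = 2 apart, the overlap integral is 2Φ(−d/2), so the error rate is Φ(−1) ≈ 0.159: a design that pushed the two predictions further apart would shrink this red area.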
Figure 4
Figure 4. Comparison of asymmetric utility and asymmetric learning rate.
This figure summarizes the analysis of choice and value data using models that assume asymmetric utility, asymmetric learning rate, both asymmetries or none. Upper left: Trial-by-trial feedback history (either negative, neutral or positive). Grey neutral feedbacks correspond to ‘no-go’ choices. Upper right: Trial-by-trial dynamics of true value (red), measured value (black) and agent's binary go(1)/no-go(0) choices (black dots). Middle left: posterior probability of the four models given simulated choice data. Middle right: same format, given value data. Lower left: family posterior probabilities for both partitions of model space, given choice data (left: family ‘yes’ = {‘utility’, ‘both’} vs family ‘no’ = {‘learning’, ‘none’}; right: family ‘yes’ = {‘learning’, ‘both’} vs family ‘no’ = {‘utility’, ‘none’}). Lower right: same format, given value data.
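The ‘asymmetric learning rate’ hypothesis amounts to a one-line change to a standard delta-rule update: positive and negative prediction errors are weighted differently. A minimal sketch (rate values hypothetical, chosen for illustration):

```python
# Delta-rule value update with asymmetric learning rates:
# gains are learned faster than losses (alpha_gain > alpha_loss).
def q_update(q, outcome, alpha_gain=0.4, alpha_loss=0.1):
    delta = outcome - q                    # prediction error
    alpha = alpha_gain if delta > 0 else alpha_loss
    return q + alpha * delta

q = 0.0
for outcome in [1, 1, -1, 1, -1, -1]:      # toy feedback sequence
    q = q_update(q, outcome)
# The feedback averages to zero, yet q ends positive: the asymmetry
# biases the learned value upward.
```

The ‘asymmetric utility’ hypothesis instead leaves the learning rule symmetric and distorts the outcome itself (e.g. losses loom larger than gains), which is why the two asymmetries can mimic each other and must be disentangled by formal model comparison, as in the figure.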
Figure 5
Figure 5. Online design optimization for DCM comparison.
This figure summarizes the simulation of online design optimization, with the aim of best discriminating between two brain network models (formula image and formula image) given fMRI data time series. In this case, the problem reduces to deciding whether or not to introduce the second experimental factor (here, formula image = attentional modulation), on top of the first factor (formula image = photic stimulation). Upper left: the two network models to be compared given fMRI data (top/bottom: with/without attentional modulation of the V1→V5 connection). Upper middle: block-by-block temporal dynamics of design efficiency of both types of blocks. Green (resp. blue) dots correspond to blocks with (resp. without) attentional modulation. Upper right: scan-by-scan temporal dynamics of the optimized (online) design. Lower left: scan-by-scan temporal dynamics of the simulated fMRI signal (blue: V1, green: V5). Lower middle: block-by-block temporal dynamics of 95% posterior confidence intervals on the estimated modulatory effect (under model formula image). The green line depicts the strength of the simulated effect. Lower right: block-by-block temporal dynamics of log Bayes factors formula image.
Figure 6
Figure 6. Comparison of deterministic and stochastic dynamical systems.
This figure summarizes the VB comparison of deterministic (upper row) and stochastic (lower row) variants of a Lorenz dynamical system, given data simulated under the stochastic variant of the model. Upper left: fitted data (x-axis) is plotted against simulated data (y-axis), for the deterministic case. Perfect model fit would align all points on the red line. Lower left: same format, for the stochastic case. Upper middle: 95% posterior confidence intervals on hidden-states dynamics. Recall that for deterministic systems, uncertainty in the hidden states arises from evolution parameters' uncertainty. Lower middle: same format, stochastic system. Upper right: residuals' empirical autocorrelation (y-axis) as a function of temporal lag (x-axis), for the deterministic system. Lower right: same format, stochastic system.
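The deterministic/stochastic distinction in this comparison is simply whether state noise perturbs the hidden dynamics at each time step. A minimal Python sketch (Euler discretization, standard Lorenz parameters; the noise level is an assumed value, and VBA itself is a MATLAB toolbox):

```python
import numpy as np

# Euler simulation of a Lorenz system, optionally driven by additive
# state noise at each step (the 'stochastic' variant).
def simulate_lorenz(n=2000, dt=0.01, noise_sd=0.0, seed=0):
    rng = np.random.default_rng(seed)
    s, r, b = 10.0, 28.0, 8.0 / 3.0        # standard Lorenz parameters
    x = np.empty((n, 3))
    x[0] = [1.0, 1.0, 1.0]
    for t in range(n - 1):
        dx = np.array([s * (x[t, 1] - x[t, 0]),
                       x[t, 0] * (r - x[t, 2]) - x[t, 1],
                       x[t, 0] * x[t, 1] - b * x[t, 2]])
        x[t + 1] = x[t] + dt * dx + noise_sd * np.sqrt(dt) * rng.standard_normal(3)
    return x

x_det = simulate_lorenz()                  # deterministic variant
x_sto = simulate_lorenz(noise_sd=1.0)      # stochastic variant
```

Fitting the deterministic variant to data generated with state noise forces all unexplained variability into the measurement noise, which is why its residuals stay temporally autocorrelated (right-hand panels of the figure) while the stochastic variant can absorb it into the hidden states.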
Figure 7
Figure 7. Comparison of delayed and non-delayed dynamical systems.
This figure summarizes the VB comparison of non-delayed (upper row) and delayed (lower row) variants of a linear deterministic dynamical system, given data simulated under the delayed variant of the model. This figure uses the same format as Figure 6.
Figure 8
Figure 8. Comparison of white and auto-correlated state-noise.
This figure summarizes the VB comparison of stochastic systems driven with either white (upper row) or auto-correlated (lower row) state noise. This figure uses the same format as Figure 6.
Figure 9
Figure 9. Effect of the micro-time resolution.
This figure summarizes the effect of relying on either a slow (upper row) or fast (lower row) micro-time resolution when inverting nonlinear dynamical systems. Left: same format as Figure 6. Upper middle: estimated hidden-states dynamics at slow micro-time resolution (data samples are depicted using dots). Lower middle: same format, fast micro-time resolution. Upper right: parameters' posterior correlation matrix, at slow micro-time resolution. Lower right: same format, fast micro-time resolution.
Figure 10
Figure 10. Binary data classification.
This figure exemplifies a classification analysis, which is used to infer on the link between a continuous variable X and binary data y. The analysis is conducted on data simulated under either a null model (H0: no link) or a sigmoid mapping (H1). Upper left: the classification accuracy, in terms of the Monte-Carlo average probability of correct prediction under both types of data (left: H1, right: H0), for the training dataset. The green dots show the expected classification accuracy, using the true values of each model's set of parameters. The dotted red line depicts chance level. Upper right: same format, test dataset (no model fitting). Lower left: same format, for the log Bayes factor formula image, given the training dataset. Lower right: same format, given the full (train+test) dataset.
Figure 11
Figure 11. Random-effect analysis.
This figure exemplifies a random-effect GLM analysis, which is used to infer on the group mean of an effect of interest. The analysis is conducted on data simulated under either a null model (H0: group mean is zero) or a non-zero RFX model (H1). Left: Monte-Carlo average of the VB-estimated group mean under H1, given both types of data (left: H1, right: H0). Right: same format, for the log Bayes factor formula image.
Figure 12
Figure 12. Random-effect group-BMS.
This figure exemplifies a random-effect group-BMS analysis, which is used to infer on the best model at the group level. The analysis is conducted on two groups of 32 subjects, whose data were simulated under either a ‘full’ (formula image, group 1) or a ‘reduced’ (formula image, group 2) model. Upper left: simulated data (y-axis) plotted against fitted data (x-axis), for a typical simulation. Lower left: histograms of log Bayes factor formula image, for both groups (red: group 1, blue: group 2). Upper middle: model attributions, for group 1. The posterior probability formula image for each subject is coded on a black-and-white colour scale (black = 1, white = 0). Lower middle: same format, group 2. Upper right: exceedance probabilities, for group 1. The red line indicates the usual 95% threshold. Lower right: same format, group 2.
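The exceedance probabilities in the right-hand panels have a simple Monte-Carlo reading. Random-effect group-BMS yields a Dirichlet posterior over model frequencies in the population; the exceedance probability of a model is the posterior probability that it is the most frequent one. A sketch, with an assumed Dirichlet posterior (in practice its parameters are estimated from the subject-wise model evidences):

```python
import numpy as np

# Hypothetical Dirichlet posterior over the frequencies of 3 models.
alpha = np.array([24.0, 6.0, 4.0])

rng = np.random.default_rng(2)
samples = rng.dirichlet(alpha, size=100_000)   # sampled frequency profiles

# Exceedance probability of model k: fraction of samples in which
# model k has the highest frequency.
xp = np.mean(samples.argmax(axis=1)[:, None] == np.arange(3), axis=0)
```

Because the three exceedance probabilities partition the same event, they sum to one; a single value crossing the 95% line (as in the figure) is what licenses the claim that one model dominates the group.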
Figure 13
Figure 13. Improving Q-learning models with inversion diagnostics.
This figure demonstrates the added-value of Volterra decompositions, when deriving learning models with changing learning rates. Upper left: simulated belief (blue/red: outcome probability for the first/second action, green/magenta: volatility of the outcome contingency for the first/second action) of the Bayesian volatile learner (y-axis) plotted against trials (x-axis). Lower left: estimated hidden states of the deterministic variant of the dynamic learning rate model (blue/green: first/second action value, red: learning rate). This model corresponds to the standard Q-learning model (the learning rate is constant over time). Upper middle: estimated hidden states of the stochastic variant of the dynamic learning rate model (same format). Note the wide posterior uncertainty around the learning rate estimates. Lower middle: Volterra decomposition of the stochastic learning rate (blue: agent's chosen action, green: winning action, red: winning action instability). Upper right: estimated hidden states of the augmented Q-learning model (same format as before). Lower right: Volterra decomposition of the augmented Q-learning model's learning rate (same format as before).

References

    1. Stephan KE, Friston KJ, Frith CD (2009) Dysconnection in schizophrenia: from abnormal synaptic plasticity to failures of self-monitoring. Schizophrenia Bull 35(3): 509–27.
    2. Schmidt A, Smieskova R, Aston J, Simon A, Allen P, et al. (2013) Brain connectivity abnormalities predating the onset of psychosis: correlation with the effect of medication. JAMA Psychiatry 70(9): 903–12.
    3. Schofield T, Penny W, Stephan KE, Crinion J, Thompson AJ, et al. (2012) Changes in auditory feedback connections determine the severity of speech processing deficits after stroke. J Neurosci 32: 4260–4270.
    4. Moran R, Symmonds M, Stephan K, Friston K, Dolan R (2011) An in vivo assay of synaptic function mediating human cognition. Curr Biol 21: 1320–1325.
    5. Daunizeau J, David O, Stephan KE (2011) Dynamic causal modeling: a critical review of the biophysical and statistical foundations. NeuroImage 58(2): 312–22.
