A tutorial on the free-energy framework for modelling perception and learning

Rafal Bogacz

J Math Psychol. 2017 Feb;76(Pt B):198-211. doi: 10.1016/j.jmp.2015.11.003.

Abstract

This paper provides an easy-to-follow tutorial on the free-energy framework for modelling perception developed by Friston, which extends the predictive coding model of Rao and Ballard. These models assume that the sensory cortex infers the most likely values of attributes or features of sensory stimuli from the noisy inputs encoding the stimuli. Remarkably, these models describe how this inference could be implemented in a network of very simple computational elements, suggesting that this inference could be performed by biological networks of neurons. Furthermore, learning about the parameters describing the features and their uncertainty is implemented in these models by simple rules of synaptic plasticity based on Hebbian learning. This tutorial introduces the free-energy framework using very simple examples, and provides step-by-step derivations of the model. It also discusses in more detail how the model could be implemented in biological neural circuits. In particular, it presents an extended version of the model in which the neurons only sum their inputs, and synaptic plasticity depends only on the activity of pre-synaptic and post-synaptic neurons.
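As a concrete illustration of the kind of inference the abstract describes, the sketch below (a minimal Python illustration, not code from the paper) simulates the simplest case treated in the tutorial: a single feature phi is estimated from one noisy input u by gradient ascent on the negative free energy, with two prediction-error nodes carrying the precision-weighted errors. The prior mean v_p = 3, input u = 2, unit variances and generative function g(phi) = phi^2 mirror the tutorial's introductory food-size example, but the exact numbers, the Euler step and the iteration count are assumptions made here for illustration.

```python
# Minimal sketch of perceptual inference in the free-energy / predictive-coding
# scheme: estimate a single feature phi from one noisy input u.
# Parameter values mirror the tutorial's introductory example but are
# assumptions here; g(phi) = phi**2 maps the feature to the expected input.

v_p, Sigma_p = 3.0, 1.0          # prior mean and prior variance of the feature
Sigma_u = 1.0                    # variance of the sensory noise
u = 2.0                          # observed noisy sensory input

g = lambda phi: phi ** 2         # generative function
g_prime = lambda phi: 2.0 * phi  # its derivative

phi = v_p                        # start the estimate at the prior mean
eps_p = 0.0                      # prediction-error node for the prior
eps_u = 0.0                      # prediction-error node for the sensory input
dt = 0.01                        # Euler integration step

for _ in range(5000):
    # Error nodes relax towards the precision-weighted prediction errors.
    eps_p += dt * (phi - v_p - Sigma_p * eps_p)
    eps_u += dt * (u - g(phi) - Sigma_u * eps_u)
    # The feature node only sums its (weighted) inputs from the error nodes.
    phi += dt * (eps_u * g_prime(phi) - eps_p)

print(f"Inferred feature value: {phi:.2f}")
```

For these numbers phi settles close to the posterior mode (roughly 1.6), and the steady-state values of eps_p and eps_u are the prediction errors that the tutorial's Hebbian plasticity rules would then use to update the model's parameters.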

PubMed Disclaimer

Figures

Fig. 1. The posterior probability of the size of the food item in the problem given in Exercise 1.
Fig. 2. Solutions to Exercises 2 and 3. In panel b we have also included quantities that we will see later can be regarded as prediction errors.
Fig. 3. The architecture of the model performing simple perceptual inference. Circles denote neural "nodes", arrows denote excitatory connections, while lines ended with circles denote inhibitory connections. Labels above the connections encode their strength, and lack of a label indicates a strength of 1. Rectangles indicate the values that need to be transmitted via the connections they label.
Fig. 4. Architectures of models with linear and nonlinear function g. Circles and hexagons denote linear and nonlinear nodes respectively. Filled arrows and lines ended with circles denote excitatory and inhibitory connections respectively, and an open arrow denotes a modulatory influence.
Fig. 5. The architecture of the model inferring 2 features from 2 sensory stimuli. Notation as in Fig. 4(b). To help identify which connections are intrinsic and extrinsic to each level of the hierarchy, the nodes and their projections in each level are shown in green, blue and purple respectively (in the online version). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 6. (a) The architecture of the model including multiple layers. For simplicity, only the first two layers are shown. Notation as in Fig. 5. (b) Extrinsic connectivity of cortical layers.
Fig. 7. Prediction error networks that can learn the uncertainty parameter with local plasticity. Notation as in Fig. 4(b). (a) Single node. (b) Multiple nodes for multidimensional features. (A sketch of such a learning rule follows the figure list.)
Fig. 8. Changes in estimated variance during learning in Exercise 5.
Fig. 9. An example of a texture.
Fig. 10. The architectures of the original model performing simple perceptual inference. Notation as in Fig. 3.
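Figs. 7 and 8 and Exercise 5 concern learning the uncertainty (variance) parameter. The sketch below does not reproduce the paper's interneuron circuit; it simply applies the underlying gradient rule: with the precision-weighted error eps = (u - mean)/Sigma, nudging Sigma in proportion to eps^2 - 1/Sigma drives it towards the true variance of the inputs. The assumed mean of 5, true variance of 2, learning rate and number of samples are illustrative choices, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch of learning the uncertainty (variance) parameter Sigma
# from samples, using the gradient of the free energy with respect to Sigma.
# The mean (5), true variance (2), learning rate and sample count are
# assumptions made here for illustration.

rng = np.random.default_rng(0)

mean_u = 5.0     # assumed known mean of the input
true_var = 2.0   # variance of the input that Sigma should converge to
Sigma = 1.0      # initial estimate of the variance
alpha = 0.01     # learning rate

for _ in range(20000):
    u = rng.normal(mean_u, np.sqrt(true_var))       # one noisy observation
    eps = (u - mean_u) / Sigma                       # precision-weighted error
    Sigma += alpha * 0.5 * (eps ** 2 - 1.0 / Sigma)  # gradient step on Sigma

print(f"Learned Sigma: {Sigma:.2f} (true variance: {true_var})")
```

As the caption of Fig. 7 indicates, the paper implements this learning with local plasticity in a small circuit; the direct gradient update above is only meant to show what that rule converges to.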


References

    1. Bastos Andre M., Usrey W. Martin, Adams Rick A., Mangun George R., Fries Pascal, Friston Karl J. Canonical microcircuits for predictive coding. Neuron. 2012;76:695–711.
    2. Bell Anthony J., Sejnowski Terrence J. An information-maximization approach to blind separation and blind deconvolution. Neural Computation. 1995;7:1129–1159.
    3. Bell Anthony J., Sejnowski Terrence J. The independent components of natural scenes are edge filters. Vision Research. 1997;37:3327–3338.
    4. Bogacz Rafal, Brown Malcolm W., Giraud-Carrier Christophe. Emergence of movement sensitive neurons' properties by learning a sparse code for natural moving images. Advances in Neural Information Processing Systems. 2001;13:838–844.
    5. Bogacz Rafal, Gurney Kevin. The basal ganglia and cortex implement optimal decision making between alternative actions. Neural Computation. 2007;19:442–477.
