Canonical neural networks perform active inference
- PMID: 35031656
- PMCID: PMC8760273
- DOI: 10.1038/s42003-021-02994-2
Abstract
This work considers a class of canonical neural networks comprising rate coding models, wherein neural activity and plasticity minimise a common cost function, and plasticity is modulated with a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model. This equivalence indicates that the delayed modulation of Hebbian plasticity, accompanied by adaptation of firing thresholds, is a sufficient neuronal substrate to attain Bayes optimal inference and control. We corroborated this proposition using numerical analyses of maze tasks. This theory offers a universal characterisation of canonical neural networks in terms of Bayesian belief updating and provides insight into the neuronal mechanisms underlying planning and adaptive behavioural control.
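The abstract's central claim rests on a standard identity: for a discrete generative model, the variational free energy F is minimised by the exact Bayesian posterior, at which point F equals negative log model evidence, so minimising F and maximising evidence coincide. The sketch below illustrates only this identity for a two-state categorical model; the function and variable names are illustrative and do not come from the paper, which develops the full POMDP treatment.

```python
import numpy as np

def free_energy(q, prior, likelihood_o):
    """Variational free energy F = E_q[ln q(s) - ln p(s) - ln p(o|s)]
    for a categorical belief q over hidden states s, given observation o."""
    return np.sum(q * (np.log(q) - np.log(prior) - np.log(likelihood_o)))

# Toy generative model: two hidden states and one observed outcome o
prior = np.array([0.5, 0.5])          # p(s)
likelihood_o = np.array([0.9, 0.2])   # p(o | s) for the observed o

# Exact posterior via Bayes' rule; its normaliser is the evidence p(o)
joint = prior * likelihood_o
evidence = joint.sum()
posterior = joint / evidence

# At the posterior, F attains its minimum -ln p(o) (negative log evidence)
assert np.isclose(free_energy(posterior, prior, likelihood_o), -np.log(evidence))

# Any other belief incurs strictly higher free energy
uniform = np.array([0.5, 0.5])
assert free_energy(uniform, prior, likelihood_o) > -np.log(evidence)
```

Because F upper-bounds negative log evidence for any belief q, a network whose activity descends this cost is, in effect, performing approximate Bayesian inference, which is the equivalence the paper formalises.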
© 2022. The Author(s).
Conflict of interest statement
The authors declare no competing interests.