A primer on entropy in neuroscience
- PMID: 36736445
- DOI: 10.1016/j.neubiorev.2023.105070
Abstract
Entropy is not just a property of a system; it is a property of a system and an observer. Specifically, entropy is a measure of the amount of hidden information in a system that arises from an observer's limitations. Here we provide an account of entropy from first principles in statistical mechanics, with the aid of toy models of neural systems. We describe the distinction between micro- and macrostates in the context of simplified binary-state neurons, and the characteristics that entropy must have to capture an associated measure of hidden information. We discuss the origin of the mathematical form of entropy via the indistinguishable rearrangements of discrete-state neurons, and show how the argument extends to a phase-space description of continuous large-scale neural systems. Finally, we show how limitations in neuroimaging resolution, represented by coarse-graining operations in phase space, lead to an increase in entropy over time, as per the second law of thermodynamics. It is our hope that this primer will support the growing number of studies that use entropy to characterise neuroimaging time series and to make inferences about brain states.
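The binary-neuron toy model referenced in the abstract can be made concrete with a few lines of code. The minimal sketch below illustrates the micro/macrostate distinction: a microstate is the full firing pattern of all neurons, a macrostate is the observable summary (here, the count k of firing neurons), and a Boltzmann-form entropy S = ln W counts the indistinguishable rearrangements W consistent with each macrostate. The network size N = 10 is an illustrative choice, not a value taken from the paper.

```python
from math import comb, log

# Toy model: N binary-state neurons (0 = silent, 1 = firing).
# A microstate is the full pattern of individual neuron states;
# a macrostate is the observable summary available to a limited
# observer, here the total number of firing neurons k. The
# "hidden information" associated with a macrostate is captured
# by its multiplicity W = C(N, k), the number of indistinguishable
# rearrangements of neurons that produce the same observation.

N = 10  # illustrative network size (not from the paper)

for k in range(N + 1):
    W = comb(N, k)  # multiplicity: microstates consistent with macrostate k
    S = log(W)      # Boltzmann-form entropy S = ln W (with k_B set to 1)
    print(f"k = {k:2d} firing: W = {W:4d} microstates, S = {S:.3f}")
```

Running the sketch shows that entropy vanishes for the fully silent (k = 0) and fully firing (k = N) macrostates, each realised by exactly one microstate, and peaks at k = N/2, where the observer's hidden information about the underlying firing pattern is greatest.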
Keywords: Entropy; Information theory; Neuroscience; Statistical mechanics.
Copyright © 2023 Elsevier Ltd. All rights reserved.