Review

Curr Opin Neurobiol. 2012 Aug;22(4):653-9. doi: 10.1016/j.conb.2012.06.005. Epub 2012 Jul 12.

Information theoretic approaches to understanding circuit function

Adrienne Fairhall et al.

Abstract

The analysis of stimulus/response patterns using information theoretic approaches requires the full probability distribution of stimuli and responses. Recent progress in using information-based tools to understand circuit function has advanced understanding of neural coding at the single cell and population level. Advancing beyond traditional reverse correlation approaches, the determination of receptive fields using information as a metric has allowed novel insights into stimulus representation and transformation. The application of maximum entropy methods to population codes has opened a rich exploration of the internal structure of these codes, revealing stimulus-driven functional connectivity. We speculate about the prospects and limitations of information as a general tool for dissecting neural circuits and relating their structure and function.
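The maximum entropy approach mentioned above can be made concrete with a small sketch: fit the pairwise (Ising) model whose mean firing rates and pairwise correlations match measured values. Everything below is illustrative rather than the authors' implementation; the population size, learning rate, and the synthetic "measured" moments are all hypothetical, chosen so the example is self-contained and runs by exact enumeration over all binary response words.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 5  # small enough to enumerate all 2**N response words exactly

# All binary response words of the population, one row per word.
states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)

def maxent_distribution(h, J):
    """Pairwise maximum entropy model P(x) proportional to exp(h.x + x.J.x/2)."""
    log_p = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
    p = np.exp(log_p - log_p.max())  # subtract max for numerical stability
    return p / p.sum()

# Hypothetical "measured" moments, synthesized here from a random model so
# the example runs standalone; in practice they come from recorded data.
J_true = rng.normal(0.0, 0.5, (N, N))
J_true = (J_true + J_true.T) / 2.0
np.fill_diagonal(J_true, 0.0)
h_true = rng.normal(-1.0, 0.5, N)
p_data = maxent_distribution(h_true, J_true)
mean_data = p_data @ states                         # <x_i>
corr_data = states.T @ (states * p_data[:, None])   # <x_i x_j>

# Gradient ascent on the (concave) log-likelihood: nudge each parameter by
# the mismatch between measured and model moments until they agree.
h = np.zeros(N)
J = np.zeros((N, N))
lr = 0.2
for _ in range(5000):
    p = maxent_distribution(h, J)
    h += lr * (mean_data - p @ states)
    dJ = lr * (corr_data - states.T @ (states * p[:, None]))
    np.fill_diagonal(dJ, 0.0)  # diagonal terms are absorbed by h
    J += dJ

p_fit = maxent_distribution(h, J)
print("largest mismatch in <x_i>:   ", np.max(np.abs(mean_data - p_fit @ states)))
print("largest mismatch in <x_ix_j>:", np.max(np.abs(corr_data - states.T @ (states * p_fit[:, None]))))
```

Because the log-likelihood of an exponential-family model is concave, this moment-matching ascent converges to the unique maximum entropy distribution consistent with the measured first- and second-order statistics.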


Figures

Figure 1
Information encoded by the neural response can be quantified by the mutual information between response and stimulus, I(s, r). A. The response distribution is given by P(r) = Σi P(si) P(r|si). When specifying a value of s (e.g., si) significantly reduces the uncertainty about r, i.e., narrows its distribution, the mutual information between r and s is large (left). The more precisely s specifies r, the larger the information (right). B. Information and correlations. Here, two stimuli sA and sB generate binary responses {r1, r2} with identical marginal distributions (p(r1|sA) = p(r1|sB) and p(r2|sA) = p(r2|sB)) yet differing joint responses (p({r1,r2}|sA) ≠ p({r1,r2}|sB)). If correlations are ignored, I(s, r) = 0; if correlations are maintained and the stimuli are equally likely, then I(s, r) ≈ 0.23 bits. From left to right: the response to sA, with p(r1=1|sA) = p(r2=1|sA) = 0.4 and response covariance p(r1r2=1|sA) − p(r1=1|sA)p(r2=1|sA) = 0.14; the response to sB, with p(r1=1|sB) = p(r2=1|sB) = 0.4 and response covariance p(r1r2=1|sB) − p(r1=1|sB)p(r2=1|sB) = −0.12; the response neglecting covariances, which is identical for the two stimuli and so carries no information.
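The ≈ 0.23-bit figure in panel B can be checked directly. The sketch below reconstructs the two conditional joint distributions from the marginals and covariances stated in the caption (the probability values come from the caption; the ordering of response words is our own bookkeeping) and computes I(s, r) = H(r) − H(r|s).

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distributions over (r1, r2) in the order 00, 01, 10, 11, built from
# the stated marginals (0.4) and covariances (+0.14 for sA, -0.12 for sB).
p_rA = np.array([0.50, 0.10, 0.10, 0.30])  # P({r1,r2} | sA)
p_rB = np.array([0.24, 0.36, 0.36, 0.04])  # P({r1,r2} | sB)
p_r = 0.5 * p_rA + 0.5 * p_rB              # equally likely stimuli

# I(s, r) = H(r) - H(r|s)
info = entropy(p_r) - 0.5 * (entropy(p_rA) + entropy(p_rB))
print(f"with correlations: {info:.2f} bits")   # about 0.23

# Ignoring correlations: each conditional becomes the product of its
# marginals, identical for sA and sB, so H(r) = H(r|s) and I(s, r) = 0.
marg = np.array([0.6 * 0.6, 0.6 * 0.4, 0.4 * 0.6, 0.4 * 0.4])
info_ind = entropy(marg) - 0.5 * (entropy(marg) + entropy(marg))
print(f"ignoring correlations: {info_ind:.2f} bits")  # exactly 0
```

Running this prints ≈ 0.23 bits with correlations intact and exactly 0 bits with correlations ignored, matching the caption.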
Figure 2. Finding stimulus dimensions with information maximization
A. The MID method searches for a direction in stimulus space, i.e., a filter f. The stimulus s is high-dimensional and is described by some distribution, the grey cloud P(s). One then takes the component of s along the direction f. B. This projected stimulus, f·s, has a one-dimensional distribution P(f·s), known as the prior distribution. The spike-triggered stimuli (orange) have distribution P(f·s|spike). The goal is to determine f such that the Kullback-Leibler distance (see Box) between these two distributions is maximized; here, direction f1 separates the distributions better than f2. Finding the direction f that maximizes the Kullback-Leibler distance is equivalent to maximizing the mutual information between the projected stimulus and the occurrence of a spike. C. For a model cell with the filter shown on the left and a sigmoidal nonlinearity, driven with natural images, MID recovers the true filter considerably better than the spike-triggered average (with thanks to T. Sharpee).
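A toy version of this procedure is easy to write down. The sketch below is not the authors' (or T. Sharpee's) implementation: it uses white-noise stimuli rather than natural images, a binned histogram estimate of the Kullback-Leibler distance, and a crude stochastic hill climb in place of the gradient-based optimization used in practice; the filter shape, bin count, and nonlinearity are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 20  # stimulus dimensionality (a real application would use image patches)

def kl_bits(f, stim, spiked, n_bins=16):
    """Binned estimate of D_KL(P(f.s | spike) || P(f.s)) in bits."""
    proj = stim @ (f / np.linalg.norm(f))
    edges = np.linspace(proj.min(), proj.max(), n_bins + 1)
    prior, _ = np.histogram(proj, bins=edges)
    spike, _ = np.histogram(proj[spiked], bins=edges)
    prior = (prior + 0.5) / (prior + 0.5).sum()  # pseudo-counts for empty bins
    spike = (spike + 0.5) / (spike + 0.5).sum()
    return np.sum(spike * np.log2(spike / prior))

# Synthetic model cell: Gaussian stimuli and a sigmoidal nonlinearity
# applied to the projection onto a hypothetical "true" filter.
f_true = np.sin(np.linspace(0.0, np.pi, D))
f_true /= np.linalg.norm(f_true)
stim = rng.normal(size=(50_000, D))
spiked = rng.random(50_000) < 1.0 / (1.0 + np.exp(-4.0 * (stim @ f_true - 1.0)))

# Stochastic hill climb on the direction f, a crude stand-in for the
# gradient-based optimization used by the actual MID method.
f = rng.normal(size=D)
best = kl_bits(f, stim, spiked)
for _ in range(3000):
    trial = f + 0.1 * rng.normal(size=D)
    score = kl_bits(trial, stim, spiked)
    if score > best:
        f, best = trial, score

f /= np.linalg.norm(f)
print("overlap with true filter:", abs(f @ f_true))  # should approach 1
```

Because the Kullback-Leibler objective is unchanged when f flips sign, the recovered filter is compared to the true one by the absolute value of their overlap.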
Figure 3
A schematic of the dichotomous Gaussian model of population activity (simplified version). A. N cells receive a common Gaussian input as well as independent inputs; each cell's summed input is compared to a threshold to generate either a spike or silence at each time step. B. Intriguingly, this idealized model of the thresholding mechanism of spike generation produces population statistics very different from those of the corresponding pairwise maximum entropy model [45,46]. The red and blue lines show the fraction of the N = 50 model cells that spike simultaneously in a given time step; both models are constrained to have the same firing rate and pairwise correlation [45].
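A minimal simulation of panel A is straightforward; the parameter values below (shared-variance fraction, threshold, number of time steps) are hypothetical. For brevity the comparison here is against fully independent cells with the matched firing rate, not the pairwise maximum entropy model plotted in the figure (fitting that model is sketched after the abstract).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 100_000   # cells and time steps (hypothetical values)
rho = 0.3            # fraction of input variance shared across cells
theta = 1.0          # common spiking threshold

# Each cell's input is a shared Gaussian plus an independent Gaussian,
# scaled to unit total variance; a cell spikes in a time step when its
# summed input exceeds the threshold.
shared = rng.normal(size=(T, 1))
private = rng.normal(size=(T, N))
spikes = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * private > theta

# Population spike count per time step (the quantity plotted in the figure),
# compared against fully independent cells with the same firing rate.
counts = spikes.sum(axis=1)
rate = spikes.mean()
indep = (rng.random((T, N)) < rate).sum(axis=1)
print(f"P(silence):    DG {np.mean(counts == 0):.4f}  independent {np.mean(indep == 0):.4f}")
for k in (10, 20, 30):
    print(f"P(count >= {k}): DG {np.mean(counts >= k):.4f}  "
          f"independent {np.mean(indep >= k):.4f}")
```

Even this simple control shows the effect of thresholding a shared Gaussian input: both population-wide silence and large synchronous events become far more probable than for independent cells, the same heavy-tailed count distribution that separates the red and blue curves in panel B.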

References

    1. Cover TM, Thomas JA. Information Theory. New York: John Wiley & Sons; 1991.
    2. Nemenman I, Bialek W, de Ruyter van Steveninck R. Entropy and information in neural spike trains: progress on the sampling problem. Phys Rev E. 2004;69:056111.
    3. Ince RA, Senatore R, Arabzadeh E, Montani F, Diamond ME, Panzeri S. Information-theoretic methods for studying population codes. Neural Netw. 2010;23(6):713-27. This is an extensive review of recent literature applying information to characterize neural systems, particularly helpful for its discussion of multiple recent approaches to finite size corrections.
    4. Quian Quiroga R, Panzeri S. Extracting information from neural populations: information theory and decoding approaches. Nat Rev Neurosci. 2009;10:173-185. This pedagogical review of information theoretic approaches focuses on the relationship between information and decoding methods.
    5. Reinagel P, Reid RC. Temporal coding of visual information in the thalamus. J Neurosci. 2000;20(14):5392-400.
