Neuron. 2021 Dec 15;109(24):4001-4017.e10.
doi: 10.1016/j.neuron.2021.09.044. Epub 2021 Oct 28.

A synaptic learning rule for exploiting nonlinear dendritic computation


Brendan A Bicknell et al. Neuron. 2021.

Abstract

Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.

Keywords: NMDA receptors; biophysical model; cable theory; dendritic computation; feature-binding problem; learning rule; morphology; pyramidal neuron; synaptic plasticity.


Conflict of interest statement

Declaration of interests The authors declare no competing interests.

Figures

Graphical abstract

Figure 1
Synaptic integration depends on dendritic morphology and local nonlinearity Layer 2/3 pyramidal cell morphology with distinct basal and apical dendritic domains. Plots show the simulated peak somatic response to increasing numbers of excitatory synaptic inputs at the indicated locations, compared with the peak of the linear sum of the same number of unitary EPSPs. The basal and apical inputs are located at path distances of 95 and 275 μm from the soma, respectively. Voltage-dependent NMDA receptors yield supralinear integration within dendritic branches (active model, red lines), whereas integration in a purely passive model is sublinear (passive model, blue lines). Integration is approximately linear when the synapses of the active model are relocated to the soma (point neuron model, black line).
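The supra- versus sublinear comparison in this figure reduces to a simple ratio: the compound somatic response to N synchronous inputs divided by the linear sum of N unitary EPSPs. A minimal sketch, assuming a toy sigmoidal transfer function in place of the paper's biophysical NMDA model (all parameter values are illustrative):

```python
# Minimal sketch (not the paper's biophysical model) of the comparison in
# this figure: compound response to N synchronous inputs vs. the linear
# sum of N unitary EPSPs. The sigmoidal curve standing in for
# NMDA-dependent dendritic integration is an assumption for illustration.
import math

UNITARY_EPSP_MV = 0.4  # assumed peak somatic EPSP for a single input

def compound_response(n, half=8.0, slope=2.0, vmax=10.0):
    """Toy sigmoidal dendritic input-output curve (assumed form)."""
    return vmax / (1.0 + math.exp(-(n - half) / slope))

def nonlinearity_index(n):
    """Ratio of compound response to linear sum:
    > 1 supralinear, < 1 sublinear, ~ 1 linear integration."""
    return compound_response(n) / (n * UNITARY_EPSP_MV)
```

With these toy parameters the index rises above 1 in the sigmoid's steep region, mirroring how NMDA receptor recruitment makes clustered coactive inputs integrate supralinearly, while a passive or saturating regime yields an index below 1.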
Figure 2
Two local variables determine the impact of synaptic plasticity on somatic output (A) Example simulation of the active model stimulated with Poisson input into excitatory (black) and inhibitory (magenta) synapses. Bottom left: somatic voltage trace. Top left: raster plots of synaptic input preceding two somatic spikes. Synapses located on the same dendrite are grouped together on the y axis. Markers are scaled by the magnitude of influence on the somatic voltage, ∂v_soma/∂w, immediately prior to the spike, normalized by the maximum within excitatory and inhibitory groups. In this example, the variational equations were solved numerically for each individual synaptic activation by making dummy copies of synapses that were active more than once. Right: spatial distribution of activated synapses from example (ii). (B) Polynomial fits of the somatic spike-triggered average of ∂v_soma/∂w in the active model, to be used as plasticity kernels in the learning algorithm. (C) The approximations in (B) accurately predict the voltage gradients computed from numerical integration of Equations 13–17 (fitted on 75% of the simulated data and tested on the remaining 25%). For visibility, the scatterplot shows randomly sampled points from bins of 0.1 mV nS−1 width along the x axis (up to 100 points per bin). R2 values are computed from the correlation between actual and approximated values over all held-out data. (D) The voltage at a synapse at the time of somatic spikes depends on multiple factors, allowing their implicit representation in the learning rule. Shown is the semipartial correlation computed from a linear model fitted on 75% of the data and tested on the remaining 25%. See also Figure S1.
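The caption's central quantity is the gradient of somatic voltage with respect to each synaptic weight, approximated by a polynomial kernel of the local dendritic voltage. A gradient-style update of this form can be sketched as follows; the kernel coefficients, voltages, and error signal are hypothetical placeholders, not the paper's fitted values:

```python
# Minimal sketch of a voltage-dependent weight update, assuming a
# gradient-style rule: at each somatic spike, every synapse changes in
# proportion to its influence on the somatic voltage, approximated by a
# polynomial kernel of the local dendritic voltage. All numbers below
# are hypothetical placeholders.

def plasticity_kernel(v_local, coeffs=(0.7, 0.01)):
    """Polynomial approximation of dv_soma/dw as a function of the local
    voltage at the synapse (illustrative coefficients, chosen so that
    depolarized synapses exert larger influence)."""
    return sum(c * v_local ** k for k, c in enumerate(coeffs))

def update_weights(weights, v_locals, error, lr=0.01):
    """One update step: dw_i = lr * error * kernel(v_i)."""
    return [w + lr * error * plasticity_kernel(v)
            for w, v in zip(weights, v_locals)]

# A synapse at -30 mV (e.g., on an NMDA-depolarized branch) is
# potentiated more strongly than one near rest (-65 mV):
w_new = update_weights([1.0, 1.0], [-65.0, -30.0], error=1.0)
```

The sign of the error signal determines potentiation versus depression, while the voltage dependence of the kernel is what drives coactive synapses to engage dendritic nonlinearities.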
Figure 3
A single neuron can learn nonlinear functions (A) Nonlinear feature-binding problem. Synapses representing different stimulus features were randomly distributed throughout basal and apical dendrites. In this example, the neuron should only spike in response to the associations “green triangle” and “orange square” as indicated by the classification labels (bottom). (B) Example simulations of a model before (gray) and after (black) training on the task defined in (A). Each combination of features is presented in turn via rate-coded Poisson input, interspersed with background noise. For clarity, only input to excitatory synapses is shown. (C) Performance (fraction correct) of models trained on ten random instantiations of the task (left bars). In the somatic inhibition condition (middle bars), models were trained with all inhibitory synapses placed at the soma. Performance collapsed when dendritic voltage dependence was omitted from the learning rule (right bars). (D) Classification of associations is made by differential supralinear or sublinear integration. Input was presented to the indicated domains of trained models with somatic spiking blocked. The peak somatic depolarization measured when features were presented together was compared with the sum of responses when presented independently (averaged over 20 presentations of each association pair, then over label types). All bars denote means; p values are from two-tailed Wilcoxon signed-rank tests between groups for n = 10 independent replications. See also Figures S2 and S3.
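Why the task in (A) is nonlinear can be made concrete in its 2 × 2 form: the labeling has XOR structure, so no weighted sum of feature identities can reproduce it. A small brute-force check under an assumed one-hot feature encoding (illustrative, not the paper's code):

```python
# Sketch of the nonlinear feature-binding problem in its simplest 2x2
# form: the neuron should fire only for the associations (X1, Y1) and
# (X2, Y2). The numeric feature encoding and the weight grid are
# illustrative assumptions.
import itertools

LABELS = {("X1", "Y1"): 1, ("X1", "Y2"): 0,
          ("X2", "Y1"): 0, ("X2", "Y2"): 1}

FEATS = {"X1": 0.0, "X2": 1.0, "Y1": 0.0, "Y2": 1.0}

def is_linearly_separable(labels):
    """Brute-force check: does any linear score a*x + b*y + c > 0
    reproduce the labels? (exhaustive over a small weight grid)"""
    grid = [i / 2 for i in range(-4, 5)]
    for a, b, c in itertools.product(grid, repeat=3):
        if all((a * FEATS[x] + b * FEATS[y] + c > 0) == bool(t)
               for (x, y), t in labels.items()):
            return True
    return False

print(is_linearly_separable(LABELS))  # → False (XOR-like labeling)
```

Solving this labeling therefore requires a nonlinearity between input and output, which is exactly what the dendritic supra- and sublinear integration in (D) provides.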
Figure 4
Learning tunes the spatial distribution of input strengths (A) Schematic illustrating how the feature-binding task can be solved via the structured connectivity mechanism proposed by Cazé et al. (2013). Arrows denote the targeting of excitatory input from a given feature to a compartment. With supralinear integration, responses to clustered input are enhanced relative to responses to dispersed input, and conversely with sublinear integration. (B) When connectivity is random, the strategies of (A) can be realized functionally by tuning synaptic weights through learning. Left: spatial distribution of excitatory input strengths (weight × input rate) in a trained model. Inputs are color-coded by the features they represent and the classification labels defined by the matrix below. Right: profiles of excitatory input strength for the model depicted on the left. The height of each point is proportional to the sum of weighted input rates on a branch. Preferred associations (e.g., X1 and Y1; blue and green) have strong inputs to common basal dendrites but separate apical dendrites. Conversely, strong inputs of nonpreferred associations (e.g., X1 and Y2; blue and orange) are dispersed in basal dendrites and clustered in apical dendrites. (C) Left: functionally clustered or dispersed configurations are reflected in the spatial correlation between weighted input to dendritic branches. XE and YE represent excitatory input from two features. Right: correlation between spatial profiles of excitation (weighted input rates, summed within branches) from association pairs after learning. (D) As in (C) but comparing excitation and inhibition. In this case, the excitatory and inhibitory contributions from both features are summed before computing the correlation. In basal dendrites, spatially selective inhibition serves to suppress the response to (−) patterns. 
All bars denote means; p values are from two-tailed Wilcoxon signed-rank tests between groups for n = 10 independent replications.
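The correlation analysis in (C) reduces to two steps: sum weight × rate over the synapses on each branch for each feature, then correlate the two resulting spatial profiles. A self-contained sketch with hypothetical branch assignments, weights, and rates:

```python
# Sketch of the branch-wise correlation analysis: functional clustering
# of two features appears as a positive correlation between their
# branch-summed weighted input rates. Branch assignments, weights, and
# rates are illustrative.
import math

def branch_profile(branch_ids, weights, rates, n_branches):
    """Sum weight * rate over the synapses on each branch."""
    prof = [0.0] * n_branches
    for b, w, r in zip(branch_ids, weights, rates):
        prof[b] += w * r
    return prof

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Features X and Y both target branches 0 and 1 of four (clustered),
# so their spatial profiles correlate positively:
px = branch_profile([0, 0, 1], [1.0, 0.5, 2.0], [20, 20, 20], 4)
py = branch_profile([0, 1, 1], [0.8, 1.5, 1.0], [20, 20, 20], 4)
rho = pearson(px, py)
```

A dispersed or anti-aligned pair would instead yield a near-zero or negative correlation, which is the signature used to distinguish preferred from nonpreferred associations.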
Figure 5
Sparse, precisely timed bursts of input maximize classification performance (A) Left: parameterization of candidate rate and temporal coding schemes by the time-averaged input rate to active synapses and the number of precisely timed elevations of the input rate (implemented as Gaussian bumps). The total presynaptic population rate is constrained to be the same for all parameters. To enforce this constraint, and with scaling to maintain physiological instantaneous input rates, patterns also differ in sparseness (fraction of active synapses) and temporal precision (width of rate elevations). Note that we use a decreasing order for the x axis in the temporal region; having multiple precisely timed events per synapse more closely resembles a rate code than a single event per synapse as the input spikes are more uniformly distributed in time. Right: example rate functions generated for an association pair in the feature-binding task by different parameter choices. (i) sparse rate code, (ii) dense temporal code, (iii) and (iv) mixed regimes comprising bursts of activity. For clarity, only 20 synapses are shown. (B) Example 7 × 7 matrix of associations to be classified. Classification labels are randomly assigned for each replication. (C) Performance (fraction correct) of trained active, passive, and point neuron models, averaged over 10 replications for each input condition. The optimal form of input (asterisk) was the same for all models. R and T denote the rate and temporal schemes used for comparison in (D) and (E). (D) Example realizations of Poisson input to a synapse for the rate, optimal, and temporal conditions. Bursts in the optimal condition are temporally localized but do not suffer from the transmission failures of the temporal condition. (E) Detailed comparison of performance across the three models from an independent set of simulations. 
Bars denote means; p values are from two-tailed Wilcoxon signed-rank tests between groups for n = 10 independent replications.
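The fixed-budget parameterization in (A) can be sketched as a rate function built from Gaussian bumps whose areas share a constant time-integrated rate, so that adding bumps lowers and broadens each peak. All parameter values below are illustrative, not the paper's:

```python
# Sketch of the rate/temporal-coding parameterization: a synapse's input
# rate is a set of Gaussian bumps at precise times, scaled so the
# time-integrated rate matches a fixed budget regardless of the number
# of bumps. Rates, times, and widths are illustrative assumptions.
import math

def rate_function(t, bump_times, total_rate_hz, duration_s, sigma_s=0.01):
    """Input rate (Hz) at time t (s): Gaussian bumps splitting a fixed
    time-integrated rate budget equally."""
    area_per_bump = total_rate_hz * duration_s / len(bump_times)
    return sum(area_per_bump / (sigma_s * math.sqrt(2 * math.pi))
               * math.exp(-0.5 * ((t - mu) / sigma_s) ** 2)
               for mu in bump_times)

# Under the same 20 Hz average budget over 400 ms, one bump yields a
# higher instantaneous peak than four bumps:
one = max(rate_function(t / 1000, [0.2], 20, 0.4) for t in range(400))
four = max(rate_function(t / 1000, [0.05, 0.15, 0.25, 0.35], 20, 0.4)
           for t in range(400))
```

This captures the trade-off in the figure: concentrating the budget into few precisely timed bursts produces high instantaneous rates, while spreading it over many events approaches a rate code.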
Figure 6
Synaptic plasticity can shape subthreshold potentials to implement a temporal feature-binding strategy (A) Example simulation of a model trained on the 7 × 7 association task in the optimal precisely timed burst input condition. The response on a nonlinear 2 × 2 subset of the task is shown, as defined by the classification labels (left). Each combination of features is presented in turn, interspersed with background noise. The markers in the raster are scaled in proportion to synaptic weight. For clarity, only input to excitatory synapses is shown. (B) Example of average subthreshold somatic membrane potentials arising from presentation of input features in isolation, with somatic spiking blocked. Shaded area is SD from 20 presentations. The components Xi and Yi correspond to those simulated in (A). Note that X1 and Y1 will sum constructively to produce a spike at ∼200 ms, but X1 and Y2 will not. (C) Across all models, after training the subthreshold potentials arising from pairs forming preferred associations are temporally correlated, whereas those arising from pairs forming nonpreferred associations are anticorrelated. (D) Left: analogous to the spatial clustering strategy of Figure 4, over learning, synaptic weights evolve to temporally align patterns of excitation to bind preferred associations. XE and YE represent excitatory input from two features. Right: correlations between temporal profiles of excitation (weighted input rates, summed over synapses) from pairs forming preferred or nonpreferred associations. (E) As in (D) but for the alignment of excitation and inhibition. XI and YI represent inhibitory input from two features. Excitation-inhibition correlations are calculated after summing the excitatory and inhibitory contributions of each feature component in a pair. Weighted excitatory and inhibitory input is aligned on nonpreferred associations, serving to suppress somatic output. 
Bars denote means (averaged over 20 presentations of each association, then over label types); p values are from two-tailed Wilcoxon signed-rank tests between groups for n = 10 independent replications.
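The constructive summation described in (B) can be sketched with toy subthreshold transients: aligned transients from a preferred pair cross an assumed spike threshold, while temporally offset transients from a nonpreferred pair do not. Shapes, amplitudes, and the threshold are illustrative assumptions:

```python
# Sketch of temporal feature binding: subthreshold responses to two
# features sum constructively when their depolarizing transients align,
# and stay below threshold when they are offset. Trace shape, amplitude,
# and threshold are illustrative.
import math

def trace(t_ms, peak_ms, amp_mv=6.0, width_ms=15.0):
    """Gaussian-shaped subthreshold transient (assumed shape)."""
    return amp_mv * math.exp(-0.5 * ((t_ms - peak_ms) / width_ms) ** 2)

def peak_sum(peak_a_ms, peak_b_ms):
    """Peak of the summed transients over a 400 ms window."""
    return max(trace(t, peak_a_ms) + trace(t, peak_b_ms)
               for t in range(400))

THRESHOLD_MV = 10.0
aligned = peak_sum(200, 200)  # preferred pair: transients coincide
offset = peak_sum(120, 280)   # nonpreferred pair: transients separated
```

Learning that shifts the timing of weighted excitation, as in (D), amounts to moving these transients into or out of alignment; aligning inhibition with excitation, as in (E), suppresses the summed peak instead.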
Figure 7
Synergistic recruitment of spatial and temporal processing (A) Schematic of presynaptic rates underlying the precisely timed burst input scheme with compressed stimulus presentation time. (B) Average model performance after training as a function of stimulus duration for ten independent replications per condition. Dashed lines are average performance under a rate code of the same sparseness and time-averaged maximum rate (20 Hz), presented for 400 ms. Note that the x axis is not a linear scale. Shaded areas are SEM. (C) Analysis of the relative contribution of spatial and temporal processing as a function of stimulus duration. The signature of each strategy is imprinted on the synaptic weights through learning, allowing the classification label of a given association pair to be predicted from knowledge of the weights and input rates. Plots show the prediction accuracy of logistic regression models fitted to predict the classification labels on the basis of correlations between spatial (ρS, squares), temporal (ρT, circles), and spatiotemporal (ρST, diamonds) input profiles after training for the active (red) and passive (blue) models. Spatiotemporal correlations in the active model are more predictive of class labels than spatial correlations alone, implying a local organization of temporal signals within individual branches. (D) Schematic of spatiotemporal feature-binding strategies. Traces represent the excitation of dendritic branches with weighted input from two stimulus features. With supralinear integration, the response to preferred associations (+) can be synergistically enhanced by tuning weights such that excitation is both clustered and synchronous (denoted by red arrow). Input from nonpreferred associations (−) should instead be dispersed and asynchronous. Feature binding with sublinear integration demands dispersed, synchronous input from preferred associations and clustered, asynchronous input to suppress responses to nonpreferred associations. 
Although less compatible than supralinear processing, local sublinear integration could compensate on branches where temporal segregation is incomplete (blue arrow). Inhibitory input can also act in both cases to sharpen temporal responses and aid suppression of (−) pairs (not shown). See also Figure S6.
