PLoS Comput Biol. 2024 Nov 4;20(11):e1012531. doi: 10.1371/journal.pcbi.1012531. eCollection 2024 Nov.

Synapses learn to utilize stochastic pre-synaptic release for the prediction of postsynaptic dynamics


David Kappel et al. PLoS Comput Biol. 2024.

Abstract

Synapses in the brain are highly noisy, leading to large trial-by-trial variability. Given how costly synapses are in terms of energy consumption, such high noise levels are surprising. Here we propose that synapses use noise to represent uncertainty about the somatic activity of the postsynaptic neuron. To show this, we developed a mathematical framework in which the synapse as a whole interacts with the soma of the postsynaptic neuron much like an agent that is situated and behaves in an uncertain, dynamic environment. This framework suggests that each synapse maintains an implicit internal model of the somatic membrane dynamics, updated by a synaptic learning rule that resembles experimentally well-established LTP/LTD mechanisms. In addition, this approach entails that a synapse uses its inherently noisy release to also encode its uncertainty about the state of the somatic potential. Although each synapse strives to predict the somatic dynamics of its postsynaptic neuron, we show that the emergent dynamics of many synapses in a neuronal network solve different learning problems, such as pattern classification or closed-loop control in a dynamic environment. In doing so, synapses coordinate to represent and utilize uncertainty at the network level in behaviorally ambiguous situations.

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Predictive processing for individual synapses.
A: i) An agent interacting with the environment through actions, which are determined by the agent’s internal state. Sensory feedback from the environment is used to update the agent’s internal model of the environment. ii) An additional, external trigger can be included in the framework from i) that determines when actions are initialized. iii) The framework shown in ii) can be transferred to a synapse that interacts with its postsynaptic soma. Relevant variables are the synaptic efficacy (internal state), the postsynaptic current (action), the somatic membrane potential (environmental state) of the postsynaptic neuron (environment), and the back-propagating action potential (feedback). B: A single trajectory of the somatic membrane potential u(t) between two action potentials. C: The internal synaptic model of the somatic membrane potential can be characterized by the stochastic bridge model, which provides the probability distribution p(u | z) over the value of u(t) between two postsynaptic spikes. The solid blue line shows the mean; the shaded area indicates the std. D: Illustration of the relevant dynamics. Pre-synaptic input spikes (red) trigger synapses to release stochastic postsynaptic currents y (light green) with pulse amplitudes whose mean and variance depend on the synaptic efficacy w. Theoretical Dirac delta pulses are illustrated here as scaled unit-sized pulses. Postsynaptic spike timings reach the synapse through bAPs z, which pin the internal model of the somatic membrane potential to the firing threshold ϑ at spike time and to the reset potential ur immediately after every bAP (see panel C). Between bAPs, the internal model estimates the probability density of the membrane potential according to the stochastic process (μ(t), σ(t)).
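The stochastic bridge model of panel C can be sketched as a Brownian bridge pinned at the reset potential after one spike and at the firing threshold at the next. In the sketch below, the parameter values (u_r, ϑ, σ) and the standard Brownian-bridge variance profile are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def bridge_stats(T, u_r=-70.0, theta=-55.0, sigma=5.0, n=101):
    """Mean and std of the membrane potential under a Brownian-bridge model
    pinned at the reset potential u_r at t=0 and the threshold theta at t=T.

    Parameter values here are placeholders for illustration.
    """
    t = np.linspace(0.0, T, n)
    mu = u_r + (theta - u_r) * t / T      # mean: linear interpolation between pins
    var = sigma ** 2 * t * (T - t) / T    # variance vanishes at both pinned ends
    return t, mu, np.sqrt(var)

t, mu, sd = bridge_stats(300.0)           # 300 ms inter-spike interval, as in Fig 3
```

The uncertainty σ(t) is largest midway between the two postsynaptic spikes and collapses to zero at the spike times, matching the shaded band in panel C.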
Fig 2
Fig 2. The SPP learning rule resembles regulated triplet STDP.
A: Illustration of the main steps of the SPP synaptic learning model. B,C: The triplet STDP windows WLTP (B) and WLTD (C) that emerge from SPP as a function of the spike timing differences Δt1 and Δt2. D: The effective synaptic efficacy changes that result from the LTP and LTD windows. E: Mean synaptic efficacy changes (gray line) and individual trials (black dots) for an STDP pairing protocol. Shaded area indicates std over trials (Δt2 = 500 ms). F: Synaptic efficacy changes as a function of pre- and postsynaptic rate. G: Weight dependence of the SPP learning rule, plotted as an STDP curve as in (E).
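The two-argument windows of panels B and C can be illustrated with a minimal triplet-STDP sketch. The exponential form, amplitudes, and time constants below are placeholder values in the spirit of standard triplet STDP; they are not the windows that emerge from SPP in the paper.

```python
import numpy as np

# Placeholder triplet-STDP constants (ms and dimensionless amplitudes);
# illustrative only, not the paper's fitted parameters.
TAU_PLUS, TAU_MINUS, TAU_Y = 16.8, 33.7, 114.0
A3_PLUS, A2_MINUS = 6.5e-3, 7.0e-3

def w_ltp(dt1, dt2):
    """Potentiation window W_LTP(dt1, dt2): pre spike dt1 ms before a post
    spike that follows the previous post spike by dt2 ms."""
    return A3_PLUS * np.exp(-dt1 / TAU_PLUS) * np.exp(-dt2 / TAU_Y)

def w_ltd(dt1, dt2):
    """Depression window W_LTD(dt1, dt2): post spike dt1 ms before a pre
    spike, with the previous post spike dt2 ms earlier."""
    return -A2_MINUS * np.exp(-dt1 / TAU_MINUS) * np.exp(-dt2 / TAU_Y)
```

With Δt2 large (as in the pairing protocol of panel E, Δt2 = 500 ms), the triplet terms decay and the rule reduces toward a classic pair-based STDP curve.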
Fig 3
Fig 3. Synapse-level probability matching.
A: μ(t) and σ(t) of the somatic membrane potential given the stochastic bridge model for a neuron that is brought to fire with a spike interval of 300 ms. Pre-synaptic neurons were brought to fire at fixed time offsets relative to the post-synaptic spikes. B: Synapses learn to inject the optimal current that matches the bridge model in (A). Empty circles indicate the theoretically optimal currents y*. Individual current pulses are shown for multiple trials for synapses with different time offsets. C: The combined effect of all synapses, shown as the summed input current for a single trial. D: Synaptic efficacies after learning, together with the weight means and stds predicted by the theory. E: Synaptic efficacies after learning are correlated with the theoretically derived μ* and σ* (see panel D). F: The mean prediction error over all synapses declines throughout learning. G: Firing behavior of the neuron after learning when allowed to fire freely in response to input spikes. 10 individual spike times are shown together with histograms over 1000 trials. Insets show membrane dynamics during the 10 trial runs. The orange arrow indicates the spike time during learning. H: As in (G), but output spike times were drawn from Gaussian distributions of different spreads. The orange arrow here indicates the mean.
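The stochastic release that each synapse contributes can be sketched as a Bernoulli-Gaussian sampler: release succeeds with probability r0 (the release-probability parameter named in the figures), and successful pulses have an amplitude whose mean and spread scale with the efficacy w. The Gaussian form and the cv parameter are assumptions of this sketch, not the paper's release model.

```python
import numpy as np

def release_currents(w, r0=0.5, cv=0.3, n_trials=1000, seed=0):
    """Sample postsynaptic current amplitudes for one presynaptic spike.

    Release succeeds with probability r0; a successful pulse has amplitude
    mean w and std cv * w. Failed releases contribute zero current.
    The Bernoulli-Gaussian form and cv are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    success = rng.random(n_trials) < r0         # stochastic vesicle release
    amps = rng.normal(loc=w, scale=cv * w, size=n_trials)
    return np.where(success, amps, 0.0)

y = release_currents(1.0)                        # amplitudes across 1000 trials
```

Averaged over trials, the injected current has mean r0 * w, so a synapse can trade efficacy against release probability while also expressing uncertainty through trial-by-trial spread, as panels B–E illustrate.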
Fig 4
Fig 4. SPP learning rule for supervised and unsupervised learning scenarios.
A: Illustration of the network structure, with synapses adapted by SPP shown in green. Five independent spike patterns (▫, ☆, △, ◇, ○) are presented to the network via the input neurons. Output neurons are either clamped to pattern-specific activity during learning (supervised) or allowed to run freely (unsupervised). B: Learning result using the SPP rule for the supervised scenario. Typical spiking activity of the network after learning for 60 s. Black ticks show output spike times. C: Output activity after learning for the unsupervised scenario. Traces of membrane potentials are shown for selected output neurons (matching color-coded arrows indicate neuron identities). D: Classification performance for the supervised and unsupervised learning scenarios. Classification performance plateaus at a near-optimal value after about 20 s of learning time in both scenarios. E: Spike patterns of two input symbols (▫, ☆) were mixed at different mixing rates (example pattern shows mixing rate 1/2). Uncertainty is reflected in the output decoding (left) when inputs are ambiguous (around a mixing rate of 1/2). If synapse noise is disabled (r0 = 1), uncertainty is not represented in the output (right). Mean and std over classification scores of 5 input pattern presentations are shown. F: As in (E), but for different levels of release probability r0. G: Classification performance for different numbers of input neurons. Error bars show mean and std.
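The ambiguous stimuli of panel E can be sketched by mixing two input patterns channel-wise: each input channel takes its spikes from the second pattern with probability equal to the mixing rate. The per-channel mixing scheme and function names here are illustrative assumptions about how such stimuli could be built.

```python
import numpy as np

def mix_patterns(pattern_a, pattern_b, mix_rate, seed=0):
    """Mix two input spike patterns channel-wise.

    Each input channel is drawn from pattern_b with probability mix_rate,
    otherwise from pattern_a. Patterns are arrays of per-channel spike
    times (or any per-channel values); the scheme is an assumption of
    this sketch.
    """
    rng = np.random.default_rng(seed)
    pattern_a = np.asarray(pattern_a, dtype=float)
    pattern_b = np.asarray(pattern_b, dtype=float)
    use_b = rng.random(len(pattern_a)) < mix_rate
    return np.where(use_b, pattern_b, pattern_a)
```

At a mixing rate of 1/2 the stimulus carries equal evidence for both symbols, which is exactly the regime where the synapse noise lets the output decoding express uncertainty.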
Fig 5
Fig 5. The SPP for learning a closed-loop behavior in a recurrent network.
A: Illustration of behavior-level predictive processing for an agent that interacts with a dynamic environment. B: A spiking neural network interacting with an environment, using SPP to learn a control policy. The activity of action neurons controls the movement of an agent in a 3-dimensional environment. Feedback about the position of the agent is provided through feedback neurons. The policy to navigate the agent is learned through SPP between feedback and action neurons. The training trajectory (dark blue) and 8 spontaneous movement trajectories generated by the network after learning (light blue) are shown. Action neuron preferred directions are indicated in green (not to scale). C: Spike train generated spontaneously by the network after learning, corresponding to one movement trajectory in (B). D: Learning performance (MSE) for different release probability parameters r0. Bars indicate mean and std over five independent runs.
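One simple way to turn action-neuron activity with preferred directions (panel B) into a movement is a population-vector readout: the movement direction is the rate-weighted sum of the neurons' preferred directions. The paper's exact decoder is not specified in this sketch; the standard population-vector form below is an illustrative assumption.

```python
import numpy as np

def population_vector(rates, preferred_dirs):
    """Decode a movement direction from action-neuron firing rates.

    rates: shape (n_neurons,) firing rates.
    preferred_dirs: shape (n_neurons, 3) preferred directions in 3D.
    Returns the unit-norm rate-weighted sum of preferred directions
    (a standard population-vector readout, assumed here for illustration).
    """
    rates = np.asarray(rates, dtype=float)
    preferred_dirs = np.asarray(preferred_dirs, dtype=float)
    v = (rates[:, None] * preferred_dirs).sum(axis=0)  # weighted vector sum
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

Under this readout, stronger firing of neurons whose preferred directions align pulls the agent's movement toward that direction, which is why the spontaneously generated spike trains in panel C map onto the smooth trajectories in panel B.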
