PLoS Comput Biol. 2023 Aug 21;19(8):e1011342.
doi: 10.1371/journal.pcbi.1011342. eCollection 2023 Aug.

Efficient sampling-based Bayesian Active Learning for synaptic characterization


Camille Gontier et al. PLoS Comput Biol. .

Abstract

Bayesian Active Learning (BAL) is an efficient framework for learning the parameters of a model, in which input stimuli are selected to maximize the mutual information between the observations and the unknown parameters. However, the applicability of BAL to experiments is limited as it requires performing high-dimensional integrations and optimizations in real time. Current methods are either too time consuming, or only applicable to specific models. Here, we propose an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments. We apply our method to the problem of estimating the parameters of a chemical synapse from the postsynaptic responses to evoked presynaptic action potentials. Using synthetic data and synaptic whole-cell patch-clamp recordings, we show that our method can improve the precision of model-based inferences, thereby paving the way towards more systematic and efficient experimental designs in physiology.
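The selection rule described in the abstract — choosing the stimulus that maximizes the mutual information between the next observation and the unknown parameters — can be sketched with a toy sampling-based example. Everything below (the one-parameter linear observation model, the particle grid, the noise level) is an illustrative assumption, not the paper's ESB-BAL implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, x):
    """Toy observation model (illustrative): noisy scaled response."""
    return theta * x + rng.normal(0.0, 0.1)

def expected_info_gain(particles, weights, x, n_inner=20):
    """Monte-Carlo estimate of the expected information gain from
    observing the response to candidate stimulus x."""
    h_prior = -np.sum(weights * np.log(weights + 1e-12))
    gains = []
    for _ in range(n_inner):
        theta = rng.choice(particles, p=weights)   # sample a parameter
        y = simulate(theta, x)                     # simulate an outcome
        # Likelihood of the simulated outcome under each particle
        lik = np.exp(-0.5 * ((y - particles * x) / 0.1) ** 2)
        post = weights * lik
        post = post / post.sum()                   # hypothetical posterior
        h_post = -np.sum(post * np.log(post + 1e-12))
        gains.append(h_prior - h_post)
    return float(np.mean(gains))

particles = np.linspace(0.0, 2.0, 50)   # samples of the unknown parameter
weights = np.full(50, 1.0 / 50)         # uniform prior over the samples
candidates = [0.5, 1.0, 2.0]            # candidate stimuli
best_x = max(candidates, key=lambda x: expected_info_gain(particles, weights, x))
```

The double loop (over parameter samples and simulated outcomes) is exactly what makes naive BAL expensive in real time, which is the bottleneck the paper's point-based approximations address.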


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
A: Model of a binomial synapse with STD. In chemical synapses, the presynaptic terminal contains N vesicles holding neurotransmitter molecules, nt of which are in the readily-releasable state [23]. Upon the arrival of a presynaptic spike, these vesicles stochastically fuse with the plasma membrane and release their neurotransmitters into the synaptic cleft. After spike t, kt vesicles (out of the nt available in the readily-releasable pool) release their neurotransmitters, each with probability p. Neurotransmitters bind to postsynaptic receptors: a single release event triggers a quantal response q. The total recorded postsynaptic current yt (i.e. the output of the system) is the sum of the effects of the kt release events. After release, vesicles are replenished with a time constant τD, which determines short-term depression. B: Model of the synapse as an IO-HMM [10]. C: Bayesian Active Learning applied to biology. At each time step, the response of the system (here, a synapse) to artificial stimulation is recorded. This observation yt is used by the filter to compute the posterior distribution of the parameters p(θ|x1:t, y1:t). Given this posterior, the controller computes the next input xt+1* so as to maximize the expected information gain of the next observation. In classical experimental design, by contrast, the inputs x1:T are defined and fixed prior to the recordings.
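The generative mechanism in panel A can be sketched as a short simulation. The exponential refill probability 1 − exp(−Δt/τD) per empty release site is a standard binomial-synapse assumption, and the parameter defaults follow the ground-truth values used for the synthetic data in Fig 2; the function name and interface are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_synapse(isis, N=7, p=0.6, q=1.0, sigma=0.2, tau_d=0.25):
    """Simulate postsynaptic currents of a binomial synapse with
    short-term depression. isis: inter-spike intervals (s), one per
    presynaptic spike."""
    n_t = N                      # readily-releasable vesicles, initially full
    currents = []
    for isi in isis:
        # Each empty site refills with probability 1 - exp(-isi / tau_d)
        empty = N - n_t
        n_t += rng.binomial(empty, 1.0 - np.exp(-isi / tau_d))
        # k_t of the available vesicles release, each with probability p
        k_t = rng.binomial(n_t, p)
        n_t -= k_t
        # Recorded current: quantal sum plus Gaussian recording noise
        currents.append(k_t * q + rng.normal(0.0, sigma))
    return np.array(currents)

# A short high-frequency burst depletes the pool, so later responses shrink
y = simulate_synapse([1.0, 0.02, 0.02, 0.02, 1.0])
```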
Fig 2
Fig 2. First setting: reducing the uncertainty of estimates for a given number of observations.
A: Entropy of the posterior distribution of θ vs. number of observations for different stimulation protocols. Synthetic data were generated from a model of synapse with ground-truth parameters N* = 7, p* = 0.6, q* = 1 pA, σ* = 0.2 pA, and τD* = 0.25 s [2]. Traces show the average over 400 independent repetitions. Shaded area: standard error of the mean. B: RMSE for the same simulations. C: Histograms and scatter plot of the ISIs and the corresponding computation times for the ESB-BAL simulations. Note that the median computation time (horizontal red line) of 74 ms corresponds to the time required to test 64 candidate intervals: hence, each tested interval takes approx. 1.16 ms.
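The two quantities plotted in panels A and B can be computed from a discrete (sampled) posterior as follows. This is a minimal sketch of the metrics only, not the paper's filter; function names are illustrative:

```python
import numpy as np

def posterior_entropy(weights):
    """Entropy (in bits) of a discrete posterior over parameter samples."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    nz = w[w > 0]                     # 0 * log(0) contributes nothing
    return float(-np.sum(nz * np.log2(nz)))

def rmse(estimates, ground_truth):
    """Root-mean-square error of point estimates against the ground truth."""
    e = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((e - ground_truth) ** 2)))

# A uniform posterior over 8 samples has entropy log2(8) = 3 bits;
# informative observations concentrate the weights and shrink this value.
uniform_bits = posterior_entropy(np.ones(8))
```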
Fig 3
Fig 3. Assessing the effect of point-based approximations on the accuracy of ESB-BAL.
Same setting as in Fig 2. In ESB-BAL (MC θ), the integral over θ in Eq 13 is computed using MC samples instead of the point-based approximation described in Eq 14. In ESB-BAL (MC θ, y), both integrals over θ and yt+1 in Eq 13 are computed using MC samples instead of the point-based approximations described in Eqs 14 and 15.
Fig 4
Fig 4. Second setting: reducing the uncertainty of estimates for a given experiment time (effect of penalizing long ISIs on parameter estimate uncertainty and the rate of information gain).
A: Posterior entropy H(Θ|ht) as a function of the stimulation number t for different values of η in Eq 18. Same settings as in Fig 2. B: Same results, but displayed as a function of time. Inset: average information rate (in bits/s) from t = 0 to t = 10s as a function of η. Results displayed for α = 0.05.
Fig 5
Fig 5. Third setting: batch optimization.
A: Schematic of how elements in St+1:t+n in Algorithm 3 are defined. They are chosen to span 3 parameters: the number m < n of spikes in the tetanic stimulation phase, the frequency f of spikes in the tetanic stimulation phase, and the duration of the final recovery ISI xlast. B: Simulated experiment with ground-truth parameters N* = 47, p* = 0.27, q* = 2.65 pA, σ* = 1.32 pA, and τD*=0.17s (i.e. the MAP values from one example cell studied in Fig 6B).
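The candidate batches in panel A can be enumerated by sweeping the three parameters. The exact placement of spikes after the tetanic phase is not specified in the caption, so here each candidate is assumed to be m ISIs at frequency f followed by the single final recovery ISI xlast:

```python
def batch_candidates(ms, fs, x_lasts):
    """One candidate batch of ISIs per (m, f, x_last) triple: m ISIs at
    tetanic frequency f, then the final recovery ISI x_last (assumed layout)."""
    return [[1.0 / f] * m + [x_last]
            for m in ms for f in fs for x_last in x_lasts]

# 2 values of m * 2 frequencies * 2 recovery ISIs -> 8 candidate batches
cands = batch_candidates(ms=[2, 3], fs=[20.0, 50.0], x_lasts=[0.5, 2.0])
```

The controller would then score each candidate batch by its expected information gain and stimulate with the best one, as in Algorithm 3.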
Fig 6
Fig 6. Application to neural recordings.
A: Left: Mossy fiber to granule cell synaptic connections from acute cerebellar slices of mice were studied. Each of them was stimulated using successively deterministic protocols and ESB-BAL. Right: examples of postsynaptic current traces recorded from a granule cell upon extracellular mossy fiber stimulation. B: Information gain when comparing the Deterministic (long) protocol to ESB-BAL (i.e. the entropy after the deterministic protocol minus the entropy after ESB-BAL) across all studied synapses. A positive value for ΔEntropy signifies a lower entropy when using ESB-BAL. Results displayed for different numbers of observations t. Test: regression analysis (p = 0.0381) comparing entropies after Deterministic (long) and ESB-BAL for t = 52 to t = 104 (see Materials and methods for details).

References

    1. Barri A, Wang Y, Hansel D, Mongillo G. Quantifying repetitive transmission at chemical synapses: a generative-model approach. eNeuro. 2016;3(2). doi: 10.1523/ENEURO.0113-15.2016
    2. Bird AD, Wall MJ, Richardson MJ. Bayesian inference of synaptic quantal parameters from correlated vesicle release. Frontiers in Computational Neuroscience. 2016;10:116. doi: 10.3389/fncom.2016.00116
    3. Flesch T, Balaguer J, Dekker R, Nili H, Summerfield C. Comparing continual task learning in minds and machines. Proceedings of the National Academy of Sciences. 2018;115(44):E10313–E10322. doi: 10.1073/pnas.1800755115
    4. Emery AF, Nenarokomov AV. Optimal experiment design. Measurement Science and Technology. 1998;9(6):864. doi: 10.1088/0957-0233/9/6/003
    5. Sebastiani P, Wynn HP. Maximum entropy sampling and optimal Bayesian experimental design. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2000;62(1):145–157. doi: 10.1111/1467-9868.00225
