Proc Natl Acad Sci U S A. 2021 Dec 7;118(49):e2102158118. doi: 10.1073/pnas.2102158118.

Compensatory variability in network parameters enhances memory performance in the Drosophila mushroom body



Nada Y Abdelrahman et al. Proc Natl Acad Sci U S A.

Abstract

Neural circuits use homeostatic compensation to achieve consistent behavior despite variability in underlying intrinsic and network parameters. However, it remains unclear how compensation regulates variability across a population of the same type of neurons within an individual and what computational benefits might result from such compensation. We address these questions in the Drosophila mushroom body, the fly's olfactory memory center. In a computational model, we show that under sparse coding conditions, memory performance is degraded when the mushroom body's principal neurons, Kenyon cells (KCs), vary realistically in key parameters governing their excitability. However, memory performance is rescued while maintaining realistic variability if parameters compensate for each other to equalize KC average activity. Such compensation can be achieved through both activity-dependent and activity-independent mechanisms. Finally, we show that correlations predicted by our model's compensatory mechanisms appear in the Drosophila hemibrain connectome. These findings reveal compensatory variability in the mushroom body and describe its computational benefits for associative memory.

Keywords: Drosophila; associative memory; homeostatic plasticity; mushroom body.


Conflict of interest statement

The authors declare no competing interest.

Figures

Fig. 1.
Schematic of the mushroom body network model. Projection neurons (PNs) in the input layer relay the odor responses, xi, downstream to the KCs (yj). KCs connect randomly to the PNs with synaptic weights wji and receive global inhibition from the APL neuron with weight αj. Learning occurs when dopaminergic neurons (DANs) carrying punishment (reward) signals from the environment depress the synapses (vj) between the active KCs and the mushroom body output neurons (MBONs) that lead to approach (avoidance) behavior.
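The feedforward computation this legend describes can be sketched in a few lines of NumPy. The network sizes, weight values, and the simplified feedforward form of APL inhibition below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pns, n_kcs, n_claws = 24, 2000, 6   # hypothetical sizes for illustration

# Random PN->KC connectivity: each KC samples n_claws PNs (weights w_ji)
w = np.zeros((n_kcs, n_pns))
for j in range(n_kcs):
    w[j, rng.choice(n_pns, size=n_claws, replace=False)] = 1.0

x = rng.random(n_pns)         # PN odor responses x_i (placeholder input)
alpha = np.full(n_kcs, 0.5)   # APL inhibitory weights alpha_j (assumed value)
theta = np.full(n_kcs, 1.0)   # spiking thresholds theta_j (assumed value)

# KC activity y_j: excitation minus global inhibition, rectified at threshold.
# APL inhibition is approximated here as proportional to the mean excitatory
# drive, a feedforward stand-in for the APL feedback loop.
drive = w @ x
y = np.maximum(0.0, drive - alpha * drive.mean() - theta)
coding_level = (y > 0).mean()  # fraction of KCs active for this odor
```

The coding level computed on the last line is the sparseness quantity that the later figures vary between sparse (0.1) and dense (up to 0.9) regimes.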
Fig. 2.
Inter-KC variability in w, N, and θ degrades the model fly’s memory performance. (A) Histograms of the experimentally measured distributions for (A1) w (amplitude of spontaneous excitatory postsynaptic potentials in KCs; mV) (data are from ref. 27), (A2) N (number of PN inputs per KC, measured as the number of dendritic “claws”) (data are from ref. 28), and (A3) θ (spiking threshold minus resting potential; mV) (data are from ref. 27). The overlaid black curves show log-normal (w) and Gaussian (N, θ) fits to the data. (B) The model fly’s memory performance (given 100 input odors), varying the parameters step by step. Fixed and variable parameters are shown by empty and filled circles, respectively. The homogeneous model (all parameters fixed, N = 6; black) performs the best, and the random model (all parameters variable; red) performs the worst. All bars are significantly different from each other unless they share the same letter annotations (a, b, etc.). P < 0.05 by Wilcoxon signed rank test (for matched models with the same PN–KC connectivity) or Mann–Whitney test (for unmatched models with different PN–KC connectivity; i.e., fixed vs. variable N), with Holm–Bonferroni correction for multiple comparisons (full statistics are in Dataset S1). n = 30 model instances with different random PN–KC connectivity. (C) The performance trend is consistent over a range of different conditions: (C1) the number of input odors; (C2) the learning rate used to update KC–MBON weights; (C3) the amount of noise in PN activity (half, the same, or double the noise measured in ref. 35); and (C4) the indeterminacy in the decision making, quantified by log(c), where c is the constant in the softmax function (SI Appendix, Eq. 21). The vertical dotted lines indicate the conditions used in B (each condition used the best learning rate).
(D) As KCs receive more inputs (and thus more similar inputs), inter-KC variability becomes helpful, not harmful, to memory performance, especially when all KCs receive the same inputs (N = 24). Blue, KCs vary in excitatory weights (w); red, KCs vary in both w and thresholds (θ). Data for N = 6 are equivalent to B. n = 30. (E) Inter-KC variability improves performance in dense coding regimes (coding levels 0.7 to 0.9) at classifying 100 odors (a hard task) or 20 odors (an easy task). Left of the dashed line is equivalent to B for comparison. Right of the dashed line, coding levels increase, in each case without inhibition (because inhibition is constrained to halve the coding level, which is impossible if the coding level is >0.5). n = 50. Error bars show 95% CIs. *P < 0.05, Wilcoxon signed rank test (D) or Mann–Whitney test (E) with Holm–Bonferroni correction for multiple comparisons.
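The softmax readout mentioned in C4 can be illustrated for a two-alternative choice. The paper's exact form is in SI Appendix, Eq. 21, so this two-way logistic version is an assumption:

```python
import numpy as np

def approach_probability(m_avoid, m_approach, c=10.0):
    """Two-way softmax over the MBON activities driving avoidance and
    approach. Larger c makes the choice more deterministic; c -> 0 makes
    it random (illustrative form, not necessarily SI Appendix, Eq. 21)."""
    # A softmax over two options reduces to a logistic of their difference
    return 1.0 / (1.0 + np.exp(-c * (m_approach - m_avoid)))
```

With equal MBON drives the model approaches half the time; once learning depresses the KC–MBON synapses on the avoidance side for a rewarded odor, the approach probability rises toward 1.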
Fig. 3.
Performance depends on KC lifetime sparseness. (A1 and B1) Diagrams of angular distance between odors (i.e., between centroids of clusters of noisy trials; A1) and dimensionality of a system with three variables (B1). The system with its states scattered throughout three-dimensional space (green) has dimensionality 3, while the system with all states on a single line (magenta) has dimensionality 1. (A2 and B2) The homogeneous model has higher angular distance and dimensionality than the random model (P < 0.05, Mann–Whitney test), matching the performance difference when coding level is 0.1 but the opposite trend to performance when coding level is 0.9. CL, coding level; Homog., homogeneous. (C and D) cdf of the lifetime sparseness (C) or valence specificity (D) of KCs in the homogeneous (black) and random (red) models across 50 model instantiations. The gap between 1.0 and the top of the cdf represents silent KCs (lifetime sparseness and specificity undefined). At coding level 0.1, the random model has many more silent KCs, nonsparse KCs, and nonspecific KCs than the homogeneous model, but at coding level 0.9, the random model has more KCs with high lifetime sparseness and more KCs with high valence specificity. (E) High lifetime sparseness enables high valence specificity, although many sparse KCs have low valence specificity because of random valence assignments (data here are from single model instances). (F) Removing the sparsest or most valence-specific KCs (corresponding to the dashed horizontal lines in C and D) removes the performance advantage of the random model under dense coding. Hom., homogeneous; Rand., random. n=50 network instantiations. Error bars are 95% CIs (horizontal error bars in A2 and B2 are smaller than the symbols). These results are from the 20-odor task in Fig. 2E; SI Appendix, Fig. S2 shows results of the 100-odor task. *P < 0.05, Mann–Whitney test (Dataset S1).
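Panels C–F rest on a per-KC lifetime sparseness measure. The Treves–Rolls definition below is a common choice and is assumed here; the paper's exact formula is in its SI Appendix:

```python
import numpy as np

def lifetime_sparseness(rates):
    """Treves-Rolls lifetime sparseness of one KC's responses across odors.
    1 = responds to a single odor (maximally sparse); 0 = equal response to
    every odor. Returns nan for silent KCs, for which the measure is
    undefined (the gap below the top of the cdf in panels C and D)."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    if r.sum() == 0.0:
        return float("nan")
    a = r.mean() ** 2 / np.mean(r ** 2)
    return (1.0 - a) / (1.0 - 1.0 / n)
```

For example, a KC that fires for 1 of 4 odors scores 1.0, while one responding equally to all 4 scores 0.0.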
Fig. 4.
Compensation in network parameters rescues memory performance. (A) Schematics of different compensation methods. (A1) Activity-independent compensation: log-normal fit of the experimental distribution of synaptic weights (Exp.; red) and its component distributions for different N and θ, for high N = 7 (dashed) or low N = 2 (solid). Shades of gray indicate different values of θ. (A2–A4) Mechanisms for activity-dependent homeostatic compensation. Overly active KCs weaken excitatory input weights (wji; A2), strengthen inhibitory input weights (αj; A3), or raise spiking thresholds (θj; A4). Inactive KCs do the reverse. (B1) Compensation rescues performance, alleviating the defect caused by inter-KC variability in the random model (red) compared with the homogeneous model (black), whether compensation occurs by setting w according to N and θ (cyan; A1) or by using activity-dependent homeostatic compensation to adjust excitatory weights (blue; A2), inhibitory weights (green; A3), or spiking thresholds (magenta; A4). (B2) Differences between models are more apparent when the task is more difficult due to more stochastic decision making (c = 1 instead of c = 10 in the softmax function). (C) Compensation reduces variability in KC lifetime sparseness. n = 20 model instances with different random PN–KC connectivity; error bars are 95% CIs. All bars are significantly different from each other unless they share the same letter annotations; P < 0.05 by Wilcoxon signed rank test (for matched models with the same PN–KC connectivity) or Mann–Whitney test (for unmatched models with different PN–KC connectivity; i.e., fixed vs. variable N), with Holm–Bonferroni correction for multiple comparisons (full statistics are in Dataset S1). Annotations below bars indicate whether parameters were fixed (empty circles), variable (filled circles), or variable following a compensation rule [“H” for homeostatic tuning; f(N,θ) for activity-independent tuning].
Results here are for 100 synthetic odors; SI Appendix, Fig. S1B shows similar results with odors from ref. . (D) KC excitatory input synaptic weights (w) after tuning to equalize average activity (blue) follow a similar distribution to experimental data (black) (from Fig. 2A1). (E) KC spiking thresholds (θ) after tuning to equalize average activity (magenta) have wider variability than the experimental distribution (black) (from Fig. 2A3). (F) Tuning KC inhibitory weights (α) to equalize average activity requires many inhibitory weights to be negative, unless the coding level without inhibition is as high as 99%.
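The activity-dependent tuning behind panels D–F (mechanism A4, shown here for thresholds) can be sketched as a fixed-point iteration on toy data. The drive statistics, learning rate, and activity target below are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy excitatory drive for 200 KCs across 50 odors (placeholder statistics)
drive = rng.gamma(2.0, 1.0, size=(50, 200))

def mean_activity(theta):
    """Each KC's rectified response, averaged across odors."""
    return np.maximum(0.0, drive - theta).mean(axis=0)

theta = np.ones(200)   # initial spiking thresholds
target = 0.2           # shared target for average activity
for _ in range(2000):
    # Overly active KCs raise their thresholds; inactive KCs lower them
    theta += 0.05 * (mean_activity(theta) - target)
```

After tuning, average activity is nearly equal across KCs even though the thresholds themselves remain variable, which is the sense in which compensation preserves realistic parameter variability while equalizing activity.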
Fig. 5.
Robustness of pretuned compensations with novel odors. (A) For each model fly, network parameters are tuned as in Fig. 4 on a subset of odors. At this stage, no rewards or punishments are given, and KC output weights are not modified. Then, the model is trained to classify rewarded and punished odors that are either the same as or different from the odors used for tuning. Finally, the model is tested on new noisy variants of the odors used for training. (B) Empty symbols (novel environment): models were tuned on odors from one chemical group (Gi: acids, circles; terpenes, triangles; esters, diamonds; or alcohols, squares) and then trained and tested on odors from the other three groups (Gij). Each empty symbol is paired with a matched control (filled symbols) showing how that model would have fared in a familiar environment (i.e., a model tuned, trained, and tested all on the same three groups of odors that the matched novel model was trained and tested on [Gij]). (C) Models with activity-dependent compensation (blue, magenta, and green) performed significantly worse in the novel environment than in familiar environments (matching indicated by connecting lines; P < 0.05, Wilcoxon signed rank test with Holm–Bonferroni correction). In contrast, models with no compensation (black and red) or activity-independent compensation (cyan) performed similarly in novel and familiar environments (P > 0.05, except for homogeneous [black] acids and random [red] terpenes) (full statistics in Dataset S1). Mean of 20 model instantiations, where each instantiation received a different permutation of odors (SI Appendix). Annotations below the graph indicate whether parameters were fixed (empty circles), variable (filled circles), or variable following a compensation rule [H for homeostatic tuning, f(N,θ) for activity-independent tuning].
Fig. 6.
Connectome analysis reveals compensatory variation in excitatory and inhibitory input strengths. (A) Example αβ-c KC (body identification 5901207528) with inputs from three PNs (yellow, green, and blue dots) and seven dendritic APL–KC synapses (red circles). The magenta circle shows the posterior boundary of the peduncle. Line widths are not to scale. (B and C) Mean synaptic weight (w) per PN–KC connection is inversely related to the number of input PNs in models that tune input weights given N and θ (B) or that tune input weights to equalize average activity levels across KCs (C). (D) In the model that tunes input inhibitory synaptic weights (α) to equalize average activity levels across KCs, inhibitory weights are directly related to the sum of excitatory weights per KC (i.e., wN). Note the negative values of α (discussed in the text). (E and F) Probability distributions of the number of synapses per PN–KC connection (E) and the number of input PNs per KC (F) in the different KC subtypes (α′β′, γ, αβ). The dashed line in E shows our threshold for counting connections as genuine. (G) Mean number of input synapses per PN–KC connection (averaged across PNs for each KC) is inversely related to the number of input PNs per KC in γ-main KCs (SI Appendix, Fig. S5 shows other KC types). (H) Mean distance of PN–KC synapses to the posterior boundary of the peduncle (presumed spike initiation zone) is directly related to the number of input PNs per KC. (I) The number of APL–KC synapses per KC is directly related to the total number of PN–KC synapses per KC. (J) Four αβ-c KCs, one from each neuroblast clone. The posterior boundary of the peduncle (magenta circles) lies where the KC axons begin to converge. (K) Grids show Pearson correlation coefficients (r) between various KC parameters for all KC subtypes tested (red, positive; blue, negative). Dots indicate P < 0.05 (Holm–Bonferroni corrected) (full statistics are in Dataset S1).
Colored outlines indicate predictions of models (cyan/blue, models tuning w [G and H]; green, model tuning α [I]). Number of KCs for each subtype, from left to right, are 588, 222, 350, 220, 127, and 119. In B, C, G, and H, red dots are medians, and the widths of the violin plots represent the number of KCs in each bin. Trend lines in D, G, H, and I show linear fits to the data. D, dorsal; M, medial; P, posterior.
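The Holm–Bonferroni correction behind the significance dots in K is a step-down procedure; a minimal sketch with made-up p-values:

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down correction. Sort p-values ascending and
    compare the k-th smallest (k = 0, 1, ...) to alpha / (m - k); reject
    until the first failure. Returns a boolean mask of rejected nulls."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    reject = np.zeros(m, dtype=bool)
    for k, i in enumerate(np.argsort(p)):
        if p[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # step-down: once one test fails, all larger p fail too
    return reject

# Hypothetical p-values for four parameter correlations
print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # [ True False False  True]
```

Here 0.03 fails its threshold of 0.05/2 = 0.025, so it and every larger p-value are retained even though both would pass an uncorrected 0.05 cutoff.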

References

    1. Golowasch J., Goldman M. S., Abbott L. F., Marder E., Failure of averaging in the construction of a conductance-based neuron model. J. Neurophysiol. 87, 1129–1131 (2002). - PubMed
    2. Achard P., De Schutter E., Complex parameter landscape for a complex neuron model. PLoS Comput. Biol. 2, e94 (2006). - PMC - PubMed
    3. Tobin A. E., Calabrese R. L., Endogenous and half-center bursting in morphologically-inspired models of leech heart interneurons. J. Neurophysiol. 96, 2089–2106 (2006). - PMC - PubMed
    4. Taylor A. L., Goaillard J. M., Marder E., How multiple conductances determine electrophysiological properties in a multicompartment model. J. Neurosci. 29, 5573–5586 (2009). - PMC - PubMed
    5. Marder E., Goaillard J. M., Variability, compensation and homeostasis in neuron and network function. Nat. Rev. Neurosci. 7, 563–574 (2006). - PubMed
