Cogn Neurodyn. 2009 Sep;3(3):243-50. doi: 10.1007/s11571-009-9083-3. Epub 2009 May 8.

Are binary synapses superior to graded weight representations in stochastic attractor networks?



Jason Satel et al. Cogn Neurodyn. 2009 Sep.

Abstract

Synaptic plasticity is an underlying mechanism of learning and memory in neural systems, but it is controversial whether synaptic efficacy is modulated in a graded or binary manner. It has been argued that binary synaptic weights would be less susceptible to noise than graded weights, which has impelled some theoretical neuroscientists to shift from graded to binary weights in their models. We compare the retrieval performance of models using binary and graded weight representations through numerical simulations of stochastic attractor networks. We also investigate stochastic attractor models using multiple discrete levels of weight states, and then determine the optimal threshold for dilution of binary weight representations. Our results show that a binary weight representation is not less susceptible to noise than a graded weight representation in stochastic attractor models, and we find that the load capacity rapidly approaches that of graded weights as the number of discrete weight states increases. The optimal dilution threshold for binary weight representations under stochastic conditions occurs when approximately 50% of the smallest weights are set to zero.
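To make the comparison concrete, here is a minimal sketch of the kind of model the abstract describes: a Hopfield-style stochastic attractor network with graded Hebbian weights versus a sign-binarized copy of the same weights, updated with probabilistic (Glauber) dynamics. This is an illustration under assumed parameters (the network size N, pattern count P, inverse temperature beta, and the helper name `retrieve` are not from the paper), not the authors' actual simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 200, 10          # nodes and stored patterns (illustrative sizes, not the paper's)
beta = 10.0             # inverse temperature: larger beta = less stochastic updating

# Random binary patterns and the graded (Hebbian) weight matrix
patterns = rng.choice([-1, 1], size=(P, N))
W_graded = patterns.T @ patterns / N
np.fill_diagonal(W_graded, 0)

# Binary representation: keep only the sign of each graded weight
W_binary = np.sign(W_graded)

def retrieve(W, pattern, steps=50):
    """Asynchronous Glauber updates from a stored pattern; returns final overlap."""
    s = pattern.copy()
    for _ in range(steps):
        i = rng.integers(N)
        h = W[i] @ s                                   # local field at node i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))   # probabilistic update rule
        s[i] = 1 if rng.random() < p_up else -1
    return (s @ pattern) / N                           # overlap m in [-1, 1]

m_graded = retrieve(W_graded, patterns[0])
m_binary = retrieve(W_binary, patterns[0])
print(f"overlap graded: {m_graded:.2f}, binary: {m_binary:.2f}")
```

At this low load both representations retrieve the cued pattern; the paper's point concerns how retrieval degrades as load and stochasticity grow, which one could probe by sweeping P and beta in this sketch.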


Figures

Fig. 1 Load capacities in attractor networks with graded and binary weights. (a) Scenario in which the binary weights are less susceptible to noise. (b) Scenario in which the effect of noise is similar for models with binary and graded weights.

Fig. 2 Retrieval error increases more rapidly with increasing load for binary than for graded synaptic weights. Load refers to the number of patterns stored in the network relative to the number of nodes in the system. Each inset shows retrieval error versus load for a set amount of noise in the system using probabilistic updating. The top curves in each graph were produced by a model using binary weights; the bottom curves were produced by a model using graded weights.

Fig. 3 Phase diagrams illustrating load capacities for graded and binary weight models with probabilistic updates (a) and static noise in the weight matrix (b). The vertical axis represents load and the horizontal axis represents the degree of stochasticity in the system. Error bars represent load capacities when using ±10% variations in the values of the critical overlap used to determine the transition points.

Fig. 4 (a, b) Diluted versus non-diluted models of binary weight representation. (c) Load capacity as a function of the number of discrete weight states for different levels of stochasticity. (d) Optimal dilution thresholds at load capacity for varying degrees of stochasticity. The load capacities, αc, when using the specified thresholds are shown.
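The dilution studied in Fig. 4 — zeroing out the smallest-magnitude weights before binarizing, with the optimum found near 50% — can be sketched as follows. The helper name `dilute_binary` and the sizes N and P are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes, not the paper's simulation parameters
N, P = 100, 50
patterns = rng.choice([-1, 1], size=(P, N))
W = patterns.T @ patterns / N          # graded Hebbian weights
np.fill_diagonal(W, 0)

def dilute_binary(W, frac=0.5):
    """Binarize weights to +/-1 after zeroing the `frac` smallest-magnitude entries.

    Sketches the paper's dilution of binary weight representations; the reported
    optimum under stochastic conditions is near frac = 0.5.
    """
    flat_abs = np.abs(W).ravel()
    k = int(frac * flat_abs.size)      # number of entries to cut
    cut = np.argsort(flat_abs)[:k]     # indices of the k smallest-magnitude weights
    Wd = np.sign(W).ravel()
    Wd[cut] = 0.0
    return Wd.reshape(W.shape)

Wd = dilute_binary(W, frac=0.5)
print("fraction of weights zeroed:", np.mean(Wd == 0))
```

Rank-based cutting zeroes exactly the requested fraction even when the Hebbian weights take only a few discrete values; a fixed numeric threshold would over- or under-dilute at the ties.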
