Front Comput Neurosci. 2012 Jun 22;6:39.
doi: 10.3389/fncom.2012.00039. eCollection 2012.

Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs


Yali Amit et al. Front Comput Neurosci. 2012.

Abstract

We describe an attractor network of binary perceptrons receiving inputs from a retinotopic visual feature layer. Each class is represented by a random subpopulation of the attractor layer, which is turned on in a supervised manner during learning of the feedforward connections. These are discrete three-state synapses updated by a simple field-dependent Hebbian rule. For testing, the attractor layer is initialized by the feedforward inputs and then undergoes asynchronous random updating until it converges to a stable state; classification is indicated by the subpopulation that remains persistently active. The contribution of this paper is twofold. First, this is the first example of competitive classification rates on real data being achieved through recurrent dynamics in the attractor layer, which is stable only if recurrent inhibition is introduced. Second, we demonstrate that employing three-state synapses with feedforward inhibition is essential for achieving these competitive classification rates, because it allows the network to exploit both positively and negatively informative features.
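The learning step described above (discrete three-state feedforward synapses updated by a field-dependent Hebbian rule, with uniform feedforward inhibition) can be sketched as follows. This is a minimal illustration, not the paper's exact rule: the threshold `THETA`, the inhibition strength `eta_ff`, the layer sizes, and the specific potentiation/depression conditions are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_ATTR = 100, 50        # feature units, attractor units (illustrative sizes)
J = np.ones((N_ATTR, N_IN))   # three-state feedforward synapses: J in {0, 1, 2}
THETA = 5.0                   # field threshold (hypothetical value)

def field(J, f, eta_ff=0.5):
    """Feedforward input field with uniform feedforward inhibition eta_ff."""
    return J @ f - eta_ff * f.sum()

def hebbian_update(J, f, target):
    """One field-dependent Hebbian step (sketch, not the paper's exact rule).

    Potentiate synapses from active inputs onto target ('on') units whose
    field is still below threshold; depress them onto non-target units whose
    field is above the negative threshold. States are clipped to {0, 1, 2}.
    """
    h = field(J, f)
    pot = np.outer(target & (h < THETA), f)      # under-driven 'on' units
    dep = np.outer((~target) & (h > -THETA), f)  # over-driven 'off' units
    return np.clip(J + pot - dep, 0, 2)

f = (rng.random(N_IN) < 0.2).astype(int)                   # sparse binary input
target = np.zeros(N_ATTR, dtype=bool); target[:10] = True  # class subpopulation
J = hebbian_update(J, f, target)
```

Because the update is clipped to three states, repeated presentations saturate informative synapses at 2 (positive features) or 0 (negative features) while uninformative ones stay near 1, which is the mechanism the abstract credits for exploiting both feature polarities.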

Keywords: attractor networks; feedforward inhibition; randomized classifiers.


Figures

Figure 1
Architecture of the network. Input retinotopic feature layer of oriented edge features with units denoted fk. Attractor layer A with units ai, aj. Units of different colors correspond to different class populations Ac. Feedforward connections (F→A) are denoted Jkj and recurrent connections (A→A) are denoted Jij. Feedforward inhibition ηff and recurrent inhibition ηrc.
Figure 2
(A) Eight oriented edges. (B) Neurons respond to a particular feature at a particular location. (C) If an edge feature is detected at some pixel, neurons in the neighborhood are also activated. In this case, the neighborhood is 3 × 3.
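The spreading in Figure 2C, where a detected edge feature also activates neurons in a surrounding neighborhood, amounts to a binary dilation of the edge map. A minimal sketch (the function name `spread` and the toy 5 × 5 map are illustrative; `np.roll` wraps at the image border, which is adequate for this illustration):

```python
import numpy as np

def spread(edge_map, radius=1):
    """Spread binary edge detections to a (2*radius+1)^2 neighborhood.

    radius=1 gives the 3x3 case of Figure 2C. Note np.roll wraps around
    the border, which is fine for this toy example.
    """
    out = np.zeros_like(edge_map)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out |= np.roll(np.roll(edge_map, dy, axis=0), dx, axis=1)
    return out

m = np.zeros((5, 5), dtype=bool)
m[2, 2] = True                 # one edge detection at the center
print(spread(m).sum())         # 9: the 3x3 neighborhood around the detection
```

This spreading makes the feedforward representation tolerant to small positional jitter of the input features.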
Figure 3
Illustration of five edge pairs centered at a horizontal edge. There are five similar pairs for each of the other seven edge orientations.
Figure 4
Histograms of log probability ratios log [P(fk = 1|Class c)/P(fk = 1|Class not c)] for potentiated synapses (J = 2) and depressed synapses (J = 0) after learning. (A) Class c = 0. (B) Class c = 4. Top: distribution for state 2 synapses. Bottom: distribution for state 0 synapses. For state 2 synapses the log-probability ratios are mostly positive; for state 0 synapses they are mostly negative.
Figure 5
Scatter plots of on-class γ and off-class β feature probabilities for all input features. (A) Class 1, (B) Class 8. There are significant differences between the two classes in the fraction of positive and negative features.
Figure 6
Means (blue) and standard deviations (red) of the number of synapses in the two informative states (2/0) connected to attractor neurons after learning with the base parameters, averaged over the perceptrons in each class. The field-dependent learning mechanism keeps the numbers of potentiated and depressed synapses roughly stable across classes.
