Fixed-weight on-line learning

A S Younger et al. IEEE Trans Neural Netw. 1999;10(2):272-83.
doi: 10.1109/72.750553.

Abstract

Conventional artificial neural networks perform functional mappings from their input space to their output space. The synaptic weights encode information about the mapping in a manner analogous to long-term memory in biological systems. This paper presents a method of designing neural networks in which recurrent signal loops store this knowledge in a manner analogous to short-term memory. The synaptic weights of these networks encode a learning algorithm, which gives the networks the ability to dynamically learn any functional mapping from a (possibly very large) set without changing any synaptic weights. These networks are adaptive dynamic systems. Learning is online, taking place continually as part of the network's overall behavior rather than as a separate, externally driven process. We present four higher-order fixed-weight learning networks. Two of these networks have standard backpropagation embedded in their synaptic weights; the other two use a more efficient gradient-descent-based learning rule. This new learning scheme was discovered by examining variations in fixed-weight topology. We present empirical tests showing that all of these networks successfully learned functions from both discrete (Boolean) and continuous function sets. The networks were largely robust with respect to perturbations in the synaptic weights; the exception was the recurrent connections used to store information, which required a tight tolerance of 0.5%. We found that the cost of these networks scaled approximately in proportion to the total number of synapses. We also consider evolving fixed-weight networks tailored to a specific problem class by analyzing the meta-learning cost surface of the networks presented.
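To make the fixed-weight idea concrete, the sketch below shows a toy recurrent network whose weights never change during learning: each time step it receives the current input together with the previous target, and its hidden state (not its weights) accumulates information about the task. This is an illustrative assumption-laden sketch, not the paper's architecture: the dimensions, the random placeholder weights, and the helper names step and run_task are all invented here, whereas the paper obtains the fixed weights by embedding a gradient-descent rule or by meta-learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper):
# input = (x_t, previous target y_{t-1}); hidden state acts as short-term memory.
n_in, n_hidden = 2, 16

# Fixed synaptic weights. In the paper these encode the learning algorithm itself;
# here they are random placeholders used only to show the data flow.
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.5, size=(1, n_hidden))

def step(h, x_t, y_prev):
    """One step of the fixed-weight network.

    The recurrent state h, not the weights, stores what has been learned
    about the target function so far (short-term memory role).
    """
    u = np.array([x_t, y_prev])
    h = np.tanh(W_in @ u + W_rec @ h)
    y_hat = (W_out @ h).item()
    return h, y_hat

def run_task(f, xs):
    """Present a task online: the network sees (x_t, y_{t-1}) at each step
    and must come to predict y_t, with no weight update anywhere."""
    h, y_prev, preds = np.zeros(n_hidden), 0.0, []
    for x_t in xs:
        h, y_hat = step(h, x_t, y_prev)
        preds.append(y_hat)
        y_prev = f(x_t)  # the true target is fed back as an input on the next step
    return preds

xs = rng.uniform(-1, 1, size=20)
print(run_task(lambda x: 0.5 * x + 0.2, xs))
```

With suitably chosen (or meta-learned) fixed weights, the predictions would track the target function as the recurrent state adapts; with the random placeholders above, the sketch only demonstrates the architecture and the online presentation of input/target pairs.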
