Initialization and self-organized optimization of recurrent neural network connectivity
- PMID: 20357891
- PMCID: PMC2801534
- DOI: 10.2976/1.3240502
Abstract
Reservoir computing (RC) is a recent paradigm in the field of recurrent neural networks. Networks in RC have a sparsely and randomly connected fixed hidden layer, and only output connections are trained. RC networks have recently received increased attention as a mathematical model for generic neural microcircuits, used to investigate and explain computations in neocortical columns. Applied to specific tasks, however, their fixed random connectivity leads to significant variation in performance. Few problem-specific optimization procedures are known; such procedures would be important for engineering applications, but also for understanding how biological networks are shaped to be optimally adapted to the requirements of their environment. We study a general network initialization method using permutation matrices and derive a new unsupervised learning rule based on intrinsic plasticity (IP). The IP-based rule relies on local learning only, and its aim is to improve network performance in a self-organized way. Using three different benchmarks, we show that networks whose reservoir connectivity is given by a permutation matrix have much more persistent memory than networks initialized by the other methods, while remaining able to perform highly nonlinear mappings. We also show that IP based on sigmoid transfer functions is limited with respect to the output distributions that can be achieved.
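The abstract names two concrete ingredients: reservoir initialization with a (scaled) permutation matrix, and a local IP learning rule. As a rough, non-authoritative illustration of how these pieces fit together, the sketch below pairs a permutation-matrix reservoir with an IP gradient update in the style of Triesch's rule for fermi (sigmoid) units and an exponential target distribution. This is not the paper's exact formulation; the reservoir size, spectral scaling, learning rate, and target mean are all assumptions made for the example.

```python
# Illustrative sketch only (not the paper's exact method): a reservoir
# initialized with a scaled random permutation matrix, combined with a local
# intrinsic-plasticity (IP) update for fermi (sigmoid) units with an
# exponential target distribution (Triesch-style rule). All numeric
# parameters below are assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

N = 100     # reservoir size (assumed)
rho = 0.95  # spectral scaling; permutation eigenvalues lie on a circle of radius rho

# Permutation-matrix reservoir: exactly one nonzero entry per row and column,
# so the network decomposes into disjoint cycles of units.
W = np.zeros((N, N))
W[np.arange(N), rng.permutation(N)] = rho

a = np.ones(N)   # per-unit gain, adapted by IP
b = np.zeros(N)  # per-unit bias, adapted by IP
eta = 1e-3       # IP learning rate (assumed)
mu = 0.2         # mean of the exponential target distribution (assumed)

x = np.zeros(N)
for t in range(1000):
    u = rng.uniform(-0.5, 0.5, size=N)        # placeholder external input (assumed)
    net = W @ x + u                           # recurrent drive plus input
    x = 1.0 / (1.0 + np.exp(-(a * net + b)))  # fermi transfer function
    # Local IP gradient step pushing each unit's output toward Exp(mu):
    db = eta * (1.0 - (2.0 + 1.0 / mu) * x + x**2 / mu)
    a += eta / a + db * net
    b += db
```

One intuition behind this initialization: a permutation matrix has exactly one nonzero entry per row and column, so its eigenvalues lie on the unit circle, and after scaling by rho they all sit on a circle of radius rho. Keeping all modes equally close to the stability boundary is consistent with the long memory the abstract reports for these reservoirs.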