PLoS One. 2014 Jul 10;9(7):e101792.
doi: 10.1371/journal.pone.0101792. eCollection 2014.

On the correlation between reservoir metrics and performance for time series classification under the influence of synaptic plasticity


Joseph Chrol-Cannon et al. PLoS One.

Abstract

Reservoir computing provides a simpler paradigm for training recurrent networks: the recurrent connections are initialised and adapted separately from a supervised linear readout. This separation creates a problem, however. Because the recurrent weights and topology no longer adapt to the task, the burden falls on the reservoir designer to construct an effective network whose state vectors happen to map linearly onto the desired outputs. Guidance in forming a reservoir can come from established metrics that link theoretical properties of the reservoir computing paradigm to quantitative measures of a given design's effectiveness. We provide a comprehensive empirical study of four metrics: class separation, kernel quality, the Lyapunov exponent and spectral radius. Each metric is compared over a number of repeated runs, for reservoir computing set-ups that combine three types of network topology with three mechanisms of weight adaptation through synaptic plasticity. Each combination of these methods is tested on two time-series classification problems. We find that the two metrics that correlate most strongly with classification performance are the Lyapunov exponent and kernel quality, and the comparisons also make evident that these two metrics measure a similar property of the reservoir dynamics. Class separation and spectral radius, by contrast, are both less reliable and less effective in predicting performance.
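As a minimal illustration of one of the metrics studied, the spectral radius of a reservoir's recurrent weight matrix can be computed directly from its eigenvalues. The sketch below uses an illustrative random sparse reservoir (size, sparsity and the 0.9 scaling target are assumptions, not the paper's exact set-up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reservoir: 100 neurons, ~10% sparse random recurrent weights.
n = 100
W = rng.normal(0.0, 1.0, (n, n)) * (rng.random((n, n)) < 0.1)

# Spectral radius: the largest absolute eigenvalue of the recurrent weight matrix.
rho = max(abs(np.linalg.eigvals(W)))

# Rescale the weights so the reservoir sits near the edge of stability
# (rho ~ 0.9), a common heuristic in echo state network initialisation.
W *= 0.9 / rho
rho_scaled = max(abs(np.linalg.eigvals(W)))
print(round(rho_scaled, 3))  # prints 0.9
```

Since eigenvalues scale linearly with the matrix, dividing by the measured radius and multiplying by the target value sets the spectral radius exactly.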


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1. Classification accuracy results for 10 initialisations for each combination of plasticity rule, connectivity method and time-series task.
Figure 2. Class separation results for 10 initialisations for each combination of plasticity rule, connectivity method and time-series task.
Figure 3. Kernel quality results for 10 initialisations for each combination of plasticity rule, connectivity method and time-series task.
Figure 4. Lyapunov exponent estimate results for 10 initialisations for each combination of plasticity rule, connectivity method and time-series task.
Figure 5. Spectral radius results for 10 initialisations for each combination of plasticity rule, connectivity method and time-series task.
Figure 6. Lyapunov exponent results plotted against kernel quality in both tasks to show the similarity between the metrics.
Figure 7. Each of the metrics for all simulation results plotted against classification accuracy in both tasks.
This indicates the extent to which each metric can be used to predict performance.
Figure 8. Depiction of the elements of our reservoir computing model.
I is a multi-dimensional input signal, L nodes constitute the recurrent reservoir, the x vector is the reservoir state, f is the filtering of the spike trains, and y is the output after weighting and summation.
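The pipeline described in this caption can be sketched in a minimal rate-based form. The code below substitutes tanh units for the paper's spiking neurons and filter f, with illustrative sizes, a ridge-regression readout and a demo target; it also estimates kernel quality as the rank of the collected state matrix, one common formulation of that metric:

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 50, 200                                  # reservoir size, signal length (illustrative)

W_in = rng.normal(0.0, 0.5, (L, 1))             # input weights for a 1-D signal I
W = rng.normal(0.0, 1.0, (L, L))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # scale spectral radius to 0.9

I = rng.uniform(-1.0, 1.0, (T, 1))              # input signal
x = np.zeros(L)                                 # reservoir state vector
states = np.empty((T, L))
for t in range(T):
    x = np.tanh(W @ x + W_in @ I[t])            # recurrent state update
    states[t] = x

# Supervised linear readout y = states @ W_out, trained by ridge regression
# on an arbitrary demo target (weight and sum of the reservoir states).
target = np.sin(np.cumsum(I[:, 0]))
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(L), states.T @ target)
y = states @ W_out

# Kernel quality is often estimated as the rank of the state matrix.
kernel_quality = np.linalg.matrix_rank(states)
```

Only W_out is trained; the reservoir weights W and W_in stay fixed, which is the separation the abstract describes.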
Figure 9. Illustration of two types of connectivity model.
A uniform connection policy produces variable length chains of connections with some groups of neurons disconnected from others. A scale-free connection policy leads to a structure of a few highly connected hubs and many sparsely connected leaves.
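The contrast in this caption between uniform and scale-free connectivity can be seen in the degree distributions the two policies produce. The sketch below (sizes, seed and the simplified preferential-attachment scheme are illustrative assumptions) builds one network of each kind and compares their largest node degrees:

```python
import random

random.seed(0)
n, m = 200, 2            # network size; edges added per new node (illustrative)

# Uniform policy: every possible edge exists with the same fixed probability,
# chosen so both networks end up with a similar mean degree.
p = 2.0 * m / (n - 1)
uniform_deg = [0] * n
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            uniform_deg[i] += 1
            uniform_deg[j] += 1

# Scale-free policy (preferential attachment): each new node connects to
# existing nodes with probability proportional to their current degree,
# which concentrates connections on a few hubs.
sf_deg = [0] * n
stubs = []               # each node id appears once per incident edge
targets = set(range(m))
for v in range(m, n):
    for u in targets:
        sf_deg[v] += 1
        sf_deg[u] += 1
        stubs.extend([v, u])
    targets = set(random.sample(stubs, m))   # degree-proportional sampling

print(max(uniform_deg), max(sf_deg))
```

The uniform network's maximum degree stays close to the mean, while the scale-free network's hubs accumulate far more connections, matching the "few hubs, many leaves" structure the caption describes.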
Figure 10. The Bienenstock-Cooper-Munro plasticity rule, illustrated with synaptic weight change on the y-axis and post-synaptic activity on the x-axis.
θ_M is the sliding modification threshold, which changes based on a temporal average of post-synaptic activity.
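A rough sketch of the rule this figure depicts is given below; the learning rate, averaging factor and the use of a squared-activity average for the sliding threshold are illustrative assumptions, not the paper's exact formulation:

```python
# One BCM update step: depression below the sliding threshold, potentiation above it.
eta, tau = 0.01, 0.9      # learning rate; averaging factor for the threshold (illustrative)

def bcm_step(w, pre, post, theta):
    """Return the updated weight and sliding modification threshold."""
    dw = eta * pre * post * (post - theta)
    # The threshold tracks a temporal average of post-synaptic activity,
    # so sustained high activity raises the bar for further potentiation.
    theta = tau * theta + (1.0 - tau) * post ** 2
    return w + dw, theta

w, theta = 0.5, 0.25
w_up, theta_up = bcm_step(w, pre=1.0, post=1.0, theta=theta)  # post > theta: weight grows
w_dn, theta_dn = bcm_step(w, pre=1.0, post=0.1, theta=theta)  # post < theta: weight shrinks
```

The sign of (post - theta) reproduces the curve in the figure: negative weight change for post-synaptic activity below θ_M, positive above it.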
Figure 11. The two predominantly studied STDP learning windows.
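One widely used form of STDP window, the classical exponential Hebbian window, can be sketched as follows; the amplitudes and time constants are illustrative assumptions rather than values from the paper:

```python
import math

A_plus, A_minus = 0.01, 0.012     # potentiation/depression amplitudes (illustrative)
tau_plus, tau_minus = 20.0, 20.0  # window time constants in ms (illustrative)

def stdp_dw(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:     # pre fires before post: causal pairing, potentiation
        return A_plus * math.exp(-dt / tau_plus)
    else:          # post fires before pre: anti-causal pairing, depression
        return -A_minus * math.exp(dt / tau_minus)

dw_causal = stdp_dw(10.0)         # positive weight change
dw_anticausal = stdp_dw(-10.0)    # negative weight change
```

The exponential decay on both sides means only near-coincident spike pairs produce appreciable weight change.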
Figure 12. Plot of 500 of the 50,000 data samples generated according to Jaeger's time-series benchmark.

References

    1. Lukosevicius M, Jaeger H (2009) Reservoir computing approaches to recurrent neural network training. Computer Science Review 3: 127–149.
    2. Lukosevicius M, Jaeger H, Schrauwen B (2012) Reservoir computing trends. KI - Künstliche Intelligenz 26: 365–371.
    3. Maass W, Natschlager T, Markram H (2002) Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation 14: 2531–2560.
    4. Jaeger H (2001) The echo state approach to analysing and training recurrent neural networks. Technical Report 148, GMD-Forschungszentrum Informationstechnik.
    5. Yin J, Meng Y, Jin Y (2012) A developmental approach to structural self-organization in reservoir computing. IEEE Transactions on Autonomous Mental Development 4: 273–289.
