Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models

Hiroto Tamura et al. Neural Netw. 2021 Nov;143:550-563.
doi: 10.1016/j.neunet.2021.06.031. Epub 2021 Jul 6.

Abstract

Reservoir computing is a machine learning framework derived from a special type of recurrent neural network. Following recent advances in physical reservoir computing, some reservoir computing devices are regarded as promising energy-efficient machine learning hardware for real-time information processing. To realize efficient online learning with low-power reservoir computing devices, it is beneficial to develop fast-convergence learning methods with simple operations. This study proposes a training method situated between the recursive least squares (RLS) method and the least mean squares (LMS) method, which are standard online learning methods for reservoir computing models. The RLS method converges fast but requires updates of a large matrix called the gain matrix, whereas the LMS method does not use a gain matrix but converges very slowly. In contrast, the proposed method, called the transfer-RLS method, does not require updates of the gain matrix in the main-training phase, because the gain matrix is updated in advance (i.e., in a pre-training phase). As a result, the transfer-RLS method can work with simpler operations than the original RLS method without sacrificing much convergence speed. We show numerically and analytically that the transfer-RLS method converges much faster than the LMS method. Furthermore, we show that a modified version of the transfer-RLS method (called transfer-FORCE learning) can be applied to first-order reduced and controlled error (FORCE) learning for a reservoir computing model with a closed loop, which is challenging to train.
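The following Python sketch illustrates the contrast drawn in the abstract between the RLS, LMS, and transfer-RLS readout updates. It is not the authors' exact algorithm: the toy data, the regularization alpha, the LMS step size mu, the pre-training length pretrain_steps, and the specific frozen-gain-matrix update in the main phase are all illustrative assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(0)
N = 100               # reservoir size (assumed)
T = 2000              # number of training steps (assumed)
alpha = 1.0           # RLS regularization (assumed)
mu = 1e-3             # LMS step size (assumed)
pretrain_steps = 500  # pre-training phase length (assumed)

# Toy reservoir states R and a linearly generated target d, just to drive the updates.
R = rng.standard_normal((T, N))
w_true = rng.standard_normal(N)
d = R @ w_true

def rls(R, d):
    """Standard RLS: update both the weights w and the gain matrix P at every step."""
    N = R.shape[1]
    w = np.zeros(N)
    P = np.eye(N) / alpha
    for r, target in zip(R, d):
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)      # gain vector
        e = target - w @ r           # a-priori error
        w += e * k
        P -= np.outer(k, Pr)         # O(N^2) gain-matrix update each step
    return w

def lms(R, d):
    """LMS: no gain matrix, O(N) updates, but much slower convergence."""
    w = np.zeros(R.shape[1])
    for r, target in zip(R, d):
        e = target - w @ r
        w += mu * e * r
    return w

def transfer_rls(R, d):
    """Sketch of the transfer-RLS idea described in the abstract: the gain matrix
    is adapted only during a pre-training phase and then kept fixed, so the
    main-training phase needs only weight updates (no gain-matrix updates)."""
    N = R.shape[1]
    w = np.zeros(N)
    P = np.eye(N) / alpha
    for t, (r, target) in enumerate(zip(R, d)):
        Pr = P @ r
        e = target - w @ r
        if t < pretrain_steps:
            # Pre-training: full RLS update of w and P.
            k = Pr / (1.0 + r @ Pr)
            w += e * k
            P -= np.outer(k, Pr)
        else:
            # Main training: P is frozen; one plausible weight-only update
            # (the paper's exact rule may differ).
            w += e * Pr / (1.0 + r @ Pr)
    return w

for name, fn in [("RLS", rls), ("LMS", lms), ("transfer-RLS", transfer_rls)]:
    w_hat = fn(R, d)
    print(f"{name:12s} weight error: {np.linalg.norm(w_hat - w_true):.3e}")

On this toy problem, the sketch typically shows RLS converging fastest, LMS slowest, and the frozen-gain-matrix variant in between, mirroring the trade-off the abstract describes.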

Keywords: FORCE learning; Online supervised learning; Recurrent neural networks; Recursive least squares method; Reservoir computing.


Conflict of interest statement

Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
