The Rosenblatt Bayesian algorithm learning in a nonstationary environment
- PMID: 17385642
- DOI: 10.1109/TNN.2006.889943
Abstract
In this letter, we study online learning in neural networks (NNs) obtained by approximating Bayesian learning. The approach is applied to Gibbs learning with the Rosenblatt potential in a nonstationary environment. The online scheme is obtained by the minimization (maximization) of the Kullback-Leibler divergence (cross entropy) between the true posterior distribution and the parameterized one. The complexity of the learning algorithm is further decreased by projecting the posterior onto a Gaussian distribution and imposing a spherical covariance matrix. We study in detail the particular case of learning linearly separable rules. In the case of a fixed rule, we observe an asymptotic generalization error ε_g ∝ α^(-1) for both the spherical and the full covariance matrix approximations. However, in the case of a drifting rule, only the full covariance matrix algorithm shows good performance. This good performance is surprising, since the algorithm is obtained by projection without the benefit of any extra information about the drift.
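To make the scheme concrete, the following is a minimal sketch (not the authors' code) of the general idea described in the abstract: after each example the Bayesian posterior over the weights is projected back onto a Gaussian by moment matching (assumed-density filtering), with either a full or a spherical covariance matrix, while the teacher rule drifts. As a stand-in for the Gibbs/Rosenblatt likelihood, the sketch uses a smoothed step (probit) likelihood, for which the moment updates have standard closed forms; the function names, the smoothing width lam, and the teacher-drift model are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def gaussian_projection_update(mu, C, x, y, lam=1e-2):
    # One online step: multiply the Gaussian N(mu, C) by a probit likelihood
    # for label y in {-1, +1} and project the result back onto a Gaussian.
    Cx = C @ x
    s2 = x @ Cx                      # predictive variance of the field w.x
    denom = np.sqrt(lam**2 + s2)
    z = y * (x @ mu) / denom
    r = norm.pdf(z) / norm.cdf(z)    # innovation factor from the matched moments
    mu_new = mu + (y * r / denom) * Cx
    C_new = C - (r * (z + r) / (lam**2 + s2)) * np.outer(Cx, Cx)
    return mu_new, C_new

def spherical(C):
    # Further projection onto a spherical covariance (the simpler variant).
    n = C.shape[0]
    return (np.trace(C) / n) * np.eye(n)

# Toy run: a drifting teacher w_star defines the rule y = sign(w_star . x).
rng = np.random.default_rng(0)
n, T, drift = 20, 2000, 0.01
w_star = rng.standard_normal(n)
w_star /= np.linalg.norm(w_star)
mu, C = np.zeros(n), np.eye(n)

for t in range(T):
    x = rng.standard_normal(n) / np.sqrt(n)
    y = np.sign(w_star @ x)
    mu, C = gaussian_projection_update(mu, C, x, y)
    # mu, C = gaussian_projection_update(mu, spherical(C), x, y)  # spherical variant
    # Random drift of the teacher direction (nonstationary environment).
    w_star += drift * rng.standard_normal(n)
    w_star /= np.linalg.norm(w_star)

overlap = (mu @ w_star) / (np.linalg.norm(mu) * np.linalg.norm(w_star))
print(f"teacher-student overlap after {T} examples: {overlap:.3f}")

The overlap between the student mean and the drifting teacher is a proxy for the generalization error; in this illustrative setting the full-covariance update tracks the drift, whereas the spherical projection discards the directional uncertainty that tracking relies on.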