Evaluation of a parallel implementation of the learning portion of the backward error propagation neural network: experiments in artifact identification
- PMID: 1807607
- PMCID: PMC2247541
Abstract
Various methods have been proposed to address problems in artifact and/or alarm identification, including expert systems, statistical signal processing techniques, and artificial neural networks (ANNs). ANNs consist of a large number of simple processing units connected by weighted links. Developing truly robust ANNs requires training the networks on huge data sets, which in turn demands enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors, reducing total run time from approximately 6.4 hours to 1.5 hours. We conclude that the master-worker model of parallel computation is an excellent method for obtaining speedups in the backward error propagation neural network training algorithm.
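The sketch below is a minimal illustration of the master-worker decomposition described in the abstract, not the authors' C-Linda implementation: POSIX threads stand in for C-Linda tuple-space operations, and the single-layer sigmoid unit, toy OR data set, and all sizes and constants are hypothetical. Each worker computes error-gradient contributions for a disjoint slice of the training examples; the master collects the partial gradients and applies one batch weight update per epoch.

```c
/* Master-worker batch gradient training sketch (pthreads stand in for
 * the paper's C-Linda tuple-space operations; data and sizes are toy). */
#include <math.h>
#include <pthread.h>
#include <stdio.h>

#define N_IN 2        /* inputs per example                  */
#define N_EX 4        /* training examples (toy OR data set) */
#define N_WORKERS 2   /* worker tasks spawned by the master  */
#define EPOCHS 2000
#define ETA 0.5

static const double X[N_EX][N_IN] = {{0,0},{0,1},{1,0},{1,1}};
static const double T[N_EX]       = {0, 1, 1, 1};

static double w[N_IN + 1];        /* weights + bias; read-only inside workers */

typedef struct {
    int lo, hi;                   /* slice of examples assigned to this worker */
    double grad[N_IN + 1];        /* partial gradient returned to the master   */
} task_t;

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

/* Worker: accumulate the squared-error gradient over its slice of examples. */
static void *worker(void *arg)
{
    task_t *t = (task_t *)arg;
    for (int j = 0; j <= N_IN; j++) t->grad[j] = 0.0;
    for (int i = t->lo; i < t->hi; i++) {
        double z = w[N_IN];                        /* bias term */
        for (int j = 0; j < N_IN; j++) z += w[j] * X[i][j];
        double y = sigmoid(z);
        double delta = (y - T[i]) * y * (1.0 - y); /* dE/dz for squared error */
        for (int j = 0; j < N_IN; j++) t->grad[j] += delta * X[i][j];
        t->grad[N_IN] += delta;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[N_WORKERS];
    task_t task[N_WORKERS];

    for (int e = 0; e < EPOCHS; e++) {
        /* Master: hand each worker a disjoint block of training examples. */
        int per = (N_EX + N_WORKERS - 1) / N_WORKERS;
        for (int k = 0; k < N_WORKERS; k++) {
            task[k].lo = k * per;
            task[k].hi = (k + 1) * per < N_EX ? (k + 1) * per : N_EX;
            pthread_create(&tid[k], NULL, worker, &task[k]);
        }
        /* Master: collect partial gradients and apply one batch update. */
        for (int k = 0; k < N_WORKERS; k++) {
            pthread_join(tid[k], NULL);
            for (int j = 0; j <= N_IN; j++) w[j] -= ETA * task[k].grad[j];
        }
    }

    for (int i = 0; i < N_EX; i++) {
        double z = w[N_IN];
        for (int j = 0; j < N_IN; j++) z += w[j] * X[i][j];
        printf("input (%g,%g) -> %.3f (target %g)\n",
               X[i][0], X[i][1], sigmoid(z), T[i]);
    }
    return 0;
}
```

Because the workers only read the shared weights and the master waits for every worker before updating them, no locking is needed within an epoch; the same pattern generalizes to multi-layer backpropagation by having each worker return partial gradients for all weight layers.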