Comput Intell Neurosci. 2015;2015:297672.
doi: 10.1155/2015/297672. Epub 2015 Nov 22.

MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning


Yang Liu et al. Comput Intell Neurosci. 2015.

Abstract

Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for data intensive applications. Three data intensive scenarios are considered in the parallelization process, characterized by the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated on an experimental MapReduce computer cluster in terms of classification accuracy and computational efficiency.
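The MapReduce formulation described above maps naturally onto a data-parallel training loop: map tasks compute back-propagation gradients over disjoint shards of the training data, and a reduce step combines them before the weights are updated. The minimal Python sketch below illustrates that pattern for a one-hidden-layer BPNN; the shard count, network sizes, learning rate, and synthetic data are illustrative assumptions, and the code is a conceptual sketch rather than the paper's MRBPNN_1, MRBPNN_2, or MRBPNN_3 implementation.

# Minimal sketch of data-parallel BPNN training in the MapReduce model.
# Each map task computes gradients on one shard of the training data; the
# reduce step averages them; the driver applies a single weight update.
# All sizes and data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_in, n_hidden, n_out):
    # Randomly initialise weights for a one-hidden-layer BPNN.
    return {
        "W1": rng.normal(scale=0.5, size=(n_in, n_hidden)),
        "W2": rng.normal(scale=0.5, size=(n_hidden, n_out)),
    }

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def map_task(params, X_shard, y_shard):
    # Map phase: forward pass and back-propagation on one data shard.
    # Emits the shard's summed gradients and its sample count.
    h = sigmoid(X_shard @ params["W1"])                   # hidden activations
    out = sigmoid(h @ params["W2"])                       # network output
    err_out = (out - y_shard) * out * (1 - out)           # output-layer delta
    err_hid = (err_out @ params["W2"].T) * h * (1 - h)    # hidden-layer delta
    grads = {"W1": X_shard.T @ err_hid, "W2": h.T @ err_out}
    return grads, len(X_shard)

def reduce_task(mapped):
    # Reduce phase: combine per-shard gradients into one averaged gradient.
    total = sum(n for _, n in mapped)
    return {k: sum(g[k] for g, _ in mapped) / total for k in mapped[0][0]}

# Synthetic training data split into shards (stand-in for HDFS input splits).
X = rng.normal(size=(300, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
shards = list(zip(np.array_split(X, 3), np.array_split(y, 3)))

params = init_params(n_in=4, n_hidden=8, n_out=1)
lr = 0.5
for epoch in range(200):                                  # one MapReduce job per epoch
    mapped = [map_task(params, Xs, ys) for Xs, ys in shards]   # map phase
    grad = reduce_task(mapped)                                  # reduce phase
    for k in params:                                            # driver-side weight update
        params[k] -= lr * grad[k]

pred = sigmoid(sigmoid(X @ params["W1"]) @ params["W2"]) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())

On an actual Hadoop cluster the map and reduce functions would run as separate tasks over HDFS input splits, with the driver submitting one job per training iteration; the in-process loop above simply simulates that job structure.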


Figures

Figure 1. The structure of a typical BPNN.
Figure 2. MRBPNN_1 architecture.
Figure 3. MRBPNN_2 architecture.
Figure 4. MRBPNN_3 structure.
Figure 5. The precision of MRBPNN_1 on the two datasets.
Figure 6. The precision of MRBPNN_2 on the two datasets.
Figure 7. The precision of MRBPNN_3 on the two datasets.
Figure 8. Precision comparison of the three parallel BPNNs.
Figure 9. The stability of the three parallel BPNNs.
Figure 10. Computation efficiency of MRBPNN_1.
Figure 11. Computation efficiency of MRBPNN_2.
Figure 12. Computation efficiency of MRBPNN_3.
Algorithm 1. MRBPNN_1.
Algorithm 2. MRBPNN_2.
Algorithm 3. MRBPNN_3.
