Comput Intell Neurosci. 2016;2016:2842780. doi: 10.1155/2016/2842780. Epub 2016 Apr 27.

Parallelizing Backpropagation Neural Network Using MapReduce and Cascading Model

Yang Liu et al. Comput Intell Neurosci. 2016.

Abstract

The Artificial Neural Network (ANN) is a widely used algorithm in pattern recognition, classification, and prediction. Among neural networks, the backpropagation neural network (BPNN) has become the most prominent due to its remarkable function approximation ability. However, a standard BPNN performs a large number of summation and sigmoid calculations, which can make it inefficient on large volumes of data. Parallelizing BPNN with distributed computing technologies is therefore an effective way to improve its efficiency. However, traditional parallelization may cause accuracy loss, and although several refinements have been proposed, it remains difficult to strike a compromise between efficiency and precision. This paper presents a parallelized BPNN based on the MapReduce computing model, which supplies advanced features including fault tolerance, data replication, and load balancing. To further improve precision, the paper also introduces a cascading-model-based classification approach that refines the classification results. The experimental results indicate that the presented parallelized BPNN offers high efficiency whilst maintaining excellent precision in enabling large-scale machine learning.
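The data-parallel training scheme the abstract describes can be illustrated in miniature. The sketch below is not the paper's exact CPBPNN algorithm; it shows the general MapReduce-style pattern for BPNN training, using a hypothetical single-neuron sigmoid model: each "mapper" computes gradients over its own data shard, and the "reducer" averages the partial gradients before the weight update.

```python
import math

# Illustrative MapReduce-style data-parallel training sketch
# (names and the toy model are hypothetical, not from the paper).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def map_gradients(shard, w, b):
    """Mapper: accumulate squared-error gradients over one data shard."""
    gw, gb = 0.0, 0.0
    for x, y in shard:
        p = sigmoid(w * x + b)
        d = (p - y) * p * (1 - p)  # dE/dz for E = 0.5 * (p - y)^2
        gw += d * x
        gb += d
    return gw / len(shard), gb / len(shard)

def reduce_gradients(partials):
    """Reducer: average the per-shard partial gradients."""
    n = len(partials)
    return (sum(g[0] for g in partials) / n,
            sum(g[1] for g in partials) / n)

def train(shards, lr=1.0, epochs=200):
    """One MapReduce round per epoch: map over shards, reduce, update."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        partials = [map_gradients(s, w, b) for s in shards]  # map phase
        gw, gb = reduce_gradients(partials)                  # reduce phase
        w -= lr * gw
        b -= lr * gb
    return w, b

# Two shards of a toy separable problem: label is 1 iff x > 0.
shards = [[(-2.0, 0.0), (-1.0, 0.0)], [(1.0, 1.0), (2.0, 1.0)]]
w, b = train(shards)
print(sigmoid(w * 2.0 + b) > 0.5)   # positive example classified correctly
print(sigmoid(w * -2.0 + b) < 0.5)  # negative example classified correctly
```

In a real Hadoop deployment the map and reduce functions run on separate nodes over HDFS splits, which is what supplies the fault tolerance and load balancing mentioned above; averaging partial updates is also the root of the precision loss that the paper's cascading model aims to correct.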


Figures

Figure 1. Structure of a three-layer BPNN.
Figure 2. MapReduce model.
Figure 3. Parallelization in training phase.
Figure 4. Classifying one instance in classification phase.
Figure 5. CPBPNN structure.
Figure 6. Comparison of standard training in BPNN and ensemble training.
Figure 7. Precisions of ensemble training and standard training with increasing training instances.
Figure 8. (a) CPBPNN precision for Iris dataset. (b) CPBPNN precision for Wine dataset.
Figure 9. Efficiency comparison of CPBPNN and standalone BPNN.
Figure 10. Efficiency of CPBPNN with increasing data size.
Figure 11. Efficiency of CPBPNN with increasing number of mappers.
