Review

Overview of Spiking Neural Network Learning Approaches and Their Computational Complexities

Paweł Pietrzak et al. Sensors (Basel). 2023 Mar 11;23(6):3037. doi: 10.3390/s23063037.

Abstract

Spiking neural networks (SNNs) are attracting growing interest. They resemble the biological neural networks of the brain more closely than their second-generation counterparts, artificial neural networks (ANNs), and they have the potential to be more energy efficient than ANNs when run on event-driven neuromorphic hardware. This could drastically reduce the operating cost of neural network models, since their energy consumption would be far lower than that of the regular deep learning models hosted in the cloud today. However, such hardware is not yet widely available. On standard computer architectures, built mainly around central processing units (CPUs) and graphics processing units (GPUs), ANNs have the upper hand in execution speed because their models of neurons, and of the connections between neurons, are simpler. ANNs also generally lead in terms of learning algorithms, as SNNs do not reach the same levels of performance as their second-generation counterparts on typical machine learning benchmark tasks such as classification. In this paper, we review existing learning algorithms for spiking neural networks, divide them into categories by type, and assess their computational complexity.
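To make the point about neuron-model complexity concrete, here is a minimal sketch of a single leaky integrate-and-fire (LIF) neuron, one of the simplest and most widely used spiking-neuron models. It is illustrative only: the time constant, threshold, and input current below are assumed values, not parameters taken from the paper.

    # Minimal leaky integrate-and-fire (LIF) neuron sketch (Python).
    # All parameter values are illustrative assumptions.

    def lif_step(v, input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
        """One forward-Euler step of tau * dv/dt = -(v - v_rest) + input_current.
        Returns the updated membrane potential and whether the neuron spiked."""
        v = v + (dt / tau) * (-(v - v_rest) + input_current)
        spiked = v >= v_thresh
        if spiked:
            v = v_rest  # reset the membrane potential after a spike
        return v, spiked

    # Drive the neuron with a constant input current and record spike times.
    v, spike_times = 0.0, []
    for t in range(200):
        v, spiked = lif_step(v, input_current=1.2)
        if spiked:
            spike_times.append(t)
    print(spike_times)  # regular spiking, roughly every 35 steps

Even this neuron model, richer than an ANN unit, costs only a few arithmetic operations per time step; the speed gap on CPUs and GPUs arises because such dynamics must be simulated over many time steps, whereas an ANN neuron is evaluated once as a weighted sum followed by an activation.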

Keywords: computational complexity; hardware; learning algorithms; spiking neural networks.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Biological neuron’s structure.
Figure 2. Batch processing times for different batch sizes on GPU.
Figure 3. Batch processing times for different batch sizes on CPU.
Figure 4. Batch processing times for different batch sizes on GPU.
