Front Neurosci. 2023 Jul 31;17:1230002.
doi: 10.3389/fnins.2023.1230002. eCollection 2023.

Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks


Youngeun Kim et al. Front Neurosci.

Abstract

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their non-linear activation, the Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store a membrane voltage that captures the temporal dynamics of spikes. Although the memory cost of LIF neurons grows significantly with the input dimension, techniques to reduce this memory have not been explored so far. To address this, we propose a simple and effective solution, EfficientLIF-Net, which shares LIF neurons across different layers and channels. Our EfficientLIF-Net achieves accuracy comparable to standard SNNs while providing up to ~4.3× forward memory efficiency and ~21.9× backward memory efficiency for LIF neurons. We conduct experiments on various datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and N-Caltech101. Furthermore, we show that our approach also offers advantages on Human Activity Recognition (HAR) datasets, which heavily rely on temporal information. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/EfficientLIF-Net.
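The membrane-voltage memory cost described above can be illustrated with a minimal LIF sketch. This is an illustrative toy model, not the paper's exact formulation: the decay factor, threshold, and hard-reset rule are common choices we assume here.

```python
import numpy as np

def lif_forward(x, decay=0.5, threshold=1.0):
    """Simulate a layer of LIF neurons over T timesteps.

    x: input currents, shape (T, N). Returns binary spikes, shape (T, N).
    The membrane potential u must be kept per neuron across timesteps;
    this per-neuron state is the memory cost the paper targets.
    """
    T, N = x.shape
    u = np.zeros(N)                         # membrane potential (the extra state)
    spikes = np.zeros((T, N))
    for t in range(T):
        u = decay * u + x[t]                # leaky integration
        s = (u >= threshold).astype(float)  # fire when threshold is crossed
        spikes[t] = s
        u = u * (1.0 - s)                   # hard reset after a spike
    return spikes
```

Unlike a ReLU, which is stateless, the vector `u` must persist between timesteps, so its size scales with the activation dimension.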

Keywords: energy-efficient deep learning; event-based processing; image recognition; neuromorphic computing; spiking neural network.

PubMed Disclaimer

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Motivation of our work. Left: Comparison between neurons in ANNs and SNNs: Unlike ReLU neurons, which do not require any parameters, LIF neurons maintain a membrane potential with voltage values that change across timesteps. Right: Memory cost breakdown for the Spiking-ResNet19 architecture during inference on an image with a resolution of 224 × 224.
Figure 2
Illustration of the proposed EfficientLIF-Net. (A) Conventional SNN, where each layer and channel has separate LIF neurons. (B–D) Our proposed EfficientLIF-Net, which shares LIF neurons across layers, channels, or both. (A) Baseline SNN. (B) Cross-layer sharing. (C) Cross-channel sharing. (D) Cross-layer & channel sharing.
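The cross-channel sharing in the figure can be sketched as follows. This is one possible reading of the scheme, with an assumed merge rule (summing the input currents of a channel group into one shared membrane) and illustrative function names; the paper's exact fusion and broadcast details may differ.

```python
import numpy as np

def shared_lif_step(u, x, groups, decay=0.5, threshold=1.0):
    """One timestep of cross-channel LIF sharing.

    x: input currents, shape (C,). Channels in the same group share a
    single membrane potential u, shape (C // groups,), cutting the
    stored neuron state by a factor of `groups`.
    """
    C = x.shape[0]
    merged = x.reshape(groups, C // groups).sum(axis=0)  # fuse group inputs (assumed rule)
    u = decay * u + merged                  # shared leaky integration
    s = (u >= threshold).astype(float)      # one spike decision per shared neuron
    u = u * (1.0 - s)                       # hard reset
    out = np.tile(s, groups)                # broadcast spikes back to all channels
    return u, out
```

With `groups = 2`, only half as many membrane potentials are stored, which is the source of the memory savings.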
Figure 3
Illustration of an unrolled computational graph for the backpropagation. Black solid arrows and gray dotted arrows represent forward and backward paths, respectively. For simplicity, we omit the reset path from the spike output. (A) Baseline SNN. (B) Cross-layer sharing. (C) Cross-channel sharing.
Figure 4
Memory-efficient backpropagation. Compared to the baseline, we do not need to store intermediate membrane potentials for backpropagation. Instead, we reverse-compute the membrane potential from the next layers/channels. (A) Baseline. (B) Cross-layer. (C) Cross-channel.
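The reverse computation the caption describes can be sketched on a single neuron. This toy example (function names are ours, not the paper's) ignores the reset path, as Figure 3 does, and inverts the leaky-integration update so that only the last membrane potential needs to be stored:

```python
def forward_last_state(x, decay=0.5):
    """Forward pass keeping only the final membrane potential.
    Reset is ignored here, matching the simplified graph of Figure 3."""
    u = 0.0
    for t in range(len(x)):
        u = decay * u + x[t]    # leaky integration
    return u

def reverse_membranes(u_last, x, decay=0.5):
    """Recover all intermediate membrane potentials from the last one by
    inverting u_t = decay * u_{t-1} + x_t, so the backward pass needs
    only a single stored state instead of one per timestep."""
    us = [u_last]
    u = u_last
    for t in range(len(x) - 1, 0, -1):
        u = (u - x[t]) / decay  # invert the leaky integration
        us.append(u)
    return us[::-1]             # membranes for t = 0 .. T-1
```

In practice, repeated division by the decay factor can accumulate numerical error, and a real implementation must also account for the reset path and the chosen sharing pattern.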
Figure 5
Visualization of the potential hardware mapping of the two sharing methods, illustrating the hardware benefits offered by EfficientLIF-Net. (A) Cross-layer sharing. (B) Cross-channel sharing.
Figure 6
Analysis of training dynamics. Unit: accuracy (%). We investigate whether trained weight parameters are compatible with other architectures. (A) CIFAR10. (B) TinyImageNet.
Figure 7
(A) Spike rate analysis on four public datasets. (B) Comparison of the memory breakdown between the baseline SNN and EfficientLIF-Net in both the forward and backward passes. We use the ResNet19 architecture on ImageNet-100.
Figure 8
Experiments on ResNet19 EfficientLIF-Net with weight pruning methods on CIFAR10. Left: Most LIF neurons still generate output spikes even as weight sparsity increases; therefore, weight pruning cannot reduce the LIF memory cost. Right: Accuracy and LIF memory cost comparison between the baseline and EfficientLIF-Net. The weight memory cost of all models is ~5 MB, indicated with a gray dotted line.
Figure 9
Top: The breakdown of computation for the Baseline SNN, EfficientLIF-Net[C#2], and EfficientLIF-Net[C#4] on a 128-PE array implemented on SATA. Bottom: Comparison of DRAM access reduction between the Baseline SNN and EfficientLIF-Net[Layer] on VGG-16 across various datasets. The reduction is contrasted for single-batch and multiple mini-batch processing scenarios.
