2024 Jul 26;11:1401677. doi: 10.3389/frobt.2024.1401677. eCollection 2024.

SNN4Agents: a framework for developing energy-efficient embodied spiking neural networks for autonomous agents


Rachmad Vidya Wicaksana Putra et al. Front Robot AI.

Abstract

Recent trends have shown that autonomous agents, such as Autonomous Ground Vehicles (AGVs), Unmanned Aerial Vehicles (UAVs), and mobile robots, effectively improve human productivity in solving diverse tasks. However, since these agents are typically powered by portable batteries, they require extremely low power/energy consumption to operate over a long lifespan. To address this challenge, neuromorphic computing has emerged as a promising solution, in which bio-inspired Spiking Neural Networks (SNNs) process spikes from event-based cameras or from data-conversion pre-processing to perform sparse computations efficiently. However, studies on SNN deployments for autonomous agents are still at an early stage, and the optimization stages for enabling efficient embodied SNN deployments for autonomous agents have not been defined systematically. Toward this, we propose a novel framework called SNN4Agents, which comprises a set of optimization techniques for designing energy-efficient embodied SNNs targeting autonomous agent applications. SNN4Agents employs weight quantization, timestep reduction, and attention window reduction to jointly improve energy efficiency, reduce memory footprint, and optimize processing latency, while maintaining high accuracy. In the evaluation, we investigate use cases of event-based car recognition and explore the trade-offs among accuracy, latency, memory, and energy consumption. The experimental results show that our proposed framework can maintain high accuracy (i.e., 84.12%) with 68.75% memory saving, 3.58x speed-up, and 4.03x energy-efficiency improvement compared to the state-of-the-art work on the NCARS dataset. In this manner, our SNN4Agents framework paves the way toward energy-efficient embodied SNN deployments for autonomous agents.
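The weight quantization step mentioned in the abstract can be illustrated with a minimal sketch. Note that `quantize_weights` is a hypothetical helper showing plain uniform symmetric quantization, not the framework's actual implementation:

```python
import numpy as np

def quantize_weights(w, num_bits):
    """Uniform symmetric quantization of a weight tensor to num_bits.
    Hypothetical helper; the actual SNN4Agents flow may differ."""
    qmax = 2 ** (num_bits - 1) - 1        # e.g., 7 for 4-bit signed integers
    scale = np.max(np.abs(w)) / qmax      # map the largest weight onto qmax
    q = np.round(w / scale)               # snap to the integer grid [-qmax, qmax]
    return q * scale, scale               # dequantized weights + scale factor

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(128, 64))  # toy full-precision weight matrix
wq, scale = quantize_weights(w, 4)
# Storing 4-bit integers instead of 32-bit floats cuts the weight
# memory footprint to 4/32 = 12.5% of the original.
```

The memory saving comes purely from the narrower integer storage; the scale factor is shared per tensor, so its overhead is negligible.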

Keywords: automotive data; autonomous agents; energy efficiency; neuromorphic computing; neuromorphic processor; spiking neural networks.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

FIGURE 1
Experimental results for observing the impact of different weight precision levels (i.e., 32, 12, 8, and 4 bits) on (A) the accuracy of an SNN model across training epochs; and (B) the accuracy after 200 training epochs and the corresponding memory footprints.
FIGURE 2
Overview of our novel contributions, shown in green boxes.
FIGURE 3
Overview of the functionality of a Spiking Neural Network.
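As a rough illustration of the SNN functionality sketched in the figure above, a single leaky integrate-and-fire (LIF) neuron, the building block commonly used in SNNs, can be simulated as follows. The parameter values are illustrative only, not taken from the paper:

```python
def lif_neuron(input_spikes, weight=0.5, decay=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over a spike train.
    Each timestep the membrane potential decays, integrates the weighted
    input spike, and fires (with reset) when it crosses the threshold."""
    v = 0.0
    out = []
    for s in input_spikes:
        v = decay * v + weight * s   # leaky integration of the input spike
        if v >= threshold:
            out.append(1)            # emit an output spike
            v = 0.0                  # reset the membrane potential
        else:
            out.append(0)
    return out

lif_neuron([1, 1, 1, 0, 1, 1, 1])    # → [0, 0, 1, 0, 0, 0, 1]
```

Because the neuron only updates on incoming spikes and fires sparsely, computation is event-driven, which is the source of the energy efficiency discussed in the abstract.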
FIGURE 4
The flow of (A) Post-Training Quantization (PTQ) and (B) Quantization-Aware Training (QAT).
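The contrast between the two quantization flows in the caption above can be sketched in a few lines. This is a generic illustration with hypothetical helper names (`fake_quant`, `qat_step`), not the paper's implementation:

```python
import numpy as np

def fake_quant(w, num_bits=8):
    """Quantize-dequantize: snap weights to the representable integer grid."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale

def post_training_quantize(trained_w, num_bits=8):
    """PTQ: train in full precision first, then quantize the weights once."""
    return fake_quant(trained_w, num_bits)

def qat_step(latent_w, grad_fn, lr=0.01, num_bits=8):
    """QAT: quantize inside each training step so the loss 'sees' the
    quantized weights; the gradient updates the latent full-precision
    copy (straight-through estimator)."""
    wq = fake_quant(latent_w, num_bits)  # forward pass with quantized weights
    g = grad_fn(wq)                      # gradient evaluated at the quantized point
    return latent_w - lr * g             # update the latent full-precision copy
```

The key difference is where quantization enters the loop: PTQ applies it once after training, while QAT exposes the quantization error to the loss during training, which usually recovers more accuracy at low precision.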
FIGURE 5
Illustration of different rounding schemes: Truncation (TR), Rounding-to-the-Nearest (RN), and Stochastic Rounding (SR), based on the studies in Putra and Shafique (2021a).
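The three rounding schemes named in the caption above can be sketched as follows. This is a minimal illustration on an integer grid, assuming nothing beyond the standard definitions of the schemes:

```python
import numpy as np

def truncate(x):
    """TR: drop the fractional part (floor onto the integer grid)."""
    return np.floor(x)

def round_nearest(x):
    """RN: round each value to the nearest integer grid point."""
    return np.round(x)

def stochastic_round(x, rng):
    """SR: round up with probability equal to the fractional part,
    which makes the scheme unbiased in expectation."""
    frac = x - np.floor(x)
    return np.floor(x) + (rng.random(np.shape(x)) < frac)

rng = np.random.default_rng(42)
x = np.full(10000, 0.3)
# Over many draws, SR averages back to the original value (~0.3),
# whereas TR and RN both map every 0.3 to 0 deterministically.
```

The unbiasedness of SR is why it is often preferred when quantizing to very low precision: individual values lose accuracy, but aggregate quantities are preserved in expectation.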
FIGURE 6
The flow of our SNN4Agents framework, with the technical contributions highlighted in green.
FIGURE 7
Results of accuracy across different precision levels (i.e., 32, 16, 12, 10, 8, and 4 bits).
FIGURE 8
Results of accuracy across different timestep settings (i.e., 20, 15, 10, and 5), while considering different precision levels (i.e., 32, 16, and 4 bits). Here, Bb_Tt denotes B-bit precision with a T-timestep setting.
FIGURE 9
Results of accuracy across different attention window sizes (i.e., 100×100 and 50×50).
FIGURE 10
Experimental setup for evaluating our SNN4Agents framework. The proposed settings from our SNN4Agents are incorporated into the experimental setup as highlighted in green.
FIGURE 11
Experimental results for accuracy across different parameter settings, including precision levels, timesteps, and training epochs under (A) 100×100 attention window, and (B) 50×50 attention window.
FIGURE 12
Experimental results for memory footprint normalized to the baseline (32b_100w) across different precision levels and attention window sizes (i.e., 100×100 and 50×50).
FIGURE 13
Experimental results for latency normalized to the baseline (32b_20t) across different parameter settings, including precision levels, timesteps, and training epochs under (A) 100×100 attention window, and (B) 50×50 attention window.
FIGURE 14
Experimental results for energy consumption normalized to the baseline (32b_20t) across different parameter settings, including precision levels, timesteps, and training epochs under (A) 100×100 attention window, and (B) 50×50 attention window.
FIGURE 15
(A) Trade-off analysis for accuracy vs. memory, accuracy vs. normalized latency, and accuracy vs. normalized energy consumption. (B) Computational complexity of different designs across different attention window sizes (i.e., 100×100 and 50×50) and timestep settings (i.e., 20, 15, 10, and 5).

References

    1. Bano I., Putra R. V. W., Marchisio A., Shafique M. (2024). A methodology to study the impact of spiking neural network parameters considering event-based automotive data. arXiv preprint arXiv:2404.03493.
    2. Bartolozzi C., Indiveri G., Donati E. (2022). Embodied neuromorphic intelligence. Nat. Commun. 13, 1024. doi: 10.1038/s41467-022-28487-2
    3. Bonnevie R., Duberg D., Jensfelt P. (2021). "Long-term exploration in unknown dynamic environments," in 2021 7th International Conference on Automation, Robotics and Applications (ICARA), 32–37. doi: 10.1109/ICARA51699.2021.9376367
    4. Bu T., Fang W., Ding J., Dai P., Yu Z., Huang T. (2022). "Optimal ANN-SNN conversion for high-accuracy and ultra-low-latency spiking neural networks," in International Conference on Learning Representations.
    5. Chowdhury S. S., Rathi N., Roy K. (2021). One timestep is all you need: training spiking neural networks with ultra low latency. CoRR abs/2110.05929.
