Review
2022 Feb 14;16:813555. doi: 10.3389/fnins.2022.813555. eCollection 2022.

Neuromorphic Engineering Needs Closed-Loop Benchmarks

Moritz B Milde et al. Front Neurosci. 2022.

Abstract

Neuromorphic engineering aims to build (autonomous) systems by mimicking biological systems. It is motivated by the observation that biological organisms, from algae to primates, excel at sensing their environment and reacting promptly to their perils and opportunities. Furthermore, they do so more resiliently than our most advanced machines, and at a fraction of the power consumption. It follows that the performance of neuromorphic systems should be evaluated in terms of real-time operation, power consumption, and resiliency to real-world perturbations and noise, using task-relevant evaluation metrics. Yet, following in the footsteps of conventional machine learning, most neuromorphic benchmarks rely on recorded datasets that foster sensing accuracy as the primary measure of performance. Sensing accuracy, however, is but a proxy for the system's actual goal: making a good decision in a timely manner. Moreover, static datasets hinder our ability to study and compare the closed-loop sensing and control strategies that are central to survival for biological organisms. This article makes the case for a renewed focus on closed-loop benchmarks involving real-world tasks. Such benchmarks will be crucial for developing and advancing neuromorphic intelligence. The shift towards dynamic, real-world benchmarking tasks should usher in richer, more resilient, and more robust artificially intelligent systems in the future.

Keywords: ATIS; DAVIS; DVS; audio; benchmarks; event-based systems; neuromorphic engineering; olfaction.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Different modes of sensing. Sensing, and consequently the processing of sensory information, can be divided into passive (top, A and B) vs. active (bottom, C and D), as well as open-loop (left, A and C) vs. closed-loop (right, B and D) sensing. Open-loop passive sensing (A) is the most prevalent form of acquiring information about the environment and subsequently using this information, e.g., to classify objects. Advantages of this approach include the one-to-one mapping of inputs and outputs and the readily available optimisation schemes that obtain such a mapping. Examples of open-loop passive sensing include surveillance applications, face recognition, object localisation, and most conventional computer vision applications. While the environment and/or the sensor may move, the trajectory itself is independent of the acquired information. Open-loop active sensing (C) is characterised by injecting energy into the environment. The acquired data is a combination of information emitted by the environment itself (black arrow) and the resulting interaction of the signal emitted by the sensor with the environment (red arrow). Prime examples of this sensing approach are LiDAR, RADAR, and SONAR. In the open-loop setting, the acquired information is not used to change parameters of the sensor itself. The closed-loop passive sensing strategy (B) is the one most commonly found in animals, including humans. While energy is solely emitted by the environment, the acquired information is used to actively change the relative position of the sensor (e.g., saccadic eye movements) or to alter the sensory parameters (e.g., focus). This closed-loop approach utilises past information to make informed decisions in the future. The last sensing category is active closed-loop sensing (D), where the acquired information is used to alter both the positioning and the configuration of the sensor. Bats (Griffin, ; Fenton, 1984) and weakly electric fish (Flock and Wersäll, ; Hofmann et al., 2013) are prime examples from the animal kingdom that exploit this sensing style, but artificial systems, such as adaptive LiDAR, also use acquired information about the environment to perform more focused and dense information collection in subsequent measurements.
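The open- vs. closed-loop distinction in panels (A) and (B) can be caricatured in a few lines of code. The sketch below is purely illustrative; the environment, the sensor model, and the salience rule are invented for this example and correspond to nothing in the figure's experiments. In the open-loop case the sensing trajectory is fixed in advance, while in the closed-loop case each observation steers the next sensor configuration, much like a saccade dwelling on a salient region.

```python
def sense(environment, gaze):
    """Passive sensing: read the part of the environment the sensor points at."""
    return environment[gaze % len(environment)]

def open_loop_passive(environment, steps=4):
    # Open-loop (panel A): the sensing trajectory is fixed in advance
    # and independent of what is observed.
    return [sense(environment, t) for t in range(steps)]

def closed_loop_passive(environment, steps=4):
    # Closed-loop (panel B): each observation steers the next sensor
    # configuration, like a saccade that dwells on a salient input.
    gaze, readings = 0, []
    for _ in range(steps):
        value = sense(environment, gaze)
        readings.append(value)
        if value < 0.5:          # invented salience threshold
            gaze += 1            # nothing interesting: move on
    return readings

env = [0.1, 0.2, 0.9, 0.3]
open_loop_passive(env)    # trajectory is independent of the observations
closed_loop_passive(env)  # trajectory is shaped by the observations
```

The point of the toy is only that the closed-loop variant cannot be replayed from a recorded dataset: the sequence of sensor configurations depends on what was sensed.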
Figure 2
Existing datasets and benchmarks fall into two categories: open-loop benchmarks, or datasets, and closed-loop benchmarks. Supervised machine learning relies mostly on the first category, whereas reinforcement learning requires the second. Most existing neuromorphic engineering benchmarks fall in the first category. This article pleads in favour of closed-loop neuromorphic benchmarks.
Figure 3
Overview of existing open- and closed-loop datasets and benchmarks for conventional time-varying and neuromorphic time-continuous approaches to machine intelligence. Distribution of high-end challenges according to the research field (neuromorphic/conventional), their interaction with the environment (open- or closed-loop), and the sensing modality. Downward triangle: conventional frame-based cameras; Diamond: neuromorphic event-based cameras; Star: combination of conventional frame- and neuromorphic event-based cameras; Pentagon: auditory sensors; Square: olfactory sensors; Triangle: LiDAR sensors; Circle: abstract games operating directly on machine code. Further details are provided in Tables 1, 2. While not completely exhaustive, this figure underlines the gravitation of both the machine and neuromorphic intelligence communities towards open-loop datasets. To showcase and truly contribute to the advancement of machine intelligence, the neuromorphic community needs to focus its efforts on creating closed-loop neuromorphic benchmarks that are physically embedded in their environment and thus impose hard power and execution-time constraints. While the physical set-ups in Moeys et al. (2016) and Conradt et al. (2009) could have formed the basis of closed-loop benchmarks, they were not developed as such. In Moeys et al. (2016), the set-up was used to generate an open-loop static dataset, and in Conradt et al. (2009), no dataset was generated. In contrast, the benchmarks advocated here would be available as physical experimental set-ups that can be accessed by the community for algorithm testing.
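A closed-loop benchmark of the kind advocated here scores not only what a system decides but also when it decides. The following sketch, with a hypothetical toy plant, an arbitrary 10 ms decision deadline, and an invented scoring rule (none of which correspond to any published benchmark), illustrates how such a protocol could penalise late decisions alongside inaccurate ones:

```python
import time

class Plant:
    """Toy physical process: the state drifts unless the controller counteracts it."""
    def __init__(self):
        self.state = 1.0

    def observe(self):
        return self.state

    def apply(self, action):
        # Constant drift of +0.1 per step, countered by the control action.
        self.state += 0.1 - action
        return abs(self.state)  # per-step error: distance from the setpoint 0

def run_benchmark(controller, steps=100, deadline_s=0.01):
    """Close the loop: observe, decide under a deadline, act, accumulate error."""
    plant, total_error, misses = Plant(), 0.0, 0
    for _ in range(steps):
        obs = plant.observe()
        t0 = time.perf_counter()
        action = controller(obs)
        if time.perf_counter() - t0 > deadline_s:
            misses += 1  # a late decision counts against the system, even if accurate
        total_error += plant.apply(action)
    return total_error / steps, misses

# A proportional controller that cancels the drift and pulls the state to 0.
mean_error, late_decisions = run_benchmark(lambda obs: 0.1 + 0.5 * obs)
```

Because the plant's next state depends on the controller's last action, this protocol cannot be reduced to a static dataset, and the deadline makes execution time part of the score, which is exactly the constraint that physically embedded benchmark set-ups impose.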
Figure 4
Schematic of the closed-loop robotic foosball setup.
