Review

Embodied neuromorphic intelligence

Chiara Bartolozzi et al. Nat Commun. 2022 Feb 23;13(1):1024.
doi: 10.1038/s41467-022-28487-2.

Abstract

The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies - from perception to motor control - represents a promising approach for the creation of robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Robots with end-to-end neuromorphic intelligence.
Non-exhaustive examples of perception (magenta), intelligent behaviour (green), and action execution (blue), all of which would be implemented by means of dedicated Spiking Neural Network (SNN) hardware technology. iCub picture ©IIT, author Agnese Abrusci.
Fig. 2
Fig. 2. Neuromorphic sensing for robots.
a The iCub robot (picture ©IIT, author Duilio Farina) is a platform for integrating neuromorphic sensors. Magenta boxes show neuromorphic sensors that acquire continuous physical signals and encode them in spike trains (vision, audition, touch). All other sensors, which monitor the state of the robot and of its collaborators, rely on clocked acquisition (green boxes) that can be converted to spike encoding by means of Field Programmable Gate Arrays (FPGAs) or sub-threshold mixed-mode devices. b The output of event-driven sensors can be sent to Spiking Neural Networks (SNNs), with learning and recurrent connections, for processing.
VISION box in (a): event-driven vision sensors produce “streams of events” (green for light-to-dark changes, magenta for dark-to-light changes). The trajectory of a bouncing ball can be observed continuously over space, with microsecond temporal resolution (black rectangles represent the sampling of a 30 fps camera).
Table: event-driven vision sensors evolved from the Dynamic Vision Sensor (DVS), with only “change detecting” pixels, to higher-resolution versions with absolute light-intensity measurements. The Dynamic and Active pixel VIsion Sensor (DAVIS) acquires intensity frames at a low frame rate simultaneously with the “change detection” (with minor crosstalk and artefacts on the event stream during the frame trigger). The Asynchronous Time-based Image Sensor (ATIS) samples absolute light intensity only for those pixels that detect a change. The CeleX5 offers either frame-based or event-driven readout (with a few milliseconds of delay between the two, resulting in the loss of event-stream data during a frame acquisition). Similar to the DAVIS, the Rino3 captures events and intensity frames simultaneously; however, it employs a synchronised readout architecture, as opposed to the asynchronous readout typically found in other event-driven sensors. The ultimate solution combining frames and events is yet to be found: merging two stand-alone sensors in a single optical setup poses severe challenges in developing optics that trade off luminosity against bulkiness, while merging the two types of acquisition on the same sensor limits the fill factor and increases noise and interference between frames and events.
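As a purely illustrative aside, the spike-encoding step mentioned above (converting clocked acquisition into events, as an FPGA front-end might do) can be sketched in software. The Python snippet below is a minimal sketch, not any sensor's actual circuit: it emulates a DVS-style change-detecting pixel by delta-modulating the log intensity of sampled frames; the function name, threshold value, and (t, x, y, polarity) event layout are assumptions made for illustration.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.15):
    """Delta-modulation conversion of clocked frames into DVS-style events.

    Each pixel emits an ON (+1) or OFF (-1) event whenever its log
    intensity changes by more than `threshold` since that pixel's last
    event, mimicking a "change detecting" pixel. Returns a list of
    (t, x, y, polarity) tuples ordered by frame time.
    """
    log_ref = np.log(frames[0].astype(np.float64) + 1e-6)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + 1e-6)
        delta = log_i - log_ref
        ys, xs = np.nonzero(np.abs(delta) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if delta[y, x] > 0 else -1))
            log_ref[y, x] = log_i[y, x]  # reset the reference where an event fired
    return events
```

Such a conversion is of course limited by the frame clock; a true event-driven pixel responds continuously, which is what gives the microsecond resolution illustrated in the VISION box.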
Fig. 3
Fig. 3. Address-Event Representation (AER): example of communication between an event-driven sensor (triangular skin patches, each with 6 sensing areas) and a spiking neural network (SNN) chip.
Each sensing element emits asynchronous spikes that are sent to a shared bus through arbitration; the resulting address events are then de-multiplexed and routed to the correct synapse of the SNN chip.
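The AER handshake itself is an asynchronous hardware protocol, but its logical content can be modelled in a few lines. In the hedged Python sketch below, each spike crosses the bus as the address of the element that fired, and a routing table on the receiver side de-multiplexes addresses onto synapses; the element count per patch matches the figure, while the patch count, address packing, and routing table are assumptions made for illustration.

```python
from queue import Queue

# Minimal software model of Address-Event Representation (AER): a spike is
# transmitted as the address of the sensing element that fired, and the
# receiver maps that address back onto a target synapse.
N_PATCHES, ELEMENTS_PER_PATCH = 4, 6   # 6 sensing areas per patch, as in the figure
bus = Queue()                          # stands in for the arbitrated asynchronous bus

def encode(patch_id, element_id):
    """Arbiter side: pack a firing element into a single bus address."""
    bus.put(patch_id * ELEMENTS_PER_PATCH + element_id)

def decode(address, routing_table):
    """Decoder side: de-multiplex a bus address to its target synapse."""
    return routing_table[address]

# Example routing: bus address i drives synapse i of the SNN chip.
routing_table = {a: ("synapse", a) for a in range(N_PATCHES * ELEMENTS_PER_PATCH)}

encode(patch_id=2, element_id=3)       # sensing area 3 of patch 2 fires
while not bus.empty():
    print("deliver spike to", decode(bus.get(), routing_table))
    # -> deliver spike to ('synapse', 15)
```

Because only addresses travel on the bus, spike timing is carried implicitly by when each address arrives, which is how AER preserves the asynchronous character of the sensors.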

