Review
2023 Feb 3;17:1112839.
doi: 10.3389/fnbot.2023.1112839. eCollection 2023.

When neuro-robots go wrong: A review

Muhammad Salar Khan et al. Front Neurorobot. .

Abstract

Neuro-robots are a class of autonomous machines that, in their architecture, mimic aspects of the human brain and cognition. As such, they represent unique artifacts created by humans based on human understanding of healthy human brains. The European Union's Convention on Roboethics 2025 states that the design of all robots (including neuro-robots) must include provisions for the complete traceability of the robots' actions, analogous to an aircraft's flight data recorder. At the same time, one can anticipate rising instances of neuro-robotic failure, as these machines operate on imperfect data in real environments and the AI underlying them has yet to achieve explainability. This paper reviews the trajectory of the technology used in neuro-robots and the failures that have accompanied it. These failures demand an explanation. Drawing on existing explainable-AI research, we argue that the limits of explainability in AI likewise limit explainability in neuro-robots. To make neuro-robots more explainable, we suggest potential pathways for future research.
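The traceability provision mentioned above (a complete, tamper-resistant record of a robot's actions, analogous to an aircraft's flight data recorder) can be illustrated with a minimal sketch. All names here (`FlightRecorder`, `ActionRecord`, the actuator and sensor fields) are hypothetical illustrations, not part of the paper or of any real standard:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ActionRecord:
    """One timestamped entry in the robot's action log."""
    timestamp: float
    actuator: str          # which subsystem acted, e.g. "left_arm"
    command: str           # the command issued to that subsystem
    sensor_snapshot: dict  # sensor readings at the moment of the command

class FlightRecorder:
    """Append-only action log: entries are recorded, never modified."""

    def __init__(self):
        self._log = []

    def record(self, actuator, command, sensors):
        # Copy the sensor dict so later mutation cannot alter the log.
        self._log.append(
            ActionRecord(time.time(), actuator, command, dict(sensors))
        )

    def dump(self):
        """Serialize the full log for post-incident analysis."""
        return json.dumps([asdict(r) for r in self._log], indent=2)

# Usage: log one actuation together with its sensor context.
rec = FlightRecorder()
rec.record("left_arm", "extend", {"joint_angle": 0.42})
```

Even such a simple log only records *what* the robot did; explaining *why* a learned controller chose that action is the harder problem the review addresses.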

Keywords: explainability; explainable AI (X-AI); explainable neuro-robots; neuro-robotic failures; neuro-robotic models; neuro-robotic systems; responsible neuro-robots.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

FIGURE 1
IHMC's Atlas was one of the robots that fell during the DARPA Robotics Challenge finals. Source: DARPA (Guizzo and Ackerman, 2015).

References

    1. Ackerman E. (2016a). Fatal Tesla self-driving car crash reminds us that robots aren't perfect. IEEE Spectrum. Available online at: https://spectrum.ieee.org/fatal-tesla-autopilot-crash-reminds-us-that-ro... (accessed January 5, 2022).
    2. Ackerman E. (2016b). This robot can do more push-ups because it sweats. IEEE Spectrum. Available online at: https://spectrum.ieee.org/this-robot-can-do-more-pushups-because-it-sweats (accessed January 5, 2022).
    3. Adadi A., Berrada M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160. 10.1109/ACCESS.2018.2870052 - DOI
    4. Akca A., Efe M. (2019). Multiple model Kalman and particle filters and applications: A survey. IFAC PapersOnLine 52, 73–78. 10.1016/j.ifacol.2019.06.013 - DOI
    5. Amparore E., Perotti A., Bajardi P. (2021). To trust or not to trust an explanation: Using LEAF to evaluate local linear XAI methods. PeerJ Comput. Sci. 7:e479. 10.7717/peerj-cs.479 - DOI - PMC - PubMed