When neuro-robots go wrong: A review
- PMID: 36819005
- PMCID: PMC9935594
- DOI: 10.3389/fnbot.2023.1112839
Abstract
Neuro-robots are a class of autonomous machines that, in their architecture, mimic aspects of the human brain and cognition. As such, they represent unique artifacts created by humans based on human understanding of healthy human brains. The European Union's Convention on Roboethics 2025 states that the design of all robots (including neuro-robots) must include provisions for the complete traceability of the robots' actions, analogous to an aircraft's flight data recorder. At the same time, one can anticipate rising instances of neuro-robotic failure, as these robots operate on imperfect data in real environments and the AI underlying them has yet to achieve explainability. This paper reviews the trajectory of the technology used in neuro-robots and the failures that have accompanied it. Those failures demand explanation. Drawing on existing explainable AI research, we argue that the limits of explainability in AI also limit explainability in neuro-robots. We then suggest potential pathways for future research to make such robots more explainable.
Keywords: explainability; explainable AI (X-AI); explainable neuro-robots; neuro-robotic failures; neuro-robotic models; neuro-robotic systems; responsible neuro-robots.
Copyright © 2023 Khan and Olds.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.