When neuro-robots go wrong: A review
- PMID: 36819005
- PMCID: PMC9935594
- DOI: 10.3389/fnbot.2023.1112839
Abstract
Neuro-robots are a class of autonomous machines that, in their architecture, mimic aspects of the human brain and cognition. As such, they represent unique artifacts created by humans based on human understanding of healthy human brains. The European Union's Convention on Roboethics 2025 states that the design of all robots (including neuro-robots) must include provisions for the complete traceability of the robots' actions, analogous to an aircraft's flight data recorder. At the same time, one can anticipate rising instances of neuro-robotic failure, because these machines operate on imperfect data in real environments and the underlying AI has yet to achieve explainability. This paper reviews the trajectory of the technology used in neuro-robots and the failures that have accompanied it. Those failures demand an explanation. Drawing on existing explainable AI research, we argue that the explainability achievable in AI limits the explainability of neuro-robots. To make neuro-robots more explainable, we suggest potential pathways for future research.
Keywords: explainability; explainable AI (X-AI); explainable neuro-robots; neuro-robotic failures; neuro-robotic models; neuro-robotic systems; responsible neuro-robots.
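As a concrete illustration of the traceability provision mentioned in the abstract, the sketch below shows, in Python, one way a robot controller might keep an append-only, tamper-evident action trace in the spirit of an aircraft's flight data recorder. It is an assumption-laden sketch for illustration only: the ActionRecorder class, its field names, and the hash-chaining scheme are not taken from the reviewed paper or from any specific robotics framework.

```python
import hashlib
import json
import time

class ActionRecorder:
    """Minimal, illustrative 'flight data recorder' for a robot controller.

    Each record stores the sensor snapshot, the chosen action, and a hash
    chained to the previous record, so later tampering with the trace is
    detectable. This sketches the traceability idea only.
    """

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def log(self, sensors: dict, action: str, rationale: str) -> dict:
        record = {
            "timestamp": time.time(),
            "sensors": sensors,          # inputs the controller acted on
            "action": action,            # command actually issued
            "rationale": rationale,      # human-readable reason, if available
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the hash chain to confirm the trace is intact."""
        prev = "0" * 64
        for record in self._records:
            if record["prev_hash"] != prev:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True

# Example: record one decision step and check that the trace is intact.
recorder = ActionRecorder()
recorder.log({"lidar_min_distance_m": 0.4}, "stop", "obstacle within safety margin")
assert recorder.verify()
```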
Copyright © 2023 Khan and Olds.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Similar articles
- Explainable AI: A Neurally-Inspired Decision Stack Framework. Biomimetics (Basel). 2022 Sep 9;7(3):127. doi: 10.3390/biomimetics7030127. PMID: 36134931. Free PMC article.
- Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int J Med Robot. 2019 Feb;15(1):e1968. doi: 10.1002/rcs.1968. PMID: 30397993. Review.
- The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. J Biomed Inform. 2021 Jan;113:103655. doi: 10.1016/j.jbi.2020.103655. Epub 2020 Dec 10. PMID: 33309898. Review.
- Self-Explaining Social Robots: An Explainable Behavior Generation Architecture for Human-Robot Interaction. Front Artif Intell. 2022 Apr 29;5:866920. doi: 10.3389/frai.2022.866920. eCollection 2022. PMID: 35573901. Free PMC article.
- Ethica ex machina: issues in roboethics. J Int Bioethique. 2013 Dec;24(4):17-26, 176-7. doi: 10.3917/jib.243.0015. PMID: 24558732.
Cited by
- ChatGPT in finance: Applications, challenges, and solutions. Heliyon. 2024 Jan 17;10(2):e24890. doi: 10.1016/j.heliyon.2024.e24890. eCollection 2024 Jan 30. PMID: 38304767. Free PMC article.