Resolving responsibility gaps for lethal autonomous weapon systems
- PMID: 36561376
- PMCID: PMC9766649
- DOI: 10.3389/fdata.2022.1038507
Abstract
This paper offers a novel understanding of collective responsibility for AI outcomes that can help resolve the "problem of many hands" and "responsibility gaps" when it comes to AI failure, especially in the context of lethal autonomous weapon systems.
Keywords: ethics of artificial intelligence; ethics of technology; lethal autonomous weapon systems; military ethics; responsibility; responsibility gaps.
Copyright © 2022 Smith.
Conflict of interest statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.