Philos Trans A Math Phys Eng Sci. 2021 Oct 4;379(2207):20200363.
doi: 10.1098/rsta.2020.0363. Epub 2021 Aug 16.

Artificial intelligence explainability: the technical and ethical dimensions

John A McDermid et al. Philos Trans A Math Phys Eng Sci.

Abstract

In recent years, several new technical methods have been developed to make AI models more transparent and interpretable. These techniques are often referred to collectively as 'AI explainability' or 'XAI' methods. This paper presents an overview of XAI methods and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that use of XAI methods must be linked to explanations of human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require. This article is part of the theme issue 'Towards symbiotic autonomous systems'.

Keywords: assurance; explainability; machine learning.


Figures

Figure 1. Context and roles of explainability. (Online version in colour.)
Figure 2. ROC curves for the example NNs. (Online version in colour.)
Figure 3. Comparative feature importance. (a) CNN feature importance, (b) DNN feature importance. (Online version in colour.)
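ROC curves like those in figure 2 are built directly from a classifier's raw scores. As an illustrative sketch only (not taken from the paper, and assuming a simple binary-label setting with no score ties handled specially), the following pure-Python function computes the (FPR, TPR) points traced out as the decision threshold is lowered:

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs for each successive score threshold.

    scores: classifier outputs (higher = more confident positive)
    labels: ground-truth binary labels (1 = positive, 0 = negative)
    """
    pos = sum(labels)            # number of true positives available
    neg = len(labels) - pos      # number of true negatives available
    # Sort items by descending score; lowering the threshold
    # admits one more item as 'predicted positive' at each step.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = [(0.0, 0.0)]        # threshold above all scores
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Toy example: five scored items, three of them truly positive.
pts = roc_points([0.9, 0.8, 0.35, 0.6, 0.2], [1, 1, 0, 1, 0])
```

The curve always runs from (0, 0) to (1, 1); the area under these points is the AUC often reported alongside such plots.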

