A review of evaluation approaches for explainable AI with applications in cardiology
- PMID: 39132011
- PMCID: PMC11315784
- DOI: 10.1007/s10462-024-10852-w
Abstract
Explainable artificial intelligence (XAI) elucidates the decision-making process of complex AI models and is important in building trust in model predictions. XAI explanations themselves require evaluation for accuracy and reasonableness, and in the context in which the underlying AI model is used. This review details the evaluation of XAI in cardiac AI applications and finds that, of the studies examined, 37% evaluated XAI quality against literature results, 11% used clinicians as domain experts, 11% used proxies or statistical analysis, and the remaining 43% did not assess the XAI used at all. We aim to inspire additional studies within healthcare, urging researchers not only to apply XAI methods but also to systematically assess the resulting explanations, as a step towards developing trustworthy and safe models.
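To make the "proxies or statistical analysis" category concrete, the sketch below shows one common proxy: a perturbation-based deletion test, which masks the most highly attributed features first and checks whether the model's predicted probability drops accordingly. This is a minimal illustration only, not a method from the reviewed studies; the classifier, the feature-importance stand-in for SHAP/LIME attributions, and the function names are all hypothetical.

```python
# Illustrative sketch of a deletion-style faithfulness proxy for feature
# attributions. Assumes a scikit-learn-style classifier; all names and the
# toy data are hypothetical, not drawn from the reviewed cardiac studies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def deletion_score(model, x, attributions, baseline=0.0):
    """Mask features from most- to least-attributed and track the predicted
    probability; a faithful explanation should produce a steep early drop."""
    order = np.argsort(attributions)[::-1]          # most important first
    x_pert = x.copy()
    probs = [model.predict_proba(x_pert.reshape(1, -1))[0, 1]]
    for k in range(len(order)):
        x_pert[order[k]] = baseline                 # mask the next feature
        probs.append(model.predict_proba(x_pert.reshape(1, -1))[0, 1])
    return np.trapz(probs) / len(probs)             # lower area = more faithful

# Toy usage with random data; feature_importances_ stands in for per-sample
# SHAP or LIME attributions purely for illustration.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)
clf = RandomForestClassifier(random_state=0).fit(X, y)
attr = clf.feature_importances_
print(deletion_score(clf, X[0], attr))
```

Comparing this area-under-the-deletion-curve against the same score for randomly ordered attributions gives a simple statistical baseline, which is the kind of model-based check the 11% of proxy-evaluation studies rely on.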
Supplementary information: The online version contains supplementary material available at 10.1007/s10462-024-10852-w.
Keywords: AI; Cardiac; Evaluation; XAI.
© The Author(s) 2024.
Conflict of interest statement
The authors declare that they have no conflict of interest.