Explaining Black Box Drug Target Prediction Through Model Agnostic Counterfactual Samples
- PMID: 35820003
- DOI: 10.1109/TCBB.2022.3190266
Abstract
Many high-performance drug-target binding affinity (DTA) deep learning models have been proposed, but most are black boxes and thus lack human interpretability. Explainable AI (XAI) can make DTA models more trustworthy and makes it possible to distill biological knowledge from them. Counterfactual explanation is a popular approach to explaining the behaviour of a deep neural network; it works by systematically answering the question "How would the model output change if the inputs were changed in this way?". We propose a multi-agent reinforcement learning framework, Multi-Agent Counterfactual Drug-target binding Affinity (MACDA), to generate counterfactual explanations for drug-protein complexes. The framework produces human-interpretable counterfactual instances while optimizing both the input drug and the input target simultaneously during counterfactual generation. We benchmark MACDA on the Davis and PDBBind datasets and find that it produces more parsimonious explanations with no loss in explanation validity, as measured by encoding similarity. We then present a case study on ABL1 and Nilotinib to demonstrate how MACDA explains which substructure interactions between the inputs drive a DTA model's prediction, revealing mechanisms that align with prior domain knowledge.
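The counterfactual question in the abstract can be sketched with a toy example. Everything below is illustrative and is not the MACDA framework: the linear stand-in for a black-box DTA model, the feature vector, the step size, and the greedy one-feature-at-a-time search are all assumptions, chosen only to show what a parsimonious (few-change) counterfactual looks like.

```python
def affinity_model(drug_features):
    """Stand-in black-box DTA model: a fixed linear score over drug features.
    A real model would be a deep network over drug and protein encodings."""
    weights = [0.8, -0.5, 0.3, 0.1]
    return sum(w * x for w, x in zip(weights, drug_features))

def find_counterfactual(features, target_shift, step=0.5):
    """Greedily perturb one feature at a time until the model output shifts
    by at least `target_shift`. Touching as few features as possible is the
    'parsimony' criterion mentioned in the abstract."""
    base = affinity_model(features)
    cf = list(features)
    changed = []
    for i in range(len(cf)):
        trial = list(cf)
        trial[i] += step
        # Keep the perturbation only if it moves the prediction further away.
        if abs(affinity_model(trial) - base) > abs(affinity_model(cf) - base):
            cf = trial
            changed.append(i)
        if abs(affinity_model(cf) - base) >= target_shift:
            break
    return cf, changed, affinity_model(cf) - base

original = [1.0, 0.5, 0.2, 0.0]
cf, changed_idx, delta = find_counterfactual(original, target_shift=0.4)
```

Here a single change to the first feature already shifts the prediction by the requested amount, so the counterfactual differs from the original input in only one place; MACDA pursues the same goal over drug and protein substructures jointly, using multi-agent reinforcement learning rather than greedy search.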
Similar articles
- Explaining the black-box smoothly: a counterfactual approach. Med Image Anal. 2023 Feb;84:102721. doi: 10.1016/j.media.2022.102721. PMID: 36571975.
- Model agnostic generation of counterfactual explanations for molecules. Chem Sci. 2022 Feb 16;13(13):3697-3705. doi: 10.1039/d1sc05259d. PMID: 35432902.
- Toward explainable AI (XAI) for mental health detection based on language behavior. Front Psychiatry. 2023 Dec 7;14:1219479. doi: 10.3389/fpsyt.2023.1219479. PMID: 38144474.
- Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review. Transl Cancer Res. 2022 Oct;11(10):3853-3868. doi: 10.21037/tcr-22-1626. PMID: 36388027.
- Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med. 2023 Apr;156:106668. doi: 10.1016/j.compbiomed.2023.106668. PMID: 36863192.
Cited by
- Leveraging artificial intelligence to advance implementation science: potential opportunities and cautions. Implement Sci. 2024 Feb 21;19(1):17. doi: 10.1186/s13012-024-01346-y. PMID: 38383393.