Proc Natl Acad Sci U S A. 2024 Jan 9;121(2):e2304406120. doi: 10.1073/pnas.2304406120. Epub 2024 Jan 5.

Impossibility theorems for feature attribution


Blair Bilodeau et al. Proc Natl Acad Sci U S A. 2024.

Abstract

Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods. In light of these results, it remains unclear for practitioners how to use these methods and choose between them in a principled way. In this paper, we show that for moderately rich model classes (easily satisfied by neural networks), any feature attribution method that is complete and linear, for example Integrated Gradients and Shapley Additive Explanations (SHAP), can provably fail to improve on random guessing for inferring model behavior. Our results apply to common end-tasks such as characterizing local model behavior, identifying spurious features, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks: Once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many other complex feature attribution methods.
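The two axioms named above, completeness (attributions sum to the difference between the model's output at x and its expected output on a background distribution) and linearity (attributions of a linear combination of models are the same linear combination of their attributions), can be made concrete with a small sketch. The code below is illustrative only and is not the authors' implementation: the toy models, the background distribution, and the exact-enumeration SHAP routine are assumptions introduced for this example, and the final loop shows the kind of direct repeated-model-evaluation probe the abstract contrasts with attribution methods.

```python
# Minimal sketch (not the paper's code): exact interventional SHAP for a toy
# two-feature model, used to check completeness and linearity, followed by a
# direct "repeated model evaluations" probe of local sensitivity.
from itertools import combinations
from math import factorial
import numpy as np

def shap_values(f, x, background):
    """Exact Shapley values, enumerating all coalitions; absent features are
    replaced by samples from the background data (interventional expectation)."""
    d = len(x)
    phi = np.zeros(d)
    def value(S):
        X = background.copy()
        X[:, list(S)] = x[list(S)]      # fix features in S to their values at x
        return f(X).mean()
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

rng = np.random.default_rng(0)
background = rng.uniform(-1, 1, size=(2000, 2))
f = lambda X: X[:, 0] + 2.0 * X[:, 1]          # toy linear model
g = lambda X: np.sin(3 * X[:, 0]) * X[:, 1]    # toy nonlinear model
x = np.array([0.1, 0.5])

phi_f = shap_values(f, x, background)
# Completeness: attributions sum to f(x) minus the expected output on the background.
print(phi_f.sum(), f(x[None, :])[0] - f(background).mean())

# Linearity: SHAP(a*f + b*g) equals a*SHAP(f) + b*SHAP(g).
a, b = 2.0, -1.0
h = lambda X: a * f(X) + b * g(X)
print(shap_values(h, x, background), a * phi_f + b * shap_values(g, x, background))

# Direct probe by repeated model evaluation: perturb one feature at a time and
# read off the change in the model's output near x.
eps = 0.05
for i in range(2):
    x_eps = x.copy()
    x_eps[i] += eps
    print(i, (f(x_eps[None, :])[0] - f(x[None, :])[0]) / eps)
```

The printed identities hold for any choice of models here, which illustrates why completeness and linearity alone need not pin down local model behavior.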

Keywords: explainable AI; feature attribution; interpretability.


Conflict of interest statement

Competing interests statement: The authors declare no competing interest.

Figures

Fig. 1.
Red arrows indicate implications that are false for complete and linear feature attribution methods; this follows from Theorem 2.3. Implication (A) is a standard belief in the literature for feature attribution methods, but we show it is false in general.
Fig. 2.
Each line represents a different one-dimensional model. For x=0.1 and μ=Unif(−1,1), dashed lines receive SHAP(f,x,μ)=0 while solid lines receive SHAP(f,x,μ)=1. The behavior of models with the same color is identical within the shaded region, which denotes the neighborhood (x−δ, x+δ) for δ=0.2. Models can behave very differently yet all receive the same attribution (e.g., all dashed lines), and models can be identical in a neighborhood yet receive very different attributions within that neighborhood (e.g., lines with the same color).
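With a single feature there is only one coalition, so SHAP(f,x,μ) collapses to f(x) minus the expected model output under μ, which makes the phenomenon in this caption easy to reproduce numerically. The two models below are hypothetical constructions of my own, not the ones plotted in the figure; they show how a model that agrees with another on the whole neighborhood (x−δ, x+δ) can still receive a completely different attribution.

```python
# Illustrative sketch only; the models are toy stand-ins, not those in Fig. 2.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=200_000)   # background distribution mu = Unif(-1, 1)
x, delta = 0.1, 0.2

f1 = lambda z: 10 * z                                        # one toy model
f2 = lambda z: np.where(np.abs(z - x) < delta, 10 * z, 1.0)  # agrees with f1 near x

def shap_1d(f, x, Z):
    # With one feature, the Shapley value is f(x) minus the expected output
    # under the background distribution.
    return f(np.array([x]))[0] - f(Z).mean()

print(shap_1d(f1, x, Z))  # approximately 1.0
print(shap_1d(f2, x, Z))  # approximately 0.0, although f1 == f2 on (x - delta, x + delta)
```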
Fig. 3.
Visualizing ROC curves for tabular datasets. A feature attribution method is better for an end-task if the ROC curve is closer to the top left corner on average.
Fig. 4.
Visualizing ROC curves for image datasets. A feature attribution method is better for an end-task if the ROC curve is closer to the top left corner on average.

References

    1. K. Simonyan, A. Vedaldi, A. Zisserman, Deep inside convolutional networks: Visualising image classification models and saliency maps (2013). http://arxiv.org/abs/1312.6034 (Accessed 1 September 2022).
    2. D. Smilkov, N. Thorat, B. Kim, F. Viegas, M. Wattenberg, “SmoothGrad: Removing noise by adding noise” in Proceedings of the ICML 2017 Workshop on Visualization for Deep Learning (2017).
    3. M. T. Ribeiro, S. Singh, C. Guestrin, “Why should I trust you? Explaining the predictions of any classifier” in Proceedings of the 22nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2016).
    4. S. M. Lundberg, S. I. Lee, “A unified approach to interpreting model predictions” in Advances in Neural Information Processing Systems (2017), vol. 31.
    5. M. Sundararajan, A. Taly, Q. Yan, “Axiomatic attribution for deep networks” in Proceedings of the 34th International Conference on Machine Learning (2017).
