Sci Rep. 2025 Jan 6;15(1):962. doi: 10.1038/s41598-024-81646-x.

A latent diffusion approach to visual attribution in medical imaging


Ammar Adeel Siddiqui et al. Sci Rep.

Abstract

Visual attribution in medical imaging seeks to make evident the diagnostically relevant components of a medical image, in contrast to the more common detection of diseased tissue deployed in standard machine vision pipelines (an approach that is less straightforwardly interpretable or explainable to clinicians). Here we present a novel generative visual attribution technique that leverages latent diffusion models in combination with domain-specific large language models to generate normal counterparts of abnormal images. The discrepancy between the two then yields a map indicating the diagnostically relevant image components. To achieve this, we deploy image priors in conjunction with appropriate conditioning mechanisms, including natural-language text prompts drawn from medical science and applied radiology, to control the image generative process. We perform experiments and quantitatively evaluate our results on the COVID-19 Radiography Database, which contains labelled chest X-rays with differing pathologies, via the Fréchet Inception Distance (FID), Structural Similarity (SSIM) and Multi-Scale Structural Similarity (MS-SSIM) metrics computed between real and generated images. The resulting system also exhibits a range of latent capabilities, including zero-shot localized disease induction, which are evaluated with real examples from the CheXpert dataset.
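
For concreteness, the three reported metrics are available in off-the-shelf implementations. The snippet below is a minimal sketch using the torchmetrics library, under the assumption that real and generated X-rays have already been loaded as single-channel float tensor batches; the tensor shapes and variable names are illustrative choices, not taken from the paper.

    import torch
    from torchmetrics.image import (
        StructuralSimilarityIndexMeasure,
        MultiScaleStructuralSimilarityIndexMeasure,
    )
    from torchmetrics.image.fid import FrechetInceptionDistance

    # Placeholder batches: (N, 1, 256, 256) grayscale images in [0, 1].
    # In practice N should be large; FID is unreliable on small samples.
    real = torch.rand(64, 1, 256, 256)
    generated = torch.rand(64, 1, 256, 256)

    # SSIM and MS-SSIM compare the two batches pairwise.
    ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
    ms_ssim = MultiScaleStructuralSimilarityIndexMeasure(data_range=1.0)
    print("SSIM:   ", ssim(generated, real).item())
    print("MS-SSIM:", ms_ssim(generated, real).item())

    # FID's Inception backbone expects 3-channel uint8 images.
    as_rgb_uint8 = lambda x: (x.repeat(1, 3, 1, 1) * 255).to(torch.uint8)
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(as_rgb_uint8(real), real=True)
    fid.update(as_rgb_uint8(generated), real=False)
    print("FID:    ", fid.compute().item())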

Keywords: Diffusion models; Explainable AI; Medical imaging; Visual attribution.

Conflict of interest statement

Competing interests: The authors declare no competing interests.

Figures

Fig. 1
The counterfactual generation pipeline takes as input the abnormal image x, which is encoded by the VAE encoder E to form the image latents Z and passed through the forward diffusion process to form the noised latents Z_t after t incremental steps. The fine-tuned conditional U-Net denoises the latents into the conditioned latent Z, which is decoded by the VAE decoder D into the final generated counterfactual x̂, from which a visual attribution map M(x̂) is subtractively generated. (A code sketch of this loop follows the figure list.)
Fig. 2
Healthy counterfactual generation for three cases of lung opacity (red indicates tissue generated by the model).
Fig. 3
Healthy counterfactual generation (red indicates tissue generated by the model).
Fig. 4
Zero-shot carcinoma induction.
Fig. 5
Induction of cardiomegaly in real healthy scans.
Fig. 6
Induction of baseline diseases in real healthy scans (red indicates induced scarring).
Fig. 7
Example images: hyperparametric elimination of image priors using the prompt “healthy chest scan”.
Fig. 8
Elimination of the text encoder, with a healthy scan as image prior.
Fig. 9
Elimination of the text encoder and image priors.
Fig. 10
Localized lung opacity induction in healthy scans.
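
As referenced in the Fig. 1 caption, the pipeline is a conditioned latent-diffusion image-to-image loop: encode the abnormal scan, noise its latents for t steps, denoise under a text condition such as “healthy chest scan”, decode, and subtract. Below is a minimal sketch of that loop using the Hugging Face diffusers img2img pipeline; the checkpoint name, input filename, prompt, and strength value are illustrative placeholders, and this generic model does not reproduce the paper’s fine-tuned weights or its domain-specific text encoder.

    import numpy as np
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Generic public checkpoint as a stand-in for the paper's fine-tuned model.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The abnormal scan acts as the image prior.
    abnormal = Image.open("abnormal_cxr.png").convert("RGB").resize((512, 512))

    # `strength` controls how far the latents are noised (the t steps) before
    # the conditional U-Net denoises them toward the text prompt.
    counterfactual = pipe(
        prompt="healthy chest scan",
        image=abnormal,
        strength=0.5,
        guidance_scale=7.5,
    ).images[0]

    # Subtractive attribution map: the pixelwise change the model made.
    diff = np.abs(
        np.asarray(abnormal, dtype=np.int16)
        - np.asarray(counterfactual, dtype=np.int16)
    ).astype(np.uint8)
    Image.fromarray(diff).save("attribution_map.png")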
