Sci Rep. 2024 Oct 18;14(1):24427. doi: 10.1038/s41598-024-75886-0.

Simulating clinical features on chest radiographs for medical image exploration and CNN explainability using a style-based generative adversarial autoencoder


Kyle A Hasenstab et al. Sci Rep. 2024.

Abstract

Explainability of convolutional neural networks (CNNs) is integral to their adoption into radiological practice. Commonly used attribution methods localize the image areas important for a CNN's prediction but do not characterize the imaging features underlying those areas, which remains a barrier to clinical use. We therefore propose Semantic Exploration and Explainability using a Style-based Generative Adversarial Autoencoder Network (SEE-GAAN), an explainability framework that uses latent space manipulation to generate a sequence of synthetic images that semantically visualizes how clinical and CNN features manifest within medical images. Visual analysis of the changes across these sequences then facilitates the interpretation of features, thereby improving explainability. SEE-GAAN was first developed on a cohort of 26,664 chest radiographs from 15,409 patients at our institution. SEE-GAAN sequences were then generated for several clinical features and for CNN predictions of NT-pro B-type natriuretic peptide (BNPP) as a proxy for acute heart failure. Radiological interpretations indicated that SEE-GAAN sequences captured relevant changes in anatomical and pathological morphology associated with clinical and CNN predictions and clarified ambiguous areas highlighted by commonly used attribution methods. Our study demonstrates that SEE-GAAN can facilitate the understanding of clinical features for imaging biomarker exploration and can improve CNN transparency over commonly used explainability methods.
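The core mechanism, latent space manipulation, can be sketched in a few lines of code. The snippet below is a minimal illustration only: the encode and generate helpers, the latent shapes, and the step count are assumptions made for exposition, not the authors' implementation.

    import numpy as np

    def global_see_gaan_sequence(encode, generate, images_neg, images_pos, n_steps=8):
        # encode: hypothetical function mapping an image to its latent vector
        # generate: hypothetical function mapping a latent vector back to an image
        # Compute the mean latent vector of each class (e.g., AHF- and AHF+)
        w_neg = np.mean([encode(x) for x in images_neg], axis=0)
        w_pos = np.mean([encode(x) for x in images_pos], axis=0)
        frames, subtractions = [], []
        for alpha in np.linspace(0.0, 1.0, n_steps):
            # Linearly interpolate between the class-mean latents and decode
            w = (1 - alpha) * w_neg + alpha * w_pos
            frames.append(generate(w))
            # Subtraction images highlight what changed relative to the first frame
            subtractions.append(frames[-1] - frames[0])
        return frames, subtractions

Sweeping the interpolation weight from 0 to 1 produces the synthetic image sequence; the subtraction images make the accumulating changes easier to read.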

Keywords: Autoencoder; Chest radiographs; Convolutional neural network; Explainable artificial intelligence; Generative adversarial network.


Conflict of interest statement

Dr. Hasenstab, Dr. Hahn, and Mr. Chao declare no potential conflicts of interest. Dr. Hsiao receives research grant support from GE Healthcare, Bayer AG, and Bracco. He is a consultant for Canon and was a cofounder of Arterys Inc., which has been acquired by Tempus AI.

Figures

Fig. 1
Semantic Exploration and Explainability using a Style-based Generative Adversarial Autoencoder Network (SEE-GAAN), a framework for visualizing how clinical features and CNN predictions present within medical images. (a) SEE-GAAN autoencoder designed to reconstruct an image from its latent space representation. (b) SEE-GAAN latent space manipulation and synthetic image sequence generation using the SEE-GAAN autoencoder. (c) SEE-GAAN sequence of synthetic images and subtractions for global explanations of clinical and CNN features. Subtraction images visually highlight specific changes in augmented images and facilitate interpretation.
Fig. 2
SEE-GAAN local explanations of clinical and CNN features. (a) Local explanations are created by linearly shifting the latent vector w of an image in the direction of the opposing class (e.g., toward the class-mean latent w̄_AHF+) by some weighting factor α. (b) Varying the weighting factor α gradually augments the appearance of the reconstructions x̂(α) toward the opposing class (e.g., AHF+). The result is an image sequence that visualizes how a clinical or CNN feature presents on a specific patient's image. Subtraction images visually highlight specific changes in augmented images and facilitate interpretation.
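Written out, one plausible formalization of the local manipulation in (a) and (b) is the following; the notation is inferred from the caption and is not necessarily the authors' exact equation:

    \[
      w(\alpha) \;=\; w + \alpha \left( \bar{w}_{\mathrm{AHF+}} - w \right),
      \qquad
      \hat{x}(\alpha) \;=\; G\!\left( w(\alpha) \right)
    \]

where w̄_AHF+ is the mean latent vector of the opposing class and G is the generator; sweeping α from 0 upward yields the sequence of augmented reconstructions x̂(α) shown in (b).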
Fig. 3
Global SEE-GAAN sequences visualizing the overall presentation of clinical features in chest radiographs. (a) The SEE-GAAN sequence for sex shows decreased breast soft tissue density but increased chest wall density elsewhere for males, possibly reflecting increased muscle or bone density. (b) The sequence for age captures increases in the attenuation of the flanks and decreases in chest wall soft tissue density. (c) Patients with AHF exhibit an increased size of the cardiomediastinal silhouette and central pulmonary vasculature. (d) Radiographs acquired on GE devices show increased attenuation throughout the chest wall.
Fig. 4
Reconstruction and local SEE-GAAN sequences for a healthy 27-year-old female across several clinical features. (a) The reconstruction is largely identical to the native image, except for minor differences in imaging features due to the 512-dimensional latent space compression. (b)–(d) Local sequences emphasize the same imaging features as the global sequences but visualize how these features present on a specific patient's radiograph.
Fig. 5
Comparison of global SEE-GAAN sequences for (a) ground-truth BNPP and (b) BNPP-CNN predictions for CNN explainability. Increases in BNPP are associated with an increased size of the cardiomediastinal silhouette and pulmonary vasculature and decreased chest wall density. The SEE-GAAN sequence for the CNN predictions suggests the BNPP-CNN focuses heavily on the size of the cardiomediastinal silhouette and chest wall soft tissue density to make its predictions, with somewhat less emphasis on vascularity.
Fig. 6
Local SEE-GAAN sequences for CNN explainability and troubleshooting. (a) Sequence for a 53-year-old male correctly classified as having elevated BNPP (>400). (b) Sequence for a 51-year-old male correctly classified as not having elevated BNPP (<400). Both sequences are consistent with the global interpretations of the cardiomediastinal silhouette, pulmonary vasculature, and decreased chest wall density. The sequences further characterize the ambiguous regions highlighted by commonly used attribution methods. (c)–(d) CNN troubleshooting using SEE-GAAN on a (c) false positive case (83-year-old female) and (d) false negative case (51-year-old male). For both (c) and (d), we augment the patients' images until the BNPP-CNN correctly predicts their ground-truth BNPP values. We observe that the BNPP-CNN expects differences in the size of the cardiomediastinal silhouette and the attenuation of the chest wall soft tissue to make correct predictions.
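The troubleshooting procedure in (c) and (d) amounts to shifting a misclassified patient's latent vector until the CNN's prediction flips. The sketch below reuses the hypothetical encode and generate helpers from the earlier snippet, together with an assumed cnn returning a continuous BNPP estimate; the 400 threshold follows the caption, while the direction vector and step sizes are placeholders.

    def augment_until_correct(encode, generate, cnn, image, direction,
                              target_elevated, threshold=400.0,
                              step=0.1, max_alpha=3.0):
        # direction: assumed latent-space direction toward elevated BNPP
        w = encode(image)
        alpha = 0.0
        while abs(alpha) <= max_alpha:
            pred = cnn(generate(w + alpha * direction))
            if (pred > threshold) == target_elevated:
                return alpha  # augmentation needed for a correct prediction
            # Shift toward elevated BNPP for false negatives,
            # away from it for false positives
            alpha += step if target_elevated else -step
        return None  # no correction found within the search range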


