Phys Med Biol. 2024 Feb 5;69(4). doi: 10.1088/1361-6560/ad1995.

Intensive vision-guided network for radiology report generation


Fudan Zheng et al.

Abstract

Objective. Automatic radiology report generation is booming due to its huge application potential for the healthcare industry. However, existing computer vision and natural language processing approaches to this problem are limited in two respects. First, when extracting image features, most of them neglect multi-view reasoning and model only a single-view structure of medical images, such as a space view or channel view, whereas clinicians rely on multi-view imaging information for comprehensive judgment in daily clinical diagnosis. Second, when generating reports, they overlook context reasoning with multi-modal information and focus on purely textual optimization using retrieval-based methods. We aim to address these two issues by proposing a model that better simulates clinicians' perspectives and generates more accurate reports.

Approach. To address the limitation in feature extraction, we propose a globally-intensive attention (GIA) module in the medical image encoder to simulate and integrate multi-view vision perception. GIA learns three types of vision perception: depth view, space view, and pixel view. To address the problem in report generation, we explore how to involve multi-modal signals in generating precisely matched reports, i.e. how to integrate previously predicted words with region-aware visual content in next-word prediction. Specifically, we design a visual knowledge-guided decoder (VKGD), which adaptively considers how much the model needs to rely on visual information and previously predicted text when predicting the next word. Our final intensive vision-guided network framework thus comprises a GIA-guided visual encoder and the VKGD.

Main results. Experiments on two commonly used datasets, IU X-RAY and MIMIC-CXR, demonstrate the superior ability of our method compared with other state-of-the-art approaches.

Significance. Our model explores the potential of simulating clinicians' perspectives and automatically generates more accurate reports, which promotes the exploration of medical automation and intelligence.
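To make the two mechanisms concrete, below is a minimal PyTorch sketch of the ideas the abstract describes. This is an illustrative reconstruction, not the authors' code: the class names (GlobalIntensiveAttention, VisualKnowledgeGate), the choice of sigmoid gates, and the multiplicative fusion of the three views are all assumptions, and the paper's actual formulation may differ.

import torch
import torch.nn as nn

class GlobalIntensiveAttention(nn.Module):
    """Sketch of the GIA idea: weight CNN features along three views --
    depth (channel), space (H x W), and pixel (element-wise) -- then fuse."""
    def __init__(self, channels: int):
        super().__init__()
        # Depth view: one weight per channel, computed from pooled features
        # (squeeze-and-excitation style; an assumption, not the paper's exact design).
        self.depth_fc = nn.Linear(channels, channels)
        # Space view: one weight per spatial location.
        self.space_conv = nn.Conv2d(channels, 1, kernel_size=1)
        # Pixel view: an element-wise gate over the full feature map.
        self.pixel_conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) feature map from a CNN backbone.
        b, c, _, _ = feats.shape
        depth_w = torch.sigmoid(self.depth_fc(feats.mean(dim=(2, 3))))  # (B, C)
        space_w = torch.sigmoid(self.space_conv(feats))                 # (B, 1, H, W)
        pixel_w = torch.sigmoid(self.pixel_conv(feats))                 # (B, C, H, W)
        # Integrate the three views multiplicatively (assumed fusion rule).
        return feats * depth_w.view(b, c, 1, 1) * space_w * pixel_w

class VisualKnowledgeGate(nn.Module):
    """Sketch of the VKGD gating idea: at each decoding step, learn how much
    next-word prediction should rely on region-aware visual context versus
    the previously generated textual context."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, 1)

    def forward(self, text_ctx: torch.Tensor, vis_ctx: torch.Tensor) -> torch.Tensor:
        # text_ctx, vis_ctx: (B, d_model) summaries of the decoder state and
        # the attended visual regions at the current step.
        g = torch.sigmoid(self.gate(torch.cat([text_ctx, vis_ctx], dim=-1)))  # (B, 1)
        # Fused context fed to the next-word prediction head.
        return g * vis_ctx + (1 - g) * text_ctx

In this reading, the encoder applies GlobalIntensiveAttention to the backbone's feature map before region features are extracted, and the decoder applies VisualKnowledgeGate at every step so the gate g can lean on visual evidence for findings-bearing words and on language context for function words.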

Keywords: multimodal learning; radiology report generation; visual reasoning; x-ray images.
