MANet: a multi-attention network for automatic liver tumor segmentation in computed tomography (CT) imaging

Kasun Hettihewa et al. Sci Rep. 2023 Nov 16;13(1):20098.
doi: 10.1038/s41598-023-46580-4

Abstract

Automatic liver tumor segmentation is of paramount importance for liver tumor diagnosis and treatment planning. However, it remains a highly challenging task due to the heterogeneity of tumor shapes and intensity variation. Automatic liver tumor segmentation can establish a diagnostic standard, providing relevant radiological information to practitioners at all levels of expertise. Recently, deep convolutional neural networks have demonstrated superior feature extraction and learning in medical image segmentation. However, multi-layer dense feature stacks leave such models inconsistent at imitating the visual attention and awareness that radiological expertise brings to tumor recognition and segmentation. To bridge this gap in visual attention capability, attention mechanisms have been developed for better feature selection. In this paper, we propose a novel network, the Multi-Attention Network (MANet), which fuses attention mechanisms to learn to highlight important features while suppressing irrelevant ones for the tumor segmentation task. The proposed network follows U-Net as its basic architecture, with a residual mechanism implemented in the encoder. The convolutional block attention module is split into channel attention and spatial attention modules, which are implemented in the encoder and decoder of the proposed architecture. The attention mechanism of Attention U-Net is integrated to extract low-level features and combine them with high-level ones. The developed architecture is trained and evaluated on the publicly available MICCAI 2017 Liver Tumor Segmentation dataset and the 3DIRCADb dataset under various evaluation metrics. MANet demonstrated promising results compared to state-of-the-art methods with comparatively small parameter overhead.
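The abstract describes splitting a CBAM-style block into channel attention (which channels matter) and spatial attention (which locations matter). A minimal NumPy sketch of the two stages is shown below; the shapes, reduction ratio, random weights, and the omission of CBAM's 7x7 convolution are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def channel_attention(fmap, reduction=4):
    """CBAM-style channel attention: squeeze spatial dims, reweigh channels.
    fmap: (C, H, W) feature map. Weights are random here; learned in practice."""
    c = fmap.shape[0]
    avg = fmap.mean(axis=(1, 2))             # (C,) global average pooling
    mx = fmap.max(axis=(1, 2))               # (C,) global max pooling
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # shared 2-layer MLP
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    def mlp(v):
        return w2 @ np.maximum(w1 @ v, 0.0)  # ReLU hidden layer
    att = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))     # sigmoid gate per channel
    return fmap * att[:, None, None]

def spatial_attention(fmap):
    """CBAM-style spatial attention: pool over channels, reweigh locations.
    (The 7x7 convolution of the original CBAM is omitted for brevity.)"""
    avg = fmap.mean(axis=0, keepdims=True)   # (1, H, W)
    mx = fmap.max(axis=0, keepdims=True)     # (1, H, W)
    att = 1.0 / (1.0 + np.exp(-(avg + mx)))  # sigmoid gate per spatial location
    return fmap * att

# Attention preserves the feature-map shape, so the blocks drop into an
# encoder/decoder without changing tensor dimensions.
x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = spatial_attention(channel_attention(x))
print(y.shape)  # (8, 16, 16)
```

In a real network these gates sit inside the U-Net encoder and decoder blocks and their MLP/convolution weights are trained end-to-end with the rest of the model.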


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Block diagram of the proposed MANet network architecture.
Figure 2
Schematic diagram of Skip Connection Attention Gate (SCAG).
Figure 3
Schematic diagram of Channel Attention (CA).
Figure 4
Schematic diagram of Spatial Attention (SA).
Figure 5
Schematic diagram of Convolutional Block Attention Module (CBAM).
Figure 6
Dice scores of the baseline models and the proposed model on the test set over 80 training epochs. (a) Volume-based segmentation performance. (b) Slice-based segmentation performance.
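The Dice score tracked in the figure above measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch for binary masks (the smoothing term `eps` is an illustrative assumption to avoid division by zero on empty masks):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two toy 2x3 masks: 2 overlapping pixels, 3 positives in each mask.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(a, b), 3))  # 2*2 / (3+3) = 0.667
```

Volume-based evaluation applies this over whole 3D volumes, while slice-based evaluation scores each 2D slice independently, which is why the two panels in the figure can differ.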
Figure 7
Qualitative analysis of sample segmentations generated by the comparison models in the slice-based segmentation experiment. The contour image of each segmentation is shown directly below the binary segmentation mask. From left to right: the original CT image, results obtained by UNet (pink), Attention UNet (orange), UNet+ResNet18 (green), UNet+CBAM (cyan), MANet (blue), and the corresponding ground-truth mask (red). Five samples are illustrated from three perspectives: large tumors, small tumors, and poor segmentation, respectively.
Figure 8
Qualitative analysis of sample segmentations generated by the comparison models in the volume-based segmentation experiment. The contour image of each segmentation is shown directly below the binary segmentation mask. From left to right: the original CT image, results obtained by UNet (pink), Attention UNet (orange), UNet+ResNet18 (green), UNet+CBAM (cyan), MANet (blue), and the corresponding ground-truth mask (red). Five samples are illustrated from three perspectives: large tumors, small tumors, and poor segmentation, respectively.
Figure 9
Qualitative analysis of over- and non-segmentation in multiple-tumor cases, generated by the comparison models in the slice-based segmentation experiment. The contour image of each segmentation is shown directly below the binary segmentation mask. From left to right: the original CT image, results obtained by UNet (pink), Attention UNet (orange), UNet+ResNet18 (green), UNet+CBAM (cyan), MANet (blue), and the corresponding ground-truth mask (red). Five samples with tumors of varying sizes in multiple-tumor cases are illustrated.
Figure 10
Qualitative analysis of sample segmentations generated by state-of-the-art models in the slice-based segmentation experiment. The contour image of each segmentation is shown directly below the binary segmentation mask. From left to right: the original CT image, results obtained by UNet 3+ (pink), ResUNet++ (orange), SmaAt-UNet (green), TA-Net (cyan), MANet (blue), and the corresponding ground-truth mask (red). Five samples are illustrated from three perspectives: large tumors, small tumors, and multiple tumors, respectively.
Figure 11
Feature visualization before and after the Skip Connection Attention Gate (SCAG), Channel Attention (CA), Spatial Attention (SA), and Convolutional Block Attention Module (CBAM) used in the MANet architecture.
Figure 12
Visualization of corresponding feature maps of the comparison networks.
