J King Saud Univ Comput Inf Sci. 2023 May;35(5):101553. doi: 10.1016/j.jksuci.2023.04.006. Epub 2023 Apr 19.

DSGA-Net: Deeply separable gated transformer and attention strategy for medical image segmentation network


Junding Sun et al. J King Saud Univ Comput Inf Sci. 2023 May.

Abstract

To address the under-segmentation and over-segmentation of small organs in medical image segmentation, we present a novel medical image segmentation network with a Depth Separable Gated Transformer and a Three-branch Attention module (DSGA-Net). First, the model adds a Depth Separable Gated Visual Transformer (DSG-ViT) module to its Encoder to enhance (i) the contextual links among global, local, and channel features and (ii) the sensitivity to location information. Second, a Mixed Three-branch Attention (MTA) module is proposed to increase the number of features in the up-sampling process while reducing the loss of feature information when restoring the feature map to the original image size. On the Synapse, BraTS2020, and ACDC public datasets, DSGA-Net reached Dice Similarity Coefficients (DSC) of 81.24%, 85.82%, and 91.34%, respectively. Moreover, the Hausdorff Score (HD) decreased to 20.91% and 5.27% on Synapse and BraTS2020, decreases of 10.78% and 0.69% compared to the baseline TransUNet. The experimental results indicate that DSGA-Net achieves better segmentation than most advanced methods.
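The DSC figures above measure overlap between a predicted mask and the ground truth: twice the intersection divided by the sum of the two mask sizes. A minimal sketch of that computation (the set-of-pixels representation and the `eps` smoothing term are illustrative assumptions, not part of the paper's evaluation code):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient for two binary masks given as sets of
    (row, col) pixel coordinates: 2*|A∩B| / (|A| + |B|).
    eps avoids division by zero when both masks are empty."""
    intersection = len(pred & target)
    return (2.0 * intersection + eps) / (len(pred) + len(target) + eps)

# Toy example: a 4-pixel prediction vs. a 6-pixel ground truth,
# with all 4 predicted pixels inside the ground truth.
pred = {(1, 1), (1, 2), (2, 1), (2, 2)}
target = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2)}
print(round(dice_coefficient(pred, target), 2))  # 2*4 / (4+6) = 0.8
```

A DSC of 1.0 means perfect overlap; the reported 81.24% on Synapse corresponds to a mean per-organ score of about 0.81 on this scale.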

Keywords: Depth separable; Gated attention mechanism; Medical image segmentation; Transformer.


Conflict of interest statement

Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Fig. 1. Spatial feature pyramid attention mechanism module.
Fig. 2. DSG-ViT.
Fig. 3. MTA Block.
Fig. 4. Structure of the proposed DSGA-Net.
Fig. 5. Structure of the Encoder.
Fig. 6. Comparison of DSGA-Net modules segmentation.
Fig. 7. Study of the number of skip connections added to DSGA-Net.
Fig. 8. Segmentation results of different CNN-based models on the Synapse dataset.
Fig. 9. Segmentation results of different variants of CNN and ViT on the Synapse dataset.


References

    1. Bitter C, Elizondo DA, Yang Y. Natural language processing: a prolog perspective. Artif Intell Rev. 2010;33(1-2):151.
    2. Cao H, Wang Y, Chen J, Jiang D, Zhang X, Tian Q, Wang M. Swin-Unet: Unet-like pure transformer for medical image segmentation. In: Computer Vision – ECCV 2022 Workshops, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part III. Cham; 2023. pp. 205–218.
    3. Chen J, Lu Y, Yu Q, Luo X, Adeli E, Wang Y, et al. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint. 2021:arXiv:2102.04306.
    4. Chen B, Liu Y, Zhang Z, Lu G, Kong AWK. TransAttUnet: Multi-level attention-guided U-Net with transformer for medical image segmentation. arXiv preprint. 2021:arXiv:2107.05274.
    5. Cheng Z, Qu A, He X. Contour-aware semantic segmentation network with spatial attention mechanism for medical image. Vis Comput. 2022;38(3):749–762.
