Heliyon. 2024 Sep 14;10(18):e37804.
doi: 10.1016/j.heliyon.2024.e37804. eCollection 2024 Sep 30.

Dual vision Transformer-DSUNET with feature fusion for brain tumor segmentation


Mohammed Zakariah et al. Heliyon. 2024.

Abstract

Brain tumors are among the leading causes of cancer death, and early screening is the best strategy for their diagnosis and treatment. Magnetic Resonance Imaging (MRI) is widely used for brain tumor diagnosis; nevertheless, achieving high accuracy and performance remains a critical challenge for most previously reported automated medical diagnostic systems. This study introduces the Dual Vision Transformer-DSUNET model, which incorporates feature fusion techniques to differentiate brain tumors from other brain regions precisely and efficiently by leveraging multi-modal MRI data. The impetus for this work is the need to automate brain tumor segmentation in medical imaging, a critical component of diagnosis and treatment planning. The widely used BraTS 2020 brain tumor segmentation dataset is employed; it comprises multi-modal MRI images in the T1-weighted, T2-weighted, T1Gd (contrast-enhanced), and FLAIR modalities. The proposed model applies the dual vision idea to capture the heterogeneous properties of brain tumors across the imaging modalities, and feature fusion techniques are implemented to strengthen the integration of information from the different modalities, improving the accuracy and reliability of tumor segmentation. The model's performance is evaluated using the Dice Coefficient, a prevalent metric for quantifying segmentation accuracy. The experimental results are remarkable, with Dice Coefficient values of 91.47 % for enhancing tumor, 92.38 % for tumor core, and 90.88 % for edema; the overall Dice score across all classes is 91.29 %. In addition, the model achieves an accuracy of roughly 99.93 %, underscoring its robustness and efficacy in segmenting brain tumors.
These findings demonstrate the soundness of the proposed architecture and its potential to improve detection accuracy for many brain diseases.
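The Dice Coefficient reported above measures the overlap between a predicted segmentation mask and the ground-truth mask, defined as 2|A∩B| / (|A| + |B|). The following minimal NumPy sketch (not the authors' implementation; the toy masks and the `eps` smoothing term are illustrative assumptions) shows how the metric is computed for a single binary mask:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks.

    `eps` (an assumed smoothing constant) avoids division by zero
    when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks (hypothetical values, not from the paper):
# prediction covers 4 pixels, ground truth covers 4 pixels, 1 pixel overlaps.
pred = np.zeros((4, 4)); pred[0:2, 0:2] = 1
gt = np.zeros((4, 4));   gt[1:3, 1:3] = 1
print(round(float(dice_coefficient(pred, gt)), 3))  # 2*1/(4+4) = 0.25
```

In multi-class evaluation, such as the enhancing tumor, tumor core, and edema scores reported here, the same computation is applied to each class's binary mask separately and the per-class values are then aggregated.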

Keywords: Brain tumor segmentation; Brats dataset; Dice coefficient; Dual vision transformer; Feature fusion.


Conflict of interest statement

The authors declare there is no conflict of interest.

Figures

Fig. 1
Framework of the Dual Vision Transformer-DSUNET with feature fusion for brain tumor segmentation.
Fig. 2
Region-of-interest overlay for the FLAIR, T1, T2, T1ce, and mask modalities.
Fig. 3
Brain tumor slices for each modality.
Fig. 4
Histogram of the intensity distribution of the FLAIR, T1, T2, T1ce, and mask modalities.
Fig. 5
Heatmap of the FLAIR, T1, T2, T1ce, and mask modalities.
Fig. 6
Contour plot of the FLAIR, T1, T2, T1ce, and mask modalities.
Fig. 7
Comparative modalities of two patients.
Fig. 8
DSUNET architecture layer distribution.
Fig. 9
DVIT-DSUNET layer block distribution.
Fig. 10
Channel attention in DVIT.
Fig. 11
DVIT dual attention block architecture layer distribution.
Fig. 12
Spatial attention for DVIT dual attention.
Fig. 13
Feature fusion operation for DVIT-DSUNET.
Fig. 14
Proposed model layer distribution.
Fig. 15
Train/test/validation split.
Fig. 16
Accuracy and loss performance.
Fig. 17
Prediction results for different mask modalities.
Fig. 18
Predictions on FLAIR test images for different tumor types.
Fig. 19
BraTS21 sample images of different modalities.
Fig. 20
BraTS21 predicted classes for sample image 1.
Fig. 21
BraTS21 predicted image for sample 2.
Fig. 22
Future directions in brain tumor detection.
Fig. 23
Core contributions of our study.

