Pattern Recognit. 2022 Aug;128:108669.
doi: 10.1016/j.patcog.2022.108669. Epub 2022 Apr 1.

Super U-Net: a modularized generalizable architecture


Cameron Beeche et al. Pattern Recognit. 2022 Aug.

Abstract

Objective: To develop and validate a novel convolutional neural network (CNN) termed "Super U-Net" for medical image segmentation.

Methods: Super U-Net integrates a dynamic receptive field module and a fusion upsampling module into the classical U-Net architecture. The model was developed and tested to segment retinal vessels, gastrointestinal (GI) polyps, and skin lesions on several image types (i.e., fundus, endoscopic, and dermoscopic images, respectively). We also trained and tested the traditional U-Net architecture, seven U-Net variants, and two non-U-Net segmentation architectures. K-fold cross-validation was used to evaluate performance. The performance metrics included Dice similarity coefficient (DSC), accuracy, positive predictive value (PPV), and sensitivity.
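The reported metrics follow directly from per-pixel confusion counts over paired binary masks. A minimal sketch in plain Python (the function and variable names are illustrative assumptions, not the paper's code; masks are assumed already thresholded to 0/1):

```python
def confusion_counts(pred, truth):
    """Count TP/FP/FN/TN over paired, flattened binary masks (lists of 0/1)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    return tp, fp, fn, tn

def dice(pred, truth):
    """Dice similarity coefficient: 2*TP / (2*TP + FP + FN)."""
    tp, fp, fn, _ = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

def ppv(pred, truth):
    """Positive predictive value (precision): TP / (TP + FP)."""
    tp, fp, _, _ = confusion_counts(pred, truth)
    return tp / (tp + fp) if (tp + fp) else 0.0

def sensitivity(pred, truth):
    """Sensitivity (recall): TP / (TP + FN)."""
    tp, _, fn, _ = confusion_counts(pred, truth)
    return tp / (tp + fn) if (tp + fn) else 0.0
```

In a k-fold protocol like the one described, these scores would be computed per validation image and then averaged per fold.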

Results: Super U-Net achieved average DSCs of 0.808±0.021, 0.752±0.019, 0.804±0.239, and 0.877±0.135 for segmenting retinal vessels, pediatric retinal vessels, GI polyps, and skin lesions, respectively. Super U-Net consistently outperformed U-Net, the seven U-Net variants, and the two non-U-Net segmentation architectures (p < 0.05).

Conclusion: Dynamic receptive fields and fusion upsampling can significantly improve image segmentation performance.
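The abstract does not specify how the dynamic receptive field is realized. One common way to implement the idea (as in selective-kernel-style designs; this is an assumption, not the paper's method) is to compute features at several dilation rates and blend them with input-dependent softmax weights. A toy 1-D sketch under that assumption:

```python
import math

def dilated_response(signal, kernel, dilation):
    """1-D convolution of `signal` with `kernel` sampled at the given
    dilation rate (zero padding at the borders)."""
    n, k = len(signal), len(kernel)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(k):
            idx = i + (j - k // 2) * dilation  # larger dilation -> wider receptive field
            if 0 <= idx < n:
                acc += signal[idx] * kernel[j]
        out.append(acc)
    return out

def dynamic_receptive_field(signal, kernel, dilations, gate_logits):
    """Blend responses at several dilation rates with softmax weights.
    In a trained network the `gate_logits` would be predicted from the
    input; here they are passed in to keep the sketch self-contained."""
    exps = [math.exp(g) for g in gate_logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    branches = [dilated_response(signal, kernel, d) for d in dilations]
    return [sum(w * b[i] for w, b in zip(weights, branches))
            for i in range(len(signal))]
```

With strongly one-sided gate logits the module collapses to a single fixed dilation; with balanced logits it averages receptive field sizes, which is the mechanism that lets the effective receptive field adapt per input.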

Keywords: U-Net; dynamic receptive field; fusion upsampling; image segmentation.


Conflict of interest statement

Declaration of Interests The authors have no conflicts of interest to declare.

Figures

Fig. 1. Super U-Net architecture.
Fig. 2. Fusion upsampling and concatenation module.
Fig. 3. Dynamic receptive field module.
Fig. 4. Retinal vessel segmentation: Original image (A), Manual segmentation (B), U-Net (C), Res U-Net (D), Attention U-Net (E), U-Net++ (F), Attn. Res U-Net (G), R2 U-Net (H), Inception U-Net (I), Res U-Net++ (J), LinkNet (K), Super U-Net (L).
Fig. 5. Retinal vessel segmentation: Original image (A), Manual segmentation (B), U-Net (C), Res U-Net (D), Attention U-Net (E), U-Net++ (F), Attn. Res U-Net (G), R2 U-Net (H), Inception U-Net (I), Res U-Net++ (J), LinkNet (K), Super U-Net (L).
Fig. 6. Retinal vessel segmentation results for 8 validation images generated by Super U-Net (outlined in blue) compared to the manual outline (outlined in green) on the CHASE DB1 dataset when trained on a 20/8 train/test split.
Fig. 7. GI polyp segmentation results: Original image (A), Manual segmentation (B), U-Net (C), Res U-Net (D), LinkNet (E), U-Net++ (F), Attn. Res U-Net (G), R2 U-Net (H), Inception U-Net (I), Res U-Net++ (J), SegNet (K), Super U-Net (L).
Fig. 8. GI polyp segmentation results for Super U-Net (outlined in blue) compared to manual segmentation (outlined in green).
Fig. 9. Skin lesion segmentation results: Original image (A), Manual segmentation (B), U-Net (C), Res U-Net (D), Attention U-Net (E), U-Net++ (F), LinkNet (G), R2 U-Net (H), Inception U-Net (I), Res U-Net++ (J), SegNet (K), Super U-Net (L).
Fig. 10. Examples demonstrating the ability of Super U-Net to segment cancerous skin lesions. Computerized segmentations are outlined in green; manual segmentations are outlined in red.

