J Appl Clin Med Phys. 2024 Feb 22;25(3):e14296.
doi: 10.1002/acm2.14296. Online ahead of print.

Semi-supervised auto-segmentation method for pelvic organ-at-risk in magnetic resonance images based on deep-learning


Xianan Li et al. J Appl Clin Med Phys. 2024.

Abstract

Background and purpose: In radiotherapy, magnetic resonance (MR) imaging offers higher soft-tissue contrast than computed tomography (CT) and involves no ionizing radiation. However, manual annotation for deep learning-based automatic organ-at-risk (OAR) delineation is expensive, making the collection of large, high-quality annotated datasets a challenge. We therefore propose a low-cost semi-supervised OAR segmentation method that uses only a small number of annotated pelvic MR images.

Methods: We trained a deep learning-based segmentation model using 116 sets of MR images from 116 patients. The bladder, femoral heads, rectum, and small intestine were selected as OAR regions. To generate the training set, we utilized a semi-supervised method together with ensemble learning techniques, and we employed a post-processing algorithm to correct the self-annotated data. Both 2D and 3D auto-segmentation networks were evaluated. Furthermore, we evaluated the performance of the semi-supervised method with 50 labeled cases and with only 10 labeled cases.
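One way the ensemble step could combine pseudo-labels from several annotation models is per-voxel majority voting; the sketch below illustrates that idea under our own assumptions (the abstract does not specify the exact fusion rule, and `majority_vote` is a hypothetical helper name, not the authors' code).

```python
import numpy as np

def majority_vote(predictions: list[np.ndarray]) -> np.ndarray:
    """Fuse binary masks from several models: a voxel is foreground
    when more than half of the models predict foreground there."""
    stacked = np.stack(predictions, axis=0)   # shape: (n_models, *mask_shape)
    votes = stacked.sum(axis=0)               # per-voxel foreground count
    return votes > (len(predictions) / 2)

# Three toy model outputs for the same 1x3 region
p1 = np.array([[1, 1, 0]], dtype=bool)
p2 = np.array([[1, 0, 0]], dtype=bool)
p3 = np.array([[1, 1, 1]], dtype=bool)
print(majority_vote([p1, p2, p3]).astype(int))  # [[1 1 0]]
```

The same voting scheme extends unchanged to 3D volumes, since the reduction is purely element-wise.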

Results: Using only the self-annotation and post-processing methods with the 2D segmentation model, the Dice similarity coefficients (DSC) between the segmentation results and the reference masks were 0.954, 0.984, 0.908, and 0.852 for the bladder, femoral heads, rectum, and small intestine, respectively. The DSCs of the corresponding OARs were 0.871, 0.975, 0.975, 0.783, and 0.724 with the 3D segmentation network, and 0.896, 0.984, 0.890, and 0.828 with the 2D segmentation network and the common supervised method.
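The DSC values above compare a predicted binary mask against a reference mask. A minimal NumPy implementation of the metric is sketched below; the convention of returning 1.0 when both masks are empty is our assumption, not something stated in the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Example: a 4-voxel mask overlapping a 6-voxel mask in 4 voxels
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice_coefficient(a, b))  # 2*4/(4+6) = 0.8
```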

Conclusion: Our results demonstrate that a multi-OAR segmentation model can be trained using a small number of annotated samples together with additional unlabeled data. Ensemble learning and post-processing methods were employed to annotate the dataset effectively. Moreover, when dealing with anisotropy and limited sample sizes, the 2D model outperformed the 3D model.
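The post-processing step that corrects self-annotated masks (removing cavities and small fragments, as shown in Figure 5) can be sketched with standard morphological operations. The use of `scipy.ndimage` and the `min_size` threshold are assumptions for illustration; the abstract does not give the authors' exact algorithm or parameters.

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask: np.ndarray, min_size: int = 5) -> np.ndarray:
    """Fill interior cavities and drop connected components
    smaller than min_size voxels."""
    filled = ndimage.binary_fill_holes(mask)
    labels, n_components = ndimage.label(filled)
    cleaned = np.zeros_like(filled)
    for i in range(1, n_components + 1):
        component = labels == i
        if component.sum() >= min_size:
            cleaned |= component
    return cleaned

# Toy mask: a 3x3 block with a one-voxel cavity, plus an isolated voxel
m = np.zeros((7, 7), dtype=bool)
m[1:4, 1:4] = True
m[2, 2] = False   # cavity inside the block
m[5, 5] = True    # small fragment
out = clean_mask(m, min_size=5)
# cavity is filled, fragment is removed: out.sum() == 9
```

The same functions operate on 3D arrays directly, so the sketch applies to volumetric masks as well.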

Keywords: auto-segmentation; deep-learning; semi-supervised learning.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

FIGURE 1
The OAR contours in an MR image. Yellow: small intestine; green: bladder; blue: rectum; indigo: right femoral head; brown: left femoral head.
FIGURE 2
Datasets used in this manuscript. Fifty cases with labels were used for semi‐supervised self‐annotation and supervised learning training. The other cases without labels were used for semi‐supervised annotation and then revised by the doctor.
FIGURE 3
The architectures of the segmentation network: 3D U‐Net and 2D U‐Net.
FIGURE 4
Semi‐supervised learning process comprising two steps. Step 1: annotation model training. Step 2: self‐annotation data generation.
FIGURE 5
Self‐annotation data with post‐processing algorithms. Red: bladder; green: femoral heads; blue: rectum. After post‐processing, errors such as cavities and small fragments were removed.
FIGURE 6
The DSC of 3D U‐Net and 2D U‐Net for the different sample sizes of the training set, (a) bladder, (b) femoral heads, (c) rectum, (d) small intestine.
FIGURE 7
The DSC of 3D U‐Net and 2D U‐Net for the different OARs with the semi‐supervised learning method: (a) bladder, (b) femoral heads, (c) rectum, (d) small intestine.
FIGURE 8
Accuracy scores of different models by two human experts.
FIGURE 9
The accuracy score, inference time, and parameter counts of the 2D and 3D models.
