Med Phys. 2021 Jul;48(7):3860-3877.
doi: 10.1002/mp.14903. Epub 2021 May 28.

Learning fuzzy clustering for SPECT/CT segmentation via convolutional neural networks


Junyu Chen et al. Med Phys. 2021 Jul.

Abstract

Purpose: Quantitative bone single-photon emission computed tomography (QBSPECT) has the potential to provide a better quantitative assessment of bone metastasis than planar bone scintigraphy because of its ability to better quantify activity in overlapping structures. An important element of assessing the response of bone metastasis is accurate image segmentation. However, because of the properties of QBSPECT images, segmentation of anatomical regions of interest (ROIs) still relies heavily on manual delineation by experts. This work proposes a fast and robust automated segmentation method for partitioning a QBSPECT image into lesion, bone, and background.

Methods: We present a new unsupervised segmentation loss function, together with its semi-supervised and fully supervised variants, for training a convolutional neural network (ConvNet). The loss functions were developed from the objective function of the classical Fuzzy C-means (FCM) algorithm. The first proposed loss function can be computed from the input image itself without any ground-truth labels, and is thus unsupervised; the proposed supervised loss function follows the traditional paradigm of deep learning-based segmentation methods and leverages ground-truth labels during training. The last loss function is a weighted combination of the first two; the weighting parameter enables semi-supervised segmentation with a deep neural network.
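The classical FCM objective these losses build on is J = Σ_k Σ_j u_kj^q ||x_j − v_k||², minimized over memberships u and class centroids v. As a rough illustration only (plain NumPy, a hypothetical function name, and no spatial regularization, so this is a sketch of the FCM idea rather than the paper's ℒRFCM), an FCM-style unsupervised loss for network-predicted memberships might look like:

```python
import numpy as np

def fcm_loss(image, memberships, q=2.0):
    """FCM-style unsupervised loss (illustrative sketch only).

    image: (N,) array of pixel intensities.
    memberships: (C, N) array of soft class memberships in [0, 1].
    q: fuzzy exponent (q > 1).
    """
    u_q = memberships ** q                              # fuzzified memberships
    # Class centroids estimated from the memberships (the FCM update rule);
    # small epsilon guards against division by zero for empty classes.
    centroids = (u_q @ image) / (u_q.sum(axis=1) + 1e-8)
    # Membership-weighted squared distance of each pixel to each centroid
    dist2 = (image[None, :] - centroids[:, None]) ** 2
    return float((u_q * dist2).sum())
```

Because every term depends only on the input image and the predicted memberships, this quantity can serve as a training loss without any ground-truth labels, which is the sense in which the first proposed loss is unsupervised.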

Experiments and results: We conducted a comprehensive study comparing the proposed methods with ConvNets trained using supervised cross-entropy and Dice loss functions, as well as with conventional clustering methods. The Dice similarity coefficient (DSC) and several other metrics were used as figures of merit for the task of delineating lesion and bone in both simulated and clinical SPECT/CT images. We experimentally demonstrated that the proposed methods yielded good segmentation results on a clinical dataset even though training used realistic simulated images. On simulated SPECT/CT, the proposed unsupervised model was more accurate than the conventional clustering methods while reducing computation time 200-fold. For clinical QBSPECT/CT, the proposed semi-supervised ConvNet model, trained using simulated images, produced DSCs of 0.75 and 0.74 for lesion and bone segmentation in SPECT, and a DSC of 0.79 for bone segmentation in CT. These DSCs exceeded those of standard segmentation loss functions by more than 0.4 for SPECT segmentation and more than 0.07 for CT segmentation, with P-values < 0.001 from a paired t-test.
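For reference, the DSC figure of merit compares two binary masks A and B as 2|A∩B|/(|A|+|B|); a minimal sketch (hypothetical helper, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / denom if denom else 1.0
```

A DSC of 1 indicates perfect overlap with the reference delineation and 0 indicates no overlap, so differences such as the reported > 0.4 gap in SPECT segmentation are large on this scale.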

Conclusions: A ConvNet-based image segmentation method that uses novel loss functions was developed and evaluated. The method can operate in unsupervised, semi-supervised, or fully-supervised modes depending on the availability of annotated training data. The results demonstrated that the proposed method provides fast and robust lesion and bone segmentation for QBSPECT/CT. The method can potentially be applied to other medical image segmentation applications.
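The three modes differ only in which loss is minimized. Assuming a convex weighting by the parameter α mentioned with Fig. 11 (the paper's exact formulation may differ), the semi-supervised combination can be sketched as:

```python
def semi_supervised_loss(l_unsup, l_sup, alpha):
    """Weighted combination of unsupervised and supervised loss values.

    alpha = 0 recovers the purely unsupervised mode (no labels needed),
    alpha = 1 the fully supervised mode; intermediate values trade off
    the two, enabling training with partially annotated data.
    """
    return (1.0 - alpha) * l_unsup + alpha * l_sup
```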

Keywords: convolutional neural networks; fuzzy C-means; image segmentation; nuclear medicine.


Figures

Fig. 1: Overview of the proposed method.

Fig. 2: ConvNet architecture.

Fig. 3: The proposed unsupervised model applied to a 2D QBSPECT image. First image: QBSPECT image. Second image: ground-truth segmentation of bone (green) and lesion (red). Third image: QBSPECT image corrupted by Gaussian noise with σ = 0.01. Fourth through last images: maximum-membership classification results using different β.

Fig. 4: Visual comparison of segmentation results from ConvNets trained using the proposed unsupervised loss function and from other unsupervised segmentation methods. Left panel: clustering results and membership images generated using ℒMS [37] (first row), the proposed ℒRFCM (second through fourth rows), FCM [18] (second through last rows), and RFCM [28] (bottom row). Right panel: comparison of the maximum-membership classification results generated using the two unsupervised losses (second and third columns), FCM (fourth column), and RFCM (last column) on images corrupted by Gaussian noise (σ = 0.01).

Fig. 5: Visual comparison of the output membership functions generated using ℒDSC, ℒCE, and the proposed FCMlabel with different fuzzy exponents q. The red and green contours in the second column indicate the true regions for lesion and bone. The blue and yellow regions in the third column represent the segmented lesion and bone.

Fig. 6: Statistical plots of DSC and surface-DSC scores for SPECT and CT segmentation using different loss functions. The top three figures plot mean DSC with standard deviations as error bars. The bottom row shows boxplots of the surface DSC, where triangles denote means and center lines denote medians.

Fig. 7: Qualitative comparison of the proposed semi-supervised loss and other losses on a clinical SPECT scan. Yellow arrows highlight differences between the segmentations.

Fig. 8: Qualitative comparison of bone segmentation between the gold standard and ConvNets trained using different losses on a clinical CT. Yellow arrows highlight differences between the segmentations.

Fig. 9: Visualization of five filters from the second-to-last convolutional layer of the networks trained using the unsupervised and fully supervised loss functions.

Fig. 10: Left panel: a slice of a CT image (top) and its "gold-standard" delineation (bottom). Right panel: magnified segmentation results, where the first column denotes the "gold-standard" segmentation and the second column the results obtained using the proposed method.

Fig. 11: Impact of different α values in the semi-supervised losses on the performance of segmenting clinical SPECT (a) and CT (b).


References

    1. Siegel RL, Miller KD, and Jemal A, "Cancer statistics, 2020," CA: A Cancer Journal for Clinicians, vol. 70, no. 1, pp. 7–30, Jan 2020. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.3322/caac.21590 - DOI - PubMed
    2. He B et al., "Comparison of residence time estimation methods for radioimmunotherapy dosimetry and treatment planning—Monte Carlo simulation studies," IEEE Transactions on Medical Imaging, vol. 27, no. 4, pp. 521–530, 2008. - PMC - PubMed
    3. He B et al., "Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake," Medical Physics, vol. 36, no. 2, pp. 612–619, 2009. [Online]. Available: https://aapm.onlinelibrary.wiley.com/doi/abs/10.1118/1.3063156 - DOI - PMC - PubMed
    4. He B and Frey EC, "Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents," Physics in Medicine and Biology, vol. 51, no. 16, pp. 3967–3981, Aug 2006. [Online]. Available: https://iopscience.iop.org/article/10.1088/0031-9155/51/16/006 - DOI - PubMed
    5. He B, Du Y, Song X, Segars WP, and Frey EC, "A Monte Carlo and physical phantom evaluation of quantitative In-111 SPECT," Physics in Medicine and Biology, vol. 50, no. 17, pp. 4169–4185, Sep 2005. - PubMed
