Large-scale multi-center CT and MRI segmentation of pancreas with deep learning
- PMID: 39541706
- PMCID: PMC11698238
- DOI: 10.1016/j.media.2024.103382
Abstract
Automated volumetric segmentation of the pancreas on cross-sectional imaging is needed for the diagnosis and follow-up of pancreatic diseases. While CT-based pancreatic segmentation is more established, MRI-based segmentation methods are understudied, largely due to a lack of publicly available datasets, benchmarking research efforts, and domain-specific deep learning methods. In this retrospective study, we collected a large dataset (767 scans from 499 participants) of T1-weighted (T1W) and T2-weighted (T2W) abdominal MRI series from five centers between March 2004 and November 2022. We also collected CT scans of 1,350 patients from publicly available sources for benchmarking purposes. We introduced a new pancreas segmentation method, called PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation. We tested PanSegNet's accuracy in cross-modality (2,117 scans in total) and cross-center settings using the Dice coefficient and the 95th-percentile Hausdorff distance (HD95). We used Cohen's kappa for intra- and inter-rater agreement and paired t-tests for volume and Dice comparisons. PanSegNet achieved Dice coefficients of 88.3% (±7.2%, at case level) on CT, 85.0% (±7.9%) on T1W MRI, and 86.3% (±6.4%) on T2W MRI. Predicted pancreas volumes correlated highly with reference volumes, with R² of 0.91, 0.84, and 0.85 for CT, T1W, and T2W, respectively. We found moderate inter-observer agreement (0.624 and 0.638 for T1W and T2W MRI, respectively) and high intra-observer agreement. All MRI data are made available at https://osf.io/kysnj/. Our source code is available at https://github.com/NUBagciLab/PaNSegNet.
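The Dice coefficient reported above measures voxel overlap between a predicted and a reference mask. A minimal sketch of the metric on toy flattened binary masks (an illustration of the standard definition, not the paper's evaluation pipeline):

```python
def dice_coefficient(pred, ref):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 sequences."""
    inter = sum(p and r for p, r in zip(pred, ref))  # voxels labeled 1 in both masks
    total = sum(pred) + sum(ref)                     # total foreground voxels
    return 2.0 * inter / total if total else 1.0     # two empty masks agree perfectly

# toy masks: 3 overlapping foreground voxels, 4 predicted and 4 reference in total
pred = [1, 1, 1, 1, 0, 0]
ref  = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(pred, ref))  # 2*3 / (4+4) = 0.75
```

In practice the masks would be full 3-D volumes flattened per case, and HD95 would additionally be computed on the mask surfaces; this sketch only shows the overlap term.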
Keywords: CT pancreas; Generalized segmentation; MRI pancreas; Pancreas segmentation; Transformer segmentation.
Copyright © 2024 The Authors. Published by Elsevier B.V. All rights reserved.
Conflict of interest statement
Declaration of competing interest: The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Ulas Bagci reports financial support was provided by the National Institutes of Health. The remaining authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Comment in
- Bridging the Gap in Pancreatic Imaging: A New Era of AI-Powered Precision. JOP. 2025 Apr 30;25(Suppl 11):3-4. Epub 2025 Jan 2. PMID: 40979306.