Endosc Ultrasound. 2024 Nov-Dec;13(6):335-344.
doi: 10.1097/eus.0000000000000094. Epub 2024 Dec 12.

Deep learning segmentation architectures for automatic detection of pancreatic ductal adenocarcinoma in EUS-guided fine-needle biopsy samples based on whole-slide imaging

Anca Loredana Udriștoiu et al. Endosc Ultrasound. 2024 Nov-Dec.

Abstract

Background: EUS-guided fine-needle biopsy is the procedure of choice for the diagnosis of pancreatic ductal adenocarcinoma (PDAC). Nevertheless, the samples obtained are small and require expertise in pathology, and diagnosis is difficult given the scarcity of malignant cells and the pronounced desmoplastic reaction of these tumors. Deep learning architectures can provide a fast, accurate, and automated approach to PDAC image segmentation based on whole-slide imaging. Given the effectiveness of U-Net in semantic segmentation, numerous variants and improvements have emerged, including some developed specifically for whole-slide image segmentation.
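
As a rough illustration of the U-Net family referenced above, the sketch below builds a plain ("vanilla") encoder-decoder with skip connections in Keras. The input resolution, layer counts, and filter sizes are illustrative assumptions, not the configurations evaluated in the paper.

```python
# A minimal vanilla U-Net sketch (illustrative; not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3)):  # assumed tile size
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    # Encoder: downsample while doubling the filter count.
    for f in (64, 128, 256):
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 512)  # bottleneck
    # Decoder: upsample and concatenate the matching encoder skip.
    for f, skip in zip((256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skip])
        x = conv_block(x, f)
    # One sigmoid channel: per-pixel probability of PDAC tissue.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Broadly speaking, variants such as Inception U-Net swap the plain convolution blocks for multi-branch Inception-style modules while keeping the same skip-connected encoder-decoder skeleton.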

Methods: In this study, 7 U-Net architecture variants were compared on 2 datasets of EUS-guided fine-needle biopsy samples from 2 medical centers (31 and 33 whole-slide images, respectively), acquired with different parameters and tools. The variants evaluated included some that had not previously been explored for PDAC whole-slide image segmentation. Performance was assessed with the mean Dice coefficient and the mean intersection over union (IoU).
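
For reference, the two reported metrics are Dice = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B| for a ground-truth mask A and a predicted mask B. A minimal NumPy sketch (the function names are ours, not the paper's):

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    return (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

def iou(y_true, y_pred, eps=1e-7):
    """IoU (Jaccard index) = |A∩B| / |A∪B| for binary masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return (intersection + eps) / (union + eps)

# Toy 4x4 masks: intersection = 3 pixels, union = 4 pixels.
gt   = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_coefficient(gt, pred))  # 2*3 / (4+3) ≈ 0.857
print(iou(gt, pred))               # 3 / 4 = 0.75
```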

Results: The highest segmentation accuracies on both datasets were obtained with the Inception U-Net architecture. PDAC tissue was segmented with an overall average Dice coefficient of 97.82% and an IoU of 0.87 for Dataset 1, and an overall average Dice coefficient of 95.70% and an IoU of 0.79 for Dataset 2. We also tested the trained segmentation models externally by cross-evaluating them between the 2 datasets. The Inception U-Net model trained on Train Dataset 1 achieved an overall average Dice coefficient of 93.12% and an IoU of 0.74 on Test Dataset 2, and the Inception U-Net model trained on Train Dataset 2 achieved an overall average Dice coefficient of 92.09% and an IoU of 0.81 on Test Dataset 1.
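
A cross-evaluation of this kind can be sketched as follows, reusing the dice_coefficient helper above; the model and dataset names in the usage comment are hypothetical stand-ins, not identifiers from the paper.

```python
import numpy as np

def cross_evaluate(model, test_pairs, threshold=0.5):
    # Mean Dice of a trained model over (image, ground-truth mask) pairs
    # drawn from the *other* center's test set.
    scores = []
    for img, gt_mask in test_pairs:
        prob = model.predict(img[None])[0, ..., 0]  # per-pixel probability
        pred_mask = prob > threshold                # binarize the prediction
        scores.append(dice_coefficient(gt_mask, pred_mask))
    return float(np.mean(scores))

# e.g., cross_evaluate(model_trained_on_ds1, test_dataset_2)  # hypothetical names
```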

Conclusions: The findings of this study demonstrate the feasibility of using artificial intelligence for PDAC segmentation in whole-slide imaging, supported by promising accuracy scores.

Keywords: Artificial intelligence; Deep learning; EUS-guided fine-needle biopsy; Pancreatic ductal adenocarcinoma; Whole-slide imaging.


Conflict of interest statement

Adrian Săftoiu is an Associate Editor of the journal. The article was subjected to the standard procedures of the journal, with a review process independent of the editor and his research group.

Figures

Figure 1
The steps for semantic segmentation of PDAC WSIs. PDAC: pancreatic ductal adenocarcinoma; WSI: whole-slide imaging.
Figure 2
The process of WSI annotation and extraction of ROIs. A, The WSI annotated by the pathologist. B, An example of an ROI manually extracted from the annotated WSI. C, The mask generated for the extracted ROI. ROI: region of interest; WSI: whole-slide imaging.
Figure 3
Segmentation results for an image from Test Dataset 1 using the Inception U-Net model trained on Dataset 1: (A) ground truth image; (B) the original contoured image; (C) ground truth mask; (D) predicted segmentation mask; (E) the predicted segmentation mask superimposed over the original image; (F) differences between the ground truth mask and the predicted mask.
Figure 4
Segmentation results for an image from Test Dataset 2 using the Inception U-Net model trained on Dataset 2: (A) ground truth image; (B) the original contoured image; (C) ground truth mask; (D) predicted segmentation mask; (E) the predicted segmentation mask superimposed over the original image; (F) differences between the ground truth mask and the predicted mask.
Figure 5
Segmentation results for an image from Test Dataset 1 using the Vanilla U-Net model trained on Dataset 1: (A) ground truth image; (B) the original contoured image; (C) ground truth mask; (D) predicted segmentation mask; (E) the predicted segmentation mask superimposed over the original image; (F) differences between the ground truth mask and the predicted mask.
Figure 6
Segmentation results for an image from Test Dataset 2 using the Vanilla U-Net model trained on Dataset 2: (A) ground truth image; (B) the original contoured image; (C) ground truth mask; (D) predicted segmentation mask; (E) the predicted segmentation mask superimposed over the original image; (F) differences between the ground truth mask and the predicted mask.
Figure 7
Segmentation results for an image from Test Dataset 2 using the Inception U-Net model trained on Dataset 1: (A) ground truth image; (B) the original contoured image; (C) ground truth mask; (D) predicted segmentation mask; (E) the predicted segmentation mask superimposed over the original image; (F) differences between the ground truth mask and the predicted mask.
Figure 8
Segmentation results for an image from Test Dataset 1 using the Inception U-Net model trained on Dataset 2: (A) ground truth image; (B) the original contoured image; (C) ground truth mask; (D) predicted segmentation mask; (E) the predicted segmentation mask superimposed over the original image; (F) differences between the ground truth mask and the predicted mask.
