Multicenter Study

Assessing the impact of deep-learning assistance on the histopathological diagnosis of serous tubal intraepithelial carcinoma (STIC) in fallopian tubes

Joep MA Bogaerts et al. J Pathol Clin Res. 2024 Nov;10(6):e70006. doi: 10.1002/2056-4538.70006.

Abstract

In recent years, it has become clear that artificial intelligence (AI) models can achieve high accuracy in specific pathology-related tasks. An example is our deep-learning model, designed to automatically detect serous tubal intraepithelial carcinoma (STIC), the precursor lesion to high-grade serous ovarian carcinoma, found in the fallopian tube. However, the standalone performance of a model is insufficient to determine its value in the diagnostic setting. To evaluate the impact of this model on pathologists' performance, we set up a fully crossed multireader, multicase study in which 26 participants from 11 countries reviewed 100 digitized H&E-stained slides of fallopian tubes (30 cases/70 controls) with and without AI assistance, with a washout period between the sessions. We evaluated the effect of the deep-learning model on accuracy, slide review time, and (subjectively perceived) diagnostic certainty, using mixed-models analysis. With AI assistance, we found a significant increase in accuracy (p < 0.01), whereby the average sensitivity increased from 82% to 93%. Furthermore, there was a significant 44 s (32%) reduction in slide review time (p < 0.01). Participants' certainty in their own assessment also increased significantly, by 0.24 on a 10-point scale (p < 0.01). In conclusion, we found that, in a diverse group of pathologists and pathology residents, AI support resulted in a significant improvement in the accuracy of STIC diagnosis, coupled with a substantial reduction in slide review time. This model has the potential to provide meaningful support to pathologists in the diagnosis of STIC, ultimately streamlining and optimizing the overall diagnostic process.
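As a rough illustration of the mixed-models analysis mentioned above, the sketch below fits a linear mixed model with AI assistance as a fixed effect and a random intercept per reader, using statsmodels. This is not the authors' analysis code; the data file, column names, and exact model specification are assumptions made for illustration only.

# Hypothetical sketch of a mixed-models analysis of reader-study outcomes.
# The CSV file and column names (reader_id, ai_assisted, review_time_s,
# certainty) are assumptions, not taken from the study.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per reader-slide read.
df = pd.read_csv("reads_long_format.csv")

# Slide review time: fixed effect of AI assistance, random intercept per reader.
time_model = smf.mixedlm(
    "review_time_s ~ ai_assisted",
    data=df,
    groups=df["reader_id"],
).fit()
print(time_model.summary())

# The same structure can be reused for the 10-point certainty score.
# Accuracy is a binary outcome, so it would typically require a generalized
# (logistic) mixed model rather than this linear specification.
certainty_model = smf.mixedlm(
    "certainty ~ ai_assisted",
    data=df,
    groups=df["reader_id"],
).fit()
print(certainty_model.summary())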

Keywords: STIC; artificial intelligence; computational pathology; deep learning; high‐grade serous carcinoma; histopathology; serous tubal intraepithelial carcinoma.


Figures

Figure 1. Schematic display of the study design. Readers were randomly assigned to an order. Images in both orders were the same and were displayed in the same sequence, but were reviewed either with or without AI assistance. Each square represents a set of images, with the number indicating the number of images in that set. The color indicates whether AI assistance for that set was available to the reader.
Figure 2. AI assistance was visualized as a black bounding box surrounding the area(s) the algorithm had detected as aberrant (A1, B1, C1, and D1). The AI model output can also be visualized as a color-coded heatmap (A2, B2, C2, and D2), whereby absence of color indicates that the model predicts no presence of STIC/STIL, green indicates a low probability, and red indicates a high probability of STIC/STIL being present. Readers in the study did not see these heatmaps; they are presented here to show which areas the AI model bases its predictions on. In the reference standard, lesions A and B had been diagnosed as STIC; lesions C and D had been diagnosed as STIL. During the study, readers only had to indicate whether they thought STIC or STIL was present; they did not have to make a classifying diagnosis.
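As a rough illustration of the overlay described in this legend, the sketch below renders a synthetic patch-level probability map as a green-to-red heatmap and draws black bounding boxes around connected regions above a detection threshold. The synthetic probability map, the 0.5 threshold, and the use of matplotlib/SciPy are assumptions for illustration; this is not the study's visualization code.

# Hypothetical sketch: visualize a patch-level probability map as a heatmap
# (green = low, red = high, uncolored = near-zero) plus black bounding boxes
# around regions above an assumed 0.5 detection threshold.
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

rng = np.random.default_rng(0)
prob_map = rng.random((64, 64)) * 0.3                      # mostly low probability
prob_map[20:30, 35:50] = rng.uniform(0.7, 1.0, (10, 15))   # one suspicious region

# Heatmap: mask near-zero patches so they stay uncolored, then map low -> green, high -> red.
masked = np.ma.masked_less(prob_map, 0.05)
plt.imshow(masked, cmap="RdYlGn_r", vmin=0.0, vmax=1.0, alpha=0.6)

# Bounding boxes: label connected components above the threshold and outline each one.
labels, n_regions = ndimage.label(prob_map > 0.5)
for row_slice, col_slice in ndimage.find_objects(labels):
    plt.gca().add_patch(plt.Rectangle(
        (col_slice.start, row_slice.start),
        col_slice.stop - col_slice.start,
        row_slice.stop - row_slice.start,
        edgecolor="black", facecolor="none", linewidth=1.5,
    ))
plt.savefig("ai_overlay_example.png")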
Figure 3. Results from descriptive statistics. (A) Boxplot displaying the distribution of readers' average sensitivity and specificity for assisted and unassisted reads. (B) Boxplot displaying the distribution of readers' average slide review time for assisted and unassisted reads. (C) Boxplot displaying the distribution of readers' sense of certainty (on a 1–10 scale) for assisted and unassisted reads. (D) Bar chart displaying the requested immunohistochemistry in absolute numbers, under assisted and unassisted reads.
Figure 4. (A1) ×40 magnification of an H&E-stained slide, showing STIC with overexpression in the corresponding p53 stain (A2) and increased proliferative activity shown with Ki-67 (A3). (B1) ×40 magnification of an H&E-stained slide, showing STIL, with a (dubious) null pattern in the p53 stain (B2) and increased proliferative activity compared to the surrounding epithelium, but <10% (B3). (C1) ×40 magnification of an H&E-stained slide, showing STIL, with overexpression of p53 (C2) and a slight increase in proliferative activity with Ki-67 positivity <10% (C3).
Figure 5. Results of the user satisfaction survey. Readers were asked to rate their level of agreement on a 5-point scale, ranging from 'strongly disagree' to 'strongly agree'. Numbers in the colored segments reflect the percentage of votes.
