Singapore Med J. 2024 Mar 1;65(3):133-140.
doi: 10.4103/singaporemedj.SMJ-2023-187. Epub 2024 Mar 26.

Development and validation of a deep learning system for detection of small bowel pathologies in capsule endoscopy: a pilot study in a Singapore institution

Bochao Jiang et al. Singapore Med J.

Abstract

Introduction: Deep learning models can assess the quality of images and discriminate among abnormalities in small bowel capsule endoscopy (CE), reducing fatigue and the time needed for diagnosis. They serve as a decision support system, partially automating the diagnosis process by providing probability predictions for abnormalities.

Methods: We demonstrated the use of deep learning models in CE image analysis, specifically by piloting a bowel preparation model (BPM) and an abnormality detection model (ADM) to determine frame-level view quality and the presence of abnormal findings, respectively. We used convolutional neural network (CNN)-based models pretrained on large-scale open-domain data to extract spatial features of CE images, which were then fed into a dense feed-forward neural network classifier. We combined the open-source Kvasir-Capsule dataset (n = 43) with locally collected CE data (n = 29).
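A minimal sketch of the transfer-learning pattern described above: a pretrained CNN backbone used as a frozen feature extractor, with a dense feed-forward classifier head and dropout on top. The input size, layer widths, dropout rate and class count are not given in the abstract; the values below are illustrative assumptions only, not the authors' implementation.

```python
import tensorflow as tf

NUM_CLASSES = 2  # assumption: e.g., adequate vs. reduced mucosal view for a BPM


def build_classifier(num_classes: int = NUM_CLASSES) -> tf.keras.Model:
    # Pretrained ResNet50 backbone (ImageNet weights), used only to extract
    # spatial features from capsule endoscopy frames.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", pooling="avg",
        input_shape=(224, 224, 3),
    )
    backbone.trainable = False  # keep the convolutional base frozen

    # Dense feed-forward classifier "top", with dropout to limit overfitting.
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = tf.keras.applications.resnet50.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```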

Results: Model performance was compared using averaged five-fold and two-fold cross-validation for BPMs and ADMs, respectively. The best BPM, based on a pretrained ResNet50 architecture, had areas under the receiver operating characteristic and precision-recall curves of 0.969±0.008 and 0.843±0.041, respectively. The best ADM, also based on ResNet50, had top-1 and top-2 accuracies of 84.03±0.051 and 94.78±0.028, respectively. The models could process approximately 200-250 images per second and showed good discrimination on time-critical abnormalities such as bleeding.
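A hedged sketch of how fold-averaged metrics of this kind (mean and standard deviation of AUROC and AUPRC across stratified cross-validation folds) can be computed for a binary classifier. The model-fitting callable, fold count and data here are placeholders, not the study's pipeline.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold


def fold_averaged_scores(X, y, fit_and_score_proba, n_splits=5, seed=0):
    """fit_and_score_proba(X_tr, y_tr, X_va) must return P(positive class) for X_va."""
    aurocs, auprcs = [], []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr, va in skf.split(X, y):
        p = fit_and_score_proba(X[tr], y[tr], X[va])
        aurocs.append(roc_auc_score(y[va], p))       # area under the ROC curve
        auprcs.append(average_precision_score(y[va], p))  # area under the PR curve
    return {
        "AUROC": (float(np.mean(aurocs)), float(np.std(aurocs))),
        "AUPRC": (float(np.mean(auprcs)), float(np.std(auprcs))),
    }
```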

Conclusion: Our pilot models showed the potential to improve time to diagnosis in CE workflows. To our knowledge, our approach is unique to the Singapore context. The value of our work can be further evaluated in a pragmatic manner that is sensitive to existing clinician workflow and resource constraints.


Conflict of interest statement

There are no conflicts of interest.

Figures

Figure 1. Data flow diagram. Bowel preparation models (BPM) are trained on frames representing reduced mucosal view quality and otherwise. In the prediction task, abnormality detection models (ADM) used data excluding frames exhibiting polyps, stale blood, foreign bodies and anatomical findings.

Figure 2. The model architectures use pretrained deep convolutional neural network (CNN) models as feature extractors, with outputs passed into a dense feed-forward neural network classifier top. Strategies that address overfitting during model training (e.g., dropout) are used. BPM: bowel preparation model, ADM: abnormality detection model.

Figure S1. An illustration of imbalance across luminal observations through a normalised count of image frames representing each class.

Figure S2. Overview of the data processing and model training pipeline. The local data were extracted as videos and then segmented into frames.

Figure S3. Sequential training strategy.

Figure S4. A comparison of the loss curves of the best models for each type of convolutional base used.

Figure S5. A confusion matrix normalised on the total number of images per class in the validation set, using the best-performing ResNet50-based models trained on multi-site data.
