Field validation of deep learning based Point-of-Care device for early detection of oral malignant and potentially malignant disorders

Praveen Birur N et al.

Sci Rep. 2022 Aug 22;12(1):14283. doi: 10.1038/s41598-022-18249-x

Abstract

Early detection of oral cancer in low-resource settings requires a point-of-care screening tool that empowers Frontline Health Workers (FHWs). This study was conducted to validate the accuracy of a convolutional neural network (CNN)-enabled mHealth device, deployed with FHWs, for delineating suspicious oral lesions (malignant or potentially malignant disorders). The effectiveness of the device was tested in tertiary-care hospitals and low-resource settings in India. Subjects were screened either by FHWs alone or together with onsite specialists, and all subjects were also evaluated remotely by oral cancer specialists. The program screened 5025 subjects (32,128 images), of whom 95% (n = 4728) received a telediagnosis. Of the 16% (n = 752) assessed by onsite specialists, 20% (n = 102) of the cases identified as suspicious underwent biopsy. A simple CNN and a more complex CNN were integrated into the mobile phone and the cloud, respectively. Onsite specialist diagnosis showed high sensitivity (94%) against histology, and telediagnosis showed high accuracy against onsite specialist diagnosis (sensitivity: 95%; specificity: 84%). FHWs, however, identified suspicious lesions with lower sensitivity (60%) against telediagnosis. The phone-integrated CNN (MobileNet) accurately delineated lesions (n = 1416; sensitivity: 82%), and the cloud-based CNN (VGG19) had higher accuracy (sensitivity: 87%), with telediagnosis as the reference standard. These results suggest that an automated, mHealth-enabled, dual-image system is a useful triaging tool that empowers FHWs for oral cancer screening in low-resource settings.
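
To make the two-tier design concrete, the following minimal sketch (Python/Keras; not the authors' released code) shows how a phone-deployable MobileNet binary classifier of the kind described above can be assembled by standard transfer learning. The input shape, pretrained weights, and classification head are assumptions, not details taken from the paper.

    # Minimal sketch, assuming standard Keras transfer learning; the study's
    # exact architecture and training configuration are not reproduced here.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_mobile_classifier(input_shape=(224, 224, 3)):
        # ImageNet-pretrained backbone; the three input channels stand in
        # for the fused WLI/AFI representation described in Figure 2.
        base = tf.keras.applications.MobileNet(
            include_top=False, weights="imagenet", input_shape=input_shape)
        base.trainable = False  # train only the new classification head first
        model = models.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.2),
            layers.Dense(1, activation="sigmoid"),  # suspicious vs. non-suspicious
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

A model built this way can then be converted (e.g., with TensorFlow Lite) for on-device inference, consistent with the paper's integration of the simpler CNN into the phone.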


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Study CONSORT chart. A total of 5025 subjects were recruited according to the inclusion and exclusion criteria. From these 5025 subjects, 32,128 images were captured: the phone camera with a wide FOV (WLI and AFI, 7576 pairs) and the probe with a focused view (WLI and AFI, 8488 pairs); more images were recorded of the same lesions with the probe. In 297 cases the WLI was not interpretable by remote specialists, and these were excluded from further analysis. All remaining subjects (n = 4728) were directly visualized by FHWs and assessed by remote specialists. The probe images were used for training/cross-validation (5329 pairs) and testing (1416 pairs). Of the 4728 subjects, 16% (n = 752) were examined directly by an onsite specialist. Onsite specialists identified 515 cases as suspicious, of which 20% (n = 102) underwent incisional biopsy and histological evaluation. Remote specialists and FHWs identified a suspicious lesion in 2004 and 1807 subjects, respectively. FOV field of view, FHW Front-Line-Health-Worker, WLI White Light Imaging, AFI Auto-fluorescence Imaging.
Figure 2
Pipeline of artificial-intelligence-based image classification. Feature extraction from probe WLI and AFI used the green and red channels of the WLI and the normalized ratio of the AFI red/green channels (a). These feature maps were fused to feed the neural networks for real-time analysis on the phone and in the cloud. An efficient convolutional neural network (CNN) running on the smartphone (b) classified images as suspicious or non-suspicious. A second, more complex CNN based on a Bayesian deep learning framework (c) ran in the cloud and predicted uncertainty along with the diagnosis. CNN Convolutional Neural Network, WLI White Light Imaging, AFI Auto-fluorescence Imaging.
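
The channel fusion described in this caption can be illustrated with a short sketch (Python with OpenCV/NumPy; the helper name and 8-bit RGB inputs are assumptions, since the paper does not publish its exact normalization):

    import cv2
    import numpy as np

    def fuse_wli_afi(wli_path, afi_path, eps=1e-6):
        # OpenCV loads images in BGR channel order
        wli = cv2.imread(wli_path).astype(np.float32) / 255.0
        afi = cv2.imread(afi_path).astype(np.float32) / 255.0
        wli_green, wli_red = wli[..., 1], wli[..., 2]
        # Normalized red/green ratio of the autofluorescence image
        afi_ratio = afi[..., 2] / (afi[..., 1] + eps)
        afi_ratio /= afi_ratio.max() + eps  # rescale to [0, 1]
        # Stack the three feature maps into a single network input
        return np.stack([wli_green, wli_red, afi_ratio], axis=-1)
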
Figure 3
Diagnostic model in low-resource settings. The study showed that a dual-mode imaging system deployed with Front-Line-Health-Workers (FHWs) improved diagnostic efficacy in delineating suspicious oral potentially malignant and malignant lesions. In low-resource settings with poor internet connectivity, the phone-based neural network, MobileNet (accuracy: 79%), can be used. When connectivity is good (++), the more accurate cloud-based neural network (VGG19-BDL, accuracy: 87%) should be used, and difficult cases with high uncertainty values should be referred to remote specialists (accuracy: 92%) for interpretation. NN Neural Network, FHW Front-Line-Health-Workers, VGG19-BDL VGG19 Convolutional Neural Network with Bayesian prediction. Images of primary health care systems, FHWs, and the remote specialist were created with BioRender.com (https://biorender.com); the cloud-based neural network image was created with Microsoft Paint for Windows.
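
The routing rule in Figure 3 amounts to a connectivity- and uncertainty-gated dispatch. The sketch below (Python; mobilenet_predict, vgg19_bdl_predict, and refer_to_remote_specialist are hypothetical helpers, and the uncertainty threshold is a placeholder, not a value from the paper) shows the logic:

    def triage(fused_image, online, uncertainty_threshold=0.3):
        if not online:
            # Poor connectivity: fall back to the on-phone MobileNet (accuracy 79%)
            return mobilenet_predict(fused_image)
        # Good connectivity: use the cloud VGG19-BDL (accuracy 87%), which
        # also returns a Bayesian uncertainty estimate
        label, uncertainty = vgg19_bdl_predict(fused_image)
        if uncertainty > uncertainty_threshold:
            # Difficult case: refer to a remote specialist (accuracy 92%)
            return refer_to_remote_specialist(fused_image)
        return label
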
Figure 4
Study design. Participants were recruited from a tertiary cancer center, dental hospitals, and low-resource settings such as primary health centers and community camps by Front-Line-Health-Workers (FHWs) (a, b). Clinical history and images were recorded using the dual-mode imaging device (c). FHWs underwent training in using the device and in identifying oral potentially malignant disorders (OPMDs) and malignant lesions, and diagnosed a lesion as suspicious on direct visual examination if it was an OPMD or a malignant lesion. Subjects in dental and tertiary hospitals were re-examined by an onsite specialist by direct visual examination (d) and recommended for biopsy (e) when required; the onsite specialist diagnosis was compared with histology (reference standard). The images captured by FHWs were uploaded to the Microsoft Azure cloud (f) and interpreted by a remote specialist (g); the telediagnosis was compared with the onsite specialist diagnosis (reference standard). The probe images were used to develop multiple deep learning neural networks (f), which were compared against the remote specialist's diagnosis as the reference standard (g). The neural network was integrated into mobile phones (h) for testing artificial intelligence (AI)-based diagnosis, and the FHW and AI diagnoses were compared with the remote specialist's (reference standard). FHW Front-Line-Health-Worker, AI artificial intelligence. Images (a, b, d, g) were created with BioRender.com (https://biorender.com); image (c), a 3D model of the device, was created with SolidWorks 2020; image (f) was created with Microsoft Paint for Windows; images (e, h) are original study images.
Figure 5
Hub-and-spoke model for data collection. The study nodal centers enrolled FHWs from different study populations (a), and participants were recruited according to the inclusion and exclusion criteria. Demographics, clinical history of habits/lesions, and images were recorded using the dual-mode imaging device: intraoral images were captured in dual WLI/AFI mode, with a wide view of the lesion captured by the phone camera and a more focused image by the probe camera (a). Trained FHWs uploaded the case report form and images to the Microsoft Azure cloud (b). The data were stored in a NoSQL database (MongoDB) and accessed by the study coordinator and remote specialist through a graphical user interface. The remote specialist's interpretation was sent back to the phone operated by the FHW (blue line showing reverse data flow). The study coordinator checked data completeness, a specialist checked image quality, and feedback was sent to the respective nodal centers bi-weekly. Data recorded offline were stored using AES-256 encryption (b). FHW Front-Line-Health-Worker, WLI White Light Imaging, AFI Auto-fluorescence Imaging; KLE, CIHSR, and MSMC: nodal centers. Images of the FHW and study monitoring were created with BioRender.com (https://biorender.com).
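
As a concrete illustration of the offline storage step, the sketch below encrypts a case record with AES-256 before it is queued for upload (Python, using the cryptography package; the paper specifies AES-256 but not the mode of operation, so the use of GCM, the key handling, and the record format are assumptions):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # In practice the key would be provisioned and stored securely on the device
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    def encrypt_record(plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # unique nonce per record
        return nonce + aesgcm.encrypt(nonce, plaintext, None)

    def decrypt_record(blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return aesgcm.decrypt(nonce, ciphertext, None)

    # Example: a hypothetical case report encrypted before offline storage
    blob = encrypt_record(b'{"subject_id": "K-0042", "site": "KLE"}')
    assert decrypt_record(blob) == b'{"subject_id": "K-0042", "site": "KLE"}'
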
Figure 6
Dual-mode images and diagnosis. The dual-mode imaging device recorded WLI/AFI with a wide field of view (FOV) using the phone camera and a focused image using the probe. The figure shows the FHW, onsite specialist, remote specialist, CNN (MobileNet), and histology diagnoses of the cases. WLI White Light Imaging, AFI Auto-fluorescence Imaging, FHW Front-Line-Health-Worker, WDSCC well-differentiated squamous cell carcinoma, MDSCC moderately differentiated squamous cell carcinoma, CNN Convolutional Neural Network.
