BMC Oral Health. 2023 Jul 8;23(1):467.
doi: 10.1186/s12903-023-03188-4.

Is automatic cephalometric software using artificial intelligence better than orthodontist experts in landmark identification?


Huayu Ye et al. BMC Oral Health. 2023.

Abstract

Background: To evaluate techniques for the automatic digitization of cephalograms using artificial intelligence algorithms, highlighting the strengths and weaknesses of each technique and reporting the rate of success in localizing each cephalometric point.

Methods: Lateral cephalograms were digitized and traced by three calibrated senior orthodontic residents with or without artificial intelligence (AI) assistance. The same radiographs from 43 patients were uploaded to the AI-based machine learning programs MyOrthoX, Angelalign, and Digident. ImageJ was used to extract x- and y-coordinates for 32 cephalometric points: 11 soft tissue landmarks and 21 hard tissue landmarks. The mean radial error (MRE) was assessed against thresholds of 1.0 mm, 1.5 mm, and 2.0 mm to compare the successful detection rate (SDR), as sketched in the example below. One-way ANOVA at a significance level of P < .05 was used to compare MRE and SDR. SPSS (IBM, version 27.0) and Prism (GraphPad, version 8.0.2) were used for the data analysis.
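
For illustration only, the sketch below shows one way the MRE and SDR described above can be computed from paired landmark coordinates. This is a minimal example assuming hypothetical NumPy arrays of predicted and reference positions in millimetres; it is not the authors' actual analysis pipeline.

    # Hedged sketch (not the authors' code): mean radial error (MRE) and
    # successful detection rate (SDR) from paired landmark coordinates.
    # `predicted` and `reference` are hypothetical arrays of shape
    # (n_images, n_landmarks, 2) holding x/y positions in millimetres.
    import numpy as np

    def mre_and_sdr(predicted, reference, thresholds=(1.0, 1.5, 2.0)):
        """Return the mean radial error and the SDR at each threshold."""
        # Euclidean (radial) distance between each predicted and reference point
        radial_error = np.linalg.norm(predicted - reference, axis=-1)
        mre = float(radial_error.mean())
        # SDR: share of landmarks detected within each precision threshold
        sdr = {t: float((radial_error <= t).mean()) for t in thresholds}
        return mre, sdr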

Results: All three programs achieved detection rates greater than 85% at the 2 mm precision threshold, which is the clinically acceptable range. The Angelalign group achieved a detection rate above 78.08% even at the 1.0 mm threshold. A marked difference in time was found between the AI-assisted groups and the manual group, along with heterogeneity among the techniques in detecting the same landmark.

Conclusions: AI assistance may increase efficiency without compromising the accuracy of cephalometric tracings in routine clinical practice and research settings.

Keywords: Artificial intelligence; Automatic digitization; Cephalometric tracings.


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. The 32 anatomical landmarks used in this challenge. All landmarks are defined and explained in Table 1.

Fig. 2. Cephalometric tracing of anatomical structures in the three AI-assisted programs. Sample lateral cephalometric radiograph with a 30-mm ruler uploaded to the MyOrthoX, Angelalign, and Digident programs.

Fig. 3. Mean radial error (MRE) for each landmark measured by the three AI-assisted programs.

Fig. 4. Mean radial error (MRE) for all landmarks measured by the three AI-assisted programs. One-way ANOVA was applied to compare the MRE among the groups.

Fig. 5. Comparison among the three AI-assisted groups in terms of the successful detection rate (SDR). One-way ANOVA was applied to compare the average SDR among the groups at the 1.0, 1.5, and 2.0 mm thresholds. Statistical significance was set at a p-value < 0.05.

Fig. 6. Mean time needed for landmark detection in the manual group, AI group, and AI-assisted group. A paired t-test was conducted to compare the average time for cephalometric analysis between the AI-assisted groups (MyOrthoX-assisted, Angelalign-assisted, and Digident-assisted) and the manual group. Statistical significance was set at a p-value < 0.05. A sketch of these comparisons follows below.
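
As a rough illustration of the statistical comparisons described in Figs. 4-6, the sketch below uses SciPy in place of SPSS and Prism. The arrays are hypothetical placeholders, not the study's data, and the variable names are assumptions made for this example.

    # Hedged sketch (not the authors' code) of a one-way ANOVA across the three
    # AI-assisted programs and a paired t-test of AI-assisted vs. manual timing.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    errors_myorthox = rng.normal(1.2, 0.4, 32)    # hypothetical MRE per landmark (mm)
    errors_angelalign = rng.normal(1.0, 0.3, 32)
    errors_digident = rng.normal(1.3, 0.5, 32)
    time_manual = rng.normal(600, 60, 43)         # hypothetical seconds per case
    time_ai_assisted = rng.normal(300, 40, 43)

    # One-way ANOVA comparing errors across the three AI-assisted programs
    f_stat, p_anova = stats.f_oneway(errors_myorthox, errors_angelalign, errors_digident)

    # Paired t-test comparing tracing time on the same cases: AI-assisted vs. manual
    t_stat, p_paired = stats.ttest_rel(time_ai_assisted, time_manual)

    print(f"ANOVA p = {p_anova:.3f}, paired t-test p = {p_paired:.3f} (alpha = 0.05)")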

