An increasing number of convolutional neural networks for fracture recognition and classification in orthopaedics: are these externally validated and ready for clinical application?

Luisa Oliveira E Carmo et al. Bone Jt Open. 2021 Oct;2(10):879-885. doi: 10.1302/2633-1462.210.BJO-2021-0133.

Abstract

Aims: The number of convolutional neural networks (CNNs) available for fracture detection and classification is rapidly increasing. External validation of a CNN on a temporally separate (separated by time) or geographically separate (separated by location) dataset is crucial to assess the generalizability of the CNN before it is applied to clinical practice in other institutions. We aimed to answer the following questions: are current CNNs for fracture recognition externally valid; which methods are applied for external validation (EV); and what are the reported performances on the EV sets compared to the internal validation (IV) sets of these CNNs?

Methods: The PubMed and Embase databases were systematically searched from January 2010 to October 2020 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The type of EV, the characteristics of the external dataset, and the diagnostic performance on the IV and EV datasets were collected and compared. Quality assessment was conducted using a seven-item checklist based on a modified Methodological Index for Non-Randomized Studies (MINORS) instrument.

Results: Out of 1,349 studies, 36 reported development of a CNN for fracture detection and/or classification. Of these, only four (11%) reported a form of EV. One study used temporal EV, one conducted both temporal and geographical EV, and two used geographical EV. Comparing performance on the IV set versus the EV set, the studies reported AUCs of 0.967 (IV) versus 0.975 (EV), 0.976 (IV) versus 0.985 to 0.992 (EV), and 0.93 to 0.96 (IV) versus 0.80 to 0.89 (EV), and F1-scores of 0.856 to 0.863 (IV) versus 0.757 to 0.840 (EV).
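
For concreteness, an IV-versus-EV comparison like the one above can be sketched in a few lines. This is a minimal illustration, not the code of any reviewed study: it assumes scikit-learn, a binary fracture/no-fracture label, and synthetic features standing in for real radiograph data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, f1_score

    rng = np.random.default_rng(0)

    def make_cohort(n, shift=0.0):
        # Synthetic cohort; `shift` mimics a distribution change between
        # institutions (case mix, equipment, labelling habits).
        X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
        return X, y

    X_train, y_train = make_cohort(1000)      # development data
    X_iv, y_iv = make_cohort(300)             # internal validation: same source
    X_ev, y_ev = make_cohort(300, shift=0.7)  # external validation: shifted source

    clf = LogisticRegression().fit(X_train, y_train)

    def report(name, X, y, threshold=0.5):
        p = clf.predict_proba(X)[:, 1]        # predicted P(fracture)
        print(f"{name}: AUC={roc_auc_score(y, p):.3f}, "
              f"F1={f1_score(y, p >= threshold):.3f}")

    report("IV", X_iv, y_iv)  # held-out data from the same synthetic source
    report("EV", X_ev, y_ev)  # data drawn under a deliberate shift

A gap between the IV and EV numbers, such as the drop from an AUC of 0.93 to 0.96 down to 0.80 to 0.89 in one of the included studies, is exactly the generalizability signal that external validation is meant to expose.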

Conclusion: Externally validated CNNs for fracture recognition in orthopaedic trauma are still scarce. This greatly limits the potential for transferring these CNNs from the developing institution to another hospital while achieving similar diagnostic performance. We recommend the use of geographical EV, together with reporting statements such as the Consolidated Standards of Reporting Trials-Artificial Intelligence (CONSORT-AI), the Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence (SPIRIT-AI), and the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis-Machine Learning (TRIPOD-ML), to critically appraise the performance of CNNs, improve methodological rigor and the quality of future models, and facilitate eventual implementation in clinical practice. Cite this article: Bone Jt Open 2021;2(10):879-885.

Keywords: Artificial intelligence; CT scans; Convolutional neural networks; Deep learning; External validation; Machine learning; Prognosis; cadaveric studies; distal radius fractures; elbows; hip; orthopaedic surgeons; orthopaedic trauma; radiographs; variances.


Figures

Fig. 1
Overview of the common methodology used to develop and evaluate convolutional neural networks. Development starts with a database that is split into a training set (used for development) and an internal validation set (used for evaluating performance). Subsequently, external validation can be performed to assess the generalizability of the model, using either data from the same hospital during a different time period (temporal) or, ideally, data from another hospital (geographical); a code sketch of both split designs follows the figure captions.
Fig. 2
This Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart describes the inclusion, exclusion, and selection of the articles yielded by our search.
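
As an illustration of the two external-validation designs described in Figure 1, the data splits might be organized as below. This is a hypothetical sketch: the DataFrame and its acquired_on and hospital columns are assumptions for illustration, not taken from any of the reviewed studies.

    import pandas as pd

    def temporal_split(df, cutoff):
        # Develop on studies acquired before `cutoff`; externally
        # validate on studies acquired afterwards.
        dev = df[df["acquired_on"] < cutoff]
        ev = df[df["acquired_on"] >= cutoff]
        return dev, ev

    def geographical_split(df, external_hospital):
        # Develop on every other site; externally validate on a
        # hospital whose data the model has never seen.
        dev = df[df["hospital"] != external_hospital]
        ev = df[df["hospital"] == external_hospital]
        return dev, ev

    # Toy usage with made-up records:
    df = pd.DataFrame({
        "acquired_on": pd.to_datetime(["2018-03-01", "2019-07-15", "2020-02-10"]),
        "hospital": ["A", "A", "B"],
    })
    dev, ev = temporal_split(df, pd.Timestamp("2019-01-01"))
    # The development set is then split again into training and internal
    # validation subsets before any external evaluation takes place.

Geographical separation is the stronger test because a different hospital differs in patients, equipment, and annotation practice all at once, whereas a later time window from the same site shares most of those factors.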
