A practical guide to the implementation of AI in orthopaedic research - part 1: opportunities in clinical application and overcoming existing challenges
- PMID: 37968370
- PMCID: PMC10651597
- DOI: 10.1186/s40634-023-00683-z
Abstract
Artificial intelligence (AI) has the potential to transform medical research by improving disease diagnosis, clinical decision-making, and outcome prediction. Despite the rapid adoption of AI and machine learning (ML) in other domains and industries, deployment in medical research and clinical practice poses several challenges due to the inherent characteristics and barriers of the healthcare sector. Therefore, researchers aiming to perform AI-intensive studies require a fundamental understanding of the key concepts, biases, and clinical safety concerns associated with the use of AI. Through the analysis of large, multimodal datasets, AI has the potential to revolutionize orthopaedic research, yielding new insights into the optimal diagnosis and management of patients affected by musculoskeletal injury and disease. This article is the first in a series introducing fundamental concepts and best practices to guide healthcare professionals and researchers interested in performing AI-intensive orthopaedic research studies. The vast potential of AI in orthopaedics is illustrated through examples involving disease- or injury-specific outcome prediction, medical image analysis, clinical decision support systems, and digital twin technology. Furthermore, it is essential to address the role of human involvement in training unbiased, generalizable AI models, their explainability in high-risk clinical settings, and the implementation of expert oversight and clinical safety measures in case of failure. In conclusion, the opportunities and challenges of AI in medicine are presented to ensure the safe and ethical deployment of AI models in orthopaedic research and clinical application. Level of evidence IV.
Keywords: AI; Artificial intelligence; Decision support systems; Digital twins; Ethics; Explainability; Generalizability; Large language models; Learning series; ML; Machine learning; Orthopaedics; Provenance; Research methods.
© 2023. The Author(s).
Conflict of interest statement
ASH is an industrial PhD student at Medfield Diagnostics AB, funded by the Wallenberg AI, Autonomous Systems and Software Program (WASP). MTH is a consultant for Medacta, Symbios and DePuy Synthes. KS is a member of the board of directors for Getinge AB (publ). RF is Chief Technology Officer and founder of Accelerandium AB, a software consultancy company.