Ethical framework for responsible foundational models in medical imaging
- PMID: 40458639
- PMCID: PMC12128638
- DOI: 10.3389/fmed.2025.1544501
Abstract
The emergence of foundational models represents a paradigm shift in medical imaging, offering extraordinary capabilities in disease detection, diagnosis, and treatment planning. These large-scale artificial intelligence systems, trained on extensive multimodal and multi-center datasets, demonstrate remarkable versatility across diverse medical applications. However, their integration into clinical practice presents complex ethical challenges that extend beyond technical performance metrics. This study examines the critical ethical considerations at the intersection of healthcare and artificial intelligence. Patient data privacy remains a fundamental concern, particularly given these models' requirement for extensive training data and their potential to inadvertently memorize sensitive information. Algorithmic bias poses a significant challenge in healthcare, as historical disparities in medical data collection may perpetuate or exacerbate existing healthcare inequities across demographic groups. The complexity of foundational models presents significant challenges regarding transparency and explainability in medical decision-making. We propose a comprehensive ethical framework that addresses these challenges while promoting responsible innovation. This framework emphasizes robust privacy safeguards, systematic bias detection and mitigation strategies, and mechanisms for maintaining meaningful human oversight. By establishing clear guidelines for development and deployment, we aim to harness the transformative potential of foundational models while preserving the fundamental principles of medical ethics and patient-centered care.
Keywords: ethical AI; fairness; foundational models; medical imaging; responsible AI.
Copyright © 2025 Jha, Durak, Das, Sanjotra, Susladkar, Sarkar, Rauniyar, Kumar Tomar, Peng, Li, Biswas, Aktas, Keles, Antalek, Zhang, Wang, Zhu, Pan, Seyithanoglu, Medetalibeyoglu, Sharma, Cicek, Rahsepar, Hendrix, Cetin, Aydogan, Abazeed, Miller, Keswani, Savas, Jambawalikar, Ladner, Borhani, Spampinato, Wallace and Bagci.
Conflict of interest statement
UB acknowledges the following COI: Ther-AI LLC. MW acknowledges the following COIs: Boston Scientific, ClearNote Health, Cosmo Pharmaceuticals, Endostart, Endiatix, Fujifilm, Medtronic, Surgical Automations, Ohelio Ltd, Venn Bioscience, Virgo Inc., Surgical Automation, and Microtek. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.