Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop
- PMID: 40455564
- PMCID: PMC12171647
- DOI: 10.2196/71236
Abstract
Trustworthiness has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles, such as fairness, robustness, and explainability, as essential components to achieve trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to foster AI systems that are both ethically sound and socially responsible. This viewpoint article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI within the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in health care. It focuses on real-world challenges in medicine that are often underrepresented in theoretical discussions to propose a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical approaches related to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI life cycle emerged as crucial, supporting a human- and multicentered framework for trustworthy AI implementation. Findings emphasize that trust in medical AI depends on providing meaningful, user-oriented information and balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in the tool's functionality, specifically that it performs as expected. 
Trustworthiness was shown to be not a feature but rather a relational process, involving humans, their expertise, and the broader social or institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated based on reliability and credibility, yet trust fundamentally relies on human connections. The article underscores the development of AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated health care. Findings highlight that building AI trustworthiness in health care requires a human-centered, multistakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI life cycle, from conception and technical development through clinical validation to real-world deployment.
Keywords: artificial intelligence; ethics of AI; human-centered AI; interdisciplinarity; medicine; stakeholder engagement; trust; trustworthy AI.
©Melanie Goisauf, Mónica Cano Abadía, Kaya Akyüz, Maciej Bobowicz, Alena Buyx, Ilaria Colussi, Marie-Christine Fritzsche, Karim Lekadir, Pekka Marttinen, Michaela Th Mayrhofer, Janos Meszaros. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 02.06.2025.
Conflict of interest statement
Conflicts of Interest: None declared.