J Med Internet Res. 2025 Jun 2;27:e71236. doi: 10.2196/71236.

Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop


Melanie Goisauf et al. J Med Internet Res.

Abstract

Trustworthiness has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles, such as fairness, robustness, and explainability, as essential components of trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to fostering AI systems that are both ethically sound and socially responsible. This viewpoint article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI in the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in health care. It focuses on real-world challenges in medicine that are often underrepresented in theoretical discussions in order to propose a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical approaches to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI life cycle emerged as crucial, supporting a human- and multicentered framework for trustworthy AI implementation. Findings emphasize that trust in medical AI depends on providing meaningful, user-oriented information and on balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in a tool's functionality, specifically that it performs as expected. Trustworthiness was shown to be not a feature but a relational process involving humans, their expertise, and the broader social and institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated on reliability and credibility, yet trust fundamentally rests on human connections. The article underscores the need for AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated health care. Findings highlight that building AI trustworthiness in health care requires a human-centered, multistakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI life cycle, from conception and technical development to clinical validation and real-world deployment.

Keywords: artificial intelligence; ethics of AI; human-centered AI; interdisciplinarity; medicine; stakeholder engagement; trust; trustworthy AI.


Conflict of interest statement

Conflicts of Interest: None declared.

