AI Model Passport: Data and system traceability framework for transparent AI in health
- PMID: 41113334
- PMCID: PMC12528916
- DOI: 10.1016/j.csbj.2025.09.041
Abstract
The increasing integration of Artificial Intelligence (AI) into health and biomedical systems necessitates robust frameworks for transparency, accountability, and ethical compliance. Existing frameworks often rely on human-readable, manual documentation, which limits scalability, comparability, and machine interpretability across projects and platforms. They also fail to provide a unique, verifiable identity for AI models that would ensure their provenance and authenticity across systems and use cases, limiting reproducibility and stakeholder trust. This paper introduces the concept of the AI Model Passport, a structured and standardized documentation framework that acts as a digital identity and verification tool for AI models. It captures essential metadata to uniquely identify, verify, trace, and monitor AI models across their lifecycle, from data acquisition and preprocessing to model design, development, and deployment. In addition, an implementation of this framework is presented through AIPassport, an MLOps tool developed within the ProCAncer-I EU project for medical imaging applications. AIPassport automates metadata collection, ensures proper versioning, decouples results from source scripts, and integrates with various development environments. Its effectiveness is showcased through a lesion segmentation use case using data from the ProCAncer-I dataset, illustrating how the AI Model Passport enhances transparency, reproducibility, and regulatory readiness while reducing manual effort. This approach aims to set a new standard for fostering trust and accountability in AI-driven healthcare solutions, aspiring to serve as the basis for developing transparent and regulation-compliant AI systems across domains.
Keywords: AI; F.U.T.U.R.E. AI; FAIR; MLOps; Medical Imaging; Ontologies; Reproducibility; Traceability; Transparency.
© 2025 Published by Elsevier B.V. on behalf of Research Network of Computational and Structural Biotechnology.
Conflict of interest statement
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.