Review
Front Artif Intell. 2022 May 30;5:879603. doi: 10.3389/frai.2022.879603. eCollection 2022.

Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations


Anastasiya Kiseleva et al. Front Artif Intell.

Abstract

The lack of transparency is one of artificial intelligence (AI)'s fundamental challenges, but the concept of transparency might be even more opaque than AI itself. Researchers in different fields who attempt to provide solutions to improve AI's transparency articulate different but neighboring concepts that include, besides transparency, explainability and interpretability. Yet there is no common taxonomy either within one field (such as data science) or between different fields (law and data science). In certain areas like healthcare, transparency requirements are crucial since the decisions directly affect people's lives. In this paper, we suggest an interdisciplinary vision of how to tackle the issue of AI's transparency in healthcare, and we propose a single point of reference on transparency and related concepts for both legal scholars and data scientists. Based on an analysis of European Union (EU) legislation and the computer science literature, we submit that transparency should be considered a "way of thinking" and an umbrella concept characterizing the process of AI's development and use. Transparency should be achieved through a set of measures such as interpretability and explainability, communication, auditability, traceability, information provision, record-keeping, data governance and management, and documentation. This approach to transparency is general in nature, but transparency measures must always be contextualized. By analyzing transparency in the healthcare context, we submit that it should be viewed as a system of accountabilities of the involved subjects (AI developers, healthcare professionals, and patients) distributed across different layers (the insider, internal, and external layers, respectively). The transparency-related accountabilities must be built into the existing accountability picture, which justifies the need to investigate the relevant legal frameworks. These frameworks correspond to different layers of the transparency system: the requirement of informed medical consent corresponds to the external layer of transparency, and the Medical Devices Framework is relevant to the insider and internal layers. We investigate these frameworks to inform AI developers of what is already expected from them with regard to transparency. We also identify gaps in the existing legislative frameworks concerning AI's transparency in healthcare and suggest solutions to fill them.

Keywords: accountability; artificial intelligence (AI); explainability; healthcare; informed medical consent; interpretability; medical devices; transparency.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1. Activities associated in the EU legislation (listed in Annex I) with transparency measures.
Figure 2. XAI Word Cloud created by Adadi and Berrada (2018).
Figure 3. Multilayered System of AI's Transparency in Healthcare.

References

    1. Adadi A., Berrada M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160. doi: 10.1109/ACCESS.2018.2870052
    2. AI Central (2021). Data Science Institute, American College of Radiology, Database of the FDA-Approved AI-Based Medical Devices. Available online at: https://aicentral.acrdsi.org
    3. AI HLEG (2019). Ethics Guidelines for Trustworthy AI. [ebook] Brussels: European Commission.
    4. Astromské K., Peičius E., Astromskis P. (2021). Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI and Society 36, 509–520. doi: 10.1007/s00146-020-01008-9
    5. Belle V., Papantonis I. (2021). Principles and practice of explainable machine learning. Front. Big Data 4:688969. doi: 10.3389/fdata.2021.688969