Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI

Ramón Alvarado. Bioethics. 2022 Feb;36(2):121-133. doi: 10.1111/bioe.12959. Epub 2021 Oct 18.

Abstract

The sudden rise in the ability of machine learning methodology, such as deep neural networks, to identify and predict with great accuracy instances of malignant cell growth from radiological images has led prominent developers of this technology, such as Geoffrey Hinton, to hold the view that "[…] we should stop training radiologists." Similar views exist in other contexts regarding the replacement of humans with artificial intelligence (AI) technologies. The assumption behind these views is that deep neural networks are better than human radiologists in that they are more accurate, less costly, and have more predictive power than their human counterparts. In this paper, I argue that these considerations, even if true, are simply inadequate as reasons to allocate the kind of trust suggested by Hinton and others to these sorts of artifacts. In particular, I show that if the same considerations were true of something other than an AI device, say a pigeon, we would not have sufficient reason to trust it in the way suggested of deep neural networks in a medical setting. If this is the case, then these considerations are also insufficient grounds to trust AI enough to replace radiologists. Furthermore, I argue that the reliability of AI methodologies such as deep neural networks, which is at the center of this argument, has not yet been established, and establishing it faces fundamental challenges. Because of these challenges, it is not possible to ascribe to such systems the level of reliability expected of a medical device. So, not only are the reasons cited in favor of deploying AI technologies in medical settings insufficient even if they are true, but knowing whether they are true faces non-trivial epistemic challenges. If this is so, then we have no good reasons to advocate replacing radiologists with AI methodologies such as deep neural networks.

Keywords: deep learning; epistemic opacity; error; medical AI; neural networks; radiology.
