Should we be afraid of medical AI?
- PMID: 31227547
- DOI: 10.1136/medethics-2018-105281
Abstract
I analyse an argument, recently put forward by Rosalind McDougall in the Journal of Medical Ethics, according to which medical artificial intelligence (AI) represents a threat to patient autonomy. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: (1) it confuses AI with machine learning; (2) it misses machine learning's potential for personalised medicine through big data; (3) it fails to distinguish between evidence-based advice and decision-making within healthcare. I conclude that how much and which tasks we should delegate to machine learning and other technologies within healthcare and beyond is indeed a crucial question of our time, but that in order to answer it, we must be careful to analyse and properly distinguish between the different systems and the different delegated tasks.
Keywords: ethics
© Author(s) (or their employer(s)) 2019. No commercial re-use. See rights and permissions. Published by BMJ.
Conflict of interest statement
Competing interests: None declared.
Comment in
- No we shouldn't be afraid of medical AI; it involves risks and opportunities. J Med Ethics. 2019 Aug;45(8):559. doi: 10.1136/medethics-2019-105572. Epub 2019 Jun 21. PMID: 31227546
Comment on
- Computer knows best? The need for value-flexibility in medical AI. J Med Ethics. 2019 Mar;45(3):156-160. doi: 10.1136/medethics-2018-105118. Epub 2018 Nov 22. PMID: 30467198