The doctor will polygraph you now
- PMID: 39759269
- PMCID: PMC11698301
- DOI: 10.1038/s44401-024-00001-4
Abstract
Artificial intelligence (AI) methods have been proposed for the prediction of social behaviors that could be reasonably understood from patient-reported information. This raises novel ethical concerns about respect, privacy, and control over patient data. Ethical concerns surrounding clinical AI systems for social behavior verification can be divided into two main categories: (1) the potential for inaccuracies/biases within such systems, and (2) the impact on trust in patient-provider relationships with the introduction of automated AI systems for "fact-checking", particularly in cases where the data/models may contradict the patient. Additionally, this report simulated the misuse of a verification system using patient voice samples and identified a potential LLM bias against patient-reported information in favor of multi-dimensional data and the outputs of other AI methods (i.e., "AI self-trust"). Finally, recommendations were presented for mitigating the risk that AI verification methods will cause harm to patients or undermine the purpose of the healthcare system.
Keywords: Ethics; Machine learning; Science, technology and society.
© The Author(s) 2024.
Conflict of interest statement
Competing interests: The authors declare no competing interests.