A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable
- PMID: 38226965
- PMCID: PMC11248995
- DOI: 10.1080/15265161.2023.2296402
Abstract
When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient's (former) autonomy since it draws on the 'wrong' kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently 'fine-tuned' on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient's preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient's own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.
Keywords: Advance directives; Patient Preference Predictor; algorithm; generative AI; large language models; substituted judgment.
Conflict of interest statement
Julian Savulescu is a Partner Investigator on an Australian Research Council grant LP190100841 which involves industry partnership from Illumina. He does not personally receive any funds from Illumina. JS is a Bioethics Committee consultant for Bayer.
JS received a fee for speaking as a panellist on a podcast sponsored by MyProtein (August 2020).
JS is an Advisory Panel member for the Hevolution Foundation (2022-).
Comment in
- AUTOGEN and the Ethics of Co-Creation with Personalized LLMs: Reply to the Commentaries. Am J Bioeth. 2024 Mar;24(3):W6-W14. doi: 10.1080/15265161.2024.2308175. PMID: 38346141.
- Personal but Necessarily Predictive? Developing a Bioethics Research Agenda for AI-Enabled Decision-Making Tools. Am J Bioeth. 2024 Jul;24(7):29-31. doi: 10.1080/15265161.2024.2353031. PMID: 38913464.
- The Problematic "Existence" of Digital Twins: Human Intention and Moral Decision. Am J Bioeth. 2024 Jul;24(7):45-47. doi: 10.1080/15265161.2024.2353831. PMID: 38913466.
- Potentially Perilous Preference Parrots: Why Digital Twins Do Not Respect Patient Autonomy. Am J Bioeth. 2024 Jul;24(7):43-45. doi: 10.1080/15265161.2024.2353810. PMID: 38913469.
- Respect for Autonomy Requires a Mental Model. Am J Bioeth. 2024 Jul;24(7):53-55. doi: 10.1080/15265161.2024.2353019. PMID: 38913470.
- The Personalized Patient Preference Predictor: A Harmful and Misleading Solution Losing Sight of the Problem It Claims to Solve. Am J Bioeth. 2024 Jul;24(7):41-42. doi: 10.1080/15265161.2024.2353816. PMID: 38913471.
- As an AI Model, I Cannot Replace Human Dialogue Processes. However, I Can Assist You in Identifying Potential Alternatives. Am J Bioeth. 2024 Jul;24(7):58-60. doi: 10.1080/15265161.2024.2353819. PMID: 38913474.
- Weighing Patient Preferences: Lessons for a Patient Preferences Predictor. Am J Bioeth. 2024 Jul;24(7):38-40. doi: 10.1080/15265161.2024.2353023. PMID: 38913475.
- Social Coercion, Patient Preferences, and AI-Substituted Judgments. Am J Bioeth. 2024 Jul;24(7):60-62. doi: 10.1080/15265161.2024.2353820. PMID: 38913476.
- Artificial Intelligence, Digital Self, and the "Best Interests" Problem. Am J Bioeth. 2024 Jul;24(7):27-29. doi: 10.1080/15265161.2024.2353028. PMID: 38913477.
- Machine Learning Algorithms in the Personalized Modeling of Incapacitated Patients' Decision Making: Is It a Viable Concept? Am J Bioeth. 2024 Jul;24(7):51-53. doi: 10.1080/15265161.2024.2353026. PMID: 38913480.
- Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable. Am J Bioeth. 2024 Jul;24(7):62-65. doi: 10.1080/15265161.2024.2353821. PMID: 38913484.
- The Patient Preference Predictor: A Timely Boost for Personalized Medicine. Am J Bioeth. 2024 Jul;24(7):35-38. doi: 10.1080/15265161.2024.2353029. PMID: 38913485.
- Parrots at the Bedside: Making Surrogate Decisions with Stochastic Strangers. Am J Bioeth. 2024 Jul;24(7):32-34. doi: 10.1080/15265161.2024.2353803. PMID: 38913490.