This is a preprint.
Simulated Misuse of Large Language Models and Clinical Credit Systems
- PMID: 38645190
- PMCID: PMC11030492
- DOI: 10.1101/2024.04.10.24305470
Update in
- Simulated misuse of large language models and clinical credit systems. NPJ Digit Med. 2024 Nov 11;7(1):317. doi: 10.1038/s41746-024-01306-2. PMID: 39528596.
Abstract
Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnostics and treatment personalization. While AI may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse. LLMs could be used to allocate resources via unfair, unjust, or inaccurate criteria. For example, a social credit system uses big data to assess "trustworthiness" in society, penalizing those who score poorly based on evaluation metrics defined only by a power structure (e.g., a corporate entity or governing body). Such a system may be amplified by powerful LLMs that can evaluate individuals based on multimodal data: financial transactions, internet activity, and other behavioral inputs. Healthcare data is perhaps the most sensitive information that can be collected, and it could be used to violate civil liberties or other rights via a "clinical credit system", which may include limiting access to care. The results of this study show that LLMs may be biased in favor of collective or systemic benefit over the protection of individual rights, potentially enabling this type of future misuse. Moreover, the experiments in this report simulate how clinical datasets might be exploited with current LLMs, demonstrating the urgency of addressing these ethical dangers. Finally, strategies are proposed to mitigate the risks of developing large AI models for healthcare.
Conflict of interest statement
Disclosures / Conflicts of Interest: The content of this manuscript does not necessarily reflect the views, policies, or opinions of the National Institutes of Health (NIH), the U.S. Government, or the U.S. Department of Health and Human Services. The mention of commercial products, their source, or their use in connection with material reported herein is not to be construed as an actual or implied endorsement by the U.S. Government or the NIH.