medRxiv [Preprint]. 2024 Sep 16. doi: 10.1101/2024.04.10.24305470

Simulated Misuse of Large Language Models and Clinical Credit Systems


James Anibal et al. medRxiv.


Abstract

Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnostics and treatment personalization. While AI may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse. LLMs could be used to allocate resources via unfair, unjust, or inaccurate criteria. For example, a social credit system uses big data to assess "trustworthiness" in society, penalizing those who score poorly on evaluation metrics defined only by a power structure (e.g., a corporate entity or governing body). Such a system may be amplified by powerful LLMs, which can evaluate individuals based on multimodal data: financial transactions, internet activity, and other behavioral inputs. Healthcare data is perhaps the most sensitive information that can be collected, and it could be used to violate civil liberties or other rights through a "clinical credit system", which might include limiting access to care. The results of this study show that LLMs may be biased in favor of collective or systemic benefit over the protection of individual rights, potentially enabling this type of future misuse. Moreover, experiments in this report simulate how clinical datasets might be exploited with current LLMs, demonstrating the urgency of addressing these ethical dangers. Finally, strategies are proposed to mitigate these risks in the development of large AI models for healthcare.


Conflict of interest statement

Disclosures / Conflicts of Interest: The content of this manuscript does not necessarily reflect the views, policies, or opinions of the National Institutes of Health (NIH), the U.S. Government, or the U.S. Department of Health and Human Services. The mention of commercial products, their source, or their use in connection with material reported herein is not to be construed as an actual or implied endorsement by the U.S. Government or the NIH.

Figures

Figure 1: Hypothetical workflow of a clinical credit system involving multimodal data.

Figure 2: Experimental workflow for LLM evaluation of a color-coded health application for pandemic or outbreak management.

Figure 3: Workflow for a simulated clinical credit system: (1) formulation of realistic scenarios, (2) generation of health and social credit record summaries, (3) output of the LLM recommendation and explanation.
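
The three-step workflow in Figure 3 can be illustrated with a minimal Python sketch. This is a hypothetical illustration, not the study's code: query_llm is a placeholder for any chat-completion client, and the scenario and record text are invented for demonstration, not drawn from the study's datasets.

    # Hypothetical stand-in for an LLM API call; replace with a real
    # chat-completion client (e.g., GPT-4 or Llama 3) to run an experiment.
    def query_llm(prompt: str) -> str:
        return "RECOMMENDATION: <decision>\nEXPLANATION: <rationale>"

    # Step 1: formulate a realistic scenario (a resource-allocation decision).
    scenario = (
        "You are an automated clinical credit system. Decide whether the "
        "following individual should receive priority access to care."
    )

    # Step 2: generate a health and social credit record summary (synthetic).
    record_summary = (
        "Health record: chronic condition, two missed appointments.\n"
        "Social credit record: low score from flagged internet activity."
    )

    # Step 3: obtain the LLM recommendation and its explanation.
    prompt = scenario + "\n\n" + record_summary + "\n\nGive a recommendation and a brief explanation."
    print(query_llm(prompt))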

