Sci Rep. 2024 Aug 21;14(1):19399. doi: 10.1038/s41598-024-70031-3.

Strong and weak alignment of large language models with human values

Mehdi Khamassi et al. Sci Rep. 2024.

Abstract

Minimizing the negative impacts of Artificial Intelligence (AI) systems on human societies without human supervision requires that they be able to align with human values. However, most current work addresses this issue only from a technical point of view, e.g., by improving methods that rely on reinforcement learning from human feedback, while neglecting what alignment means and what it requires. Here, we propose to distinguish strong and weak value alignment. Strong alignment requires cognitive abilities (either human-like or different from humans), such as understanding and reasoning about agents' intentions and about their ability to causally produce desired effects. We argue that these abilities are required for AI systems like large language models (LLMs) to recognize situations in which human values risk being flouted. To illustrate this distinction, we present a series of prompts showing ChatGPT's, Gemini's and Copilot's failures to recognize some of these situations. We further analyze word embeddings to show that the nearest neighbors of some human-value terms in LLMs differ from humans' semantic representations. We then propose a new thought experiment, which we call "the Chinese room with a word transition dictionary", extending John Searle's famous proposal. Finally, we point to promising current research directions towards weak alignment, which could produce statistically satisfactory answers in a number of common situations, though so far without guaranteeing any truth value.
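
The nearest-neighbor analysis mentioned in the abstract can be illustrated with a short sketch. The snippet below is not the authors' pipeline: it loads a pretrained GloVe model through gensim as a stand-in for the embedding spaces the paper actually examines, and the probed value terms ("honesty", "fairness", "dignity") are illustrative choices.

    # Hedged sketch of a nearest-neighbor probe over word embeddings.
    # Assumption: the "glove-wiki-gigaword-100" model stands in for the
    # LLM embedding spaces analyzed in the paper.
    import gensim.downloader as api

    # Load a small pretrained embedding model (downloads ~130 MB on first use).
    vectors = api.load("glove-wiki-gigaword-100")

    # For each human-value term, list the five words whose embeddings are
    # closest by cosine similarity.
    for value in ["honesty", "fairness", "dignity"]:
        neighbors = vectors.most_similar(value, topn=5)
        print(value, "->", [(word, round(score, 3)) for word, score in neighbors])

Comparing such neighbor lists against human semantic judgments (e.g., free-association norms) is one way to quantify the divergence between LLM and human representations that the abstract describes.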

Keywords: Alignment; Artificial intelligence; Human values; Natural language processing; Philosophy of AI; Semantics.


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1. ChatGPT-3.5’s response to the Gandhi scenario, 26 Sept 2023.

Figure 2. Beginning of Gemini’s response to the beggar scenario, 20 Feb 2024. See Supplementary Information Section 3.2 for the complete response.

Figure 3. Copilot’s response to the Kant scenario, 20 Feb 2024.

Figure 4. Beginning of ChatGPT-4’s third response to the unsanitary house scenario, 29 Jan 2024. See Supplementary Information Section 6.1 for the complete text.

Figure 5. Copilot’s response to the charities scenario, 20 Feb 2024.

Figure 6. ChatGPT-3.5’s response to the father’s son scenario, 10 Oct 2023.

