Neural Comput. 2023 Feb 17;35(3):309-342.
doi: 10.1162/neco_a_01563

Large Language Models and the Reverse Turing Test

Terrence J Sejnowski
Abstract

Large language models (LLMs) have been transformative. They are pretrained foundation models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA, both of them LLMs, can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions to, and debate over, whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs that reached wildly different conclusions. A new possibility was uncovered that could explain this divergence: what appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test. If so, then by studying interviews, we may be learning more about the intelligence and beliefs of the interviewer than about the intelligence of the LLMs. As LLMs become more capable, they may transform the way we interact with machines and how machines interact with one another. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined, with seven major improvements inspired by brain systems, along with ways LLMs could in turn be used to uncover new insights into brain function.
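The "minimal priming with a few examples" described above is few-shot prompting: worked examples are prepended to a new query before the text is sent to the model. A minimal sketch in plain string construction follows; the example question-answer pairs and the Q/A format are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of few-shot priming: a handful of worked examples are
# prepended to the new query, and the assembled text would then be sent
# to an LLM completion endpoint. The examples below are hypothetical.

def build_few_shot_prompt(examples, query):
    """Assemble demonstration pairs followed by the new query."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)

examples = [
    ("What is the capital of France?", "Paris."),
    ("What is the capital of Japan?", "Tokyo."),
]
prompt = build_few_shot_prompt(examples, "What is the capital of Peru?")
# The model is left to continue the pattern after the trailing "A:".
```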


Figures

Figure 1: Response to the prompt, “Create a sunset on Mars.”
Figure 2: Response to the prompt, “Create a sunset on Mars in the style of van Gogh.”
Figure 3: NETtalk is a feedforward neural network with one layer of hidden units that transforms text to speech. The 200 units and 18,000 weights in the network were trained with backpropagation of errors. Each word moved one letter at a time through a seven-letter window, and NETtalk was trained to assign the correct phoneme, or sound, to the center letter (Rosenberg & Sejnowski, 1987).
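The seven-letter sliding window described in the caption can be sketched as follows. The window size and the letter-at-a-time stride follow the caption; the padding character and the pairing of each window with its center letter are illustrative assumptions about the encoding.

```python
# Sketch of NETtalk's input scheme: a seven-letter window slides over the
# text one letter at a time, and the network is trained to output the
# phoneme for the letter at the center position. Padding details here are
# illustrative, not taken from the original implementation.

WINDOW = 7
HALF = WINDOW // 2  # three letters of context on each side

def windows(text, pad="_"):
    """Yield (window, center_letter) pairs, one per letter of the text."""
    padded = pad * HALF + text + pad * HALF
    for i in range(len(text)):
        yield padded[i:i + WINDOW], text[i]

pairs = list(windows("cat"))
# pairs == [("___cat_", "c"), ("__cat__", "a"), ("_cat___", "t")]
```

Each pair would be one training example: the window is the network's input, and the phoneme for the center letter is the target.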
Figure 4: Estimated computation, in days of peta (10^15) floating point operations, used to train network models as a function of their date of publication (from Mehonic & Kenyon, 2022). Data sources: Sevilla et al., 2022; Amodei & Hernandez, 2018.
Figure 5: Comparison between the transformer loop and the cortical-basal ganglia loop. (Left) Transformers have a feedforward autoregressive architecture that loops the output back to the input to produce a sequence of words (Vaswani et al., 2017). The single encoder/decoder module shown can be stacked N layers deep (Nx). (Right) The topographically mapped motor cortex projects to the basal ganglia, which loops back to the cortex to produce a sequence of actions, such as the words in a spoken sentence. All parts of the cortex project to the basal ganglia, and similar loops between the prefrontal cortex and the basal ganglia produce sequences of thoughts.
