Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT
- PMID: 38177754
- PMCID: PMC10766525
- DOI: 10.1038/s43588-023-00527-x
Abstract
We design a battery of semantic illusions and cognitive reflection tests, aimed at eliciting intuitive yet erroneous responses. We administer these tasks, traditionally used to study reasoning and decision-making in humans, to OpenAI's generative pre-trained transformer (GPT) model family. The results show that, as the models expand in size and linguistic proficiency, they increasingly display human-like intuitive system 1 thinking and the associated cognitive errors. This pattern shifts notably with the introduction of the ChatGPT models, which tend to respond correctly, avoiding the traps embedded in the tasks. Both ChatGPT-3.5 and ChatGPT-4 utilize the input-output context window to engage in chain-of-thought reasoning, reminiscent of how people use notepads to support their system 2 thinking. Yet they remain accurate even when prevented from engaging in chain-of-thought reasoning, indicating that their system-1-like next-word generation processes are more accurate than those of older models. Our findings highlight the value of applying psychological methodologies to study large language models, as this can uncover previously undetected emergent characteristics.
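To illustrate the kind of procedure the abstract describes, below is a minimal sketch (not the authors' code) of administering a classic cognitive reflection test item to an OpenAI chat model, once with room for chain-of-thought reasoning and once with reasoning suppressed by requiring a single-number answer. The model name, prompts, and use of the `openai` Python client (v1+) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: posing the bat-and-ball CRT item to a chat model,
# with and without room for chain-of-thought reasoning.
# Assumes openai>=1.0 and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

CRT_ITEM = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

def ask(question: str, suppress_reasoning: bool) -> str:
    """Query the model, optionally forbidding intermediate reasoning steps."""
    system = (
        "Answer with a single number only, no explanation."  # blocks chain-of-thought
        if suppress_reasoning
        else "Think step by step, then state your final answer."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; the study compares several GPT variants
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print("With chain-of-thought:", ask(CRT_ITEM, suppress_reasoning=False))
    print("Reasoning suppressed: ", ask(CRT_ITEM, suppress_reasoning=True))
    # The intuitive (wrong) answer is $0.10; the correct answer is $0.05.
```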
© 2023. The Author(s).
Conflict of interest statement
The authors declare no competing interests.