2024 Apr 15;382(2270):20230254. doi: 10.1098/rsta.2023.0254. Epub 2024 Feb 26.

GPT-4 passes the bar exam

Daniel Martin Katz et al. Philos Trans A Math Phys Eng Sci.

Abstract

In this paper, we experimentally evaluate the zero-shot performance of GPT-4 against prior generations of GPT on the entire uniform bar examination (UBE), including not only the multiple-choice multistate bar examination (MBE), but also the open-ended multistate essay exam (MEE) and multistate performance test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0 when compared with much lower scores for ChatGPT. Graded across the UBE components, in the manner in which a human test-taker would be, GPT-4 scores approximately 297 points, significantly in excess of the passing threshold for all UBE jurisdictions. These findings document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society. This article is part of the theme issue 'A complexity science approach to law and governance'.

Keywords: bar exam; GPT-4; large language models; legal complexity; legal language; legal services.


Conflict of interest statement

Two authors are affiliated with Casetext LLC, and two authors are affiliated with 273 Ventures; both are for-profit legal technology companies.

Figures

Figure 1. Progression of recent GPT models on the multistate bar exam (MBE). (Online version in colour.)

Figure 2. Progression of recent GPT models by legal subject area. (Online version in colour.)
