How ChatGPT works: a mini review
- PMID: 37991499
- DOI: 10.1007/s00405-023-08337-7
Abstract
Objective: This paper offers a mini-review of OpenAI's language model, ChatGPT, detailing its mechanisms, applications in healthcare, and comparisons with other large language models (LLMs).
Methods: The underlying technology of ChatGPT is outlined, focusing on its neural network architecture, training process, and the role of key elements such as input embedding, encoder, decoder, attention mechanism, and output projection. The advancements in GPT-4, including its capacity for internet access and the integration of plugins for enhanced functionality, are discussed.
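The attention mechanism named above is the core of the transformer architecture the paper reviews. As a minimal illustration (a numpy sketch of standard scaled dot-product self-attention, not code from the paper or from OpenAI), each token's output is a weighted sum of all tokens' value vectors, with weights derived from query–key similarity:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the standard attention formula."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the key axis
    return weights @ V                               # weighted sum of value vectors

# Toy self-attention: 3 tokens, embedding dimension 4 (values are arbitrary).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one contextualized vector per input token
```

In a full model this operation is applied per attention head with learned projection matrices for Q, K, and V; the sketch omits those projections and the multi-head split for brevity.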
Results: ChatGPT can generate creative, coherent, and contextually relevant sentences, making it a valuable tool in healthcare for patient engagement, medical education, and clinical decision support. Yet, like other LLMs, it has limitations, including a lack of common sense knowledge, a propensity for hallucination of facts, a restricted context window, and potential privacy concerns.
Conclusion: Despite the limitations, LLMs like ChatGPT offer transformative possibilities for healthcare. With ongoing research in model interpretability, common-sense reasoning, and handling of longer context windows, their potential is vast. It is crucial for healthcare professionals to remain informed about these technologies and consider their ethical integration into practice.
Keywords: Artificial intelligence; ChatGPT; Chatbot; GPT; Head and neck; Medicine; Otolaryngology; Surgery.
© 2023. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.