Cureus. 2023 Feb 20;15(2):e35237. doi: 10.7759/cureus.35237. eCollection 2023 Feb.

Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology


Ranwir K Sinha et al. Cureus.

Abstract

Background: Artificial intelligence (AI) is evolving for healthcare services. Higher cognitive thinking in AI refers to a system's ability to perform advanced cognitive processes such as problem-solving, decision-making, reasoning, and perception. This type of thinking goes beyond simple data processing: it involves understanding and manipulating abstract concepts, interpreting and using information in a contextually relevant way, and generating new insights from past experience and accumulated knowledge. Natural language processing models such as ChatGPT are conversational programs that can interact with humans to answer queries.

Objective: We aimed to ascertain the capability of ChatGPT to solve higher-order reasoning problems in the subject of pathology.

Methods: This cross-sectional study was conducted on the internet using an AI-based chat program that provides free service for research purposes. The current version of ChatGPT (January 30 version) was used to answer a total of 100 higher-order reasoning queries. These questions were randomly selected from the institution's question bank and categorized by organ system. The response to each question was collected and stored for further analysis. Responses were evaluated by three expert pathologists on a zero-to-five scale and assigned to structure of the observed learning outcome (SOLO) taxonomy categories. Scores were compared with hypothetical values by a one-sample median test to assess accuracy.

Results: The program answered all 100 higher-order reasoning questions, taking an average of 45.31 ± 7.14 seconds per answer. The overall median score was 4.08 (Q1-Q3: 4-4.33), which was below the hypothetical maximum of five (one-sample median test, p < 0.0001) and similar to four (one-sample median test, p = 0.14). The majority (86%) of the responses fell into the "relational" category of the SOLO taxonomy. Scores did not differ across questions drawn from the various organ systems in pathology (Kruskal-Wallis, p = 0.55). The ratings of the three pathologists showed an excellent level of inter-rater reliability (ICC = 0.975 [95% CI: 0.965-0.983]; F = 40.26; p < 0.0001).

Conclusion: ChatGPT solved higher-order reasoning questions in pathology with a relational level of accuracy; that is, the text output connected its parts into a meaningful response. The program's answers scored approximately 80%. Hence, academicians and students can get help from the program for solving reasoning-type questions as well. As the program is evolving, further studies are needed to determine the accuracy of future versions.
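The statistical comparisons described in the abstract can be sketched in Python with SciPy. This is an illustrative reconstruction only: the scores below are simulated around the reported median, not the study's actual rating data, and the five organ-system groups are an arbitrary split.

```python
import numpy as np
from scipy.stats import wilcoxon, kruskal

rng = np.random.default_rng(0)
# Simulated rater scores on the study's 0-5 scale (illustrative, not real data)
scores = rng.normal(4.08, 0.4, 100).clip(0, 5)

# One-sample median test via the Wilcoxon signed-rank test:
# compare the observed scores against a hypothetical value.
_, p_vs_max = wilcoxon(scores - 5.0)   # vs. the maximum possible score of 5
_, p_vs_four = wilcoxon(scores - 4.0)  # vs. a score of 4

# Kruskal-Wallis test: compare score distributions across organ-system groups
groups = np.array_split(scores, 5)     # arbitrary grouping for illustration
_, p_kw = kruskal(*groups)

print(f"vs 5: p={p_vs_max:.4g}, vs 4: p={p_vs_four:.4g}, Kruskal-Wallis: p={p_kw:.4g}")
```

With scores centered near 4, the comparison against the maximum of five is strongly significant while the group comparison is not, mirroring the pattern the study reports.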

Keywords: artificial intelligence; chatgpt; cognition; critical reasoning; decision making; intelligence; microcomputers; pathologists; problem-solving; students.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Figure 1. Brief study method flow chart (SOLO: Structure of the Observed Learning Outcome)
Figure 2. System-wise average scores of the responses
Figure 3. The category of response according to the structure of observed learning outcome taxonomy
Figure 4. The scores of the responses (on a scale ranging from 0 to 5) by three raters
Figure 5. Screenshot showing part of a conversation with ChatGPT
Figure 6. Screenshot showing a part of a conversation with ChatGPT
