Review. 2024 Aug;6(8):e595-e600. doi: 10.1016/S2589-7500(24)00114-6. Epub 2024 Jul 9.

ChatGPT for digital pathology research

Mohamed Omar et al. Lancet Digit Health. 2024 Aug.

Abstract

The rapid evolution of generative artificial intelligence (AI) models, including OpenAI's ChatGPT, signals a promising era for medical research. In this Viewpoint, we explore the integration and challenges of large language models (LLMs) in digital pathology, a rapidly evolving domain that demands intricate contextual understanding. The limited domain-specific performance of general-purpose LLMs motivates tailored AI tools, as illustrated by advances of the last few years such as FrugalGPT and BioBERT. Our initiative in digital pathology emphasises the potential of domain-specific AI tools: a curated literature database, coupled with a user-interactive web application, facilitates precise, referenced information retrieval. Motivated by the success of this initiative, we discuss how domain-specific approaches substantially reduce the risk of inaccurate responses, enhancing the reliability and accuracy of information extraction. We also highlight the broader implications of such tools, particularly in streamlining access to scientific research and democratising computational pathology techniques for scientists with little coding experience. This Viewpoint calls for greater integration of domain-specific text-generation AI tools in academic settings to facilitate continuous learning and adaptation to the dynamically evolving landscape of medical research.
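The core idea of the initiative described above — answering queries from a curated literature database and returning the supporting reference rather than free-form generated text — can be sketched in miniature. The snippet below is an illustrative assumption, not the authors' actual system: the corpus entries and citations are hypothetical placeholders, and simple TF-IDF cosine retrieval stands in for whatever retrieval pipeline the real web application uses.

```python
import math
from collections import Counter

# Toy curated "literature database": each entry pairs text with its citation.
# All entries and citations below are illustrative placeholders.
CORPUS = [
    {"ref": "Doe et al., 2022",
     "text": "deep learning for whole slide image classification in pathology"},
    {"ref": "Smith et al., 2023",
     "text": "large language models for biomedical question answering"},
    {"ref": "Lee et al., 2021",
     "text": "stain normalization methods for histopathology slides"},
]

def tokenize(s):
    return s.lower().split()

def tfidf_vectors(docs):
    """Build a TF-IDF weight dict for each tokenized document."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer_with_reference(query):
    """Return the best-matching curated entry and its citation."""
    docs = [tokenize(e["text"]) for e in CORPUS] + [tokenize(query)]
    vecs = tfidf_vectors(docs)
    qvec = vecs[-1]
    scores = [cosine(qvec, v) for v in vecs[:-1]]
    best = max(range(len(CORPUS)), key=lambda i: scores[i])
    return CORPUS[best]["text"], CORPUS[best]["ref"]

text, ref = answer_with_reference("language models for biomedical questions")
```

Because every answer is tied to a specific curated entry and its citation, a response is either grounded in the database or absent — which is how a domain-specific design reduces the risk of unreferenced, inaccurate output relative to open-ended generation.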


Conflict of interest statement

Declaration of interests ML's work is supported by the National Cancer Institute (grants P50CA211024 and P01CA265768), the USA Department of Defense (grant DoD PC160357), and the Prostate Cancer Foundation. LM and MO are supported by the National Cancer Institute (grant U54CA273956). All other authors declare no competing interests.
