ChatGPT for digital pathology research
- PMID: 38987117
- PMCID: PMC11299190
- DOI: 10.1016/S2589-7500(24)00114-6
Abstract
The rapid evolution of generative artificial intelligence (AI) models, including OpenAI's ChatGPT, signals a promising era for medical research. In this Viewpoint, we explore the integration and challenges of large language models (LLMs) in digital pathology, a rapidly evolving domain that demands intricate contextual understanding. The limited domain-specific performance of general-purpose LLMs necessitates tailored AI tools, as illustrated by advances of the past few years, including FrugalGPT and BioBERT. Our initiative in digital pathology shows the potential of domain-specific AI tools: a curated literature database, coupled with a user-interactive web application, enables precise, referenced information retrieval. Motivated by the success of this initiative, we discuss how domain-specific approaches substantially reduce the risk of inaccurate responses, improving the reliability and accuracy of information extraction. We also highlight the broader implications of such tools, particularly in streamlining access to scientific research and democratising computational pathology techniques for scientists with little coding experience. This Viewpoint calls for stronger integration of domain-specific text-generation AI tools in academic settings to facilitate continuous learning and adaptation to the dynamically evolving landscape of medical research.
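To make the retrieval-first design described above concrete, the sketch below pairs a small curated abstract store with TF-IDF ranking so that every answer can be traced back to a referenced source. This is a minimal, hypothetical illustration, not the authors' implementation: the sample records, the `retrieve` and `answer_with_references` helpers, and the omitted LLM call are all assumptions made for the example.

```python
# Minimal sketch of retrieval-backed question answering over a curated
# literature database. Hypothetical example: records, helper names, and
# the placeholder LLM step are illustrative assumptions, not the authors' tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A curated store: each entry keeps its citation so answers stay referenced.
CURATED_ABSTRACTS = [
    {"citation": "Doe et al., J Pathol Inform 2023 (hypothetical)",
     "text": "Stain normalisation reduces scanner-to-scanner colour "
             "variation in whole-slide images before model training."},
    {"citation": "Roe et al., Mod Pathol 2022 (hypothetical)",
     "text": "Tile-level attention models aggregate patch predictions "
             "into slide-level diagnoses in digital pathology."},
]

def retrieve(query: str, k: int = 1):
    """Rank curated abstracts against the query by TF-IDF cosine similarity."""
    texts = [record["text"] for record in CURATED_ABSTRACTS]
    vectoriser = TfidfVectorizer().fit(texts + [query])
    scores = cosine_similarity(vectoriser.transform([query]),
                               vectoriser.transform(texts))[0]
    ranked = sorted(zip(scores, CURATED_ABSTRACTS), key=lambda pair: -pair[0])
    return [record for _, record in ranked[:k]]

def answer_with_references(query: str) -> str:
    """Compose a referenced answer; the model would see only curated text."""
    context = "\n".join(f"[{hit['citation']}] {hit['text']}"
                        for hit in retrieve(query))
    # An LLM call would go here (e.g. an API request); returning the grounded
    # context keeps this sketch self-contained and runnable.
    return f"Context used:\n{context}"

if __name__ == "__main__":
    print(answer_with_references(
        "How do models handle colour variation across scanners?"))
```

Constraining the model's context to curated, citable passages in this way is the design choice the Viewpoint credits with reducing inaccurate responses: the generator can only paraphrase material that carries a traceable reference.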
Copyright © 2024 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.
Conflict of interest statement
Declaration of interests ML's work is supported by the National Cancer Institute (grants P50CA211024 and P01CA265768), the US Department of Defense (grant DoD PC160357), and the Prostate Cancer Foundation. LM and MO are supported by the National Cancer Institute (grant U54CA273956). All other authors declare no competing interests.