Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: results, limitations, and potential
- PMID: 37198498
- PMCID: PMC10192466
- DOI: 10.1186/s42492-023-00136-5
Abstract
The large language model ChatGPT has drawn extensive attention because of its human-like expression and reasoning abilities. In this study, we investigate the feasibility of using ChatGPT to translate radiology reports into plain language for patients and healthcare providers, so that they are better informed for improved healthcare. Radiology reports from 62 low-dose chest computed tomography lung cancer screening scans and 76 brain magnetic resonance imaging metastases screening scans were collected in the first half of February 2023 for this study. According to the evaluation by radiologists, ChatGPT can successfully translate radiology reports into plain language, with an average score of 4.27 on a five-point scale and, on average, 0.08 instances of missing information and 0.07 instances of misinformation per report. The suggestions provided by ChatGPT are generally relevant, such as following up with doctors and closely monitoring any symptoms, and in about 37% of the 138 cases ChatGPT offers specific suggestions based on the findings in the report. ChatGPT also exhibits some randomness in its responses, occasionally over-simplifying or neglecting information, which can be mitigated by using a more detailed prompt. Furthermore, the ChatGPT results are compared with those of the newly released large language model GPT-4, showing that GPT-4 can significantly improve the quality of the translated reports. Our results show that it is feasible to utilize large language models in clinical education, and that further efforts are needed to address their limitations and maximize their potential.
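To illustrate the prompting approach discussed in the abstract, the following minimal Python sketch sends a radiology report to a chat model with a detailed instruction and a low sampling temperature to reduce response randomness. It assumes the OpenAI Python SDK (v1+); the model name, prompt wording, and example report text are illustrative assumptions, not the exact materials used in the study.

# Illustrative sketch only: prompt wording, model name, and report text are
# assumptions, not the study's exact materials. Requires the OpenAI Python
# SDK (v1+) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical report excerpt used purely for demonstration.
report = "Low-dose screening chest CT: 4 mm solid nodule in the right upper lobe."

# A more detailed prompt constrains the output and discourages the
# over-simplification and omissions noted in the abstract.
prompt = (
    "Translate the following radiology report into plain language for a "
    "patient. Keep every finding, do not omit measurements, explain all "
    "medical terms, and end with general follow-up suggestions.\n\n" + report
)

response = client.chat.completions.create(
    model="gpt-4",  # the study also evaluated ChatGPT (gpt-3.5 based)
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # lower temperature reduces run-to-run randomness
)
print(response.choices[0].message.content)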
Keywords: Artificial intelligence; ChatGPT; Large language model; Patient education; Radiology report.
© 2023. The Author(s).
Conflict of interest statement
The authors declare that they have no competing interests.