RadBERT: Adapting Transformer-based Language Models to Radiology
- PMID: 35923376
- PMCID: PMC9344353
- DOI: 10.1148/ryai.210258
Abstract
Purpose: To investigate if tailoring a transformer-based language model to radiology is beneficial for radiology natural language processing (NLP) applications.
Materials and methods: This retrospective study presents a family of bidirectional encoder representations from transformers (BERT)-based language models adapted for radiology, named RadBERT. Transformers were pretrained with either 2.16 or 4.42 million radiology reports from U.S. Department of Veterans Affairs health care systems nationwide on top of four different initializations (BERT-base, Clinical-BERT, robustly optimized BERT pretraining approach [RoBERTa], and BioMed-RoBERTa) to create six variants of RadBERT. Each variant was fine-tuned for three representative NLP tasks in radiology: (a) abnormal sentence classification: models classified sentences in radiology reports as reporting abnormal or normal findings; (b) report coding: models assigned a diagnostic code to a given radiology report for five coding systems; and (c) report summarization: given the findings section of a radiology report, models selected key sentences that summarized the findings. Model performance was compared by bootstrap resampling with five intensively studied transformer language models as baselines: BERT-base, BioBERT, Clinical-BERT, BlueBERT, and BioMed-RoBERTa.
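The fine-tuning setup for the abnormal sentence classification task can be illustrated with a short sketch using the Hugging Face transformers and datasets libraries. This is not the authors' code: the checkpoint name, toy sentences, and labels are placeholders standing in for a RadBERT initialization and the (non-public) VA report corpus.

# Minimal sketch: fine-tuning a BERT-style encoder to classify report sentences
# as abnormal (1) or normal (0). Checkpoint and data are illustrative placeholders.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

checkpoint = "bert-base-uncased"  # stand-in for one of the RadBERT initializations
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy labeled sentences (1 = abnormal finding, 0 = normal finding).
data = Dataset.from_dict(
    {
        "text": [
            "There is a 1.2 cm nodule in the right upper lobe.",
            "The lungs are clear without focal consolidation.",
        ],
        "label": [1, 0],
    }
)

def tokenize(batch):
    # Convert raw sentences to fixed-length token ID sequences.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="radbert-cls",
        num_train_epochs=1,
        per_device_train_batch_size=2,
        report_to=[],
    ),
    train_dataset=data,
)
trainer.train()

In practice, each RadBERT variant in the study starts from domain-adaptive pretraining on radiology reports before a task head like the one above is fine-tuned on labeled sentences.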
Results: For abnormal sentence classification, all models performed well (accuracies above 97.5 and F1 scores above 95.0). RadBERT variants achieved significantly higher scores than the corresponding baselines when given only 10% or less of the 12 458 annotated training sentences. For report coding, all variants significantly outperformed the baselines for all five coding systems. The variant RadBERT-BioMed-RoBERTa performed best among all models for report summarization, achieving a Recall-Oriented Understudy for Gisting Evaluation (ROUGE)-1 score of 16.18, compared with 15.27 for the corresponding baseline (BioMed-RoBERTa; P < .004).
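The significance comparisons described above rely on bootstrap resampling of per-example score differences. A minimal sketch of that idea in NumPy is shown below; the score arrays are made-up placeholders, not the study's per-report ROUGE-1 results.

# Illustrative sketch (not the authors' evaluation code): bootstrap resampling of
# per-report ROUGE-1 score differences between two models.
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.normal(16.2, 3.0, size=500)  # hypothetical RadBERT-BioMed-RoBERTa scores
scores_b = rng.normal(15.3, 3.0, size=500)  # hypothetical baseline scores

diffs = scores_a - scores_b
n_boot = 10_000
boot_means = np.array(
    [rng.choice(diffs, size=diffs.size, replace=True).mean() for _ in range(n_boot)]
)

# One-sided bootstrap estimate: fraction of resampled mean differences <= 0,
# i.e., how often the baseline looks at least as good as the adapted model.
p_value = (boot_means <= 0).mean()
print(f"mean ROUGE-1 difference: {diffs.mean():.2f}, bootstrap P ~ {p_value:.4f}")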
Conclusion: Transformer-based language models tailored to radiology showed improved performance on radiology NLP tasks compared with baseline transformer language models. Supplemental material is available for this article. © RSNA, 2022. See also the commentary by Wiggins and Tejani in this issue.
Keywords: Informatics; Neural Networks; Transfer Learning; Translation; Unsupervised Learning.
© 2022 by the Radiological Society of North America, Inc.
Conflict of interest statement
Disclosures of conflicts of interest: A.Y. No relevant relationships. J.M. No relevant relationships. X.L. No relevant relationships. J.D. No relevant relationships. E.Y.C. No relevant relationships. A.G. Department of Defense grant paid to UCSD (covers a small percentage of the author's salary). C.N.H. No relevant relationships.
