Front Digit Health. 2024 Feb 26;6:1211564. doi: 10.3389/fdgth.2024.1211564. eCollection 2024.

Neural machine translation of clinical text: an empirical investigation into multilingual pre-trained language models and transfer-learning

Lifeng Han et al.

Abstract

Clinical text and documents contain rich healthcare information and knowledge, and processing them with state-of-the-art language technology is important for building intelligent systems that support healthcare and social good. This processing includes creating language-understanding models and translating resources into other natural languages to share domain-specific knowledge across languages. In this work, we investigate clinical text machine translation by examining multilingual neural network models based on deep-learning architectures such as the Transformer. Furthermore, to address the language-resource imbalance issue, we carry out experiments using a transfer-learning methodology based on massive multilingual pre-trained language models (MMPLMs). The experimental results on three sub-tasks, (1) clinical case (CC), (2) clinical terminology (CT), and (3) ontological concept (OC), show that our models achieved top-level performance in the ClinSpEn-2022 shared task on English-Spanish clinical-domain data. Furthermore, our expert-based human evaluations demonstrate that the small-sized pre-trained language model (PLM) outperformed the two extra-large language models by a large margin in clinical-domain fine-tuning, a finding that had not previously been reported in the field. Finally, the transfer-learning method works well in our experimental setting: the WMT21fb model accommodated Spanish, a language it had not seen at its pre-training stage, which deserves further exploration for clinical knowledge transformation, e.g., extending the approach to more languages. These findings can shed light on domain-specific machine translation development, especially in clinical and healthcare fields. Further research projects can build on our work to improve healthcare text analytics and knowledge transformation. Our data is openly available for research purposes at: https://github.com/HECTA-UoM/ClinicalNMT.
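The paper's Marian fine-tuning pipeline (Figures 2 and A1) is not reproduced on this page. As a minimal sketch of the inference side only, the snippet below loads a pre-trained Marian English-Spanish model through the Hugging Face transformers API and translates one clinical sentence; the checkpoint name "Helsinki-NLP/opus-mt-en-es" and the example sentence are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: clinical-domain EN-ES translation with a pre-trained
# Marian model. Assumes the public Hugging Face checkpoint
# "Helsinki-NLP/opus-mt-en-es" as a stand-in for the paper's Marian model.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-es"  # assumed checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an English clinical sentence into Spanish.
src = ["The patient presented with acute dyspnea and bilateral infiltrates."]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

Domain fine-tuning of such a checkpoint on clinical parallel data would follow the same model/tokenizer loading, typically with a standard sequence-to-sequence training loop over the in-domain corpus.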

Keywords: Neural machine translation; Spanish-English translation; clinical knowledge transformation; clinical text translation; large language model; multilingual pre-trained language model; transfer learning.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1. Illustration of the investigation workflow.
Figure 2. Marian pre-trained NMT: training pipeline.
Figure 3. Several attention-based Transformer NMT structures (16).
Figure 4. Original dense Transformer (left) vs MoE Transformer (right) (13).
Figure 5. MoE vs Conditional MoE (13).
Figure 6. Comparison of automatic evaluations against human evaluation (HOPE).
Figure 7. Summary of human expert-based evaluations.
Figure 8. Confidence intervals of three models (M, N, W): Clinical-Marian, Clinical-NLLB, and Clinical-WMT21fb.
Figure 9. Task-1 cases/sentences EN-ES translation examples: clinic-WMT21fb vs clinic-NLLB.
Figure 10. Task-2 clinical term ES-EN translation examples: clinic-WMT21fb vs clinic-NLLB.
Figure 11. Task-3 concept EN-ES translation examples: clinic-WMT21fb vs clinic-NLLB.
Figure A1. MarianNMT fine-tuning parameters: encoder and decoder with 6+6 layers.
Figure A2. M2M-100 model structure for conditional generation: encoder and decoder parameters with 24+24 layers.


References

    1. Griciūtė B, Han L, Li H, Nenadic G. Topic modelling of Swedish newspaper articles about coronavirus: a case study using the latent Dirichlet allocation method. IEEE 11th International Conference on Healthcare Informatics (ICHI); Houston, TX, USA (2023). p. 627–36. 10.1109/ICHI57859.2023.00110 - DOI
    2. Oyebode O, Ndulue C, Adib A, Mulchandani D, Suruliraj B, Orji FA, et al. Health, psychosocial, and social issues emanating from the COVID-19 pandemic based on social media comments: text mining, thematic analysis approach. JMIR Med Inform. (2021) 9:e22734. 10.2196/22734 - DOI - PMC - PubMed
    3. Luo X, Gandhi P, Storey S, Huang K. A deep language model for symptom extraction from clinical text and its application to extract COVID-19 symptoms from social media. IEEE J Biomed Health Inform. (2022) 26:1737–48. 10.1109/JBHI.2021.3123192 - DOI - PMC - PubMed
    4. Henry S, Buchan K, Filannino M, Stubbs A, Uzuner O. 2018 n2c2 shared task on adverse drug events and medication extraction in electronic health records. J Am Med Inform Assoc. (2020) 27:3–12. 10.1093/jamia/ocz166 - DOI - PMC - PubMed
    5. Spasic I, Nenadic G. Clinical text data in machine learning: systematic review. JMIR Med Inform. (2020) 8:e17984. 10.2196/17984 - DOI - PMC - PubMed
