Inspired Spine Smart Universal Resource Identifier (SURI): An Adaptive AI Framework for Transforming Multilingual Speech Into Structured Medical Reports
- PMID: 40291306
- PMCID: PMC12029695
- DOI: 10.7759/cureus.81243
Abstract
Medical documentation is a core component of healthcare delivery worldwide and is gaining importance in developing countries. The growing prevalence of multilingual patient populations poses unique challenges for medical documentation, particularly in maintaining accuracy and consistency across languages. Inspired Spine Smart Universal Resource Identifier (SURI), an adaptive artificial intelligence (AI) framework, addresses these challenges by transforming multilingual speech into structured medical reports. Using state-of-the-art automatic speech recognition (ASR) and natural language processing (NLP) technologies, SURI converts doctor-patient dialogues into detailed clinical documentation. This paper presents SURI's development, focusing on its multilingual capabilities, effective report generation, and continuous improvement through real-time feedback. Our evaluation indicates a 60% reduction in documentation errors and a 70% decrease in time spent on medical reporting compared with traditional methods. SURI not only provides a practical solution to a pressing issue in healthcare but also sets a benchmark for integrating AI into medical communication workflows.
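The abstract describes a two-stage pipeline: an ASR step that transcribes doctor-patient dialogue, followed by an NLP step that organizes the transcript into a structured report. The paper does not disclose SURI's implementation; as a purely illustrative sketch (all names here, including transcribe_stub, SECTION_CUES, and build_report, as well as the report sections and cue phrases, are assumptions and not SURI's actual design), such a pipeline could look like:

```python
# Hypothetical sketch of a speech-to-structured-report pipeline in the
# style the abstract describes: ASR output -> section extraction -> report.
# In a real system, transcribe_stub would be a multilingual ASR model and
# build_report an NLP model; here both are simple stand-ins.

SECTION_CUES = {
    "Chief Complaint": ("pain", "complains of", "presents with"),
    "History": ("history of", "since", "for the past"),
    "Plan": ("recommend", "prescribe", "follow up"),
}

def transcribe_stub(audio_utterances):
    """Stand-in for a multilingual ASR model: joins pre-transcribed
    utterances into one transcript string."""
    return " ".join(audio_utterances)

def build_report(transcript):
    """Assign each sentence to the first report section whose cue phrase
    it contains; unmatched sentences fall through to 'Notes'."""
    report = {section: [] for section in SECTION_CUES}
    report["Notes"] = []
    for sentence in filter(None, (s.strip() for s in transcript.split("."))):
        lowered = sentence.lower()
        for section, cues in SECTION_CUES.items():
            if any(cue in lowered for cue in cues):
                report[section].append(sentence)
                break
        else:
            report["Notes"].append(sentence)
    return report

dialogue = [
    "Patient presents with lower back pain.",
    "Symptoms have persisted for the past three weeks.",
    "Recommend physical therapy and follow up in one month.",
]
report = build_report(transcribe_stub(dialogue))
for section, sentences in report.items():
    if sentences:
        print(f"{section}: {'; '.join(sentences)}")
```

The cue-phrase matching is a deliberate oversimplification: a framework like the one described would instead rely on learned language models to segment and normalize dialogue across languages.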
Keywords: artificial intelligence in medicine; dictation software; language interpretation services; large language model (LLM); multilingual; patient encounter.
Copyright © 2025, Zhan et al.
Conflict of interest statement
Human subjects: All authors have confirmed that this study did not involve human participants or tissue.
Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following:
Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work.
Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.