Specialized AI and neurosurgeons in niche expertise: a proof-of-concept in neuromodulation with vagus nerve stimulation
- PMID: 40711622
- PMCID: PMC12296768
- DOI: 10.1007/s00701-025-06610-8
Erratum in
- Correction to: Specialized AI and neurosurgeons in niche expertise: a proof-of-concept in neuromodulation with vagus nerve stimulation. Acta Neurochir (Wien). 2025 Aug 21;167(1):224. doi: 10.1007/s00701-025-06649-7. PMID: 40836120. No abstract available.
Abstract
Objective: Applying large language models (LLMs) in specialized medical disciplines presents unique challenges requiring precision, reliability, and domain-specific relevance. We evaluated a specialized LLM-driven system against neurosurgeons in a knowledge assessment of vagus nerve stimulation (VNS) for drug-resistant epilepsy, a complex neuromodulation therapy requiring transdisciplinary expertise in neural anatomy, epileptic disorders, and device technology.
Materials and methods: Thirty-six European neurosurgeons who completed a 2-day VNS masterclass were assessed using a multiple-choice questionnaire comprising 14 items with 67 binary propositions. We deployed open-source models (LLaMa 2 70B and the MXBAI embedding model) using Neura, an AI infrastructure enabling transparent grounding through advanced retrieval-augmented generation. The knowledge base consisted of 125 VNS-related publications curated by a multidisciplinary faculty. Scoring ranged from -1 to +1 per question. Performance was analyzed using Wilcoxon signed-rank tests, confusion matrices, and metrics including accuracy, precision, recall, and specificity.
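As a rough illustration of the retrieval-augmented grounding step described above, the minimal sketch below shows how a question could be matched against embedded passages from a curated VNS corpus. This is not the authors' Neura implementation; the embedding checkpoint, the example passages, and the top-k value are assumptions for illustration only.

```python
# Minimal sketch of embedding-based retrieval for grounding an LLM answer in a
# curated document set. NOT the Neura pipeline described in the paper; the
# checkpoint name, `corpus` passages, and k are illustrative assumptions.
from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")  # assumed MXBAI checkpoint

corpus = [
    "Hypothetical excerpt from a curated VNS publication on electrode placement ...",
    "Hypothetical excerpt on stimulation parameters and device programming ...",
]
corpus_emb = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k passages most similar to the question (cosine similarity)."""
    q_emb = embedder.encode([question], normalize_embeddings=True)[0]
    scores = corpus_emb @ q_emb              # dot product of normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

# Retrieved passages would be prepended to the prompt of the generator model
# (e.g. LLaMa 2 70B) so that each answer can be traced back to its sources.
```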
Results: The AI achieved a score of 0.75, exceeding the highest individual clinician score (0.68; median: 0.50), with statistical significance (p < 0.001). The AI performed better on questions involving anatomical and technical information, while clinicians excelled in scenarios requiring practical judgment. Confusion matrices revealed higher true-correct and true-incorrect rates for the AI, which demonstrated perfect precision and specificity with no hallucinations detected.
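For readers unfamiliar with the reported metrics, the sketch below computes accuracy, precision, recall, and specificity from a 2x2 confusion matrix and runs a paired Wilcoxon signed-rank test. All counts and scores are placeholder values, not data from the study.

```python
# Sketch of the evaluation metrics and paired test named in the paper.
# All numbers are placeholders, not data from the study.
import numpy as np
from scipy.stats import wilcoxon

tp, fp, fn, tn = 40, 0, 5, 22    # hypothetical confusion-matrix counts (fp = 0)

accuracy    = (tp + tn) / (tp + fp + fn + tn)
precision   = tp / (tp + fp) if (tp + fp) else float("nan")   # 1.0 when fp == 0
recall      = tp / (tp + fn)                                  # sensitivity
specificity = tn / (tn + fp)                                  # 1.0 when fp == 0

# Paired comparison of per-question scores (e.g. AI vs. clinician median),
# analogous to the Wilcoxon signed-rank test used in the analysis.
ai_scores        = np.array([0.8, 0.6, 1.0, 0.7, 0.9])   # placeholder values
clinician_scores = np.array([0.5, 0.4, 0.7, 0.5, 0.6])
statistic, p_value = wilcoxon(ai_scores, clinician_scores)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} specificity={specificity:.2f} p={p_value:.3f}")
```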
Conclusions: Specialized LLM performance in this VNS knowledge assessment, coupled with its verifiability, points to promising applications across neurosurgical subspecialties for clinical decision support and education. The complementary strengths observed suggest that valuable implementations will emerge from synergistic approaches combining human experiential knowledge with AI's information processing capabilities across the broader field of neurosurgery.
Keywords: Artificial intelligence; Epilepsy; Neuromodulation; VNS; Vagus nerve stimulation.
© 2025. The Author(s).
Conflict of interest statement
Declarations. Human ethics and consent to participate: This study did not involve human subjects requiring ethical approval, as it focused on an educational knowledge assessment of physicians without direct patient involvement. Consent to participate: No patient consent was required, as no patients were directly involved in the research. The study participants were physicians attending a professional masterclass, and their participation was voluntary and professional in nature. Approval committee/internal review board (IRB): Given the nature of this study, which did not involve human subjects research requiring ethical oversight, IRB approval was waived. Competing interests: The authors Giovanni Ranuzzi, Steffen Fetzer, Julieta O'Flaherty, and Maxine Dibué are employees of LivaNova PLC, which funded the study. Sami Barrit and Romain Carron serve as proctors and consultants for LivaNova PLC. Sami Barrit, Mejdeddine Al Barajraji, and Salim El Hadwe are affiliated with Sciense, an organization focused on open-source, decentralized science initiatives. Sciense, through its nonprofit foundation Consciense, provided computational infrastructure and technical support for the AI system implementation as part of its dedicated AI research support program. The authors acknowledge potential conflicts related to their involvement in developing the AI-driven platform used in this study. The study was conducted with scientific independence, though readers should consider these relationships when evaluating the findings. All data analysis and interpretation were performed independently of commercial interests.