Clinical validation of explainable AI for fetal growth scans through multi-level, cross-institutional prospective end-user evaluation
- PMID: 39820804
- PMCID: PMC11739376
- DOI: 10.1038/s41598-025-86536-4
Abstract
We aimed to develop and evaluate Explainable Artificial Intelligence (XAI) for fetal ultrasound that provides actionable concepts as feedback to end-users, following a prospective, cross-center, multi-level approach. We developed, implemented, and tested a deep-learning model for fetal growth scans using both retrospective and prospective data. We used a modified Progressive Concept Bottleneck Model with pre-established clinical concepts as explanations (feedback on image optimization and the presence of anatomical landmarks) as well as segmentations (outlining anatomical landmarks). The model was evaluated prospectively by assessing the following: the model's ability to assess standard plane quality, the correctness of explanations, the clinical usefulness of explanations, and the model's ability to discriminate between different levels of expertise among clinicians. We used 9352 annotated images for model development and 100 videos for prospective evaluation. Overall classification accuracy was 96.3%. The model's performance in assessing standard plane quality was on par with that of clinicians. The model's segmentations and explanations agreed with those of expert clinicians in 83.3% and 74.2% of cases, respectively. A panel of clinicians rated segmentations as useful in 72.4% of cases and explanations as useful in 75.0% of cases. Finally, the model reliably discriminated between the performances of clinicians with different levels of experience (p-values < 0.01 for all measures). Our study successfully developed an Explainable AI model for real-time feedback to clinicians performing fetal growth scans. This work contributes to the existing literature by addressing the gap in the clinical validation of Explainable AI models within fetal medicine, emphasizing the importance of multi-level, cross-institutional, and prospective evaluation with clinician end-users. The prospective clinical validation uncovered challenges and opportunities that could not have been anticipated had we focused only on retrospective development and validation, such as leveraging AI to gauge operator competence in fetal ultrasound.
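For readers unfamiliar with the architecture named in the abstract, the PyTorch sketch below illustrates the basic concept-bottleneck pattern: the network first predicts human-readable clinical concepts (e.g. "landmark visible", "image optimized") and bases its final standard-plane decision on those concepts alone, so every prediction comes with interpretable feedback. All class names, dimensions, and the concept count here are illustrative assumptions, not the authors' implementation; the paper's modified Progressive Concept Bottleneck Model additionally produces anatomical segmentations, which are not reproduced here.

import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    # Illustrative concept-bottleneck head: backbone features are mapped
    # to interpretable concept scores, and the plane-quality decision is
    # computed from those concept scores alone.
    def __init__(self, embed_dim=512, n_concepts=8, n_classes=2):
        super().__init__()
        self.to_concepts = nn.Linear(embed_dim, n_concepts)
        self.to_decision = nn.Linear(n_concepts, n_classes)

    def forward(self, features):
        concept_probs = torch.sigmoid(self.to_concepts(features))  # per-concept scores in [0, 1]
        class_logits = self.to_decision(concept_probs)             # decision uses concepts only
        return class_logits, concept_probs

# Toy usage: random 512-d embeddings stand in for a CNN image backbone.
head = ConceptBottleneckHead()
features = torch.randn(4, 512)
logits, concepts = head(features)
print(logits.shape, concepts.shape)  # torch.Size([4, 2]) torch.Size([4, 8])

Because the decision layer sees only the concept scores, the concept vector doubles as the explanation shown to the sonographer, which is what makes the feedback in the study actionable.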
Keywords: Artificial intelligence, Fetal growth scans, Explainable AI, Human-AI collaboration.
© 2025. The Author(s).
Conflict of interest statement
Declarations. Competing interests: The authors declare no competing interests. Disclosure: We used AI-assisted technology (GPT, Microsoft Word, Grammarly) to enhance the linguistic clarity of the paper. Our research group bears full responsibility for the content of the paper and ensures its accuracy.
