[Preprint]. 2025 Mar 19:2025.03.18.25324215.
doi: 10.1101/2025.03.18.25324215.

Artificial intelligence automation of echocardiographic measurements


Yuki Sahashi et al. medRxiv.

Abstract

Background: Accurate measurement of echocardiographic parameters is crucial for diagnosing cardiovascular disease and tracking change over time; however, manual assessment is time-consuming and can be imprecise. Artificial intelligence (AI) has the potential to reduce clinician burden by automating the time-intensive task of comprehensive echocardiographic measurement.

Methods: We developed and validated open-source deep learning semantic segmentation models for the automated measurement of 18 anatomic and Doppler parameters in echocardiography. The outputs of the segmentation models were compared with sonographer measurements from two institutions to assess accuracy and precision.

Results: We used 877,983 echocardiographic measurements from 155,215 studies at Cedars-Sinai Medical Center (CSMC) to develop EchoNet-Measurements, an open-source deep learning model for echocardiographic annotation. The model correlated well with sonographer measurements in held-out data from CSMC and in an independent external validation dataset from Stanford Healthcare (SHC). All nine B-mode and nine Doppler measurements showed high accuracy, with an overall R² of 0.967 (0.965 - 0.970) in the held-out CSMC dataset and 0.987 (0.984 - 0.989) in the SHC dataset. When evaluated end-to-end on a temporally distinct set of 2,103 CSMC studies, EchoNet-Measurements performed well, with an overall R² of 0.981 (0.976 - 0.984). Performance was consistent across patient characteristics, including sex and atrial fibrillation status.
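The agreement statistics reported above (R², and the MAE noted in the figure legends) can be reproduced from paired model and sonographer values. A minimal sketch, assuming hypothetical aligned 1-D arrays `preds` (model outputs) and `refs` (sonographer measurements) for a single parameter:

```python
import numpy as np

def agreement_metrics(preds, refs):
    """Coefficient of determination (R^2) and mean absolute error
    between model predictions and reference measurements."""
    preds = np.asarray(preds, dtype=float)
    refs = np.asarray(refs, dtype=float)
    ss_res = np.sum((refs - preds) ** 2)          # residual sum of squares
    ss_tot = np.sum((refs - refs.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(refs - preds))
    return r2, mae
```

Note that R² computed this way measures agreement with the reference values directly, not just correlation, so a systematic bias in the predictions lowers it.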

Conclusion: EchoNet-Measurements achieves accuracy in automated echocardiographic measurement comparable to that of expert sonographers. This open-source model provides a foundation for future developments in AI applied to echocardiography.

Keywords: Convolutional neural network; Deep learning; Doppler wave; Echocardiography; Automated measurement.

Figures

Figure 1:
Overview of the study pipeline. The automated echocardiographic measurement pipeline covers two main groups of parameters: linear measurements (e.g., left atrium diameter and interventricular septum thickness) and Doppler measurements (e.g., tricuspid regurgitation peak velocity and septal e’ velocity). EchoNet-Measurements was evaluated on a held-out test cohort (CSMC) and an external dataset (SHC), demonstrating accuracy comparable to sonographer annotations. LA: left atrium; TR Vmax: tricuspid regurgitation maximum velocity.
Figure 2:
Model Performance and Agreement between Deep Learning Model and Sonographer Annotations for Echocardiographic Measurements in the CSMC test dataset (A and C) Scatter plot comparing deep learning model predictions with sonographer annotations for nine linear echocardiographic parameters (A) and Doppler echocardiography parameters (C). Coefficient of determination (R²), intraclass correlation coefficient (ICC) and mean absolute error (MAE) are described in the legend. (B and D) Bland-Altman plots for each parameter in the linear measurement group (B) and Doppler echocardiography parameters (D), displaying the difference between model predictions and sonographer measurements (y-axis) against the mean of the two measurements (x-axis). Each plot includes the mean bias (red dashed line) and limits of agreement (±1.96 SD, gray dashed lines). For a detailed explanation of echocardiography parameter abbreviations, refer to Table 1.
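The Bland-Altman quantities shown in panels B and D (mean bias and ±1.96 SD limits of agreement) follow directly from the paired differences. A minimal sketch, again assuming hypothetical aligned arrays `preds` and `refs`:

```python
import numpy as np

def bland_altman(preds, refs):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 SD of the
    paired differences) between predictions and reference measurements."""
    diff = np.asarray(preds, dtype=float) - np.asarray(refs, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

In the plots, each point's x-coordinate is the mean of the two measurements and its y-coordinate is their difference; the returned tuple gives the red dashed bias line and the gray dashed limit lines.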
Figure 3:
Representative Figures of Comparison between Sonographer and Deep Learning Model Measurements for Echocardiographic Linear Measurement Parameters The figure shows a comparison of measurements made by a sonographer (in red) and predictions from the deep learning (DL) model (in light blue) across nine echocardiographic parameters. For a detailed explanation of echocardiography parameter abbreviations, refer to Table 1.
Figure 4:
Representative Figures of Comparison between Sonographer and Deep Learning Model Measurements for Echocardiographic Doppler Measurement Parameters Comparison of measurements made by a sonographer (in white) and predictions from the deep learning (DL) model (in light blue) across nine echocardiographic Doppler and M-mode parameters (TAPSE). For a detailed explanation of echocardiography parameter abbreviations, refer to Table 1. For peak E velocity and E/A, the deep-learning-based annotation of peak E velocity is shown as a light blue dot and peak A velocity as a green dot.
Figure 5:
Model Performance and Agreement between Deep Learning Model and Sonographer Annotations for Echocardiographic Measurements in the SHC external test dataset and CSMC temporal split dataset (A and C) Scatter plots comparing deep learning model predictions with sonographer annotations for (A) linear parameters and (C) Doppler echocardiography parameters. (B and D) Bland-Altman plots for each parameter in the linear measurement group (B) and Doppler echocardiography parameters (D). Data from SHC are shown as square dots and data from the CSMC temporal split as triangle dots in all figures. All metrics, including the coefficient of determination (R²), intraclass correlation coefficient (ICC), mean absolute error (MAE), bias, and limits of agreement, are described in Supplemental Tables 1 and 2.

