
Transfer learning in ECG diagnosis: Is it effective?


Cuong V Nguyen et al. PLoS One. 2025 May 19;20(5):e0316043. doi: 10.1371/journal.pone.0316043. eCollection 2025.

Abstract

The adoption of deep learning in ECG diagnosis is often hindered by the scarcity of large, well-labeled datasets in real-world scenarios, leading to the use of transfer learning to leverage features learned from larger datasets. Yet the prevailing assumption that transfer learning consistently outperforms training from scratch has never been systematically validated. In this study, we conduct the first extensive empirical study of the effectiveness of transfer learning in multi-label ECG classification, comparing fine-tuning performance with training from scratch across a variety of ECG datasets and deep neural networks. First, we confirm that fine-tuning is the preferable choice for small downstream datasets; however, it does not necessarily improve performance. Second, the improvement from fine-tuning declines as the downstream dataset grows. With a sufficiently large dataset, training from scratch can achieve comparable performance, albeit requiring a longer training time to catch up. Third, fine-tuning can accelerate convergence, resulting in a faster training process and lower computing cost. Finally, we find that transfer learning exhibits better compatibility with convolutional neural networks than with recurrent neural networks, which are the two most prevalent architectures for time-series ECG applications. Our results underscore the importance of transfer learning in ECG diagnosis; yet, depending on the amount of available data, researchers may opt not to use it, considering the non-negligible cost of pre-training.
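For concreteness, the sketch below contrasts the two training regimes compared in the study: initializing from pretrained (upstream) weights versus random initialization, for multi-label ECG classification in PyTorch. The toy ECGNet architecture, the checkpoint file name, and all hyperparameters are illustrative assumptions, not the authors' exact setup (the paper used models such as ResNet1d18).

```python
# Minimal sketch: fine-tuning vs. training from scratch for
# multi-label ECG classification. A toy 1D CNN stands in for the
# paper's architectures; "pretrained_ecg.pt" is a hypothetical checkpoint.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_leads=12, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

# Simulate an upstream checkpoint; in the paper this role is played by
# pre-training on a large upstream dataset (e.g. PTB-XL, CPSC2018, Georgia).
torch.save(ECGNet().state_dict(), "pretrained_ecg.pt")

def build_model(fine_tune: bool) -> nn.Module:
    model = ECGNet()
    if fine_tune:
        # Fine-tuning: start from upstream weights, then train end-to-end.
        # In practice the classification head is usually re-initialized
        # when the downstream label set differs from the upstream one.
        model.load_state_dict(torch.load("pretrained_ecg.pt"))
    # Training from scratch: keep the random initialization.
    return model

model = build_model(fine_tune=True)
criterion = nn.BCEWithLogitsLoss()  # multi-label: one independent sigmoid per class
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 12, 1000)             # batch of 12-lead, 1000-sample ECGs
y = torch.randint(0, 2, (8, 5)).float()  # multi-hot diagnosis labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```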


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Experiment flowchart.
Fig 2
Fig 2. Performance comparison of fine-tuning and training from scratch across three upstream datasets, six models, and four downstream datasets.
In each chart, six symbols depict the average F1-scores of the respective models, and the bar shows the mean of these scores across the six models (a sketch of the F1 computation follows the figure list).
Fig 3
Fig 3. Another view of the average-F1 comparison between fine-tuning (vertical axis) and training from scratch (horizontal axis).
Each point corresponds to a specific model and downstream dataset combination. The model legend is the same as in Fig 2; best viewed in color. The majority of points lying above the identity line suggests that fine-tuning generally outperformed training from scratch, though not in every case.
Fig 4
Fig 4. Fine-tuning improvement of the three ResNets with varying downstream dataset size.
Fig 5
Fig 5. Performance of ResNet1d18 during fine-tuning and training from scratch.
The three rows correspond to the three upstream datasets: PTB-XL, CPSC2018, and Georgia, respectively.
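The per-model scores plotted in Figs 2 and 3 are average F1-scores over the diagnostic labels. Below is a minimal sketch, assuming macro averaging over labels (the paper's exact averaging scheme is not confirmed here) and invented multi-hot labels for illustration:

```python
# Macro-averaged F1 for multi-label predictions (illustrative data).
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])  # multi-hot ground-truth labels
y_pred = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 0]])  # thresholded model outputs

# Per-label F1-scores are computed first, then averaged across labels.
print(f1_score(y_true, y_pred, average="macro"))
```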

