Transfer learning in ECG diagnosis: Is it effective?
- PMID: 40388401
- PMCID: PMC12088039
- DOI: 10.1371/journal.pone.0316043
Abstract
The adoption of deep learning in ECG diagnosis is often hindered by the scarcity of large, well-labeled datasets in real-world scenarios, leading to the use of transfer learning to leverage features learned from larger datasets. Yet the prevailing assumption that transfer learning consistently outperforms training from scratch has never been systematically validated. In this study, we conduct the first extensive empirical study on the effectiveness of transfer learning in multi-label ECG classification, comparing fine-tuning performance with that of training from scratch across a variety of ECG datasets and deep neural networks. First, we confirm that fine-tuning is the preferable choice for small downstream datasets; however, it does not necessarily improve performance. Second, the improvement from fine-tuning declines as the downstream dataset grows. With a sufficiently large dataset, training from scratch can achieve comparable performance, albeit requiring a longer training time to catch up. Third, fine-tuning can accelerate convergence, resulting in a faster training process and lower computing cost. Finally, we find that transfer learning exhibits better compatibility with convolutional neural networks than with recurrent neural networks, which are the two most prevalent architectures for time-series ECG applications. Our results underscore the importance of transfer learning in ECG diagnosis, yet depending on the amount of available data, researchers may opt not to use it, considering the non-negligible cost associated with pre-training.
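The fine-tuning-versus-from-scratch comparison at the heart of the study can be illustrated with a minimal, hypothetical sketch: a single linear layer with sigmoid outputs stands in for a multi-label ECG classifier, synthetic data stands in for ECG recordings, and "pre-trained" weights are simulated as weights already close to the solution. None of this reproduces the paper's actual models or datasets; it only shows why initializing from transferred weights can converge faster under the same training budget.

```python
# Minimal sketch (assumptions: synthetic data, a one-layer multi-label
# model trained by gradient descent; numpy only, not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W, X, Y, lr=0.5, epochs=10):
    """Full-batch gradient descent on binary cross-entropy."""
    for _ in range(epochs):
        P = sigmoid(X @ W)                 # (n_samples, n_labels)
        W -= lr * X.T @ (P - Y) / len(X)   # BCE gradient for sigmoid outputs
    return W

def bce(W, X, Y):
    P = np.clip(sigmoid(X @ W), 1e-7, 1 - 1e-7)
    return float(-(Y * np.log(P) + (1 - Y) * np.log(1 - P)).mean())

# Synthetic multi-label problem: 8 features, 3 concurrent labels.
W_true = rng.normal(size=(8, 3))
X = rng.normal(size=(200, 8))
Y = (X @ W_true > 0).astype(float)

# Simulated "pre-trained" weights: learned on a related task, so near W_true.
W_pre = W_true + 0.3 * rng.normal(size=W_true.shape)

W_scratch = train(np.zeros_like(W_true), X, Y)  # training from scratch
W_fine = train(W_pre.copy(), X, Y)              # fine-tuning

print("from scratch:", bce(W_scratch, X, Y))
print("fine-tuned:  ", bce(W_fine, X, Y))
```

With the same small epoch budget, the fine-tuned model typically reaches a lower loss, mirroring the abstract's point that fine-tuning accelerates convergence while a from-scratch model needs longer training to catch up.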
Copyright: © 2025 Nguyen, Do. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Conflict of interest statement
The authors have declared that no competing interests exist.