Self-supervised Learning: A Succinct Review
- PMID: 36713767
- PMCID: PMC9857922
- DOI: 10.1007/s11831-023-09884-2
Abstract
Machine learning has made significant advances in the field of image processing. The foundation of this success is supervised learning, which requires annotated labels generated by humans and therefore learns from labeled data, whereas unsupervised learning learns from unlabeled data. Self-supervised learning (SSL) is a form of unsupervised learning that improves performance on downstream computer vision tasks such as object detection, image comprehension, and image segmentation. It can be used to develop generic artificial intelligence systems at low cost from unstructured and unlabeled data. This review article presents detailed literature on self-supervised learning as well as its applications in different domains. Its primary goal is to demonstrate how visual representations can be learned from the images themselves using self-supervised approaches. The authors also discuss the terminology used in self-supervised learning and related types of learning, such as contrastive learning and transfer learning. The article describes in detail the pipeline of self-supervised learning, including its two main phases: pretext and downstream tasks. It concludes by shedding light on various challenges encountered while working on self-supervised learning.
Keywords: Contrastive learning; Machine learning; Self-supervised learning; Supervised learning; Unsupervised learning.
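The abstract above outlines the two-phase self-supervised pipeline (a pretext task learned from unlabeled data, followed by a downstream task) and mentions contrastive learning. The following is a minimal illustrative sketch of that pipeline, assuming a SimCLR-style NT-Xent contrastive objective in PyTorch; the encoder, augmentations, batch size, and temperature are placeholder assumptions, not details taken from the article.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # Contrastive (NT-Xent) loss: two augmented views of the same image form a
    # positive pair; all other samples in the batch act as negatives.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2n, d) unit vectors
    sim = z @ z.t() / temperature                            # pairwise cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))               # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])  # view i <-> view i+n
    return F.cross_entropy(sim, targets)

# Pretext phase: learn an encoder from unlabeled images only.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
images = torch.randn(16, 3, 32, 32)                # stand-in for an unlabeled batch
view1 = images + 0.1 * torch.randn_like(images)    # stand-in for two random augmentations
view2 = images + 0.1 * torch.randn_like(images)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()                                    # in practice: many optimizer steps

# Downstream phase: reuse the pretext-trained encoder for a labeled task
# (e.g., classification) by training only a small head on top of it.
head = torch.nn.Linear(128, 10)
logits = head(encoder(images).detach())            # encoder frozen; labels supervise the head

In a realistic setting, the two views would come from stochastic image augmentations (cropping, color jitter, flipping), and the pretext-trained encoder would either be frozen or fine-tuned for the downstream task.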
© The Author(s), under exclusive licence to International Center for Numerical Methods in Engineering (CIMNE), 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Conflict of interest statement
The authors declare that they have no conflict of interest in this work.