Cancers. 2021 Mar 30;13(7):1590. doi: 10.3390/cancers13071590.

Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data

Laith Alzubaidi et al. Cancers (Basel). 2021.

Abstract

Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds. More importantly, the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring knowledge learned on a previous task to a deep learning model, which is then fine-tuned on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural-image datasets such as ImageNet, which has proven ineffective because of the mismatch between the features learned from natural images and those needed for medical images. It also tends to require unnecessarily deep and elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training a deep learning model on large unlabeled medical image datasets and then transferring that knowledge to train the model on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios, skin cancer and breast cancer classification. The reported results empirically demonstrate that the proposed approach significantly improves performance in both scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For breast cancer, it achieved an accuracy of 85.29% when trained from scratch and 97.51% with the proposed approach. We conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited. Moreover, it can be used to improve the performance of medical imaging tasks in the same domain: using the pretrained skin cancer model, we trained a classifier for foot skin images with two classes, normal or abnormal (diabetic foot ulcer (DFU)). It achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double-transfer learning.
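The two-stage idea described in the abstract (pretrain on a large unlabeled pool from the medical domain, then transfer the learned representation and fine-tune on a small labeled set) can be sketched in miniature. The following is a hedged, self-contained illustration on synthetic data, not the authors' actual DCNN, datasets, or pretraining procedure: an SVD projection stands in for unsupervised pretraining of the backbone, and a logistic-regression head stands in for the classifier fine-tuned on the small labeled target set.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrain_backbone(unlabeled, dim=16):
    # Stage 1 stand-in: learn a representation from UNLABELED data only.
    # Top right-singular vectors capture the dominant variance directions.
    X = unlabeled - unlabeled.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:dim]                      # (dim, n_features) "backbone" weights

def extract(backbone, X):
    # Frozen backbone: project raw inputs into the learned feature space.
    return X @ backbone.T

def finetune_head(feats, labels, epochs=300, lr=0.1):
    # Stage 2: train only a small classifier head on the labeled target set.
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.clip(feats @ w + b, -30.0, 30.0)   # clip to avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))              # sigmoid
        g = p - labels                            # logistic-loss gradient
        w -= lr * feats.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

# Synthetic "domain": first 8 coordinates carry most of the variance,
# so the unlabeled pool is informative about the target task.
scales = np.r_[np.full(8, 5.0), np.ones(56)]
unlabeled = rng.normal(size=(500, 64)) * scales   # large unlabeled pool
X_small = rng.normal(size=(40, 64)) * scales      # small labeled set
y_small = (X_small[:, 0] > 0).astype(float)

backbone = pretrain_backbone(unlabeled)           # stage 1: no labels used
feats = extract(backbone, X_small)
w, b = finetune_head(feats, y_small)              # stage 2: few labels
acc = ((feats @ w + b > 0).astype(float) == y_small).mean()
print(f"fine-tune accuracy on the small labeled set: {acc:.2f}")
```

In the paper itself the backbone is the proposed DCNN and both stages run on real skin- and breast-cancer image collections; the sketch only conveys the division of labor, with representation learning paid for by unlabeled data and labels spent only on the final head.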

Keywords: convolution neural network (CNN); deep learning; machine learning; medical image analysis; transfer learning.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. The difference in transfer learning (TL) between natural and medical images.
Figure 2. The workflow of the proposed approach.
Figure 3. Samples from the source domain dataset of skin cancer.
Figure 4. Samples from the source domain dataset of breast cancer.
Figure 5. Samples from the target domain dataset of skin cancer.
Figure 6. Samples from the target domain dataset of breast cancer.
Figure 7. The architecture of the proposed model.
Figure 8. Learned filters from the first convolution layer of the model trained on the skin cancer datasets; a single filter of a single image. The color image is the original, and the gray-scale image is the filter.
Figure 9. Learned filters from the first convolution layer of the model trained on the ICIAR-2018 dataset [58]; multiple filters of multiple images.
Figure 10. Learned filters from the first convolution layer of the model; a single filter of a single image. The color image is the original, and the gray-scale image is the filter.
Figure 11. Learned filters from the first convolution layer of the model trained on the DFU dataset [59]; a single filter of a single image. The color image is the original, and the gray-scale image is the filter.
Figure 12. The double-transfer learning technique with the diabetic foot ulcer (DFU) task.
Figure 13. The evaluation procedure for the breast cancer task.
Figure 14. Samples of predictions on the DFU test set.

References

    1. Valieris R., Amaro L., Osório C.A.B.T., Bueno A.P., Rosales Mitrowsky R.A., Carraro D.M., Nunes D.N., Dias-Neto E., da Silva I.T. Deep Learning Predicts Underlying Features on Pathology Images with Therapeutic Relevance for Breast and Gastric Cancer. Cancers. 2020;12:3687. doi: 10.3390/cancers12123687. - DOI - PMC - PubMed
    2. Liu Y., Jain A., Eng C., Way D.H., Lee K., Bui P., Kanada K., de Oliveira Marinho G., Gallegos J., Gabriele S., et al. A deep learning system for differential diagnosis of skin diseases. Nat. Med. 2020;26:900–908. doi: 10.1038/s41591-020-0842-3. - DOI - PubMed
    3. Hamamoto R., Suvarna K., Yamada M., Kobayashi K., Shinkai N., Miyake M., Takahashi M., Jinnai S., Shimoyama R., Sakai A., et al. Application of artificial intelligence technology in oncology: Towards the establishment of precision medicine. Cancers. 2020;12:3532. doi: 10.3390/cancers12123532. - DOI - PMC - PubMed
    4. Rajpurkar P., Irvin J., Zhu K., Yang B., Mehta H., Duan T., Ding D., Bagul A., Langlotz C., Shpanskaya K., et al. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv. 2017. arXiv:1711.05225.
    5. Nazir T., Irtaza A., Javed A., Malik H., Hussain D., Naqvi R.A. Retinal Image Analysis for Diabetes-Based Eye Disease Detection Using Deep Learning. Appl. Sci. 2020;10:6185. doi: 10.3390/app10186185. - DOI