J Digit Imaging. 2022 Jun;35(3):482-495.
doi: 10.1007/s10278-022-00583-1. Epub 2022 Feb 9.

Highly Efficient and Accurate Deep Learning-Based Classification of MRI Contrast on a CPU and GPU


Neville D Gai. J Digit Imaging. 2022 Jun.

Abstract

Classifying MR images based on their contrast mechanism can be useful in image segmentation where additional information from different contrast mechanisms can improve intensity-based segmentation and help separate the class distributions. In addition, automated processing of image type can be beneficial in archive management, image retrieval, and staff training. Different clinics and scanners have their own image labeling scheme, resulting in ambiguity when sorting images. Manual sorting of thousands of images would be a laborious task and prone to error. In this work, we used the power of transfer learning to modify pretrained residual convolution neural networks to classify MRI images based on their contrast mechanisms. Training and validation were performed on a total of 5169 images belonging to 10 different classes and from different MRI vendors and field strengths. Time for training and validation was 36 min. Testing was performed on a different data set with 2474 images. Percentage of correctly classified images (accuracy) was 99.76%. (A deeper version of the residual network was trained for 103 min and showed slightly lower accuracy of 99.68%.) In consideration of model deployment in the real world, performance on a single CPU computer was compared with GPU implementation. Highly accurate classification, training, and testing can be achieved without use of a GPU in a relatively short training time, through proper choice of a convolutional neural network and hyperparameters, making it feasible to improve accuracy by repeated training with cumulative training sets. Techniques to improve accuracy further are discussed and demonstrated. Derived heatmaps indicate areas of image used in decision making and correspond well with expert human perception. The methods used can be easily extended to other classification tasks with minimal changes.

Keywords: Contrast Mechanism; Deep learning; Fast training; GPU/CPU; MRI classification; Residual network; Transfer learning.


Conflict of interest statement

The author declares no competing interests.

Figures

Fig. 1
A 71-layer model based on ResNet18 was used for the classification task. The directed acyclic graph structure and skip connections are apparent
Fig. 2
Histogram of training and validation data with 10 labeled classes (5169 total images) corresponding to contrast mechanisms found in the contrast study sets. (ax – axial, sag – sagittal; -post refers to images obtained post contrast injection). T1-ax, T1-sag refer to T1-w images obtained using a spin-echo or fast spin-echo sequence, while T1spgr refers to T1-w images obtained using a spoiled gradient echo-based sequence. Note the unbalanced nature of the image distribution
Fig. 3
Sample axial post-contrast T2Flair images from each of the 14 subjects indicating a variety of pathologies in several of the exam sets
Fig. 4
Example of the coverage used for the training and testing sets. Every fifth image of a post-contrast sagittal T1 SPGR set and of a T2-weighted axial set is shown; only sections with non-negligible brain tissue were considered for the training and testing sets
Fig. 5
Training accuracy and loss as a function of iterations (and epochs) for the 71-layer ResNet. The final validation accuracy was 99.36% reached in 36 min 24 s
Fig. 6
Histogram of the testing data set. A total of 2474 images from independent studies were used to determine accuracy of classification
Fig. 7
Sample sections of post-contrast T2Flair images from the testing data set obtained from different exams. The majority of the exams showed pathology or artifacts
Fig. 8
A post-contrast T2Flair image (A1) along with its heatmap (A2), which was misidentified as a T2-weighted image by Network 1. Corresponding T2-weighted image (B1) along with its heatmap (B2). (C) Heatmap obtained by a network which correctly identified A1 as a post-contrast T2Flair image
Fig. 9
A post-contrast T2Flair image (A1) along with its heatmap (A2) which was misidentified as a T1-weighted image by Network 1. Corresponding T1-weighted image (B1) along with the heatmap (B2). (C) Heatmap obtained by Network 3 which correctly identified A1
Fig. 10
A post-contrast T2Flair image (A1) along with its heatmap (A2) which was misidentified as a T2-weighted image by Network 1. Corresponding T2-weighted image (B1) along with the heatmap (B2). (C) Heatmap obtained by Network 3 which correctly identified A1
Fig. 11
A post-contrast T1 image (A1) along with its heatmap (A2) which was misidentified as a T1-weighted image by Network 1. Corresponding T1-weighted image (B1) along with the heatmap (B2). (C) Heatmap obtained by Network 3 which correctly identified A1
Fig. 12
Examples of images misclassified by Network 2. (A) Heatmap of a post-contrast T1 image which was misidentified as a T1-weighted image. (B) Corresponding T1-weighted image. (C) Heatmap of a pre-contrast T1 image misclassified as a post-contrast T1 image. (D) Corresponding post-contrast T1 image. (E) The image in (C) correctly classified by Network 3
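The heatmaps in Figs. 8-12 highlight the image regions that drove each network's decision. The figure captions do not spell out how the maps were derived; a class-activation-map-style computation (a hypothetical NumPy sketch assuming a global-average-pooled CNN, with all shapes and names illustrative) would look like:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Weighted sum of final conv feature maps for one class.

    features:   (C, H, W) activations from the last conv layer
    fc_weights: (num_classes, C) weights of the final linear layer
    Returns an (H, W) map normalized to [0, 1].
    """
    # Contract the channel axis: sum_c w[class, c] * features[c]
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 4 feature maps of size 7x7, 10 classes.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 7, 7))
w = rng.standard_normal((10, 4))
heatmap = class_activation_map(feats, w, class_idx=3)
```

The low-resolution map would then be upsampled to the input image size and overlaid on the MR slice, which is how heatmaps like those in the figures are typically rendered.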

