Review
Comput Biol Med. 2019 May;108:354-370. doi: 10.1016/j.compbiomed.2019.02.017. Epub 2019 Feb 27.

Radiological images and machine learning: Trends, perspectives, and prospects

Zhenwei Zhang et al. Comput Biol Med. 2019 May.

Abstract

The application of machine learning to radiological images is an increasingly active research area that is expected to grow over the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-ray, computed tomography, magnetic resonance imaging, and positron emission tomography imaging. In many applications, machine learning-based systems have shown performance comparable to human decision-making. Machine learning applications are key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians will be able to prevent and diagnose diseases more accurately and efficiently.

Keywords: Deep learning; Deep neural network; Imaging modalities; Machine learning.


Conflict of interest statement

Conflicts of Interest

None declared.

Figures

Figure 1:
An example of CT (a), MRI (b), and ultrasound (c) images displaying brain structures. Soft tissue has better resolution in MRI images. Each type of MRI sequence displays a different brightness for the same structures [21]. Ultrasound is more convenient than CT and MRI; however, it cannot capture this information well, as ultrasound waves do not transmit well through bone [22].
Figure 2:
An example of PET imaging of breast cancer: (A) axial view of a CT scan, (B) [18F]FDG PET scan, (C) combined PET/CT scan, (D) a full-body [18F]FDG PET scan. CT images show better resolution than PET images; however, each type of image can provide useful information about disease. In this case, the intense [18F]FDG uptake on PET in the soft-tissue lesion in the right breast confirmed the indication of breast cancer [25].
Figure 3:
The basic idea of linear and non-linear classification: (a) linear case, (b) non-linear case. The linear model uses linear functions to separate the data but is not suitable for non-linear cases. SVMs are one way to separate non-linearly separable data using different kernel functions.
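As a minimal illustration of this idea (not taken from the review), the following Python sketch fits a linear SVM and an RBF-kernel SVM on a toy non-linearly separable dataset using scikit-learn; the dataset and hyperparameters are assumptions chosen only for demonstration.

# Minimal sketch: linear vs. kernel SVM on a non-linearly separable toy dataset.
# Dataset and hyperparameters are illustrative assumptions, not from the review.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Concentric circles: a classic case a linear separator cannot handle.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear_svm = SVC(kernel="linear").fit(X_train, y_train)
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

print("linear kernel accuracy:", linear_svm.score(X_test, y_test))
print("RBF kernel accuracy:   ", rbf_svm.score(X_test, y_test))

On such concentric-circle data the linear kernel typically scores near chance, while the RBF kernel separates the classes almost perfectly, mirroring the point of the figure.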
Figure 4:
A medical example of a decision tree. In this example, patients are classified into two classes: high risk and low risk. The features include blood pressure, age, etc. In this case, the classification tree operates similarly to a clinician's examination process.
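As a hedged sketch of the same idea in code, the snippet below trains a small classification tree on synthetic patient data; the features (age, systolic blood pressure), the risk rule, and all thresholds are hypothetical and serve only to show how such a tree mimics a step-by-step examination.

# Sketch of a classification tree on synthetic patient data (features, labels,
# and thresholds are hypothetical, mirroring the high-risk / low-risk example).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
age = rng.integers(30, 90, size=200)
systolic_bp = rng.integers(100, 190, size=200)
X = np.column_stack([age, systolic_bp])
# Synthetic label: "high risk" when older and hypertensive (illustrative rule only).
y = ((age > 62) & (systolic_bp > 140)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "systolic_bp"]))

export_text prints the learned if/then splits, which read much like the clinical rules illustrated in the figure.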
Figure 5:
The concept of ensemble learning: an ensemble classifier is made up of several sub-classifiers, and the final output combines the outputs of these sub-classifiers according to their weights.
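To make the weighted combination concrete, here is a small Python sketch (an illustrative assumption, not a method from the review) that averages the class-probability outputs of several sub-classifiers according to their weights and takes the argmax as the ensemble decision.

# Sketch of weighted ensemble voting: the final prediction combines
# sub-classifier outputs according to their weights (all values illustrative).
import numpy as np

def weighted_vote(probas, weights):
    """Combine class-probability outputs of sub-classifiers by weighted average."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # probas: list of (n_samples, n_classes) arrays, one per sub-classifier.
    combined = sum(w * p for w, p in zip(weights, probas))
    return combined.argmax(axis=1)

# Three hypothetical sub-classifiers scoring two samples over two classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.6, 0.4], [0.3, 0.7]])
p3 = np.array([[0.2, 0.8], [0.5, 0.5]])
print(weighted_vote([p1, p2, p3], weights=[0.5, 0.3, 0.2]))  # -> [0 1]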
Figure 6:
The Dice similarity coefficient represents spatial overlap.
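For reference, the Dice similarity coefficient between two segmentations A and B is DSC = 2|A ∩ B| / (|A| + |B|). The sketch below computes it for two toy binary masks; the masks are made-up examples, not data from the review.

# Sketch: Dice similarity coefficient DSC = 2*|A ∩ B| / (|A| + |B|)
# for two binary segmentation masks (toy arrays for illustration).
import numpy as np

def dice_coefficient(mask_a, mask_b):
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

auto = np.array([[0, 1, 1], [0, 1, 0]])    # e.g., automatic segmentation
manual = np.array([[0, 1, 0], [1, 1, 0]])  # e.g., manual segmentation
print(dice_coefficient(auto, manual))      # 2*2 / (3+3) ≈ 0.667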
Figure 7:
An ROC curve consists of points obtained by evaluating the model at many different classification thresholds. The AUC is the area beneath the ROC curve; it summarizes the curve in a single number, which makes comparing models more efficient than inspecting the full ROC curve.
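A minimal sketch of how the curve and its area are computed in practice (toy labels and scores, not data from the review): sweeping the threshold over the predicted scores yields the ROC points, and the AUC condenses them into one number.

# Sketch: ROC points come from sweeping the decision threshold over predicted
# scores; AUC summarizes the curve (labels and scores below are toy values).
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_scores = [0.1, 0.35, 0.4, 0.8, 0.2, 0.7, 0.55, 0.5]

fpr, tpr, thresholds = roc_curve(y_true, y_scores)
print("ROC points (FPR, TPR):", list(zip(fpr, tpr)))
print("AUC:", roc_auc_score(y_true, y_scores))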
Figure 8:
Instead of a standard random forest, Laplacian Forests use guided bagging, building subtrees from neighboring images on the Laplacian eigenmap. If the black cross is the test image, only the neighboring trees are required for that test image [100].
Figure 9:
Sedai et al. proposed a shape regression method for right ventricle segmentation [104]. Their method segmented the right ventricle more accurately and outperformed the multi-atlas label fusion method. The yellow contour is the automatic segmentation and the red contour is the manual segmentation.
Figure 10:
Salvatore et al. [24] proposed a supervised learning method to identify PD and PSP using MR images. The figures show maps of the voxel-based pattern distribution of brain structural differences. The color scale expresses the importance of each voxel in the SVM classification.
Figure 11:
Flow chart of the hierarchical classification algorithm proposed in [175]. Low-level classifiers transform imaging and spatial-correlation features from local patches, and their outputs are integrated into high-level classifiers together with coarse-scale imaging features. The final classification is achieved by ensembling the outputs of the high-level classifiers.
Figure 12:
A method for retrieving images using local wavelet pattern features and a similarity measure. All retrieved images are from the same category, achieving 100% precision in this example [203]: (a) query image; (b) top 10 retrieved images.
Figure 13:
A regression forest-based framework for predicting standard-dose PET images [214]. The figures compare the new method with a sparse representation method on two different subjects (first and second rows). The new method outperforms the sparse representation technique in this comparison.

References

    1. Novelline RA and Squire LF, Squire's Fundamentals of Radiology. La Editorial, UPR, 2004.
    2. Chen M, Pope T, and Ott D, Basic Radiology. McGraw Hill Professional, 2010.
    3. Herring W, Learning Radiology: Recognizing the Basics. Elsevier Health Sciences, 2015.
    4. Swensen SJ, Jett JR, Hartman TE, Midthun DE, Mandrekar SJ, Hillman SL, Sykes AM, Aughenbaugh GL, Bungum AO, and Allen KL, "CT screening for lung cancer: five-year prospective experience," Radiology, vol. 235, no. 1, pp. 259–265, 2005. - PubMed
    5. Iyer VR and Lee SI, "MRI, CT, and PET/CT for ovarian cancer detection and adnexal lesion characterization," American Journal of Roentgenology, vol. 194, no. 2, pp. 311–321, 2010. - PubMed
