Review

Eur Radiol Exp. 2024 Mar 5;8(1):26. doi: 10.1186/s41747-024-00428-2

Shallow and deep learning classifiers in medical image analysis

Francesco Prinzi et al.

Abstract

An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which is nevertheless its most cited and widely used sub-branch of the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to provide primary educational insights into the most accessible and widely employed classifiers in the radiology field, distinguishing between "shallow" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forests, and XGBoost, and "deep" learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps of classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and the dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of the interpretability of machine learning algorithms is finally discussed, providing a future perspective on trustworthy artificial intelligence.

Relevance statement
The growing synergy between artificial intelligence and medicine fosters predictive models that aid physicians. Machine learning classifiers, from shallow to deep learning, offer crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature that leads models toward integration into clinical practice.
Key points
• Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics).
• Deep classifiers implement automatic feature extraction and classification.
• Classifier selection depends on data and computational resource availability, the task, and explanation needs.
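As a sketch of the first key point, the snippet below computes a few first-order, radiomics-style features (mean, standard deviation, histogram entropy) from a hypothetical region of interest. The ROI values and bin count are invented for illustration; a real radiomics pipeline extracts many more features from actual image intensities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical region of interest (ROI) cropped from a medical image;
# the random intensities here stand in for real pixel values
roi = rng.integers(0, 256, size=(32, 32)).astype(float)

# First-order, radiomics-style features computed from the ROI intensities
counts, _ = np.histogram(roi, bins=32, range=(0, 256))
p = counts / counts.sum()
p = p[p > 0]                                # drop empty bins before the log
features = {
    "mean": roi.mean(),
    "std": roi.std(),
    "entropy": -np.sum(p * np.log2(p)),     # histogram (Shannon) entropy
}
# This feature vector is what a shallow classifier would take as input
```

The resulting dictionary plays the role of one row of a feature table: each ROI yields one such vector, and the table is then fed to a shallow classifier.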

Keywords: Artificial intelligence; Deep learning; Explainable AI; Machine learning classifiers; Shallow learning.


Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Fig. 1
Graphical representation of hard and soft margin of a support vector machine. With the soft margin, some misclassifications (double circles) are allowed
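A minimal sketch of the soft-margin idea, assuming a toy 2D dataset with one negative point overlapping the positive class: subgradient descent on the soft-margin objective (regularizer plus C-weighted hinge loss) accepts a margin violation for the overlapping point instead of forcing a separating boundary. All data values and hyperparameters are invented for illustration.

```python
import numpy as np

# Toy 2D dataset: one negative point (2.5, 2.5) overlaps the positive class
X = np.array([[2.0, 2.0], [3.0, 3.0], [3.0, 2.0],
              [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.5, 2.5]])
y = np.array([1, 1, 1, -1, -1, -1, -1])

C = 1.0            # soft-margin penalty: smaller C tolerates more violations
lr = 0.01
w, b = np.zeros(2), 0.0

def objective(w, b):
    # 1/2 ||w||^2 + C * sum of hinge losses max(0, 1 - y_i (w.x_i + b))
    return 0.5 * w @ w + C * np.maximum(0.0, 1 - y * (X @ w + b)).sum()

initial = objective(w, b)
for _ in range(2000):
    viol = y * (X @ w + b) < 1                  # margin violators
    grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
    grad_b = -C * y[viol].sum()
    w -= lr * grad_w
    b -= lr * grad_b

predictions = np.sign(X @ w + b)
# The overlapping point typically remains misclassified (a double circle in
# the figure): the soft margin trades that error for a wider, simpler margin
```

Raising C makes margin violations more expensive and pushes the boundary toward the hard-margin behavior shown on the other side of the figure.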
Fig. 2
a The data on the x-axis are the original non-separable data. b Application of a second-degree polynomial function to make the two classes separable
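The same trick can be reproduced numerically. Assuming hypothetical 1D values where one class sits between the two halves of the other, mapping each point through a second-degree polynomial feature map makes the classes separable by a straight line in the new space.

```python
import numpy as np

# 1D data: class 0 lies between the two halves of class 1, so no single
# threshold on x separates them (not linearly separable in one dimension)
class1 = np.array([-2.0, -1.5, 1.5, 2.0])     # outer points
class0 = np.array([-0.5, 0.0, 0.5])           # inner points

# Second-degree polynomial feature map: x -> (x, x^2)
phi1 = np.stack([class1, class1 ** 2], axis=1)
phi0 = np.stack([class0, class0 ** 2], axis=1)

# In the new 2D space, the horizontal line x^2 = 1 separates the two classes
separable = phi0[:, 1].max() < 1.0 < phi1[:, 1].min()
```

Kernel methods such as support vector machines obtain the same effect implicitly, without materializing the mapped coordinates.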
Fig. 3
Application of the random forest algorithm. Each decision tree in the forest calculates its own prediction: 250 trees predicted the analyzed sample as benign and 36 as malignant. The final result is the most frequent prediction made by the entire forest (benign tumor)
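The majority-voting step in the figure reduces to counting per-tree votes. The sketch below uses hypothetical vote lists mirroring the figure's numbers; a real random forest would derive each vote from a trained decision tree.

```python
from collections import Counter

# Hypothetical per-tree votes mirroring the figure: 250 trees predict
# "benign" and 36 predict "malignant" for the analyzed sample
tree_votes = ["benign"] * 250 + ["malignant"] * 36

# The forest outputs the most frequent prediction (majority voting)
forest_prediction, n_votes = Counter(tree_votes).most_common(1)[0]
```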
Fig. 4
Representation of how the k-nearest neighbors algorithm works. Considering the new point to classify (?), the category is assigned based on the five nearest neighbors (k = 5): in this case, three triangles, one circle, and one rectangle, so the point is assigned to the triangle class
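The vote in the figure can be sketched directly. The coordinates below are invented so that the five nearest neighbors of the query are three triangles, one circle, and one rectangle, matching the caption.

```python
import numpy as np
from collections import Counter

# Hypothetical labeled points (coordinates invented to mirror the figure)
points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.3],    # triangles
                   [2.1, 2.0], [3.5, 3.5],                 # circles
                   [0.0, 3.0], [1.0, 4.0]])                # rectangles
labels = ["triangle"] * 3 + ["circle"] * 2 + ["rectangle"] * 2

query = np.array([1.3, 1.6])                # the new point "?" to classify

# Euclidean distance from the query to every labeled point
dist = np.linalg.norm(points - query, axis=1)

k = 5
nearest = np.argsort(dist)[:k]              # indices of the k nearest points
neighbor_labels = [labels[i] for i in nearest]
prediction = Counter(neighbor_labels).most_common(1)[0][0]
```

Note that k-nearest neighbors needs no training phase at all: the labeled points themselves are the model, and all work happens at query time.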
Fig. 5
Representation of the multilayer perceptron, composed of one input layer, one hidden layer, and one output layer. Each individual unit of the hidden layer and output layer is a single perceptron, represented in the box
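A forward pass through such a network is a short computation. The sketch below assumes arbitrary layer sizes (4 inputs, 3 hidden units, 1 output) and random untrained weights; each unit computes a weighted sum plus bias followed by an activation, exactly the single perceptron shown in the box.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One input layer (4 features), one hidden layer (3 units), one output unit.
# Each unit is a single perceptron: weighted sum plus bias, then activation.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # hidden-layer parameters
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # output-layer parameters

x = rng.normal(size=(1, 4))                     # one input sample

hidden = sigmoid(x @ W1 + b1)                   # hidden-layer activations
output = sigmoid(hidden @ W2 + b2)              # score in (0, 1)
```

Training would adjust W1, b1, W2, b2 by backpropagation; here the weights are random, so only the data flow is illustrated.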
Fig. 6
Example of convolutional operation between an input image (a T1-weighted magnetic resonance image of the brain) and the Sobel filter for edge detection
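The operation in the figure can be reproduced on a toy image. The sketch below slides the Sobel filter over a synthetic 6 × 6 "image" (a dark left half and a bright right half standing in for the MR slice) and shows that the filter responds only where the window straddles the vertical edge.

```python
import numpy as np

# Sobel filter for vertical edges (responds to left-to-right intensity change)
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

def convolve2d(image, kernel):
    """Valid-mode sliding-window product-sum, as computed in CNN layers."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "image" standing in for the MR slice: dark left half, bright right half
image = np.zeros((6, 6))
image[:, 3:] = 1.0

edges = convolve2d(image, sobel_x)
# Responses are zero over flat regions and large where the window crosses
# the vertical intensity transition
```

In a CNN the filter weights are not fixed like Sobel's but learned from data; the sliding-window computation is the same.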
Fig. 7
Example of convolutional neural network architecture. The input images are fed into the convolutional and pooling layers for feature extraction. In the end, the resulting flattened feature vector is fed into the dense layer to perform the classification task
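The full pipeline in the figure (convolution, pooling, flattening, dense classification) can be sketched end to end in numpy. All sizes and weights below are arbitrary and untrained; the point is only the shape of the data as it flows through the stages.

```python
import numpy as np

rng = np.random.default_rng(42)

def conv2d(image, kernel):
    # Valid-mode sliding-window product-sum (one convolutional filter)
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max pooling: keep the strongest response per window
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = rng.normal(size=(8, 8))           # toy single-channel input
kernel = rng.normal(size=(3, 3))          # one 3x3 filter (random, not trained)

feature_map = np.maximum(0.0, conv2d(image, kernel))  # convolution + ReLU: 6x6
pooled = max_pool(feature_map)                        # pooling: 3x3
flat = pooled.reshape(-1)                             # flattening: 9 features

W, b = rng.normal(size=(flat.size, 2)), np.zeros(2)   # dense layer, 2 classes
logits = flat @ W + b
probs = np.exp(logits) / np.exp(logits).sum()         # softmax class scores
```

Real architectures stack many such filter banks and pooling stages before the dense head, but each stage performs exactly these operations.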
