IEEE J Biomed Health Inform. 2022 Mar;26(3):1128-1139.
doi: 10.1109/JBHI.2021.3097735. Epub 2022 Mar 7.

VoxelHop: Successive Subspace Learning for ALS Disease Classification Using Structural MRI

Xiaofeng Liu et al. IEEE J Biomed Health Inform. 2022 Mar.

Abstract

Deep learning has great potential for accurate detection and classification of diseases from medical imaging data, but its performance is often limited by the amount of training data and by memory requirements. In addition, many deep learning models are regarded as "black boxes," which often limits their adoption in clinical applications. To address this, we present a successive subspace learning model, termed VoxelHop, for accurate classification of Amyotrophic Lateral Sclerosis (ALS) using T2-weighted structural MRI data. Compared with popular convolutional neural network (CNN) architectures, VoxelHop has a modular and transparent structure with fewer parameters and no backpropagation, so it is well suited to small datasets and 3D imaging data. VoxelHop comprises four key components: (1) sequential expansion of near-to-far neighborhoods for multi-channel 3D data; (2) subspace approximation for unsupervised dimension reduction; (3) label-assisted regression for supervised dimension reduction; and (4) concatenation of features and classification between controls and patients. Our experimental results demonstrate that our framework, using a total of 20 controls and 26 patients, achieves an accuracy of 93.48% and an AUC score of 0.9394 in differentiating patients from controls, even with a relatively small amount of data, showing its robustness and effectiveness. Our thorough evaluations also show its validity and superiority over state-of-the-art 3D CNN classification approaches. Our framework can easily be generalized to other classification tasks using different imaging modalities.
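The four components above map naturally onto a cascade of neighborhood expansion, PCA-style subspace approximation, supervised feature reduction, and a final classifier. The following is a minimal sketch of such a pipeline in Python; it is not the authors' implementation, and the toy volume sizes, the `patch`/`stride` parameters, and the use of scikit-learn's PCA and logistic regression as stand-ins for the Saab transform and label-assisted regression are all illustrative assumptions.

```python
# Minimal sketch of a VoxelHop-style pipeline (illustrative only, not the authors' code).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def extract_neighborhoods(volume, patch=3, stride=2):
    """Step 1: gather voxel neighborhoods as flattened patches (near-to-far expansion)."""
    d, h, w = volume.shape
    patches = []
    for z in range(0, d - patch + 1, stride):
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                patches.append(volume[z:z+patch, y:y+patch, x:x+patch].ravel())
    return np.asarray(patches)

def volume_features(volume, pca):
    """Steps 1-2: neighborhood expansion + unsupervised subspace reduction,
    then pool per-patch responses into one feature vector per subject."""
    responses = pca.transform(extract_neighborhoods(volume))
    return np.concatenate([responses.max(axis=0), responses.mean(axis=0)])

rng = np.random.default_rng(0)
volumes = rng.normal(size=(10, 16, 16, 16))   # 10 toy "subjects"
labels = np.array([0, 1] * 5)                 # 0 = control, 1 = patient

# Fit the subspace approximation on patches pooled over all training subjects.
pca = PCA(n_components=8).fit(np.vstack([extract_neighborhoods(v) for v in volumes]))

# Steps 3-4: supervised reduction / classification (logistic regression as a simple
# stand-in for label-assisted regression followed by the final classifier).
X = np.stack([volume_features(v, pca) for v in volumes])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```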


Figures

Fig. 1. Illustration of the proposed multi-channel VoxelHop framework, comprising three modules. Our framework uses a cascaded multi-stage design for local-to-global expansion, akin to CNNs, in which deeper layers have a larger receptive field.
Fig. 2. (A) A head and neck atlas and its segmentation and (B) illustration of the ROI cropping and downsampling based on the brain and tongue mask. Note that all the subjects are registered to the atlas, so all the deformation fields are in the same spatial coordinate system.
Fig. 3. Illustration of the conventional Saab transform (top) and channel-wise Saab transform for the multi-channel 3D data (bottom).
Fig. 4. Illustration of the neighborhood union construction in 3D space and Saab transform for one channel of 3D data. The same operation is applied to all channels in parallel.
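For reference, a minimal sketch of what one channel-wise Saab stage might look like: the DC part is the patch mean, the AC kernels come from PCA on the mean-removed patches, and a constant bias keeps responses non-negative. This is a simplified rendering of the Saab transform, not the authors' code; the `n_ac` parameter and the numpy/scikit-learn calls are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def saab_fit(patches, n_ac=4):
    """Fit a simplified Saab stage on flattened 3D neighborhoods (one channel).

    patches : (num_patches, patch_dim) array of flattened voxel neighborhoods.
    Returns the AC kernels (PCA components of the mean-removed patches) and a
    bias large enough to make every response non-negative.
    """
    dc = patches.mean(axis=1, keepdims=True)       # DC part: per-patch mean
    ac = PCA(n_components=n_ac).fit(patches - dc)  # AC kernels from the residual
    responses = (patches - dc) @ ac.components_.T
    bias = -responses.min() if responses.min() < 0 else 0.0
    return ac.components_, bias

def saab_transform(patches, kernels, bias):
    """Apply the fitted stage: [DC mean, shifted AC responses] per patch."""
    dc = patches.mean(axis=1, keepdims=True)
    return np.hstack([dc, (patches - dc) @ kernels.T + bias])

# Toy usage on random 3x3x3 neighborhoods from one channel.
rng = np.random.default_rng(0)
patches = rng.normal(size=(200, 27))
kernels, bias = saab_fit(patches)
out = saab_transform(patches, kernels, bias)
print(out.shape)  # (200, 1 + 4): one DC response plus n_ac AC responses per patch
```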
Fig. 5. Illustration of the three-channel 3D VGG network based on the 3D VGG backbone [32] with separate convolutions for multi-channel processing [30], [31].
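As a rough illustration of "separate convolutions for multi-channel processing," the sketch below uses a grouped 3D convolution so that each of the three input channels is filtered by its own bank of kernels. It is a hedged simplification of the baseline in Fig. 5, not the authors' network; the channel count, filter width, and block layout are assumptions.

```python
import torch
import torch.nn as nn

class SeparateConv3dBlock(nn.Module):
    """One block with per-channel ("separate") 3D convolutions via groups=channels."""
    def __init__(self, channels=3, width=8):
        super().__init__()
        # Each of the `channels` inputs gets its own bank of `width` 3D filters.
        self.conv = nn.Conv3d(channels, channels * width, kernel_size=3,
                              padding=1, groups=channels)
        self.act = nn.ReLU()
        self.pool = nn.MaxPool3d(2)

    def forward(self, x):
        return self.pool(self.act(self.conv(x)))

x = torch.randn(1, 3, 16, 16, 16)      # one subject, three channels of 3D data
print(SeparateConv3dBlock()(x).shape)  # torch.Size([1, 24, 8, 8, 8])
```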
Fig. 6. The log energy plot as a function of the number of AC filters. We plot five energy thresholds as dots of different colors: 95% (orange), 96% (green), 97% (blue), 98% (red), and 99% (purple).
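The energy thresholds in Fig. 6 correspond to keeping just enough AC filters to capture a target fraction of the patch energy. A minimal sketch of that selection rule follows; treating energy as the PCA explained-variance ratio and the 97% threshold value are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

def num_ac_filters(patches, energy_threshold=0.97):
    """Return the smallest number of AC filters whose cumulative energy
    (PCA explained-variance ratio) reaches the given threshold."""
    pca = PCA().fit(patches - patches.mean(axis=1, keepdims=True))
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumulative, energy_threshold) + 1)

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 27))      # toy mean-removed 3x3x3 neighborhoods
print(num_ac_filters(patches, 0.97))      # number of AC filters kept at 97% energy
```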
Fig. 7. Comparison of the receiver operating characteristic curve between VoxelHop and the multi-channel 3D CNNs, including 3D VGG and 3D ResNet [32].
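A comparison like the one in Fig. 7 is computed from per-subject decision scores; a minimal scikit-learn sketch of obtaining an ROC curve and AUC is shown below. The label and score values are made-up placeholders, not data from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder labels (0 = control, 1 = patient) and classifier scores.
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3])

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points of the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))
```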
Fig. 8. Sensitivity study with respect to the number of VoxelHop units.
Fig. 9. Sensitivity analysis of the cross-entropy-guided feature selection.
Fig. 10. Sensitivity analysis of training with fewer subjects. The full training set contains 45 subjects in our leave-one-out evaluation.

References

    1. Goodfellow I, Bengio Y, Courville A, and Bengio Y, Deep Learning. Cambridge, MA: MIT Press, 2016.
    2. Ravì D, Wong C, Deligianni F, Berthelot M, Andreu-Perez J, Lo B, and Yang G-Z, "Deep learning for health informatics," IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, pp. 4–21, 2016.
    3. Shen D, Wu G, and Suk H-I, "Deep learning in medical image analysis," Annual Review of Biomedical Engineering, vol. 19, pp. 221–248, 2017.
    4. Singh SP, Wang L, Gupta S, Goli H, Padmanabhan P, and Gulyás B, "3D deep learning on medical images: A review," arXiv preprint arXiv:2004.00218, 2020.
    5. Krizhevsky A, Sutskever I, and Hinton GE, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017.
