MRI classification using semantic random forest with auto-context model
- PMID: 34888187
- PMCID: PMC8611460
- DOI: 10.21037/qims-20-1114
Abstract
Background: Air and bone are difficult to differentiate on MR images of conventional sequences because of their low contrast. We propose to integrate semantic feature extraction, applied in an auto-context manner, into a random forest to improve the reliability of MRI segmentation for MRI-based radiotherapy treatment planning or PET attenuation correction.
Methods: We applied a semantic classification random forest (SCRF) method consisting of a training stage and a segmentation stage. In the training stage, patch-based MRI features were extracted from registered MRI-CT training images, and the most informative elements were selected via feature selection to train an initial random forest. The remaining random forests in the sequence were trained on a combination of MRI features and semantic features in an auto-context manner. During segmentation, MRI patches were first fed into this sequence of random forests to derive patch-based segmentations, and the final whole-image segmentation was obtained by patch fusion.
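The following is a minimal sketch of such an auto-context random forest cascade, not the authors' implementation: it uses scikit-learn's RandomForestClassifier, omits the patch feature extraction, feature selection, and patch fusion steps, and all function names and hyperparameters are hypothetical assumptions for illustration only.

```python
# Illustrative auto-context cascade of random forests (hypothetical sketch,
# not the SCRF code from the paper). Each later stage also sees the class
# probabilities (semantic features) produced by the previous stage.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_auto_context_cascade(mri_features, labels, n_stages=3):
    """Train a sequence of random forests on patch-based MRI features.

    mri_features: (n_patches, n_features) array of patch features.
    labels: (n_patches,) array of CT-derived class labels (air/bone/soft tissue).
    """
    forests = []
    context = None  # semantic features from the previous stage
    for _ in range(n_stages):
        X = mri_features if context is None else np.hstack([mri_features, context])
        rf = RandomForestClassifier(n_estimators=100, max_depth=20, n_jobs=-1)
        rf.fit(X, labels)
        forests.append(rf)
        # class-probability output becomes the semantic feature for the next stage
        context = rf.predict_proba(X)
    return forests

def segment_patches(forests, mri_features):
    """Apply the trained cascade to new patch features and return class labels."""
    context = None
    for rf in forests:
        X = mri_features if context is None else np.hstack([mri_features, context])
        context = rf.predict_proba(X)
    return np.argmax(context, axis=1)
```

In this sketch the auto-context mechanism is simply the stacking of the previous stage's class probabilities onto the original features; patch fusion (combining overlapping patch predictions into a whole-image segmentation) would follow as a separate step.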
Results: The Dice similarity coefficients (DSC) for the air, bone and soft tissue classes obtained with the proposed method were 0.976±0.007, 0.819±0.050 and 0.932±0.031, compared with 0.916±0.099, 0.673±0.151 and 0.830±0.083 for random forest (RF), and 0.942±0.086, 0.791±0.046 and 0.917±0.033 for U-Net. SCRF also outperformed the competing methods in sensitivity and specificity for all three structure types.
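For reference, a minimal sketch of how the Dice similarity coefficient reported above can be computed between a predicted and a reference binary mask; the helper name is hypothetical and not from the paper.

```python
# Hypothetical helper: Dice similarity coefficient, DSC = 2|A∩B| / (|A|+|B|).
import numpy as np

def dice(pred_mask, ref_mask):
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())
```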
Conclusions: The proposed method accurately segmented bone, air and soft tissue. It is promising for facilitating advanced MR applications in diagnosis and therapy.
Keywords: MRI segmentation; auto-context; semantic classification random forest (SCRF).
© 2021 Quantitative Imaging in Medicine and Surgery. All rights reserved.
Conflict of interest statement
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/qims-20-1114). The special issue “Artificial Intelligence for Image-guided Radiation Therapy” was commissioned by the editorial office without any funding or sponsorship. All authors report this research was supported in part by the National Cancer Institute of the National Institutes of Health Award Number R01CA215718. The authors have no other conflicts of interest to declare.