J Neurosci Methods. 2024 May;405:110078.
doi: 10.1016/j.jneumeth.2024.110078. Epub 2024 Feb 8.

Fully automated whole brain segmentation from rat MRI scans with a convolutional neural network

Valerie A Porter et al. J Neurosci Methods. 2024 May.

Abstract

Background: Whole brain delineation (WBD) is utilized in neuroimaging analysis for data preprocessing and for deriving whole brain image metrics. Current automated WBD techniques for preclinical brain MRI data show limited accuracy and inadequate generalizability when images present with significant neuropathology and anatomical deformations, such as those resulting from organophosphate intoxication (OPI) and Alzheimer's disease (AD).

Methods: A modified 2D U-Net framework, consisting of 27 convolutional layers, batch normalization, two dropout layers, and data augmentation, was employed for WBD of rodent brain MRI scans after training-parameter optimization. A total of 265 T2-weighted 7.0 T MRI scans were utilized for the study, including 125 scans of an OPI rat model for neural network training. For testing and validation, 20 OPI rat scans and 120 scans of an AD rat model were utilized. U-Net performance was evaluated using Dice coefficients (DC) and Hausdorff distances (HD) between the U-Net-generated and manually segmented WBDs.
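The two evaluation metrics named above can be sketched in a few lines of Python (a minimal illustration under assumed inputs, not the authors' implementation; the function names, binary-mask arrays, and the voxel_size_mm parameter are hypothetical):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, truth):
    """Dice coefficient (DC) between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def hausdorff_distance(pred, truth, voxel_size_mm=1.0):
    """Symmetric Hausdorff distance (HD) between the two mask point sets, in mm."""
    p = np.argwhere(pred) * voxel_size_mm
    t = np.argwhere(truth) * voxel_size_mm
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
```

A DC of 1 and an HD of 0 indicate a perfect match between the generated and manual segmentations; the DC penalizes overall overlap error, while the HD captures the worst-case boundary deviation.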

Results: The U-Net achieved a DC (median[range]) of 0.984[0.936-0.990] and HD of 1.69[1.01-6.78] mm for the OPI rat model scans, and a DC (mean[range]) of 0.975[0.898-0.991] and HD of 1.49[0.86-3.89] mm for the AD rat model scans.

Comparison with existing methods: The proposed approach is fully automated and robust across two rat strains and longitudinal brain changes with a computational speed of 8 seconds/scan, overcoming limitations of manual segmentation.

Conclusions: The modified 2D U-Net provided a fully automated, efficient, and generalizable segmentation approach that achieved high accuracy across two disparate rat models of neurological diseases.

Keywords: Automated Segmentation; MRI; Machine Learning; Preclinical Neuroimaging; Rodent Brain Imaging; Skull Stripping.


Conflict of interest statement

Declaration of Competing Interest: None of the authors has a conflict of interest with the work presented.

Figures

Fig. 1:
Schematic illustrating the experimental study design of the rat models of (A) OPI and (B) AD. (A) OPI paradigm, in which DFP is administered to each animal, followed by the initial rescue therapy, atropine and 2-PAM, 1 minute later. The therapy (MDZ, ALO, or DUO) is administered 40 minutes post-injection of DFP. T2-weighted MRI scans are captured at each timepoint (3-, 7-, and 28-days post-exposure); 55 unique rats were imaged. (B) The AD rat model was imaged with T2-weighted MRI at 7, 9, 11, and 13 months of age; 48 unique rats were imaged. At each timepoint, six animals from each group, transgenic (TG) versus wildtype (WT), were euthanized for histology. The tables below the imaging timelines indicate the number of scans captured by timepoint and group.
Fig. 2:
Architecture of the segmentation pipeline. Each blue box is an image volume, where the x- and y-dimensions are denoted in the lower left of the box and the number of slices (z-dimension) is denoted above the box. The arrows indicate the different operations in the pipeline and the order in which they are applied to the image. In the modified 2D U-Net architecture, each slice of the scan is processed through the neural network individually, and the z-axis indicates the number of feature maps generated. The light blue boxes represent concatenated feature maps from previous layers. Post-processing converts the U-Net-generated segmentation map back to the original size of the input scan.
Fig. 3:
Box and whisker plots of (A) true positive rate (TPR) and (B) false positive rate (FPR) at different training dataset sizes (TDS). A TDS=100 produced the lowest median FPR without decreasing the median TPR. Boxes denote the median and the first and third quartiles; error bars indicate the min-max range of the data.
Fig. 4:
Box and whisker plots of (A) true positive rate (TPR) and (B) false positive rate (FPR) at different learning rates (LR). An LR of 2×10⁻⁴ produced the highest TPR and the lowest FPR. An LR of 2×10⁻³ did not produce segmentations, so no TPR or FPR values could be calculated. Boxes denote the median and the first and third quartiles; whiskers denote the range of the data.
Fig. 5:
Plots of accuracy (top) and loss (bottom) during training of the neural network. In both graphs, the blue line shows each metric calculated from the training dataset (Tr), and the orange line shows each metric calculated from a single image from a randomly selected scan from the training dataset, called the validation dataset (Val). The colored lines represent the median value from ten training runs, and the shaded regions represent the range of values across the ten runs. The vertical black lines indicate potential stopping points based on the mean minimum moving-average error from all training and validation (Tr + Val) metrics (solid, mean=184), Tr + Val loss metrics (dash, mean=137), and validation loss (dash-dot, mean=149). The range of accuracy or loss indicates overfitting starting at 175 epochs. The mean stopping epoch closest to, but less than, 175 was the validation-loss epoch of 149.
Fig. 6:
Receiver operating characteristic (ROC) analysis as a function of threshold (T) values. (A) ROC curve and (B) a zoomed-in view of the ROC curve in (A), indicated by the red box. The black curve represents the mean TPR and FPR as the threshold decreases (left to right). The red dashed line indicates the random classifier cutoff. (A) indicates that T is a strong classifier for determining brain versus non-brain pixels. In (B), the zoomed-in graph shows specific values of T between 0.05 and 0.95, in increments of 0.05. Values in increments of 0.10 are listed above the corresponding points. Each point is color-coded with a gradient from blue [T=0.95] to white [T=0.05]. Along the knee of the ROC curve, there is a range of T values from 0.25 to 0.90 that only marginally affects TPR and FPR (Supplementary Table S3). Within that range of T values, the DC and HD values indicate that T=0.85 produces the highest segmentation accuracy.
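The threshold sweep behind an ROC analysis of this kind can be sketched as follows (a hypothetical illustration, not the authors' code; prob_map, truth, and the function name are assumed placeholders):

```python
import numpy as np

def tpr_fpr(prob_map, truth, threshold):
    """TPR and FPR of a thresholded probability map against a binary ground truth."""
    pred = prob_map >= threshold
    truth = truth.astype(bool)
    tp = np.count_nonzero(pred & truth)   # brain pixels correctly labeled brain
    fp = np.count_nonzero(pred & ~truth)  # non-brain pixels labeled brain
    fn = np.count_nonzero(~pred & truth)  # brain pixels missed
    tn = np.count_nonzero(~pred & ~truth) # non-brain pixels correctly rejected
    return tp / (tp + fn), fp / (fp + tn)

# Sweeping T from high to low traces the ROC curve from left to right.
thresholds = np.round(np.arange(0.95, 0.0, -0.05), 2)
```

Each (FPR, TPR) pair from the sweep becomes one point on the curve; a point near the upper-left corner, well above the random-classifier diagonal, indicates a strong classifier.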
Fig. 7:
Training graphs of the optimized OPI rat 2D U-Net CNN. Graphs show (A) categorical accuracy and (B) loss for the model training and validation over 150 epochs. Training data are in blue, and validation data are in orange. Over the last 10 epochs [mean±std]: training accuracy=[0.991±0.001] and loss=[0.0052±0.0007]; validation accuracy=[0.989±0.006] and loss=[0.0050±0.0033].
Fig. 8:
Representative images and U-Net-generated segmentations of: (row 1) VEH, (row 2) DUO, (row 3) ALO, (row 4) MDZ and (row 5) DFP animals from the OPI study, and (row 6) WT and (row 7) TG animals from the AD study. Columns from left to right: anatomical MR image, 2D U-Net-generated segmentation label (matched pixels in green and unmatched pixels in red) overlaid on MR image, and skull-stripped MR image created with the 2D U-Net-generated label.
Fig. 9:
Representative images and U-Net-generated segmentations of (col 1) the MDZ Day 3 scan and (col 2) the DUO Day 28 scan. Rows, top to bottom: anatomical MR image, and 2D U-Net-generated segmentation label (matched pixels in green and unmatched pixels in red) overlaid on the MR image. The MDZ Day 3 scan [DC: 0.9900] is the best U-Net segmentation and the DUO Day 28 scan [DC: 0.9356] is the worst. The white arrows indicate the difference in signal intensity in the cerebellum between an excellent and a poor segmentation, where the U-Net performed suboptimally in the low-signal region of the image.
