eLife. 2024 Jun 19;12:RP91398.
doi: 10.7554/eLife.91398.

Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology


Harshvardhan Gazula et al. eLife.

Abstract

We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated with those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widespread neuroimaging suite 'FreeSurfer' (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).

Keywords: dissection photography; human; machine learning; neuroscience; surface scanning; volumetry.

Plain language summary

Every year, thousands of human brains are donated to science. These brains are used to study normal aging, as well as neurological diseases like Alzheimer’s or Parkinson’s. Donated brains usually go to ‘brain banks’, institutions where the brains are dissected to extract tissues relevant to different diseases. During this process, it is routine to take photographs of brain slices for archiving purposes.

Often, studies of dead brains rely on qualitative observations, such as ‘the hippocampus displays some atrophy’, rather than concrete ‘numerical’ measurements. This is because the gold standard for taking three-dimensional measurements of the brain is magnetic resonance imaging (MRI), which is an expensive technique that requires high expertise – especially with dead brains. The lack of quantitative data means it is not always straightforward to study certain conditions.

To bridge this gap, Gazula et al. have developed openly available software that can build three-dimensional reconstructions of dead brains based on photographs of brain slices. The software can also use machine learning methods to automatically extract different brain regions from the three-dimensional reconstructions and measure their size. These data can be used to take precise quantitative measurements that better describe how different conditions lead to changes in the brain, such as atrophy (reduced volume of one or more brain regions).

The researchers assessed the accuracy of the method in two ways. First, they digitally sliced MRI-scanned brains and used the software to compute the sizes of different structures based on these synthetic data, comparing the results to the known sizes. Second, they used brains for which both MRI data and dissection photographs existed and compared the measurements taken by the software to the measurements obtained with MRI. Gazula et al. show that, as long as the photographs satisfy some basic conditions, they can provide good estimates of the sizes of many brain structures.

The tools developed by Gazula et al. are publicly available as part of FreeSurfer, a widespread neuroimaging software suite that can be used by any researcher working at a brain bank. This will allow brain banks to obtain accurate measurements of dead brains, allowing them to cheaply perform quantitative studies of brain structures, which could lead to new findings relating to neurodegenerative diseases.


Conflict of interest statement

HG, HT, BB, YB, JW, RH, LD, AC, EM, CL, MK, MM, ER, EB, MM, TC, DO, MF, SY, KV, AD, CM, CK, JI: No competing interests declared. BF: has a financial interest in CorticoMetrics, a company developing brain MRI measurement technology; his interests are reviewed and managed by Massachusetts General Hospital. BH: Reviewing Editor, eLife.

Figures

Figure 1.
Figure 1.. Examples of inputs and outputs from the MADRC dataset.
(a) Three-dimensional (3D) surface scan of a left human hemisphere, acquired prior to dissection. (b) Routine dissection photography of coronal slabs, after pixel calibration, with digital rulers overlaid. (c) 3D reconstruction of the photographs into an imaging volume. (d) Sagittal cross-section of the volume in (c) with the machine learning segmentation overlaid. The color code follows the FreeSurfer convention. Also, note that the input has low, anisotropic resolution due to the large thickness of the slices (i.e., rectangular pixels in the sagittal view), whereas the 3D segmentation has high, isotropic resolution (square pixels in any view). (e) 3D rendering of the segmentation into the different brain regions, including hippocampus (yellow), amygdala (light blue), thalamus (green), putamen (pink), caudate (darker blue), lateral ventricle (purple), white matter (white, transparent), and cortex (red, transparent). (f) Distribution of hippocampal volumes in post mortem confirmed Alzheimer’s disease cases vs controls in the MADRC dataset, corrected for age and gender.
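Correcting volumes for covariates such as age and gender, as in panel (f), is commonly done by regressing the covariates out and analyzing the residuals. A minimal sketch of that idea (function name and toy data are hypothetical; the paper's exact correction procedure may differ):

```python
import numpy as np

def correct_volumes(volumes, age, is_female):
    """Regress age and gender out of the volumes and keep the residuals
    (re-centered at the grand mean), a common 'correction' before group
    comparisons."""
    X = np.column_stack([np.ones_like(age, dtype=float), age, is_female])
    beta, *_ = np.linalg.lstsq(X, volumes, rcond=None)
    return volumes - X @ beta + volumes.mean()

# Toy cohort: volumes shrink with age and differ by sex
rng = np.random.default_rng(0)
age = rng.uniform(60, 90, 50)
sex = rng.integers(0, 2, 50).astype(float)
vol = 4000 - 10 * age + 100 * sex + rng.normal(0, 50, 50)
corrected = correct_volumes(vol, age, sex)
```

After correction, the residual volumes are (numerically) uncorrelated with the covariates, so any remaining group difference is not attributable to age or sex.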
Figure 2.
Figure 2.. Qualitative comparison of SAMSEG vs Photo-SynthSeg: coronal (top) and sagittal (bottom) views of the reconstruction and automated segmentation of a sample whole brain from the UW-ADRC dataset.
Note that Photo-SynthSeg supports subdivision of the cortex with tools of the SynthSeg pipeline.
Figure 3.
Figure 3.. Dice scores of automated vs manual segmentations on select slices.
Box plots are shown for SAMSEG, Photo-SynthSeg, and two ablations: use of a probabilistic atlas, and targeted simulation with 4 mm slice spacing. Dice is computed in two dimensions (2D), using manual segmentations on selected slices. We also note that the absence of extracerebral tissue in the images contributes to the high Dice for the cortex.
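For reference, the 2D Dice score used here is the standard overlap measure 2|A∩B|/(|A|+|B|) between a binary automated mask A and a binary manual mask B for each label. A minimal illustration (toy masks; not the paper's evaluation code):

```python
import numpy as np

def dice(a, b):
    """2D Dice overlap between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping 4x4 squares on a 10x10 slice
auto = np.zeros((10, 10), dtype=bool); auto[2:6, 2:6] = True
manual = np.zeros((10, 10), dtype=bool); manual[3:7, 3:7] = True
print(dice(auto, manual))  # 2*9 / (16+16) = 0.5625
```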
Figure 4.
Figure 4.. Reconstruction error (in mm) in synthetically sliced HCP data.
The figure shows box plots for the mean reconstruction error as a function of slice spacing and thickness jitter. A jitter of j means that the nth slice is randomly extracted from the interval [n−j, n+j] (rather than exactly at n). The center of each box represents the median; the edges of the box represent the first and third quartiles; and the whiskers extend to the most extreme data points not considered outliers (outliers not shown, in order not to clutter the plot).
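As a sketch of this jitter model (function name and parameters hypothetical), the nth slice can be drawn from a uniform window of half-width j around its nominal position:

```python
import numpy as np

def jittered_indices(n_slices, spacing, jitter, rng):
    """Slice n is extracted at index n*spacing + e, with e drawn uniformly
    from [-jitter, +jitter], instead of exactly at n*spacing."""
    nominal = np.arange(n_slices) * spacing
    noise = rng.uniform(-jitter, jitter, size=n_slices)
    return np.clip(np.rint(nominal + noise).astype(int), 0, None)

rng = np.random.default_rng(0)
print(jittered_indices(6, spacing=4, jitter=1, rng=rng))
```

Larger jitter makes the extracted positions deviate further from the regular grid, which is what drives the reconstruction error up in the figure.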
Figure 5.
Figure 5.. Steps of proposed processing pipeline.
(a) Dissection photograph with brain slices on black board with fiducials. (b) Scale-invariant feature transform (SIFT) features for fiducial detection. (c) Photograph from (a) corrected for pixel size and perspective, with digital ruler overlaid. (d) Segmentation against the background, grouping pieces of tissue from the same slice. (e) Sagittal slice of the initialization of a three-dimensional (3D) reconstruction. (f) Corresponding slice of the final 3D reconstruction, obtained with a surface as reference (overlaid in yellow). (g) Corresponding slice of the 3D reconstruction provided by a probabilistic atlas (overlaid as a heat map); the real surface is overlaid in light blue for comparison.
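The pixel-size calibration in step (c) amounts to relating detected fiducials to their known physical spacing on the board. A toy sketch of the scale computation (fiducial coordinates and spacing are hypothetical; the actual pipeline also corrects perspective):

```python
import numpy as np

def mm_per_pixel(p1, p2, known_mm):
    """Pixel calibration: known physical distance between two detected
    fiducials divided by their distance in pixels."""
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    return known_mm / np.linalg.norm(p1 - p2)

# Two fiducials detected 500 px apart, known to be 100 mm apart on the board
scale = mm_per_pixel((100, 100), (100, 600), known_mm=100.0)
print(scale)  # 0.2 mm per pixel
```

With this scale, pixel counts in the photograph convert directly to millimeters, which is what makes the overlaid digital ruler and the downstream volumetry possible.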
Figure 6.
Figure 6.. Intermediate steps in the generative process.
(a) Randomly sampled input label map from the training set. (b) Spatially augmented input label map; imperfect 3D reconstruction is simulated with a deformation jitter across the coronal plane. (c) Synthetic image obtained by sampling from a Gaussian mixture model conditioned on the segmentation, with randomized means and variances. (d) Slice spacing is simulated by downsampling to low resolution. This imaging volume is further augmented with a bias field and intensity transforms (brightness, contrast, gamma). (e) The final training image is obtained by resampling (d) to high resolution. The neural network is trained with pairs of images like (e) (input) and (b) (target).
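A minimal sketch of steps (c)–(e): sample per-label Gaussian intensities conditioned on the label map, downsample along the slicing axis to mimic slice spacing, then resample back to high resolution (all parameters illustrative; the actual generator also applies spatial augmentation, bias fields, and intensity transforms):

```python
import numpy as np
from scipy.ndimage import zoom

def synth_image(labels, rng, slice_spacing=4):
    """Draw a synthetic image from a label map: one Gaussian per label
    with randomized mean/variance, then simulate thick slices along
    axis 0 by downsampling and linearly resampling back."""
    n_labels = labels.max() + 1
    means = rng.uniform(0, 255, n_labels)
    stds = rng.uniform(1, 25, n_labels)
    img = rng.normal(means[labels], stds[labels])
    low = img[::slice_spacing]                     # simulate slice spacing
    return zoom(low, (slice_spacing, 1, 1), order=1)  # back to high res

# Toy label map with two regions
labels = np.zeros((8, 16, 16), dtype=int)
labels[:, 8:, :] = 1
img = synth_image(labels, np.random.default_rng(0))
print(img.shape)  # same grid as the label map
```

Training on such synthetic pairs (image in, high-resolution label map out) is what lets the network segment at high isotropic resolution regardless of the photographed slice thickness.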
Appendix 1—figure 1.
Appendix 1—figure 1.. Simulation and reconstruction of synthetic data.
Top row: skull stripped T1 scan and (randomly translated and rotated) binary mask of the cerebrum, in yellow. Second row: original T2 scan. Third row: randomly sliced and linearly deformed T2 images. Bottom row: output of the 3D reconstruction algorithm, that is, reconstructed T2 slices and registered reference mask overlaid in yellow.
Appendix 1—figure 2.
Appendix 1—figure 2.. Reconstruction with surface scan vs probabilistic atlas.
(a) Initialization, with contour of 3D surface scan superimposed. (b) Reconstruction with 3D surface scan. (c) Reconstruction with probabilistic atlas (overlaid as heat map with transparency); the contour of the surface scan is overlaid in light blue, for comparison. Even though the shape of the reconstruction in (c) is plausible, it is clearly inaccurate in light of the surface scan.
Appendix 1—figure 3.
Appendix 1—figure 3.. Example of mid-coronal slice selected for manual segmentation and computation of Dice scores.
Compared with the FreeSurfer protocol, we merge the ventral diencephalon (which has almost no visible contrast in the photographs) with the cerebral white matter in our manual delineations. For a more consistent comparison, we also merged these structures in the automated segmentations from SAMSEG and Photo-SynthSeg in this figure.

