A modular framework for multi-scale tissue imaging and neuronal segmentation

Simone Cauzzo et al.

Nat Commun. 2024 May 22;15(1):4102. doi: 10.1038/s41467-024-48146-y.

Abstract

The development of robust tools for segmenting cellular and sub-cellular neuronal structures lags behind the massive production of high-resolution 3D images of neurons in brain tissue. The challenges are principally related to high neuronal density and low signal-to-noise characteristics in thick samples, as well as the heterogeneity of data acquired with different imaging methods. To address this issue, we design a framework that includes both sample preparation for high-resolution imaging and image analysis. Specifically, we set up a method for labeling thick samples and develop SENPAI, a scalable algorithm for segmenting neurons at cellular and sub-cellular scales in conventional and super-resolution STimulated Emission Depletion (STED) microscopy images of brain tissues. Further, we propose a validation paradigm for testing segmentation performance when a manual ground-truth may not exhaustively describe neuronal arborization. We show that SENPAI provides accurate multi-scale segmentation, from entire neurons down to spines, outperforming state-of-the-art tools. The framework will empower image processing of complex neuronal circuitries.


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1
Fig. 1. Labeling strategy for thick imaging in 3D super-resolution STED microscopy.
A Embedding the tissue in hydrogel and cutting 6 slices (0.5–1 mm thick) using a vibratome. B Passive clearing via incubation in clarity solution. C Labeling protocol using primary and secondary fluorescent STED-dedicated antibodies. D–G Floating slices mounted on a slide in the center of a homemade PVC spacer. H, I Spacer hole filled with ProLong Gold mounting medium and covered with a super-resolution 0.17 mm glass coverslip. Depending on slice thickness, several spacers can be stacked to keep the coverslip parallel to the slide. J, K Spacers and coverslip sealed to the slide with silicone. L Confocal tile imaging of the cerebellum. M Confocal zoom-in over Purkinje cells with a 20x objective. N Conventional 3D confocal imaging over the Purkinje cell layer, with a dedicated zoom-in (O) around the cell body. P Single slice of 3D STED microscopy, revealing dendritic spines (white arrows) decorating the entire dendritic tree. Q 3D rendering of multiple stacked 3D STED slices showing the high density of dendritic spines throughout the imaging depth. Scale bar: 10 microns.
Fig. 2
Fig. 2. The SENPAI algorithm.
A Test datasets: 40x confocal (left) and 93x STED (right). SENPAI was tested on cleared samples using confocal 40x (27 neurons) and 93x STED (5 neuron branches) datasets. B The SENPAI rationale, based on the selection of classes displaying negative values of the second derivative along the three main axes. C Rationale for the estimation of the K parameter of the K-means clustering: the selected K is the one for which the histograms of second-order derivatives show three different classes achieving maximal average value: here, from the histogram of D2x (second derivative along the x axis), class 3 (cyan) encodes outer borders along the x direction, as it is the only class with values clearly above 0; similarly, class 4 encodes outer borders along the y direction (D2y histogram) and class 2 along the z direction (D2z histogram); class 1 encodes both low- and high-intensity homogeneous image portions, and is labeled as background along with classes 2, 3 and 4. D SENPAI workflow—Step 1: K-means clustering, performed on the unsmoothed image (top) and optionally, in parallel, on the image smoothed with a 3D Gaussian filter (below). Class selection is performed independently on K-means classes (color-coded as in (C) for each clustering level). Resulting binarized images (green for clustering level 1, pink for clustering level 2, white for the overlap) are merged by logical OR. E SENPAI workflow—Step 2 for neuron separation: the segmented image is parcellated using morphological reconstruction and a 3D watershed transform, computed on the morphologically reconstructed grayscale image and applied to the binary segmentation. Left: raw image; Right: isolation of connected structures belonging to the same neuron; soma markers (yellow, placed by the user) define wells for the catchment basins (edges in gray). F SENPAI workflow—Step 2 applied to spine assignment. Left: raw image; Middle: 3D rendering; Right: 2D rendering of the parcellation with the connection of a neuronal portion (e.g., a dendrite branch) to smaller clusters (i.e., dendritic spines). Groups of neuronal clusters assigned to a single neuronal entity are displayed in different colors in their relative catchment basin (gray).
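To make the rationale of panels B–D concrete, here is a minimal Python sketch of a SENPAI-like classification step: voxels are clustered on intensity plus second-derivative features, and the classes whose second derivatives are (on average) negative along all three axes are kept as neuronal signal. The feature set, the fixed K, and the function names are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of a SENPAI-like voxel classification step (assumed feature
# set: raw intensity plus second derivatives along z, y, x; not the published code).
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def senpai_like_step1(img, k=4, sigma=None):
    """Cluster voxels and keep classes with negative mean second derivatives."""
    vol = img.astype(np.float32)
    if sigma is not None:                 # optional 3D Gaussian smoothing level
        vol = ndimage.gaussian_filter(vol, sigma)
    # Second-order derivatives along the three main axes.
    d2 = [np.gradient(np.gradient(vol, axis=a), axis=a) for a in range(3)]
    feats = np.stack([vol] + d2, axis=-1).reshape(-1, 4)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    labels = labels.reshape(vol.shape)
    # Keep classes lying inside bright structures: negative second derivative
    # (on average) along x, y and z; all other classes are treated as background.
    keep = np.zeros(vol.shape, dtype=bool)
    for c in range(k):
        mask = labels == c
        if mask.any() and all(d[mask].mean() < 0 for d in d2):
            keep |= mask
    return keep

# Two clustering levels (unsmoothed and smoothed) can be merged by logical OR:
# segmentation = senpai_like_step1(img) | senpai_like_step1(img, sigma=1.0)
```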
Fig. 3
Fig. 3. 3D visualization of the original confocal dataset and comparison of the performance of the algorithms tested against SENPAI.
Dataset acquired with a confocal microscope equipped with a 40x objective (details in Methods). All images are produced with the Icy GUI. Tracings are converted to volumetric segmentations with the SWC2IMG ImageJ plugin. The same 10 neighboring neurons and one exemplary single neuron are depicted using the same color code for all tools. A further example is depicted within a 3D rendering of the original image in Supplementary Movie 2.
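For orientation, a simplified sketch of what a tracing-to-volume conversion such as SWC2IMG does: each SWC node is rasterized as a sphere of its stated radius into a binary volume. This is an illustrative stand-in (node spheres only, segment interpolation omitted), not the plugin's algorithm.

```python
# Hedged sketch of SWC-to-volume rasterization (node spheres only; the actual
# SWC2IMG plugin may also fill the segments between connected nodes).
import numpy as np

def swc_to_volume(swc_path, shape, voxel_size=(1.0, 1.0, 1.0)):
    """Rasterize SWC nodes as spheres into a boolean (z, y, x) volume."""
    vol = np.zeros(shape, dtype=bool)
    zz, yy, xx = np.indices(shape)
    with open(swc_path) as fh:
        for line in fh:
            if line.startswith('#') or not line.strip():
                continue
            _id, _type, x, y, z, r, _parent = line.split()[:7]
            x, y, z, r = (float(v) for v in (x, y, z, r))
            # physical SWC coordinates -> voxel indices
            cz, cy, cx = z / voxel_size[0], y / voxel_size[1], x / voxel_size[2]
            rad = max(r / min(voxel_size), 1.0)
            vol |= (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= rad ** 2
    return vol
```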
Fig. 4
Fig. 4. Quantitative comparison of the segmentations on 40x images.
A Schematization of the rationale behind 3D Sholl Analysis: we compute the Area Under the Curve (AUC) of the number of crossings of each neuronal structure with spheres of increasing radius centered on the root node (i.e., the soma centroid); B Quantitative comparison for segmentations from 27 neurons on 40x images; left: Area-Volume ratio for SENPAI (magenta), HK-Icy (blue) and Ilastik (green) (two-sided Friedman test, p = 1.02 × 10⁻⁹, horizontal bars mark significant differences as determined with post-hoc comparison using Tukey’s honest significant criterion, p < 10⁻⁵); right: AUC of the Sholl analysis across segmentations obtained with SENPAI (magenta), HK-Icy (blue), Ilastik (green), NeuroGPS (red) and NeuTube (gray) (two-sided Friedman test, p = 6.81 × 10⁻¹⁷, horizontal bars mark significant differences as determined with post-hoc comparison using Tukey’s honest significant criterion, p < 10⁻²). Source data are provided as a Source Data file.
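A hedged sketch of how the Sholl AUC of panel A can be computed from a binary neuron mask: count connected components in thin spherical shells of increasing radius around the soma centroid, then integrate the resulting crossing curve. The shell-based crossing count and the parameter names are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative Sholl AUC from a binary neuron mask; soma_centroid is given in
# voxel indices (z, y, x), and step/voxel_size are in the same physical units.
import numpy as np
from scipy import ndimage

def sholl_auc(mask, soma_centroid, step=1.0, voxel_size=(1.0, 1.0, 1.0)):
    vs = np.asarray(voxel_size, dtype=float)
    idx = np.argwhere(mask)
    dist = np.linalg.norm(idx * vs - np.asarray(soma_centroid) * vs, axis=1)
    radii = np.arange(step, dist.max() + step, step)
    crossings = []
    for r in radii:
        # voxels in a thin shell at radius r; connected components approximate
        # the number of distinct branches crossing the sphere of radius r
        shell = np.zeros(mask.shape, dtype=bool)
        sel = idx[(dist >= r - step / 2) & (dist < r + step / 2)]
        shell[tuple(sel.T)] = True
        crossings.append(ndimage.label(shell)[1])
    return np.trapz(crossings, radii)   # area under the Sholl curve
```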
Fig. 5
Fig. 5. Comparison against documented information on Purkinje cell morphometrics, based on Strahler Ordering.
A Schematization of SO and of the meaning of some of the extracted features; B Schematization of the validation pipeline; C Graphical comparison of the SO of Purkinje cells reported in the literature and measured on the neurons segmented with SENPAI and the state-of-the-art tools. Parameters based on the SO as reported by Vormberg et al. for six types of neuronal cells (left, edited from Vormberg et al.) and as computed on 27 Purkinje neurons segmented with SENPAI (magenta), HK-Icy (blue), Ilastik (green), NeuroGPS (red) and NeuTube (gray) from 40x images (right). It should be noted that the parameters, with the exception of the Branch Bifurcation Ratio, were computed only for neurons whose Strahler number (SN) was equal to their mode, in line with Vormberg et al. Source data are provided as a Source Data file.
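For reference, a minimal sketch of how Strahler orders can be assigned to a reconstructed neuron tree; the child-to-parent input representation and the recursive traversal are illustrative assumptions, not the pipeline used in the paper.

```python
# Illustrative Strahler Ordering on a rooted tree given as {child: parent},
# with the root mapped to None (e.g., parsed from an SWC tracing).
def strahler_orders(parent):
    children = {}
    for c, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(c)

    order = {}
    def visit(node):
        kids = children.get(node, [])
        if not kids:                       # terminal tip
            order[node] = 1
            return 1
        child_orders = sorted((visit(k) for k in kids), reverse=True)
        # order increases only where the two highest child orders are equal
        if len(child_orders) > 1 and child_orders[0] == child_orders[1]:
            order[node] = child_orders[0] + 1
        else:
            order[node] = child_orders[0]
        return order[node]

    root = next(c for c, p in parent.items() if p is None)
    visit(root)
    return order

# Example: a soma with two terminal children -> both tips get order 1, the soma order 2.
# strahler_orders({0: None, 1: 0, 2: 0})  ->  {1: 1, 2: 1, 0: 2}
```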
Fig. 6
Fig. 6. Segmentations of 5 dendritic branches—Na to Ne—and their localization within a 3D multistack 93x STED dataset.
A Left: the dendritic branches Na to Ne contained within a single stack and employed here for the validation of SENPAI are highlighted in red. Right: comparison of the segmentations obtained with the state-of-the-art algorithms: SENPAI, HK-Icy, Ilastik and manual segmentation with ManSegTool. Algorithm segmentations of the whole image are underlaid in gray, while the parcellation outcomes are highlighted in red. B Quantitative comparison for the segmentations of the 5 dendritic branches; volume and area obtained with SENPAI, Ilastik and HK-Icy. The values are normalized with respect to the manual segmentations. HK-Icy and Ilastik were integrated with the SENPAI parcellation step to, respectively, assign spines to the dendrite and separate touching dendrites from each other (the volume result showed significant differences; Friedman test p = 0.015; post-hoc comparison with Conover’s test highlighted differences between Ilastik and both SENPAI and HK-Icy, p < 0.01, marked with gray bars). C Quantitative comparison of the segmentations of the dendritic branches; the Dice coefficient was used to compare the segmentations obtained with the algorithms against the manual segmentation, considering the whole neuron (top) and the spines only (bottom). An ideal algorithm would give a Dice coefficient of 1. The analysis on the spines was conducted by masking out the manual segmentation of the dendrite (both the whole-neuron and the average spine Dice coefficients from SENPAI differed from both Ilastik and HK-Icy; Friedman tests p = 0.015; post-hoc with Conover’s tests, p < 0.01). D Table summarizing algorithm performance in terms of spine detection. Spines were counted in the manual segmentation; we then defined True Positives (TP), False Positives (FP) and False Negatives (FN), the Sensitivity (S% = TP/(TP + FN) × 100) and the Precision (P% = TP/(TP + FP) × 100) for SENPAI, HK-Icy and Ilastik. Source data are provided as a Source Data file.
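The overlap and detection metrics used in panels C and D reduce to a few lines; the sketch below (variable names assumed for illustration) computes the Dice coefficient between two binary segmentations and the sensitivity/precision percentages from spine counts.

```python
# Illustrative implementations of the metrics in Fig. 6C, D.
import numpy as np

def dice(seg, ref):
    """Dice coefficient between two boolean volumes (1 = perfect overlap)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def sensitivity_precision(tp, fp, fn):
    """S% = TP / (TP + FN) * 100 and P% = TP / (TP + FP) * 100."""
    return 100.0 * tp / (tp + fn), 100.0 * tp / (tp + fp)
```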
Fig. 7
Fig. 7. Exemplary results obtained on non-clarified samples.
For both images we report the maximum projection on the left, the segmentation obtained with SENPAI in the middle (depth color-coded, cold colors indicate deeper planes), and the skeletonization of the segmentation on the right, as obtained using the NeuTube software. Above: Exemplary dataset (m16_cing_1_9_cropped_neurona.v3dpbd, human pyramidal cell labeled with Lucifer Yellow and acquired through confocal microscopy, resolution 0.24 µm x 0.24 µm x 0.42 µm) from Benavides-Piccione et al., available from the BigNeuron gold166 standard. Below: 3D stack of cultured rat hippocampal pyramidal cells (pixel size 91.41 nm x 91.41 nm x 280 nm). Further tests on non-clarified samples are reported in the Supplementary Information.
