Detection of axonal synapses in 3D two-photon images

Cher Bass et al. PLoS One. 2017 Sep 5;12(9):e0183309.
doi: 10.1371/journal.pone.0183309. eCollection 2017.
Abstract

Studies of structural plasticity in the brain often require the detection and analysis of axonal synapses (boutons). To date, bouton detection has been largely manual or semi-automated, relying on a step that traces the axons before detecting the boutons. If tracing the axon fails, the accuracy of bouton detection is compromised. In this paper, we propose a new algorithm that does not require tracing the axon to detect axonal boutons in 3D two-photon images taken from the mouse cortex. To find the most appropriate techniques for this task, we compared several well-known algorithms for interest point detection and feature descriptor generation. The final algorithm proposed has the following main steps: (1) a Laplacian of Gaussian (LoG) based feature enhancement module to accentuate the appearance of boutons; (2) a Speeded Up Robust Features (SURF) interest point detector to find candidate locations for feature extraction; (3) non-maximum suppression to eliminate candidates that were detected more than once in the same local region; (4) generation of feature descriptors based on Gabor filters; (5) a Support Vector Machine (SVM) classifier, trained on features from labelled data, used to distinguish between bouton and non-bouton candidates. We found that our method achieved a Recall of 95%, Precision of 76%, and F1 score of 84% on a new dataset that we make available for assessing bouton detection. On average, Recall and F1 score were significantly better than those of the current state-of-the-art method, while Precision was not significantly different. In conclusion, we demonstrate that our approach, which is independent of axon tracing, can detect boutons to a high level of accuracy and improves on the detection performance of existing approaches. The data and code (with an easy-to-use GUI) used in this article are available from open source repositories.
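For readers who want a concrete picture of how these five steps chain together, the sketch below outlines the pipeline in Python. It is a minimal reading of the abstract, not the authors' released code (which ships with a GUI); the helper names (`enhance_log`, `detect_boutons`), the sigma value, and the suppression radius are illustrative assumptions.

```python
# Minimal sketch of the pipeline (illustrative only; the authors'
# released code and GUI should be used for real analyses).
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter
from sklearn.svm import SVC

def enhance_log(projection, sigma=2.0):
    """Step 1: a negative Laplacian of Gaussian accentuates blob-like boutons.
    The sigma value here is an assumption, not the paper's setting."""
    return -gaussian_laplace(projection.astype(float), sigma=sigma)

def non_max_suppression(response, candidates, radius=5):
    """Step 3: keep only candidates that sit on a local maximum of the
    response map (a simplification of the paper's deduplication step)."""
    local_max = maximum_filter(response, size=2 * radius + 1)
    return [(y, x) for (y, x) in candidates
            if response[y, x] == local_max[y, x]]

def detect_boutons(stack, detector, describe, clf: SVC):
    """Steps 2, 4 and 5 (interest points, Gabor descriptors, trained SVM)
    are passed in as callables; they are sketched in the later snippets."""
    projection = stack.mean(axis=0)                       # mean intensity projection
    enhanced = enhance_log(projection)                    # (1) LoG enhancement
    candidates = non_max_suppression(enhanced,            # (3) NMS over...
                                     detector(enhanced))  # (2) ...interest points
    feats = np.array([describe(enhanced, pt) for pt in candidates])  # (4)
    keep = clf.predict(feats) == 1                        # (5) SVM classification
    return [pt for pt, k in zip(candidates, keep) if k]
```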


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Examples from the 3D axon dataset.
A, image with several crossing axons. B, image with two low-intensity crossing axons. C, image with high-intensity noise (on the right). D, image with blob-like noise.
Fig 2
Fig 2. Flow chart of the bouton detection method.
Our proposed algorithm has six main steps. (1) A negative Laplacian of Gaussian (LoG) mask is used to enhance blob-like objects (i.e. boutons) in the mean intensity projected image. (2) An interest point detector then detects possible bouton locations. (3) Non-maximum suppression is used to move candidate boutons to their local maxima and to remove multiple detections of the same bouton within a close local area. (4) Feature vectors (with 12 elements each) are then generated at the locations of the detected interest points. (5) A trained SVM classifies the points as boutons or non-boutons. (6) The last step uses the 2D coordinates to define a search volume for the 3rd coordinate.
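Step (6) is the only step that returns to the 3D stack. The caption does not specify how the search volume is defined, so the sketch below assumes a simple interpretation: for each detected (x, y), scan a small column of the stack and take the z-slice with the strongest local intensity. `recover_depth` and its `radius` parameter are hypothetical.

```python
import numpy as np

def recover_depth(stack, y, x, radius=3):
    """Hypothetical reading of step (6): around each detected (x, y), search
    a small (2*radius+1)-wide column of the 3D stack and return the z-slice
    with the highest summed intensity as the bouton depth."""
    z_dim, height, width = stack.shape
    y0, y1 = max(0, y - radius), min(height, y + radius + 1)
    x0, x1 = max(0, x - radius), min(width, x + radius + 1)
    column = stack[:, y0:y1, x0:x1].reshape(z_dim, -1)
    return int(np.argmax(column.sum(axis=1)))
```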
Fig 3
Fig 3. Example of the bouton detection method step by step.
A, A mean intensity projection image from the 3D axon dataset. B, The same image convolved with the LoG mask. C, Interest points detected using SURF in this example (green “+” signs). D, Following SVM classification, the final proposed boutons are plotted on the mean projection image (white “+” signs).
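A hedged example of the SURF detection step shown in panel C. This relies on OpenCV's contrib build (SURF lives in the nonfree `xfeatures2d` module); the Hessian threshold is an illustrative value, not one taken from the paper.

```python
import cv2
import numpy as np

def surf_interest_points(enhanced, hessian_threshold=400):
    """Step (2): candidate bouton locations via SURF. Requires
    opencv-contrib-python built with the nonfree modules; the Hessian
    threshold is an illustrative value, not taken from the paper."""
    img8 = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints = surf.detect(img8, None)
    # Return integer (row, col) coordinates for the downstream steps.
    return [(int(kp.pt[1]), int(kp.pt[0])) for kp in keypoints]
```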
Fig 4
Fig 4. Example of bouton features generated using Gabor filters.
On the left are examples of four different types of patches extracted at the interest points (detected here by SURF). The first two are examples of boutons, the third is an example of noise, and the fourth an example of an axon segment. Each image patch is then convolved with 12 different Gabor filters and the inner products are computed to create a 12-dimensional feature vector. The colorbar shows the pixel intensities.
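The caption specifies 12 Gabor filters but not their parameters, so the bank below assumes 3 frequencies × 4 orientations; the 25 × 25 patch size is borrowed from the evaluation boxes in Fig 5 and is likewise an assumption.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

# Assumed filter bank: 3 frequencies x 4 orientations = 12 Gabor filters.
KERNELS = [np.real(gabor_kernel(frequency=f, theta=t))
           for f in (0.1, 0.2, 0.3)
           for t in np.arange(4) * np.pi / 4]

def gabor_descriptor(image, point, patch_size=25):
    """Step (4): a 12-dimensional feature vector at an interest point. Each
    element is the patch's response to one Gabor filter, read off at the
    patch centre (equivalent to an inner product with the shifted filter).
    Border handling is omitted for brevity."""
    y, x = point
    r = patch_size // 2
    patch = image[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    return np.array([fftconvolve(patch, k, mode='same')[r, r]
                     for k in KERNELS])
```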
Fig 5
Fig 5. Examples of how True Positives (TP), False Positives (FP), False Negatives (FN) and True Negatives (TN) were classified.
For the calculation of these scores, we manually labelled boxes around the correct boutons. A point was classified as a TP only when its x and y coordinates lay within one of the boxes, and only one TP was counted per box (i.e. if there were two points within a box, one was counted as a TP and the other as an FP). FNs were counted as the number of labelled boxes that did not contain any detected points. The TNs were all other points in the image (excluding the 25 × 25 boxes around all TPs, FPs, and FNs).
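These counting rules translate directly into code. The sketch below takes detected points and labelled boxes, the latter as `(y0, x0, y1, x1)` tuples (a format chosen here for illustration), and returns the TP, FP and FN counts; TNs are omitted since they depend on the pixel grid rather than on the detections.

```python
def score_detections(points, boxes):
    """Counting rules from the caption: a detected point is a TP only if it
    falls inside a labelled box, at most one TP is counted per box (extra
    points in an already-matched box count as FPs), and every labelled box
    with no point inside it counts as one FN."""
    tp, fp = 0, 0
    hit = [False] * len(boxes)
    for (y, x) in points:
        unhit = [i for i, (y0, x0, y1, x1) in enumerate(boxes)
                 if y0 <= y <= y1 and x0 <= x <= x1 and not hit[i]]
        if unhit:
            hit[unhit[0]] = True
            tp += 1
        else:
            fp += 1  # outside every box, or the box was already matched
    fn = hit.count(False)
    return tp, fp, fn
```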
Fig 6
Fig 6. Graphs comparing the performance of the descriptors and interest point detectors at 103 SVM class thresholds.
We chose Gabor and SURF as our descriptor and interest point detector because they outperformed the other methods (all with separately optimized hyperparameters). The precision-recall graphs seem to have an unusual curvature; however, this can be explained by the nature of the dataset. In this axon dataset, where the number of TPs (i.e. boutons) is small relative to the size of the image, some FP detections are to be expected whenever TP points are detected. As such, there is never a case in which Precision = 1, as some FPs are always detected as well (i.e. the SVM cannot have an FPR of 0). A, Precision-Recall curve comparing feature descriptors (AUC: Gabor = 0.779, HOG = 0.728, SIFT = 0.75). Gabor-based descriptors reached the highest Precision and had the best overall performance, as shown by the AUC. B, Precision-Recall curve comparing interest point detectors (AUC: SURF = 0.779, Harris = 0.598, SIFT = 0.357). SURF reached the best TPR in comparison to the other methods. C, ROC curve comparing feature descriptors (AUC: Gabor = 1.8 × 10−5, HOG = 1.65 × 10−5, SIFT = 1.49 × 10−5). Gabor had the best overall performance, as shown by the AUC. D, ROC curve comparing interest point detectors (AUC: SURF = 1.8 × 10−5, Harris = 1.08 × 10−5, SIFT = 4.69 × 10−6). SURF reached the best Recall in comparison to the other methods. E-F, Error bar graphs comparing metrics between the descriptors and interest point detectors, respectively. Gabor and SURF had the best overall performance across the metrics compared. The dotted lines are where the graphs saturate. TPR, True Positive Rate; FPR, False Positive Rate; FP, False Positive; TP, True Positive; Error bars, SEM; AUC, Area Under Curve.
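For completeness, the Precision, Recall and F1 values reported here follow the standard definitions and can be computed from the counts returned by the scoring sketch above. Plugging in the abstract's reported Precision of 0.76 and Recall of 0.95 gives F1 ≈ 0.84, consistent with the 84% reported.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard definitions of the metrics reported throughout the paper."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```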
Fig 7
Fig 7. Comparative scores of our bouton detection algorithm versus EPBscore results.
Results of detection by EPBscore versus our algorithm on the test dataset (A-B), and on a published dataset (C-D) [23]. A, Example of bouton detection in the test dataset. The white boxes indicate the true positive boutons, the purple crosses are the boutons detected by our algorithm, and the green/teal crosses are the boutons detected by EPBscore. B, The proposed bouton detection method has significantly better Recall (p < 10−5, KS test) and F1 (p < 10−5, unpaired two-tailed t-test) scores than EPBscore. C, Example of bouton detection in the published dataset. D, In the published dataset, the proposed bouton detection method has significantly better Precision (p = 0.04), Recall (p = 0.002, unpaired two-tailed t-test) and F1 (p = 0.004, unpaired two-tailed t-test) scores than EPBscore. Error bars, SEM; KS, Kolmogorov-Smirnov.

References

    1. Peng H, Hawrylycz M, Roskams J, Hill S, Spruston N, Meijering E, et al. BigNeuron: Large-Scale 3D Neuron Reconstruction from Optical Microscopy Images. Neuron. 2015;87(2):252–256. doi:10.1016/j.neuron.2015.06.036
    2. Chen H, Xiao H, Liu T, Peng H. SmartTracing: self-learning-based Neuron reconstruction. Brain Informatics. 2015;2(3):135–144. doi:10.1007/s40708-015-0018-y
    3. Xiao H, Peng H. APP2: Automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree. Bioinformatics. 2013;29(11):1448–1454. doi:10.1093/bioinformatics/btt170
    4. Peng H, Long F, Myers G. Automatic 3D neuron tracing using all-path pruning. Bioinformatics. 2011;27(13):i239–i247. doi:10.1093/bioinformatics/btr237
    5. Zhou Z, Liu X, Long B, Peng H. TReMAP: Automatic 3D Neuron Reconstruction Based on Tracing, Reverse Mapping and Assembling of 2D Projections. Neuroinformatics. 2016;14(1):41–50. doi:10.1007/s12021-015-9278-1