A Benchmark Environment for Neuromorphic Stereo Vision
- PMID: 34095240
- PMCID: PMC8170485
- DOI: 10.3389/frobt.2021.647634
Abstract
Without neuromorphic hardware, artificial stereo vision suffers from high resource demands and processing times that impede real-time capability. This is mainly caused by high frame rates, a quality feature for conventional cameras, which generate large amounts of redundant data. Neuromorphic visual sensors generate less redundant and more relevant data, solving the problems of over- and undersampling at the same time. However, they require a rethinking of processing, as established techniques in conventional stereo vision do not exploit the potential of their event-based operation principle. Many alternatives have recently been proposed but have yet to be evaluated on a common data basis. We propose a benchmark environment offering the methods and tools to compare different algorithms for depth reconstruction from two event-based sensors. To this end, an experimental setup consisting of two event-based sensors and one depth sensor, together with a framework enabling synchronized, calibrated data recording, is presented. Furthermore, we define metrics enabling a meaningful comparison of the examined algorithms, covering aspects such as performance, precision, and applicability. To evaluate the benchmark, a stereo matching algorithm was implemented as a testing candidate, and multiple experiments with different settings and camera parameters were carried out. This work is a foundation for a robust and flexible evaluation of the multitude of new techniques for event-based stereo vision, allowing a meaningful comparison.
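The depth-reconstruction task described above can be illustrated with a minimal sketch, not the paper's actual algorithm: each sensor emits asynchronous events (pixel position, timestamp, polarity); on epipolar-rectified sensors, a left event is matched to a right event on the same row with similar timestamp and polarity, and depth follows from the standard pinhole relation depth = focal_length × baseline / disparity. The `Event` class, function names, and thresholds here are hypothetical choices for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row (assumed epipolar-rectified)
    t: float       # timestamp in seconds
    polarity: int  # +1 brightness increase, -1 decrease

def match_and_triangulate(left, right, focal_px, baseline_m,
                          max_dt=1e-3, max_disparity=100):
    """Naive greedy matching of left/right event streams, then
    depth = focal_px * baseline_m / disparity for each match."""
    depths = []
    for ev_l in left:
        # Candidates: same rectified row, same polarity,
        # close in time, positive disparity within bounds.
        cands = [ev_r for ev_r in right
                 if ev_r.y == ev_l.y
                 and ev_r.polarity == ev_l.polarity
                 and abs(ev_r.t - ev_l.t) <= max_dt
                 and 0 < ev_l.x - ev_r.x <= max_disparity]
        if not cands:
            continue
        # Pick the temporally closest candidate.
        ev_r = min(cands, key=lambda e: abs(e.t - ev_l.t))
        disparity = ev_l.x - ev_r.x
        depths.append((ev_l.x, ev_l.y, focal_px * baseline_m / disparity))
    return depths
```

A real event-based matcher must additionally handle unrectified geometry, ambiguous candidates, and noise events; the benchmark's ground-truth depth sensor is what allows such reconstructions to be scored for precision.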
Keywords: 3D reconstruction; benchmark; event-based stereo vision; neuromorphic applications; neuromorphic sensors.
Copyright © 2021 Steffen, Elfgen, Ulbrich, Roennau and Dillmann.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.