Front Neurosci. 2016 Feb 23;10:49. doi: 10.3389/fnins.2016.00049. eCollection 2016.

A Dataset for Visual Navigation with Neuromorphic Methods

Francisco Barranco et al. Front Neurosci. 2016.

Abstract

Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to conventional Computer Vision approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages and the real data recorded using a mobile robotic platform carrying a dynamic and active pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.
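The frame-free part of such data consists of address events rather than images: each event is a pixel location, a timestamp, and a polarity. As a minimal sketch of how an event stream can be represented and accumulated into an image-like view for inspection, the Python snippet below assumes (x, y, t, polarity) fields and the 240x180 resolution of the DAVIS240; the field names and the accumulation step are illustrative, not the dataset's actual file format or loader.

    from typing import NamedTuple, List
    import numpy as np

    class Event(NamedTuple):
        x: int          # pixel column
        y: int          # pixel row
        t: float        # timestamp in seconds (microsecond resolution on the sensor)
        polarity: int   # +1 for an ON (brightness increase) event, -1 for OFF

    def accumulate(events: List[Event], width: int = 240, height: int = 180) -> np.ndarray:
        """Sum event polarities per pixel to get a rough image-like view of the stream."""
        img = np.zeros((height, width), dtype=np.int32)
        for e in events:
            img[e.y, e.x] += e.polarity
        return img

    # Tiny synthetic example: two events at the same pixel with opposite polarity cancel out.
    stream = [Event(10, 20, 0.001, +1), Event(10, 20, 0.002, -1), Event(50, 60, 0.003, +1)]
    print(accumulate(stream)[60, 50])   # -> 1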

Keywords: calibration; dataset; event-driven methods; frame-free sensors; visual navigation.

Figures

Figure 1
Left: Pan-Tilt Unit FLIR PTU-46-17P70T (http://www.flir.com/mcs/view/?id=53707). Center: Pioneer 3DX mobile robot (http://www.mobilerobots.com/ResearchRobots/PioneerP3DX.aspx). Right: DAVIS240b sensor (http://inilabs.com).
Figure 2
Depth registration from RGB-D sensor (top row) to DAVIS sensor (bottom row).
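Registration of this kind typically backprojects each RGB-D depth pixel to a 3D point with the RGB-D intrinsics, applies the rigid transform between the two cameras, and reprojects with the DAVIS intrinsics. The sketch below illustrates that standard pipeline; the intrinsic matrices K_rgbd and K_davis, the extrinsics R and t, and the 180x240 output size are placeholders, not the calibration distributed with the dataset.

    import numpy as np

    def register_depth(depth, K_rgbd, K_davis, R, t, out_shape=(180, 240)):
        """Warp a depth map from the RGB-D camera frame into the DAVIS camera frame."""
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]
        z = depth.ravel()
        valid = z > 0
        # Backproject valid RGB-D pixels to 3D points in the RGB-D camera frame.
        pix = np.stack([u.ravel()[valid], v.ravel()[valid], np.ones(valid.sum())])
        pts = np.linalg.inv(K_rgbd) @ pix * z[valid]
        # Rigidly transform into the DAVIS frame and keep points in front of the camera.
        pts_d = R @ pts + t.reshape(3, 1)
        front = pts_d[2] > 0
        # Project with the DAVIS intrinsics and splat depth to the nearest pixel.
        proj = K_davis @ pts_d[:, front]
        ud = np.round(proj[0] / proj[2]).astype(int)
        vd = np.round(proj[1] / proj[2]).astype(int)
        out = np.zeros(out_shape)
        keep = (ud >= 0) & (ud < out_shape[1]) & (vd >= 0) & (vd < out_shape[0])
        out[vd[keep], ud[keep]] = pts_d[2, front][keep]  # ignores occlusion ordering
        return out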
Figure 3
Left: Translation vector u of the DAVIS coordinate system with respect to the PTU, and r, the PTU rotation axis. The pose of the DAVIS sensor is represented by its axis s. Right: DAVIS coordinate system O_D and PTU coordinate system O_PTU. O_D^rt represents the DAVIS coordinate system after a pan-tilt rotation of the PTU, characterized by a translation t and the rotation R around its axis r. Image adapted from Bitsakos (2010).
Figure 4
Visualization of the error function from the minimization for pan (left) and tilt (right). The minimum error is marked on the sphere with a red star. The search is done in spherical coordinates over the rotation axis r, which has two degrees of freedom. For each rotation we solve for the (best) translation.
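In outline, such a search can be organized as a grid over the two spherical angles of the candidate axis r, with the translation solved in closed form for each candidate rotation and the axis of minimum residual retained. The sketch below follows that structure on synthetic point correspondences with a plain least-squares residual; it illustrates the shape of the minimization under assumed inputs, not the paper's exact error function.

    import numpy as np

    def axis_angle_rotation(axis, angle):
        """Rodrigues' formula: rotation matrix for a unit axis and an angle in radians."""
        axis = axis / np.linalg.norm(axis)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    def search_axis(p, q, angle, n_theta=90, n_phi=180):
        """Grid search over spherical coordinates (theta, phi) of the rotation axis.
        p, q: 3xN points measured before/after a PTU rotation by the known angle."""
        best = (np.inf, None, None)
        for theta in np.linspace(0, np.pi, n_theta):
            for phi in np.linspace(0, 2 * np.pi, n_phi, endpoint=False):
                axis = np.array([np.sin(theta) * np.cos(phi),
                                 np.sin(theta) * np.sin(phi),
                                 np.cos(theta)])
                Rm = axis_angle_rotation(axis, angle)
                t = np.mean(q - Rm @ p, axis=1)            # best translation for this rotation
                err = np.linalg.norm(q - (Rm @ p + t[:, None]))
                if err < best[0]:
                    best = (err, axis, t)
        return best

    # Synthetic check: recover a known axis from noiseless correspondences.
    true_axis = np.array([0.0, 1.0, 0.0])                  # e.g., a pure pan axis
    angle = np.deg2rad(20)
    p = np.random.rand(3, 50)
    q = axis_angle_rotation(true_axis, angle) @ p + np.array([[0.1], [0.0], [0.05]])
    err, axis, t = search_axis(p, q, angle)
    print(err, axis, t)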
Figure 5
Example sequences from the dataset. For each sequence we show: DAVIS APS frame (first row), depth map (second row), motion flow field (third row), and the rotation and translation values (in 10^-2 rad/frame and 10^-2 pix/frame). The color coding for the depth map uses cold colors for near points and warm colors for far points. The motion flow fields are color-coded as in Baker et al. (2011), with the hue representing the direction of the motion vectors and the saturation their magnitude.
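A minimal version of such a flow color coding, in the spirit of Baker et al. (2011), maps direction to hue and magnitude to saturation; the normalization and the use of matplotlib's hsv_to_rgb below are choices made for brevity rather than the exact color wheel used in the figures.

    import numpy as np
    from matplotlib.colors import hsv_to_rgb

    def flow_to_rgb(flow, max_mag=None):
        """Color-code a flow field (H x W x 2): hue = direction, saturation = magnitude."""
        u, v = flow[..., 0], flow[..., 1]
        mag = np.hypot(u, v)
        ang = np.arctan2(v, u)                      # direction in [-pi, pi]
        if max_mag is None:
            max_mag = mag.max() + 1e-9
        hsv = np.zeros(flow.shape[:2] + (3,))
        hsv[..., 0] = (ang + np.pi) / (2 * np.pi)   # hue from direction
        hsv[..., 1] = np.clip(mag / max_mag, 0, 1)  # saturation from magnitude
        hsv[..., 2] = 1.0                           # full value everywhere
        return hsv_to_rgb(hsv)

    # Example: a synthetic flow field that rotates about the image center.
    h, w = 180, 240
    yy, xx = np.mgrid[0:h, 0:w]
    flow = np.dstack([-(yy - h / 2), xx - w / 2]).astype(float)
    rgb = flow_to_rgb(flow)                         # (180, 240, 3) array in [0, 1]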

References

    1. Badino H., Huber D., Kanade T. (2015). The CMU Visual Localization Data Set. Available online at: http://3dvis.ri.cmu.edu/data-sets/localization/ (Accessed November 01, 2015).
    2. Baker S., Scharstein D., Lewis J. P., Roth S., Black M. J., Szeliski R. (2011). A database and evaluation methodology for optical flow. Int. J. Comput. Vis. 92, 1–31. 10.1007/s11263-010-0390-2 - DOI
    3. Barranco F., Fermuller C., Aloimonos Y. (2014). Contour motion estimation for asynchronous event-driven cameras. Proc. IEEE 102, 1537–1556. 10.1109/JPROC.2014.2347207 - DOI
    4. Barranco F., Fermuller C., Aloimonos Y. (2015). Bio-inspired motion estimation with event-driven sensors, in Advances in Computational Intelligence, eds Rojas I., Joya G., Catala A. (Palma de Mallorca: Springer), 309–321.
    5. Barranco F., Tomasi M., Diaz J., Vanegas M., Ros E. (2012). Parallel architecture for hierarchical optical flow estimation based on FPGA. IEEE Trans. Very Large Scale Integr. Syst. 20, 1058–1067. 10.1109/TVLSI.2011.2145423 - DOI