Auton Robots. 2018;42(2):177–196. doi: 10.1007/s10514-017-9615-3. Epub 2017 Feb 15.

Revisiting active perception


Ruzena Bajcsy et al. Auton Robots. 2018.

Abstract

Despite the recent successes in robotics, artificial intelligence and computer vision, a complete artificial agent necessarily must include active perception. A multitude of ideas and methods for how to accomplish this have already appeared in the past, their broader utility perhaps impeded by insufficient computational power or costly hardware. The history of these ideas, perhaps selective due to our perspectives, is presented with the goal of organizing the past literature and highlighting the seminal contributions. We argue that those contributions are as relevant today as they were decades ago and, with the state of modern computational tools, are poised to find new life in the robotic perception systems of the next decade.

Keywords: Attention; Control; Perception; Sensing.


Figures

Fig. 1
The basic elements of active perception broken down into their constituent components. An embodiment of active perception would include the Why component and at least one of the remaining elements, whereas a complete active agent would include at least one component from each.
Fig. 2
The current standard processing pipeline common in computer vision.
Fig. 3
The active perception processing pipeline.

