Sensors (Basel). 2015 Dec 12;15(12):31362-91. doi: 10.3390/s151229861.

Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

Miguel A Olivares-Mendez et al.
Abstract

Poaching is an illegal activity that remains out of control in many countries. According to the 2014 report of the United Nations and Interpol, the global illegal trade in wildlife and natural resources amounts to nearly $213 billion every year and even helps to fund armed conflicts. Poaching activities around the world are pushing many animal species to the brink of extinction. Unfortunately, traditional methods of fighting poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new sensor and algorithm technologies, as well as aerial platforms, is crucial to confront the sharp increase in poaching activity over the last few years. Our work focuses on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing.

Keywords: animal tracking; anti-poaching; autonomous landing; autonomous navigation; computer vision; face detection; object following; unmanned aerial vehicles; vision-based control.


Figures

Figure 1. The PCA subspace-based tracking of a 3D rhino in our work: each rhino image is resized to 32 × 32 pixels, and the reconstructed rhino image is built from the eigenbasis. The eigenbasis images are sorted by their corresponding eigenvalues.
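The eigenbasis reconstruction mentioned in Figure 1 can be illustrated with a short sketch. The snippet below is not the paper's incremental PCA tracker; it only shows, assuming vectorized 32 × 32 grayscale patches and a hypothetical subspace dimension of 16, how a patch is projected onto a PCA subspace and rebuilt from the eigenbasis (Python/NumPy).

    # Minimal sketch of eigenbasis reconstruction for 32x32 appearance patches.
    # Not the paper's incremental PCA tracker; it only illustrates how a patch
    # is approximated from a PCA subspace learned over earlier patches.
    import numpy as np

    PATCH_SIZE = 32 * 32      # each rhino image is resized to 32 x 32 pixels
    N_COMPONENTS = 16         # assumed subspace dimension (hypothetical value)

    def learn_eigenbasis(patches):
        """patches: (n_samples, 1024) array of vectorized grayscale patches."""
        mean = patches.mean(axis=0)
        centered = patches - mean
        # Right singular vectors give the eigenbasis, ordered by singular value,
        # i.e. by the corresponding eigenvalues of the covariance matrix.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean, vt[:N_COMPONENTS]

    def reconstruct(patch, mean, basis):
        """Project a patch onto the subspace and rebuild it from the eigenbasis."""
        coeffs = basis @ (patch - mean)
        return mean + basis.T @ coeffs

    # Usage with random stand-in data (replace with real tracked-object patches).
    rng = np.random.default_rng(0)
    history = rng.random((200, PATCH_SIZE))
    mean, basis = learn_eigenbasis(history)
    approx = reconstruct(history[0], mean, basis)
    print("reconstruction error:", np.linalg.norm(approx - history[0]))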
Figure 2. The dynamic model of visual rhino tracking.
Figure 3. Our adaptive visual tracker for 3D animal tracking. The k-th frame is downsampled to create the multi-resolution structure (middle). During motion model propagation, lower-resolution textures are used first to reject the majority of samples at relatively low cost, leaving a relatively small number of samples to be processed at higher resolutions. IPSLp denotes the incremental PCA subspace learning-based (IPSL) tracker at the p-th level of the pyramid.
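The coarse-to-fine sample rejection described in Figure 3 can be sketched as follows. This is an illustration only: score() is a stand-in for the IPSL likelihood used in the paper, and the pyramid depth and keep fraction are assumed values.

    # Sketch of coarse-to-fine candidate pruning over an image pyramid.
    # The actual per-level IPSL tracker is not reproduced here; score() stands
    # in for the likelihood of a candidate under the learned appearance model.
    import numpy as np

    def score(patch):
        # Hypothetical placeholder: higher is better (e.g. negative PCA
        # reconstruction error in the real tracker).
        return -np.var(patch)

    def build_pyramid(frame, levels=3):
        """Halve the resolution at each level; level 0 is the finest."""
        pyramid = [frame]
        for _ in range(levels - 1):
            pyramid.append(pyramid[-1][::2, ::2])
        return pyramid

    def coarse_to_fine(frame, candidates, keep_fraction=0.2, levels=3):
        """candidates: list of (x, y, w, h) boxes in full-resolution coordinates."""
        pyramid = build_pyramid(frame, levels)
        survivors = candidates
        for level in reversed(range(levels)):        # coarsest level first
            scale = 2 ** level
            img = pyramid[level]
            scored = []
            for (x, y, w, h) in survivors:
                patch = img[y // scale:(y + h) // scale, x // scale:(x + w) // scale]
                scored.append((score(patch), (x, y, w, h)))
            scored.sort(key=lambda s: s[0], reverse=True)
            n_keep = max(1, int(len(scored) * keep_fraction)) if level > 0 else 1
            survivors = [box for _, box in scored[:n_keep]]
        return survivors[0]     # best candidate at the finest resolution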
Figure 4. Reference rectangle of the ground truth. The reference rectangle includes all of the pixels of the tracked animal, and the pink points are key pixels for locating the reference rectangle.
Figure 5. Visual rhino tracking. The red rectangle shows the estimated location of the running rhino.
Figure 6. Visual rhino tracking.
Figure 7. Visual elephant tracking.
Figure 8. Visual elephant tracking.
Figure 9. Integral image features used in boosting cascade face detection.
Figure 10. Total set of features used by the OpenCV detector.
Figure 11. Examples of features: (a) the nose bridge tends to be brighter than the eyes; (b) the forehead tends to be brighter than the eye region below it.
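Figures 9-11 refer to the boosted cascade of Haar-like features (computed over an integral image) as implemented in OpenCV. Below is a minimal detection sketch using OpenCV's pretrained frontal-face cascade; the file name and detection parameters are illustrative assumptions, not the configuration used on the AscTec Firefly footage.

    # Minimal Haar-cascade face detection with OpenCV's pretrained model.
    # Parameters (scaleFactor, minNeighbors, minSize) are illustrative defaults,
    # not the values used on the UAV footage in the paper.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = cv2.imread("frame.jpg")           # a single frame from the UAV camera
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)             # reduce illumination differences

    faces = cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(24, 24))

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", frame)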
Figure 12. AscTec Firefly with the mounted uEye camera.
Figure 13. Shadow example: the detection is stable even during faster movement of the drone.
Figure 14. Direct sunlight example: note the detection of the person standing in the shadow.
Figure 15. Fly-over example.
Figure 16. Final design of the variables of the fuzzy controller after the manual tuning process in the virtual environment (V-REP).
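The fuzzy controller referred to in Figure 16 is only summarized on this page, so the sketch below is purely illustrative: a hypothetical single-input Mamdani-style controller with triangular membership functions and centroid defuzzification. The membership limits, rule base and additional inputs used in the paper are not reproduced here.

    # Hypothetical single-input fuzzy controller sketch (triangular memberships,
    # centroid defuzzification). The membership limits, rule base and the extra
    # inputs used in the paper are NOT reproduced here.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuzzy_velocity_command(error_m):
        """Map a lateral position error (metres) to a velocity command (m/s)."""
        # Degrees of membership of the input in three assumed linguistic sets.
        neg = tri(error_m, -2.0, -1.0, 0.0)
        zero = tri(error_m, -1.0, 0.0, 1.0)
        pos = tri(error_m, 0.0, 1.0, 2.0)

        # Output universe and assumed output sets (m/s), clipped by rule firing.
        v = np.linspace(-1.0, 1.0, 201)
        out_neg = np.minimum(neg, tri(v, -1.0, -0.5, 0.0))
        out_zero = np.minimum(zero, tri(v, -0.5, 0.0, 0.5))
        out_pos = np.minimum(pos, tri(v, 0.0, 0.5, 1.0))

        aggregated = np.maximum.reduce([out_neg, out_zero, out_pos])
        if aggregated.sum() == 0.0:
            return 0.0
        return float((v * aggregated).sum() / aggregated.sum())   # centroid

    print(fuzzy_velocity_command(0.7))   # positive error -> positive command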
Figure 17. Youbot platform with the ArUco target.
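Detecting an ArUco target such as the one in Figure 17 can be done with OpenCV's aruco module. The sketch below assumes a DICT_6X6_250 dictionary and a single frame on disk; the marker IDs, camera calibration and pose-estimation step used in the paper are not reproduced. Note that OpenCV 4.7+ replaces the detectMarkers() function with the cv2.aruco.ArucoDetector class.

    # Minimal ArUco marker detection with OpenCV's aruco module (opencv-contrib).
    # The dictionary (DICT_6X6_250) is an assumption; the paper's marker IDs and
    # camera calibration are not reproduced here.
    import cv2

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)

    frame = cv2.imread("youbot_frame.jpg")    # frame looking down at the target
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)

    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        # Centre of the first detected marker: a natural input for the
        # vision-based following / landing controller.
        cx, cy = corners[0][0].mean(axis=0)
        print("marker", int(ids[0][0]), "centre at", (cx, cy))
    cv2.imwrite("aruco_detection.jpg", frame)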
Figure 18. Evolution of the error of the lateral, longitudinal, vertical and heading controllers in the first moving-target-following experiment.
Figure 19. Evolution of the error of the lateral, longitudinal, vertical and heading controllers in the second moving-target-following experiment.
Figure 20. Evolution of the error of the lateral, longitudinal, vertical and heading controllers in the second experiment of autonomous landing on a moving target.
Figure 21. Evolution of the error of the lateral, longitudinal, vertical and heading controllers in the third experiment of autonomous landing on a moving target.
