Sensors (Basel). 2016 May 28;16(6):782. doi: 10.3390/s16060782.

A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors


Ricardo Acevedo-Avila et al. Sensors (Basel).

Abstract

Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, and very few can be adapted to the computing restrictions of embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in a single image scan; it is based on a linked-list data structure used to label blobs according to their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose use; as such, several test cases focused on character recognition are also examined. The results present a fair trade-off between accuracy and memory requirements, and demonstrate the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
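The one-scan, run-based labeling that the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names (`extract_runs`, `label_blobs`) and the union-find label merging are assumptions, and the paper's linked-list bin structure is replaced here with plain Python lists and a dictionary.

```python
def extract_runs(row):
    """Return (start, end) spans of consecutive foreground pixels in a binary row."""
    runs, start = [], None
    for x, px in enumerate(row):
        if px and start is None:
            start = x
        elif not px and start is not None:
            runs.append((start, x - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs


def label_blobs(image):
    """Label blobs in a single top-to-bottom scan.

    Each run on the current row is tested for 8-connected overlap against the
    runs of the previous row (the "row connectivity test"); runs that bridge
    two previously separate labels cause those labels to be merged.
    """
    parent = {}  # label -> parent label (union-find, assumed merging scheme)

    def find(l):
        while parent[l] != l:
            parent[l] = parent[parent[l]]  # path halving
            l = parent[l]
        return l

    next_label = 0
    prev = []       # (start, end, label) runs of the previous row
    all_runs = []   # (row, start, end, label) for every run seen
    for y, row in enumerate(image):
        cur = []
        for s, e in extract_runs(row):
            label = None
            for ps, pe, pl in prev:
                # 8-connectivity: runs touch if they overlap or meet diagonally.
                if ps <= e + 1 and pe >= s - 1:
                    if label is None:
                        label = find(pl)
                    else:
                        # This run joins two blobs: merge their labels.
                        parent[find(pl)] = find(label)
            if label is None:          # no connection: start a new blob
                label = next_label
                parent[label] = label
                next_label += 1
            cur.append((s, e, label))
            all_runs.append((y, s, e, label))
        prev = cur
    # Resolve merged labels to their canonical roots.
    return [(y, s, e, find(l)) for (y, s, e, l) in all_runs]
```

For example, a U-shaped object whose two vertical strokes meet only at the bottom row is first assigned two labels, which the bottom run then merges into one, so `label_blobs` reports a single blob.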

Keywords: embedded computer vision; field programmable gate array (FPGA); object detection.


Figures

Figure 1. Blob detection and tracking: full system overview.
Figure 2. The four tests that comprise the row connectivity test.
Figure 3. Relationships between the linked lists and the bin data structure.
Figure 4. Detection Case 1: detection order changes.
Figure 5. Detection Case 2: long run.
Figure 6. Detection Case 3: blob termination.
Figure 7. Run list. The first row is called the origin row; the second is the destination row.
Figure 8. Sequential processing of two runs in a list of SDO = 3. Each node is depicted as (Row, Key).
Figure 9. Concave-up shape before and after correction.
Figure 10. Application example: complete embedded video surveillance system.
Figure 11. General overview of the blob detection co-processor.
Figure 12. Simplified control FSM diagram for the hardware implementation.
Figure 13. The bin list implemented as a register-based array; in this figure, a maximum of three objects can be stored.
Figure 14. The label list implemented as a register-based array; in this figure, a maximum of three labels can be stored.
Figure 15. The free-bins component; in this figure, a maximum of three bins can be used.
Figure 16. Results from the blob detection FPGA sub-system.
Figure 17. Outdoor blob detection on the PETS2001 database.
Figure 18. Test Image 1.
Figure 19. Test Image 2.
Figure 20. Complex test images.
Figure 21. Complex test images after correction is applied.
Figure 22. Input images from the USC-SIPI image database.
Figure 23. Simultaneously detectable objects vs. frame processing rate for different image resolutions.
