Optimized Memory Allocation and Power Minimization for FPGA-Based Image Processing
- PMID: 34465704
- PMCID: PMC8320856
- DOI: 10.3390/jimaging5010007
Abstract
Memory is the biggest limiting factor in the widespread use of FPGAs for high-level image processing, which requires one or more complete frames to be stored in situ. Since FPGAs have limited on-chip memory capacity, efficient use of such resources is essential to meet performance, size and power constraints. In this paper, we investigate the allocation of on-chip memory resources in order to minimize resource usage and power consumption, contributing to the realization of power-efficient high-level image processing fully contained on FPGAs. We propose methods for generating memory architectures, from both Hardware Description Language and High-Level Synthesis designs, that minimize memory usage and power consumption. Based on a formalization of on-chip memory configuration options and a power model, we demonstrate how our partitioning algorithms can outperform traditional strategies. Compared to commercial FPGA synthesis and High-Level Synthesis tools, our results show that the proposed algorithms can achieve up to 60% higher utilization efficiency, increasing the size and/or number of frames that can be accommodated, and can reduce the dynamic power consumption of frame buffers by up to approximately 70%. In experiments with Optical Flow and MeanShift Tracking, two representative high-level algorithms, the partitioning algorithms reduce total power by up to 25% and 30%, respectively, without impacting performance.
Keywords: design; field programmable gate array (FPGA); image processing; memory; power.
Conflict of interest statement
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
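As a point of reference for the configuration-space reasoning the abstract describes, the sketch below enumerates the aspect ratios of an assumed Xilinx-style 18 Kb block RAM primitive and reports the cheapest tiling for a frame buffer, together with its utilization efficiency. This is a hypothetical illustration of the on-chip memory configuration options mentioned in the abstract, not the paper's partitioning algorithm or power model; the set of legal aspect ratios and the primitive size are assumptions that vary by device family.

```python
from math import ceil

# Assumed Xilinx-style 18 Kb BRAM aspect ratios as (depth, width) pairs.
# Illustrative only: the legal configurations depend on the device family.
BRAM_CONFIGS = [(16384, 1), (8192, 2), (4096, 4), (2048, 9), (1024, 18)]
BRAM_BITS = 18 * 1024  # nominal capacity of one 18 Kb primitive


def brams_for_buffer(num_words: int, word_width: int):
    """Enumerate primitive aspect ratios for a num_words x word_width buffer
    and return (best_config, bram_count, utilization) for the cheapest tiling."""
    best = None
    for depth, width in BRAM_CONFIGS:
        # Tile the logical memory: columns cover the word width, rows cover the depth.
        count = ceil(word_width / width) * ceil(num_words / depth)
        if best is None or count < best[1]:
            best = ((depth, width), count)
    utilization = (num_words * word_width) / (best[1] * BRAM_BITS)
    return best[0], best[1], utilization


if __name__ == "__main__":
    # Example: a 640x480 greyscale frame buffer at 8 bits per pixel.
    cfg, count, util = brams_for_buffer(640 * 480, 8)
    print(f"best aspect ratio {cfg}: {count} BRAMs, {util:.1%} utilization")
```

For the 640x480, 8-bit example this picks a 4096x4 arrangement using 150 primitives at roughly 89% utilization; the gap between such a naive tiling and an optimized partitioning is the kind of utilization-efficiency margin the abstract quantifies.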