Sensors (Basel). 2023 May 11;23(10):4667. doi: 10.3390/s23104667.

Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI


Mara Pistellato et al. Sensors (Basel).

Abstract

Over the past few years, many applications have extensively exploited the advantages of deep learning, in particular convolutional neural networks (CNNs). The intrinsic flexibility of such models makes them widely adopted in a variety of practical applications, from medical to industrial. In the latter scenario, however, consumer Personal Computer (PC) hardware is not always suited to the potentially harsh conditions of the working environment and the strict timing constraints that industrial applications typically impose. Therefore, the design of custom FPGA (Field Programmable Gate Array) solutions for network inference is gaining massive attention from researchers and companies alike. In this paper, we propose a family of network architectures composed of three kinds of custom layers that work with integer arithmetic at a customizable precision (down to just two bits). Such layers are designed to be effectively trained on classical GPUs (Graphics Processing Units) and then synthesized to FPGA hardware for real-time inference. The idea is to provide a trainable quantization layer, called Requantizer, acting both as a non-linear activation for neurons and as a value rescaler that matches the desired bit precision. This way, the training is not only quantization-aware, but also capable of estimating the optimal scaling coefficients to accommodate both the non-linear nature of the activations and the constraints imposed by the limited precision. In the experimental section, we test the performance of this kind of model both on classical PC hardware and on a case-study implementation of a signal peak detection device running on a real FPGA. We employ TensorFlow Lite for training and comparison, and use Xilinx FPGAs and Vivado for synthesis and implementation. The results show that the accuracy of the quantized networks is close to that of the floating-point version, without the need for representative calibration data as in other approaches, and that performance is better than that of dedicated peak detection algorithms. The FPGA implementation runs in real time at a rate of four gigapixels per second with moderate hardware resources, while achieving a sustained efficiency of 0.5 TOPS/W (tera operations per second per watt), in line with custom integrated hardware accelerators.
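
The abstract describes the Requantizer as a trainable layer that combines a non-linear activation with a learned rescaling to a fixed bit width. As a minimal illustration of that idea (not the authors' implementation), the following Keras-style sketch assumes a learned power-of-two scale, ReLU-style clipping, and a straight-through estimator for the rounding; the `bits` parameter and `log2_scale` weight are hypothetical names introduced here.

```python
import tensorflow as tf

class Requantizer(tf.keras.layers.Layer):
    """Illustrative sketch (assumed design): rescale and clip activations,
    then round them to a fixed number of bits, with a straight-through
    estimator so gradients can flow through the rounding during training."""

    def __init__(self, bits=4, **kwargs):
        super().__init__(**kwargs)
        self.bits = bits

    def build(self, input_shape):
        # Learned log2 scaling coefficient; the paper learns optimal scaling
        # factors, but this exact parameterization is only an assumption.
        self.log2_scale = self.add_weight(
            name="log2_scale", shape=(), initializer="zeros", trainable=True)

    def call(self, x):
        qmax = 2.0 ** self.bits - 1.0                 # unsigned activation range
        scale = 2.0 ** self.log2_scale                # power-of-two rescaling
        y = tf.clip_by_value(x * scale, 0.0, qmax)    # ReLU-like non-linearity + clipping
        y_rounded = tf.round(y)
        # Forward pass uses the quantized values; backward pass skips the
        # non-differentiable rounding step (straight-through estimator).
        return y + tf.stop_gradient(y_rounded - y)
```

A layer of this kind can be interleaved with standard convolutional layers during GPU training; a learned power-of-two scale would then map directly onto the shift-based re-normalization performed between layers in an integer-only FPGA pipeline.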

Keywords: FPGA; edge AI; peak-detection; quantization-aware training; quantized CNN.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Kernel partial sum computation module. The multipliers are followed by a pipelined adder tree to compute a partial sum.
Figure 2. Sliced kernel computation. Different slices are connected in series to compute the full kernel sequentially.
Figure 3. Weight buffers. A shift register is used to give the correct sequence of weights to the multipliers at every clock cycle.
Figure 4. Partial sum accumulation with non-unit rate. Delay registers are used to store partial results as the kernels take turns computing in the partial sum module.
Figure 5. Value re-normalization between two layers. Integer arithmetic is used in this phase by shifting the results by appropriate values (a minimal numerical sketch is given after the figure list).
Figure 6. Implementation of the max-pooling operator. Outputs are compared with the previous result through a delay that depends on the rate at which the stage is operating.
Figure 7. Aggregation module. Outputs are staged in temporary registers until all values are ready to be transferred to the next layer.
Figure 8. Rate sharing: different kernels are computed along all channels in different clock cycles.
Figure 9. Kernel decomposition: different channels are computed for all kernels in different clock cycles.
Figure 10. Accumulation module. An accumulator is used to store the partial result within the same slice.
Figure 11. Examples of one-dimensional signals (in black) from our synthetic dataset, generated to simulate camera scanlines for the peak detection network. The displayed samples are taken from the test set; each colored stripe denotes the predicted class for each signal portion, namely no peak, left peak, and right peak. The red cross denotes the peak location computed from the given prediction.
Figure 12. Accuracy values for the proposed method at different quantization levels. The leftmost value, “Float”, indicates the original network with no quantization, while the other values show the accuracy obtained when decreasing the quantization from 8 down to 4 bits.
Figure 13. Example of a real acquired image with a laser line hitting a metallic object formed by three planar parts (top) and a line detail (bottom), with peaks detected by our 4-bit quantized network plotted as red points.
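
Figure 5 describes the re-normalization between layers as an integer shift. The following Python sketch is purely illustrative: the accumulator width, shift amount, and 4-bit output range used here are assumptions, not values taken from the paper.

```python
import numpy as np

def requantize_shift(acc, shift, out_bits=4):
    """Hypothetical illustration of inter-layer re-normalization:
    rescale a wide integer accumulator to the next layer's precision
    with an arithmetic right shift, then clip to the output range."""
    acc = np.asarray(acc, dtype=np.int64)
    y = acc >> shift                       # divide by 2**shift using a shift
    qmax = (1 << (out_bits - 1)) - 1       # e.g. +7 for 4-bit signed outputs
    qmin = -(1 << (out_bits - 1))          # e.g. -8 for 4-bit signed outputs
    return np.clip(y, qmin, qmax).astype(np.int8)

# Example: wide partial sums rescaled to 4-bit activations.
print(requantize_shift([1200, -800, 31], shift=7))   # -> [ 7 -7  0]
```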
