Article

Designing Deep Learning Hardware Accelerator and Efficiency Evaluation

Journal

COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
Volume 2022, Issue -, Pages -

Publisher

Hindawi Ltd
DOI: 10.1155/2022/1291103


Funding

  1. Xiamen University Malaysia Research Fund [XMUMRF/2022-C9/IECE/0033]

Abstract

This paper presents FPGA-based accelerator strategies for CNN computation and discusses the role of parallel computing in the convolution algorithm. Experimental results show that the FPGA platform outperforms traditional CPU and GPU computation in operation efficiency and energy consumption ratio.
With the rapid development of deep learning applications, the convolutional neural network (CNN) has posed a tremendous challenge to traditional processors in meeting its computing requirements, making it urgent to adopt new strategies that improve efficiency and reduce energy consumption. Diverse accelerator strategies for CNN computation based on the field-programmable gate array (FPGA) platform have been explored because they offer high parallelism, low power consumption, and good programmability. This paper first reviews state-of-the-art FPGA-based accelerator designs, emphasizing the contributions and limitations of existing work. We then present the key concepts of parallel computing (PC) in the convolution algorithm and discuss how parallelism can be realized on the FPGA hardware structure. Finally, using the proposed CPU+FPGA framework, we perform experiments and compare the performance against traditional computation strategies in terms of operation efficiency and energy consumption ratio. The results show that the efficiency of the FPGA platform is much higher than that of the central processing unit (CPU) and graphics processing unit (GPU).
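
As a concrete illustration of the loop-level parallelism the abstract refers to, the following minimal sketch (written for this listing, not taken from the paper; the array sizes, the conv2d name, and the loop annotations are illustrative assumptions) shows the plain C loop nest of a convolution layer and marks in comments the loops that FPGA accelerators commonly unroll into parallel multiply-accumulate units or pipeline.

/* Minimal sketch (not from the paper): the loop nest of a CNN convolution
 * layer, annotated with the loop-level parallelism an FPGA accelerator
 * typically exploits. Sizes and annotations are illustrative assumptions. */
#include <stdio.h>

#define IN_CH   3                    /* input feature-map channels        */
#define OUT_CH  8                    /* output feature-map channels       */
#define IN_DIM  8                    /* input feature-map height/width    */
#define K       3                    /* kernel height/width               */
#define OUT_DIM (IN_DIM - K + 1)     /* output size, no padding, stride 1 */

static float in[IN_CH][IN_DIM][IN_DIM];
static float w[OUT_CH][IN_CH][K][K];
static float out[OUT_CH][OUT_DIM][OUT_DIM];

static void conv2d(void)
{
    for (int oc = 0; oc < OUT_CH; oc++)           /* output channels: unroll -> array of parallel PEs */
        for (int oy = 0; oy < OUT_DIM; oy++)      /* output rows/cols: pipeline -> one output/cycle   */
            for (int ox = 0; ox < OUT_DIM; ox++) {
                float acc = 0.0f;
                for (int ic = 0; ic < IN_CH; ic++)          /* input channels: unrolled MAC tree        */
                    for (int ky = 0; ky < K; ky++)
                        for (int kx = 0; kx < K; kx++)      /* kernel window: unrolled MAC tree         */
                            acc += in[ic][oy + ky][ox + kx] * w[oc][ic][ky][kx];
                out[oc][oy][ox] = acc;
            }
}

int main(void)
{
    /* Fill inputs and weights with a simple deterministic pattern. */
    for (int c = 0; c < IN_CH; c++)
        for (int y = 0; y < IN_DIM; y++)
            for (int x = 0; x < IN_DIM; x++)
                in[c][y][x] = (float)(c + y + x);
    for (int o = 0; o < OUT_CH; o++)
        for (int c = 0; c < IN_CH; c++)
            for (int y = 0; y < K; y++)
                for (int x = 0; x < K; x++)
                    w[o][c][y][x] = 0.1f;

    conv2d();
    printf("out[0][0][0] = %f\n", out[0][0][0]);
    return 0;
}

On an FPGA, the independent iterations of the output-channel and kernel-window loops map naturally to replicated multiply-accumulate hardware, which is the source of the parallelism the abstract contrasts with sequential CPU execution.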
