4.2 Article

A Runtime Reconfigurable Design of Compute-in-Memory-Based Hardware Accelerator for Deep Learning Inference

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3460436

Keywords

Convolutional neural network; hardware accelerator; compute-in-memory; reconfigurable architecture

Funding

  1. ASCENT, one of the SRC/DARPA JUMP Centers
  2. NSF (CCF-1903951)

Abstract

Compute-in-memory (CIM) is an attractive solution to address the memory wall challenges for the extensive computation in deep learning hardware accelerators. In a custom ASIC design, a specific chip instance is restricted to a specific network during runtime. However, the development cycle of the hardware normally lags far behind the emergence of new algorithms. Although some of the reported CIM-based architectures can adapt to different deep neural network (DNN) models, few details about the dataflow or control were disclosed to substantiate such a claim. An instruction set architecture (ISA) could support high flexibility, but its complexity would be an obstacle to efficiency. In this article, a runtime reconfigurable design methodology for CIM-based accelerators is proposed to support a class of convolutional neural networks on one prefabricated chip instance with ASIC-like efficiency. First, several design aspects are investigated: (1) the reconfigurable weight mapping method; (2) the input side of data transmission, mainly the weight reloading; and (3) the output side of data processing, mainly the reconfigurable accumulation. Then, a system-level performance benchmark is performed for the inference of different DNN models, such as VGG-8 on the CIFAR-10 dataset and AlexNet, GoogLeNet, ResNet-18, and DenseNet-121 on the ImageNet dataset, to evaluate the trade-offs among runtime reconfigurability, chip area, memory utilization, throughput, and energy efficiency.
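To make the weight-mapping and accumulation ideas in the abstract more concrete, the sketch below is a minimal, assumption-laden Python illustration (not the authors' implementation): a convolutional layer's weights are unrolled onto fixed-size CIM subarrays, and per-subarray partial sums are added in a separate accumulation step. The subarray dimensions, the unrolled row-major mapping, and all function names are hypothetical choices made only for this example; the paper's actual dataflow, control, and circuit-level accumulation may differ.

```python
# Illustrative sketch only; array sizes and the mapping scheme are assumptions.
import numpy as np

SUBARRAY_ROWS = 128   # assumed CIM subarray input (row) size
SUBARRAY_COLS = 128   # assumed CIM subarray output (column) size

def map_layer_to_subarrays(k_filters, c_in, kh, kw):
    """Count the subarrays a conv layer occupies when its weights are unrolled
    into a (c_in*kh*kw) x k_filters matrix and tiled onto fixed-size subarrays."""
    rows_needed = c_in * kh * kw                   # one row per weight in the receptive field
    row_tiles = -(-rows_needed // SUBARRAY_ROWS)   # ceiling division
    col_tiles = -(-k_filters // SUBARRAY_COLS)
    return row_tiles, col_tiles, row_tiles * col_tiles

def cim_matvec(weight_matrix, input_vector):
    """Emulate one layer's matrix-vector product: each row tile of the weight
    matrix yields a partial sum, and the partial sums are accumulated afterwards
    (a software stand-in for reconfigurable accumulation; column tiling is left
    implicit because the output vector already spans all columns)."""
    rows, _ = weight_matrix.shape
    out = np.zeros(weight_matrix.shape[1])
    for r0 in range(0, rows, SUBARRAY_ROWS):
        w_tile = weight_matrix[r0:r0 + SUBARRAY_ROWS, :]
        x_tile = input_vector[r0:r0 + SUBARRAY_ROWS]
        out += x_tile @ w_tile                     # accumulate this tile's partial sum
    return out

if __name__ == "__main__":
    # Example: a 3x3 conv layer with 64 input channels and 128 filters
    # needs ceil(576/128) x ceil(128/128) = 5 subarrays under these assumptions.
    print(map_layer_to_subarrays(k_filters=128, c_in=64, kh=3, kw=3))
```

Runtime reconfigurability in this toy view would amount to recomputing the tile counts and reloading weights when a different network is selected, which is why the abstract singles out weight reloading and reconfigurable accumulation as the key design aspects.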
