Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS
Volume 67, Issue 9, Pages 1534-1538
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSII.2020.3013336
Keywords
ReRAM; computing-in-memory; AIoT application; CNN; convolutional layer; edge computing; artificial intelligence
Funding
- National Key Research Plan of China [2018YFB0407500]
- National Major Science and Technology Special Project [2017ZX01028101-303]
- National Natural Science Foundation of China [61720106013]
To reduce the energy consumption and time latency incurred by the Von Neumann architecture, this brief develops a complete computing-in-memory (CIM) convolutional macro based on a ReRAM array for the convolutional layers of a LeNet-like convolutional neural network (CNN). The input layer and the first convolutional layer are binarized to achieve higher accuracy. The proposed ReRAM-CIM convolutional macro is suitable as an IP core for the convolutional layers of any binarized neural network. This brief customizes a bit-cell consisting of 2T2R ReRAM cells and treats 9 x 8 bit-cells as one unit to achieve high hardware compute accuracy, fast read/compute speed, and low power consumption. The ReRAM-CIM convolutional macro achieves a 50 ns product-sum computing time for one complete convolutional operation in a convolutional layer of the customized CNN, an accuracy of 96.96% on the MNIST database, and a peak energy efficiency of 58.82 TOPS/W.
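The core operation the macro accelerates can be illustrated in software. The sketch below is a hypothetical reference model, not the authors' hardware: it binarizes inputs and weights to +1/-1 and computes the 9-element product-sum for each 3 x 3 kernel window, which is the arithmetic role a 9 x 8 bit-cell unit plays in the analog ReRAM array (function names and the valid-mode convolution shape are illustrative assumptions).

```python
# Hypothetical software model of the binarized product-sum that the
# ReRAM-CIM macro computes in-memory. Inputs and weights are mapped to
# +1/-1; one 3x3 window then reduces to a 9-element dot product.

def binarize(x):
    """Sign-binarize a 2-D list: non-negative -> +1, negative -> -1."""
    return [[1 if v >= 0 else -1 for v in row] for row in x]

def product_sum(window, kernel):
    """Dot product of a 3x3 input window with a 3x3 binary kernel."""
    return sum(w * k
               for wrow, krow in zip(window, kernel)
               for w, k in zip(wrow, krow))

def conv2d_binary(image, kernel):
    """Valid-mode 2-D convolution with binarized inputs and weights."""
    img = binarize(image)
    ker = binarize(kernel)
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            window = [img[i + di][j:j + 3] for di in range(3)]
            row.append(product_sum(window, ker))
        out.append(row)
    return out
```

Each call to `product_sum` corresponds to one complete convolutional operation, the step the brief reports at 50 ns in hardware.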