Article
Computer Science, Hardware & Architecture
Wenzhe Guo, Mohammed E. Fouda, Ahmed M. Eltawil, Khaled Nabil Salama
Summary: This study proposes a scalable, fast, and efficient spiking neuromorphic hardware system with on-chip local learning that achieves competitive accuracy across tasks. The method outperforms existing approaches on several performance metrics and scales linearly, showing strong potential for a wide range of applications.
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS
(2022)
Article
Telecommunications
Nicolas Skatchkovsky, Hyeryung Jang, Osvaldo Simeone
Summary: This paper introduces the operating principles, training algorithms, and models of spiking neural networks (SNNs) and compares the two main training approaches. To address the non-differentiability of the spiking mechanism, one approach substitutes a differentiable function approximating the threshold activation, while an alternative approach is based on probabilistic models. Finally, experimental results on accuracy and convergence under different SNN models are provided.
IEEE COMMUNICATIONS LETTERS
(2021)
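The surrogate-gradient idea summarized above can be sketched in a few lines: the forward pass keeps the hard threshold, while the backward pass substitutes the derivative of a steep sigmoid. This is a minimal NumPy illustration of the general technique, not the specific function proposed in the paper; the threshold and slope values are illustrative.

```python
import numpy as np

def spike_forward(v, theta=1.0):
    # Non-differentiable threshold activation: spike when the membrane
    # potential v crosses the threshold theta.
    return (v >= theta).astype(float)

def spike_surrogate_grad(v, theta=1.0, slope=5.0):
    # Smooth stand-in for the Heaviside derivative: the derivative of a
    # steep sigmoid centered at the threshold, used only in the backward pass.
    s = 1.0 / (1.0 + np.exp(-slope * (v - theta)))
    return slope * s * (1.0 - s)

v = np.array([0.2, 0.99, 1.0, 1.5])
print(spike_forward(v))         # hard 0/1 spikes
print(spike_surrogate_grad(v))  # gradient is largest near the threshold
```

In a full training loop, the surrogate derivative replaces the true (zero almost everywhere) derivative of the spike function during backpropagation, which is what makes gradient-based SNN training possible.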
Article
Telecommunications
Nicolas Skatchkovsky, Hyeryung Jang, Osvaldo Simeone
Summary: The synergies between wireless communications and artificial intelligence are increasingly driving research at the intersection of the two fields. Machine learning can address algorithm and model deficits in the optimization of communication protocols, but implementing ML models on devices connected via bandwidth-constrained channels remains challenging.
IEEE COMMUNICATIONS LETTERS
(2021)
Article
Engineering, Electrical & Electronic
Congyang Liu, Ziyi Yang, Xin Zhang, Zikai Zhu, Haoming Chu, Yuxiang Huan, Li-Rong Zheng, Zhuo Zou
Summary: In this work, a neuromorphic processor is presented for AIoT applications, featuring low-power consumption, small footprint, STDP-based online learning, and the ability to adapt to multiple applications. The use of hybrid precision, precision-configurable neuron units, unified memory architecture, and dynamic pruning technique contributes to improved energy and time efficiency in training and inference.
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS
(2023)
Article
Computer Science, Artificial Intelligence
Hyeryung Jang, Osvaldo Simeone
Summary: Spiking neural networks (SNNs) capture the efficiency of biological brains for inference and learning via dynamic, online, and event-driven processing of binary time series. Probabilistic SNN models can be trained directly using principled online, local update rules, and can generate independent outputs when queried with the same input. This property can be used to make decisions more robust and to quantify uncertainty during inference.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2022)
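The "independent outputs when queried with the same input" property comes from treating each neuron's spike as a random draw. A minimal sketch of that idea, assuming a simple sigmoid firing probability (the paper's GLM-style models are richer than this):

```python
import numpy as np

def prob_spike(v, rng):
    # Probabilistic neuron: emit a spike with probability sigmoid(v), so
    # repeated queries with the same input yield independent samples.
    p = 1.0 / (1.0 + np.exp(-v))
    return (rng.random(v.shape) < p).astype(float)

rng = np.random.default_rng(0)
v = np.array([-2.0, 0.0, 2.0])
# Querying the same input many times gives a distribution over outputs,
# whose spread can serve as an uncertainty estimate.
samples = np.stack([prob_spike(v, rng) for _ in range(1000)])
print(samples.mean(axis=0))  # empirical firing rates approach sigmoid(v)
```

Averaging or voting over such repeated samples is one simple way to robustify a decision, and the sample variance gives a crude uncertainty measure.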
Article
Computer Science, Artificial Intelligence
Manon Dampfhoffer, Thomas Mesquida, Alexandre Valentian, Lorena Anghel
Summary: With the widespread use of smart systems, artificial neural networks (ANNs) have been widely adopted. However, the high energy consumption of traditional ANN implementations limits their applications in embedded and mobile systems. Spiking neural networks (SNNs) are gaining interest as low-power alternatives to ANNs due to their ability to mimic the dynamics of biological neural networks. However, training SNNs using backpropagation-based techniques is challenging due to the discrete representation of information.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2023)
Article
Computer Science, Artificial Intelligence
Hunjun Lee, Chanmyeong Kim, Seungho Lee, Eunjin Baek, Jangwoo Kim
Summary: This study introduces a full-stack evaluation methodology for Spiking Neural Networks (SNNs) to accurately assess their cost-effectiveness and make fair comparisons against Artificial Neural Networks (ANNs).
Article
Neurosciences
Nicholas LeBow, Bodo Rueckauer, Pengfei Sun, Meritxell Rovira, Cecilia Jimenez-Jorquera, Shih-Chii Liu, Josep Maria Margarit-Taule
Summary: The research team implemented an artificial taste system on neuromorphic hardware for continuous edge-monitoring applications. They used a solid-state electrochemical microsensor array for data acquisition and employed temporal filtering and a deep convolutional spiking neural network to enhance performance. On inference tasks, the implementation is 15 times more energy efficient than similar convolutional architectures running on commercial low-power edge-AI inference devices, while achieving a high accuracy of 97%.
FRONTIERS IN NEUROSCIENCE
(2021)
Article
Computer Science, Information Systems
Ali Siddique, Mang I. Vai, Sio Hang Pun
Summary: In this article, an FPGA-based neuromorphic system capable of training SNNs efficiently is presented. The system adopts the Tempotron learning rule and population coding to achieve high classification accuracy. By integrating both integrate-and-fire and leaky integrate-and-fire neurons, it blends cost efficiency with high throughput. Experimental results show a significant speedup over a general-purpose von Neumann device, and the proposed design achieves a maximum throughput of about 2460 × 10^6 4-input samples per second.
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS
(2023)
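The two neuron types this record mentions differ only in whether the membrane potential leaks between time steps. A minimal discrete-time sketch of the leaky integrate-and-fire dynamics (constants are illustrative; the paper's FPGA implementation and Tempotron training are not reproduced here):

```python
import numpy as np

def lif_trace(inputs, tau=20.0, theta=1.0, v_reset=0.0):
    # Discrete-time leaky integrate-and-fire: the membrane potential decays
    # by exp(-1/tau) each step, integrates the input current, and resets to
    # v_reset when it crosses the threshold theta. As tau -> infinity the
    # decay factor -> 1, recovering the plain integrate-and-fire neuron.
    decay = np.exp(-1.0 / tau)
    v, spikes, vs = 0.0, [], []
    for x in inputs:
        v = decay * v + x
        if v >= theta:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
        vs.append(v)
    return np.array(spikes), np.array(vs)

# A constant sub-threshold input drives the neuron past threshold every few steps.
spk, _ = lif_trace([0.3] * 10)
print(spk)
```

The leak makes firing depend on input timing (recent inputs count more), which is why LIF neurons are the usual choice when temporal structure matters, while pure IF neurons are cheaper in hardware.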
Article
Computer Science, Hardware & Architecture
Sarah A. El-Sayed, Theofilos Spyrou, Luis A. Camunas-Mesa, Haralampos-G Stratigopoulos
Summary: This paper addresses the issue of testing AI hardware accelerators that implement SNNs. The authors propose a metric to rank training and testing samples based on their fault detection capability, measuring the interclass spike count difference. They show that samples with low scores achieve high fault coverage and demonstrate that retaining a set of high-ranked samples results in near-perfect fault coverage for critical faults. The proposed approach is tested on Python models and actual neuromorphic hardware, and fault modeling is discussed to reduce test generation time.
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
(2023)
Article
Automation & Control Systems
Guisong Liu, Wenjie Deng, Xiurui Xie, Li Huang, Huajin Tang
Summary: This article proposes a new deep spiking reinforcement learning method that combines a deep Q-network with leaky integrate-and-fire neurons to train spiking neural networks directly. Experiments show that the method achieves superior performance, stability, generalization, and energy efficiency on multiple Atari games.
IEEE TRANSACTIONS ON CYBERNETICS
(2023)
Article
Computer Science, Hardware & Architecture
Adarsha Balaji, Shihao Song, Anup Das, Jeffrey Krichmar, Nikil Dutt, James Shackleford, Nagarajan Kandasamy, Francky Catthoor
Summary: With increasing model complexity, mapping SNN-based applications to tile-based neuromorphic hardware becomes more challenging due to limited synaptic storage resources. The proposed unrolling technique decomposes neuron functions into homogeneous neural units, improving crossbar utilization and preserving all presynaptic connections without loss in model quality. Integrating this technique in existing SNN mapping framework results in significant reduction in crossbar requirement, higher synapse utilization, lower wasted energy on hardware, and improved model quality.
IEEE EMBEDDED SYSTEMS LETTERS
(2021)
Article
Computer Science, Artificial Intelligence
Yangfan Hu, Huajin Tang, Gang Pan
Summary: Spiking neural networks (SNNs) have gained attention for their biological plausibility and their computational power comparable to traditional artificial neural networks (ANNs). This paper proposes an efficient approach to build deep SNNs and demonstrates its state-of-the-art performance on multiple tasks.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2023)
Article
Computer Science, Artificial Intelligence
Malu Zhang, Jiadong Wang, Jibin Wu, Ammar Belatreche, Burin Amornpaisannon, Zhixuan Zhang, Venkata Pavan Kumar Miriyala, Hong Qu, Yansong Chua, Trevor E. Carlson, Haizhou Li
Summary: The study introduces a spike timing-based learning algorithm for training DeepSNNs, achieving state-of-the-art classification accuracy, and demonstrates ultralow-power inference operations on neuromorphic hardware. Results highlight the importance of spike timing dynamics in information encoding, synaptic plasticity, and decision-making, providing a new perspective for future DeepSNNs and neuromorphic hardware design.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2022)
Article
Neurosciences
Miao Yu, Tingting Xiang, P. Srivatsa, Kyle Timothy Ng Chu, Burin Amornpaisannon, Yaswanth Tavva, Venkata Pavan Kumar Miriyala, Trevor E. Carlson
Summary: Spiking neural networks (SNNs) have the potential to be an efficient alternative to artificial neural networks (ANNs) due to their sparse, low-energy design. However, time-to-first-spike (TTFS) coding has so far not matched the accuracy of rate-based SNNs. In this study, we propose a novel optimization algorithm and hardware accelerator that improve the accuracy of TTFS-based SNNs and reduce power consumption.
FRONTIERS IN NEUROSCIENCE
(2023)
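TTFS coding, as contrasted with rate coding above, puts the information in when a neuron fires rather than how often. A minimal sketch of one common encoding convention (linear mapping of intensity to latency; the mapping and the "zero never fires" rule are assumptions for illustration, not the paper's scheme):

```python
import numpy as np

def ttfs_encode(x, t_max=10):
    # Time-to-first-spike coding: larger input values fire earlier, so each
    # neuron emits at most one spike and information lives in its timing.
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    times = np.round((1.0 - x) * (t_max - 1)).astype(int)
    times[x == 0.0] = -1  # assumed convention: zero intensity never fires
    return times

print(ttfs_encode([1.0, 0.5, 0.1, 0.0], t_max=10))
```

Because each neuron spikes at most once per input, TTFS networks are extremely sparse, which is the source of the energy savings that motivate this line of work.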
Article
Engineering, Electrical & Electronic
Xinxin Wang, Reid Pinkham, Mohammed A. Zidan, Fan-Hsuan Meng, Michael P. Flynn, Zhengya Zhang, Wei D. Lu
Summary: TAICHI is a general in-memory computing deep neural network accelerator design that uses RRAM crossbar arrays and integrates local arithmetic units and global co-processors. It efficiently maps different models while maintaining high energy efficiency and throughput. A hierarchical mesh network-on-chip is implemented to balance reconfigurability and efficiency. The system performance is estimated at several technology nodes and the heterogeneous design allows for larger models than the on-chip storage capacity.
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS
(2022)
Article
Engineering, Electrical & Electronic
Yuting Wu, Xinxin Wang, Wei D. Lu
Summary: Memristive devices exhibit rich dynamics because internal state variables determine their conductance, making them promising building blocks for biofaithful neuromorphic systems. These devices can be used in compute-in-memory architectures to process temporal data efficiently, implementing both synaptic and neuronal functions.
SEMICONDUCTOR SCIENCE AND TECHNOLOGY
(2022)
Article
Nanoscience & Nanotechnology
Sangmin Yoo, Yuting Wu, Yongmo Park, Wei D. Lu
Summary: Memristive devices exhibit rich switching behaviors similar to synaptic functions, making them ideal for constructing efficient neuromorphic systems. The internal temperature serves as a second state variable to regulate ion motion and allows for the native implementation of timing- and rate-based learning rules. This study demonstrates that by engineering the internal temperature in a Ta2O5-based memristor, it is possible to tune its spike timing dependent plasticity (STDP) characteristics. When combined with an artificial post-synaptic neuron, the second-order memristor synapses can capture the temporal correlation in input streaming events.
ADVANCED ELECTRONIC MATERIALS
(2022)
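The STDP characteristics tuned in this work follow the classic pair-based curve: the sign and magnitude of the weight change depend on the relative timing of pre- and post-synaptic spikes. A generic NumPy sketch of that rule (illustrative constants; this models the learning rule itself, not the device-level temperature mechanism the paper engineers):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    # Pair-based STDP: pre-before-post (delta_t > 0) potentiates the synapse,
    # post-before-pre depresses it, with magnitude decaying exponentially in
    # the timing difference.
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(
        delta_t > 0,
        a_plus * np.exp(-delta_t / tau),
        -a_minus * np.exp(delta_t / tau),
    )

print(stdp_dw([5.0, -5.0, 50.0]))  # potentiation, depression, near zero
```

In a second-order memristor, an internal state variable (here, temperature) plays the role of the decaying trace, so this timing dependence emerges from the device physics rather than from an explicit update rule.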
Article
Nanoscience & Nanotechnology
Jason K. Eshraghian, Xinxin Wang, Wei D. Lu
IEEE NANOTECHNOLOGY MAGAZINE
(2022)
Article
Engineering, Electrical & Electronic
Ziyu Wang, Xiaojian Zhu, Supreet Jeloka, Brian Cline, Wei D. Lu
Summary: In this letter, a physical unclonable function (PUF) system based on fingerprint-like random planar structures is demonstrated through pattern transfer of self-assembled binary polymer mixtures. The PUF achieves different types of conductance distributions with large variations, allowing it to operate in differential or on/off modes. The uniqueness of the PUF is verified using inter Hamming distance and entropy at different temperatures. The proposed fingerprint PUF is resistant to machine learning attacks based on Fourier extrapolation and is compatible with back-end-of-line (BEOL) process, offering potential for hardware security in the IoT industry.
IEEE ELECTRON DEVICE LETTERS
(2022)
Review
Multidisciplinary Sciences
Mario Lanza, Abu Sebastian, Wei D. Lu, Manuel Le Gallo, Meng-Fan Chang, Deji Akinwande, Francesco M. Puglisi, Husam N. Alshareef, Ming Liu, Juan B. Roldan
Summary: Memristive devices, which can change their resistance and memory state, have potential applications in various fields. However, there are still challenges to be addressed, including performance and reliability issues.
Review
Nanoscience & Nanotechnology
Suhas Kumar, Xinxin Wang, John Paul Strachan, Yuchao Yang, Wei D. Lu
Summary: Research on electronic devices and materials is driven by the slowdown of transistor scaling and the growth of computing needs. Using devices like memristors to achieve complex dynamics enables new computing architectures with high energy efficiency and computing capacity.
NATURE REVIEWS MATERIALS
(2022)
Article
Engineering, Electrical & Electronic
Peng Zhou, Dong-Uk Choi, Wei D. Lu, Sung-Mo Kang, Jason K. Eshraghian
Summary: We present MEMprop, a gradient-based learning method for training fully memristive spiking neural networks. By harnessing the device dynamics to trigger voltage spikes, MEMprop eliminates the need for surrogate gradient methods. The implementation is fully memristive, without the need for additional circuits to implement spiking dynamics, and achieves competitive accuracy on several benchmarks.
IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS
(2022)
Article
Engineering, Electrical & Electronic
Fan-Hsuan Meng, Xinxin Wang, Ziyu Wang, Eric Yeu-Jer Lee, Wei D. Lu
Summary: This paper presents how structured pruning can be efficiently implemented in CIM systems based on RRAM crossbars, improving accuracy and reducing hardware cost.
IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS
(2022)
Article
Computer Science, Hardware & Architecture
Yongmo Park, Ziyu Wang, Sangmin Yoo, Wei D. D. Lu
Summary: As cloud computing resources are increasingly used in machine learning, privacy-preserving techniques like homomorphic encryption (HE) have attracted attention. However, implementing deep neural networks (DNN) using HE is significantly slower than plaintext implementations.
IEEE JOURNAL ON EXPLORATORY SOLID-STATE COMPUTATIONAL DEVICES AND CIRCUITS
(2022)
Article
Computer Science, Hardware & Architecture
Robert P. Dick, Rob Aitken, Jace Mogill, John Paul Strachan, Kirk Bresniker, Wei Lu, Yorie Nakahira, Zhiyong Li, Matthew J. Marinella, William Severa, A. Alec Talin, Craig M. Vineyard, Suhas Kumar, Christian Mailhiot, Lennie Klebanoff
Summary: Fully automated retail vehicles will require stringent real-time operational safety, sensing, communication, inference, planning, and control capabilities. However, existing technologies may not meet these requirements due to excessive energy consumption and thermal management issues. This article summarizes the research challenges in designing computationally energy-efficient, cost-effective, safe, and reliable automated retail vehicles.
Article
Chemistry, Multidisciplinary
Yuting Wu, Qiwen Wang, Ziyu Wang, Xinxin Wang, Buvna Ayyagari, Siddarth Krishnan, Michael Chudzik, Wei D. Lu
Summary: This paper proposes a mixed-precision training scheme using bulk-switching memristor-based compute-in-memory (CIM) modules to accelerate deep neural network (DNN) training. Low-precision CIM modules are used for fast computation while high-precision weight updates are accumulated in digital units. Experimental results show that this scheme enables efficient mixed-precision DNN training with comparable accuracy to software-trained models.
ADVANCED MATERIALS
(2023)
Article
Automation & Control Systems
Ziyu Wang, Yuting Wu, Yongmo Park, Sangmin Yoo, Xinxin Wang, Jason K. Eshraghian, Wei D. Lu
Summary: Analog compute-in-memory (CIM) systems have the potential to accelerate deep neural network (DNN) inference. However, this study identifies a security vulnerability wherein an attacker can reconstruct the user's private input data from a power side-channel attack, even without knowledge of the stored DNN model. The study proposes an attack approach using a generative adversarial network that achieves high-quality data reconstruction from power leakage measurements. The results demonstrate the effectiveness of the attack methodology in reconstructing user input data from power leakage of the analog CIM accelerator, even at high noise levels and after countermeasures.
ADVANCED INTELLIGENT SYSTEMS
(2023)
Proceedings Paper
Engineering, Electrical & Electronic
Y. Wu, F. Cai, L. Thomas, T. Liu, A. Nourbakhsh, J. Hebding, E. Smith, R. Quon, R. Smith, A. Kumar, A. Pang, J. Holt, R. Someshwar, F. Nardi, J. Anthis, S-H. Yen, C. Chevallier, A. Uppala, X. Chen, N. Breil, T. Sherwood, K. Wong, W. Cho, D. Thompson, J. Hsu, B. Ayyagari, S. Krishnan, Wei. D. Lu, M. Chudzik
Summary: This paper presents a forming-free bulk ReRAM cell with high programming accuracy and yield, showing superior analog behavior for in-memory computing applications.
2022 INTERNATIONAL ELECTRON DEVICES MEETING, IEDM
(2022)