Article

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias

Journal

FRONTIERS IN NEUROSCIENCE
Volume 15

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2021.633674

Keywords

equilibrium propagation; energy based models; biologically plausible deep learning; neuromorphic computing; on-chip learning; deep convolutional neural network; learning algorithms

Funding

  1. European Research Council [682955]
  2. CIFAR
  3. NSERC
  4. Samsung


Summary: Equilibrium Propagation is a biologically-inspired algorithm that trains convergent recurrent neural networks with a local learning rule and comes with strong theoretical guarantees. By using symmetric nudging, the gradient bias of Equilibrium Propagation can be greatly reduced, allowing the training of deep convolutional neural networks.
Equilibrium Propagation is a biologically-inspired algorithm that trains convergent recurrent neural networks with a local learning rule. This approach constitutes a major lead toward learning-capable neuromorphic systems and comes with strong theoretical guarantees. Equilibrium Propagation operates in two phases, during which the network is first allowed to evolve freely and then nudged toward a target; the weights of the network are then updated based solely on the states of the neurons that they connect. The weight updates of Equilibrium Propagation have been shown mathematically to approach those provided by Backpropagation Through Time (BPTT), the mainstream approach to training recurrent neural networks, when nudging is performed with infinitely small strength. In practice, however, the standard implementation of Equilibrium Propagation does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of Equilibrium Propagation, inherent in the use of finite nudging, is responsible for this phenomenon, and that canceling it allows the training of deep convolutional neural networks. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize Equilibrium Propagation to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10, which approaches that achieved by BPTT and is a major improvement over standard Equilibrium Propagation, which gives 86% test error. We also apply these techniques to train an architecture with unidirectional forward and backward connections, yielding a 13.2% test error. These results highlight Equilibrium Propagation as a compelling biologically-plausible approach to computing error gradients in deep neuromorphic systems.
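The symmetric estimator at the heart of this result replaces the one-sided comparison between the free phase and a single nudged phase with two nudged phases of opposite sign, ±β, which cancels the leading-order bias of the finite-nudging gradient estimate. Below is a minimal toy sketch of this idea on a small Hopfield-style network; the network, hyperparameters, and function names (relax, ep_symmetric_update) are illustrative assumptions for exposition, not the authors' published implementation.

```python
# Toy sketch of Equilibrium Propagation with symmetric nudging (+beta / -beta)
# on a tiny fully connected Hopfield-style network. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, n_out = 8, 2                                  # total units, output units
W = 0.05 * rng.standard_normal((n, n))
W = 0.5 * (W + W.T)                              # symmetric weights (energy-based model)
np.fill_diagonal(W, 0.0)                         # no self-connections
x = 0.5 * rng.standard_normal(n)                 # static input current (toy input)
y = np.array([1.0, 0.0])                         # target for the last n_out units
sigma = np.tanh                                  # neuron nonlinearity
dsigma = lambda s: 1.0 - np.tanh(s) ** 2         # its derivative

def relax(W, beta, n_steps=500, eta=0.05):
    """Settle the state to a fixed point of the free (beta=0) or nudged dynamics."""
    s = np.zeros(n)
    for _ in range(n_steps):
        # Gradient of the energy E(s) = 0.5*s.s - 0.5*sigma(s).W.sigma(s) - x.s
        grad = s - dsigma(s) * (W @ sigma(s)) - x
        # Nudging force beta * dC/ds, with cost C = 0.5*||s_out - y||^2
        grad[-n_out:] += beta * (s[-n_out:] - y)
        s -= eta * grad
    return s

def ep_symmetric_update(W, beta=0.5, lr=0.1):
    """One EP step with the symmetric estimator: two nudged phases at +beta and -beta."""
    rho_pos = sigma(relax(W, +beta))
    rho_neg = sigma(relax(W, -beta))
    # Local rule: each synapse only uses the activities of the two neurons it connects.
    dW = (np.outer(rho_pos, rho_pos) - np.outer(rho_neg, rho_neg)) / (2.0 * beta)
    W = W + lr * dW
    np.fill_diagonal(W, 0.0)
    return W

for _ in range(50):
    W = ep_symmetric_update(W)
# After training, the free-phase output should have drifted toward the target.
print("free-phase output:", relax(W, 0.0)[-n_out:], "target:", y)
```

The standard one-sided estimator would instead compare the free phase (β = 0) with a single nudged phase, incurring an O(β) bias; the symmetric difference above cancels that term, which is what allows the finite nudging strengths needed in practice.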

Authors



Recommended

Article Nanoscience & Nanotechnology

MicroMegascope

L. Canale, A. Laborieux, A. Aroul Mogane, L. Jubin, J. Comtet, A. Leine, L. Bocquet, A. Siria, A. Nigues

NANOTECHNOLOGY (2018)

Article Physics, Applied

Nonvolatile Ionic Modification of the Dzyaloshinskii-Moriya Interaction

L. Herrera Diez, Y. T. Liu, D. A. Gilbert, M. Belmeguenai, J. Vogel, S. Pizzini, E. Martinez, A. Lamperti, J. B. Mohammedi, A. Laborieux, Y. Roussigne, A. J. Grutter, E. Arenholtz, P. Quarterman, B. Maranville, S. Ono, M. Salah El Hadri, R. Tolley, E. E. Fullerton, L. Sanchez-Tejerina, A. Stashkevich, S. M. Cherif, A. D. Kent, D. Querlioz, J. Langer, B. Ocker, D. Ravelosona

PHYSICAL REVIEW APPLIED (2019)

Article Engineering, Electrical & Electronic

Implementation of Ternary Weights With Resistive RAM Using a Single Sense Operation Per Synapse

Axel Laborieux, Marc Bocquet, Tifenn Hirtzlin, Jacques-Olivier Klein, Etienne Nowak, Elisa Vianello, Jean-Michel Portal, Damien Querlioz

Summary: The paper focuses on the implementation of ternary neural networks and proposes a two-transistor/two-resistor memory architecture that achieves high energy efficiency at low supply voltage, while studying the bit error rate. Experimental results demonstrate that ternary weights significantly improve neural network performance over binarized implementations and exhibit immunity to bit errors.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS (2021)

Article Multidisciplinary Sciences

Synaptic metaplasticity in binarized neural networks

Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, Damien Querlioz

Summary: Deep neural networks are susceptible to catastrophic forgetting when learning new tasks. Laborieux et al. propose a training method inspired by neuronal metaplasticity for binarized neural networks to avoid forgetting, which is relevant for neuromorphic applications.

NATURE COMMUNICATIONS (2021)

Article Engineering, Electrical & Electronic

Model of the Weak Reset Process in HfOx Resistive Memory for Deep Learning Frameworks

Atreya Majumdar, Marc Bocquet, Tifenn Hirtzlin, Axel Laborieux, Jacques-Olivier Klein, Etienne Nowak, Elisa Vianello, Jean-Michel Portal, Damien Querlioz

Summary: This study presents a model of the weak RESET process in hafnium oxide RRAM for deep learning frameworks and validates it experimentally on a hybrid CMOS/RRAM technology. The model is used to train binarized neural networks for image recognition tasks, showing that device-to-device variability is the imperfection most detrimental to the training process.

IEEE TRANSACTIONS ON ELECTRON DEVICES (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Low Power In-Memory Implementation of Ternary Neural Networks with Resistive RAM-Based Synapse

A. Laborieux, M. Bocquet, T. Hirtzlin, J-O Klein, L. Herrera Diez, E. Nowak, E. Vianello, J-M Portal, D. Querlioz

2020 2ND IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2020) (2020)
