Article

Collective Structural Relaxation in Phase-Change Memory Devices

Journal

ADVANCED ELECTRONIC MATERIALS
Volume 4, Issue 9, Pages -

Publisher

WILEY
DOI: 10.1002/aelm.201700627

Keywords

nonvolatile memory; phase-change materials; resistance drift; structural relaxation

Funding

  1. DIASPORA project of the FP7-IAPP Marie Curie Action by the European Commission [610781]
  2. European Research Council grant PROJESTOR [682675]
  3. European Research Council grant NEUROMORPH [640003]
  4. German Science Foundation (DFG) through the Collaborative Research Center NANOSWITCHES [SFB 917]

Abstract

Phase-change memory devices are expected to play a key role in future computing systems as both memory and computing elements. A key challenge in this respect is the temporal evolution of the resistance levels, commonly referred to as resistance drift. In this paper, a comprehensive description of resistance drift as a result of the spontaneous structural relaxation of the amorphous phase-change material toward an energetically more favorable ideal glass state is presented. Molecular dynamics simulations provide insights into the microscopic origin of this structural relaxation. Based on those insights, a collective relaxation model is proposed to capture the kinetics of structural relaxation. By linking the physical material parameters governing electrical transport to this description of structural relaxation, an integrated drift model is obtained that can predict the current-voltage characteristics at any instant in time, even during nontrivial temperature treatments. Accurate quantitative agreement with experimental drift measurements over a wide range of time (10 decades) and temperature (160-420 K) is demonstrated.
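
The collective relaxation model of the paper links structural relaxation to the transport parameters that set the device resistance. As a simpler point of reference, the sketch below implements only the widely used empirical power law for resistance drift, R(t) = R0·(t/t0)^ν; it is not the authors' model or code, and the function name and all parameter values are illustrative assumptions.

```python
# Minimal sketch of the empirical power law commonly used to describe
# resistance drift in amorphous phase-change materials:
#     R(t) = R0 * (t / t0) ** nu
# Reference point only, not the paper's collective relaxation model;
# all parameter values are illustrative assumptions.

import numpy as np

def drift_resistance(t, r0=1.0e6, t0=1.0, nu=0.05):
    """Resistance versus time after amorphization (RESET).

    t  : time since RESET in seconds (scalar or array)
    r0 : resistance measured at the reference time t0, in ohms
    t0 : reference time in seconds
    nu : drift exponent (dimensionless; often ~0.05-0.1 near room temperature)
    """
    return r0 * (np.asarray(t, dtype=float) / t0) ** nu

# Example: drift over roughly the 10 decades in time probed experimentally.
times = np.logspace(0, 10, 6)      # 1 s ... 1e10 s
print(drift_resistance(times))     # resistance increases monotonically
```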

Recommended

Article Multidisciplinary Sciences

Parallel convolutional processing using an integrated photonic tensor core

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, H. Bhaskaran

Summary: With the advancement of technology, the demand for fast processing of large amounts of data is increasing, making highly parallelized, fast, and scalable hardware crucial. An integrated photonic tensor core can act as the optical analogue of an application-specific integrated circuit, enabling photonic in-memory computing and efficient computational hardware.

NATURE (2021)

Correction Multidisciplinary Sciences

Parallel convolutional processing using an integrated photonic tensor core (vol 589, pg 52, 2021)

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, H. Bhaskaran

NATURE (2021)

Article Engineering, Electrical & Electronic

Energy Efficient In-Memory Hyperdimensional Encoding for Spatio-Temporal Signal Processing

Geethan Karunaratne, Manuel Le Gallo, Michael Hersche, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Summary: The emerging brain-inspired computing paradigm, hyperdimensional computing (HDC), offers a lightweight learning framework for various cognitive tasks compared to traditional deep learning methods. This study proposes an architecture for processing spatio-temporal (ST) signals within the HDC framework using in-memory compute arrays, achieving significant energy efficiency, area, and throughput gains while maintaining peak classification accuracy.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS (2021)

Article Multidisciplinary Sciences

Robust high-dimensional memory-augmented neural networks

Geethan Karunaratne, Manuel Schmuck, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Summary: The paper proposes a novel architecture that utilizes computational memory units to perform analog in-memory computation on high-dimensional vectors, enhancing neural networks with explicit memory and achieving accuracy that matches the 32-bit software equivalent.

NATURE COMMUNICATIONS (2021)

Article Engineering, Electrical & Electronic

A Multi-Memristive Unit-Cell Array With Diagonal Interconnects for In-Memory Computing

Riduan Khaddam-Aljameh, Michele Martemucci, Benedikt Kersting, Manuel Le Gallo, Robert L. Bruce, Matthew BrightSky, Abu Sebastian

Summary: By designing unit-cell arrays and implementing diagonal connections, we have successfully addressed challenges such as parallel writing and computational precision in memristive crossbar arrays.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS (2021)

Article Chemistry, Multidisciplinary

Antimony as a Programmable Element in Integrated Nanophotonics

Samarth Aggarwal, Tara Milne, Nikolaos Farmakidis, Johannes Feldmann, Xuan Li, Yu Shu, Zengguang Cheng, Martin Salinga, Wolfram H. P. Pernice, Harish Bhaskaran

Summary: The use of nonlinear elements with memory in photonic computing has gained significant interest due to the rise of artificial intelligence and machine learning. Phase-change materials are commonly used for demonstrating the feasibility of such computing, but they suffer from slow switching speeds and phase-segregation issues. In this study, reversible, ultrafast switching is demonstrated using sub-5 nm antimony thin films on an integrated photonic platform, with a retention time of tens of seconds. Seven distinct memory levels are programmed using subpicosecond pulses, suggesting the potential use of these elements in ultrafast nanophotonic applications.

NANO LETTERS (2022)

Review Multidisciplinary Sciences

Memristive technologies for data storage, computation, encryption, and radio-frequency communication

Mario Lanza, Abu Sebastian, Wei D. Lu, Manuel Le Gallo, Meng-Fan Chang, Deji Akinwande, Francesco M. Puglisi, Husam N. Alshareef, Ming Liu, Juan B. Roldan

Summary: Memristive devices, which can change their resistance and memory state, have potential applications in various fields. However, there are still challenges to be addressed, including performance and reliability issues.

SCIENCE (2022)

Article Computer Science, Hardware & Architecture

ML-HW Co-Design of Noise-Robust TinyML Models and Always-On Analog Compute-in-Memory Edge Accelerator

Chuteng Zhou, Fernando Garcia Redondo, Julian Buchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, Paul N. Whatmough

Summary: This article discusses the importance of high energy efficiency in always-on TinyML perception tasks in IoT applications and proposes the use of analog compute-in-memory (CiM) with nonvolatile memory (NVM) to achieve this goal. The authors introduce AnalogNets model architectures and a comprehensive training methodology to maintain accuracy in the presence of analog nonidealities and low-precision data converters. They also present AON-CiM, a programmable phase-change memory (PCM) analog CiM accelerator, designed to reduce the complexity and cost of interconnects. Evaluation results show promising accuracy and efficiency for KWS and VWW tasks.

IEEE MICRO (2022)

Article Multidisciplinary Sciences

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

Malte J. Rasch, Charles Mackin, Manuel Le Gallo, An Chen, Andrea Fasoli, Frederic Odermatt, Ning Li, S. R. Nandakumar, Pritish Narayanan, Hsinyu Tsai, Geoffrey W. Burr, Abu Sebastian, Vijay Narayanan

Summary: In this study, a hardware-aware retraining approach is developed to examine the accuracy of analog in-memory computing across multiple network topologies and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a realistic crossbar model, significant improvement is achieved compared to earlier retraining approaches. The results show that many larger-scale deep neural networks can be successfully retrained to show iso-accuracy with the floating point implementation, and nonidealities that add noise to the inputs or outputs have the largest impact on accuracy.

NATURE COMMUNICATIONS (2023)
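
The hardware-aware retraining summarized above exposes the network to analog nonidealities during training. The sketch below shows the general idea of noise-injection training in PyTorch; the NoisyLinear layer and the noise level are hypothetical illustrations and do not reproduce the authors' crossbar model or any specific toolkit API.

```python
# Minimal sketch of noise-injection (hardware-aware) training, assuming
# PyTorch. Gaussian weight perturbations emulating analog conductance
# variations are applied in the forward pass only, so gradients still
# update the clean weights. Noise level is an illustrative assumption.

import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    def __init__(self, in_features, out_features, weight_noise_std=0.02):
        super().__init__(in_features, out_features)
        self.weight_noise_std = weight_noise_std  # assumed relative noise level

    def forward(self, x):
        if self.training:
            # Multiplicative Gaussian perturbation of the weights (illustrative).
            noise = torch.randn_like(self.weight) * self.weight_noise_std
            w = self.weight * (1.0 + noise)
        else:
            w = self.weight
        return nn.functional.linear(x, w, self.bias)

# Usage: swap nn.Linear for NoisyLinear in the network and retrain; inference
# can also be evaluated with noise enabled to estimate analog-hardware accuracy.
```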

Article Engineering, Electrical & Electronic

A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference

Manuel Le Gallo, Riduan Khaddam-Aljameh, Milos Stanisavljevic, Athanasios Vasilopoulos, Benedikt Kersting, Martino Dazzi, Geethan Karunaratne, Matthias Brandli, Abhairaj Singh, Silvia M. Mueller, Julian Buchel, Xavier Timoneda, Vinay Joshi, Malte J. Rasch, Urs Egger, Angelo Garofalo, Anastasios Petropoulos, Theodore Antonakopoulos, Kevin Brew, Samuel Choi, Injo Ok, Timothy Philip, Victor Chan, Claire Silvestre, Ishtiaq Ahsan, Nicole Saulnier, Vijay Narayanan, Pier Andrea Francese, Evangelos Eleftheriou, Abu Sebastian

Summary: A multicore analogue in-memory computing chip designed and fabricated in 14 nm complementary metal-oxide-semiconductor technology with backend-integrated phase-change memory is reported. It can be used to reduce the latency and energy consumption of deep neural network inference tasks by performing computations within memory. The chip features interconnection of 64 AIMC cores via an on-chip communication network and implements digital activation functions and additional processing involved in individual convolutional layers and long short-term memory units.

NATURE ELECTRONICS (2023)

Article Engineering, Electrical & Electronic

2022 roadmap on neuromorphic computing and engineering

Dennis Christensen, Regina Dittmann, Bernabe Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, Ilia Valov, Gianluca Milano, Carlo Ricciardi, Shi-Jun Liang, Feng Miao, Mario Lanza, Tyler J. Quill, Scott T. Keene, Alberto Salleo, Julie Grollier, Danijela Markovic, Alice Mizrahi, Peng Yao, J. Joshua Yang, Giacomo Indiveri, John Paul Strachan, Suman Datta, Elisa Vianello, Alexandre Valentian, Johannes Feldmann, Xuan Li, Wolfram H. P. Pernice, Harish Bhaskaran, Steve Furber, Emre Neftci, Franz Scherr, Wolfgang Maass, Srikanth Ramaswamy, Jonathan Tapson, Priyadarshini Panda, Youngeun Kim, Gouhei Tanaka, Simon Thorpe, Chiara Bartolozzi, Thomas A. Cleland, Christoph Posch, Shihchii Liu, Gabriella Panuccio, Mufti Mahmud, Arnab Neelim Mazumder, Morteza Hosseini, Tinoosh Mohsenin, Elisa Donati, Silvia Tolu, Roberto Galeazzi, Martin Ejsing Christensen, Sune Holm, Daniele Ielmini, N. Pryds

Summary: This article introduces the characteristics and advantages of von Neumann and neuromorphic computing systems. While the traditional von Neumann architecture is powerful, it incurs high power consumption and is inefficient for data-intensive workloads. Neuromorphic computing systems, inspired by biological concepts, can store and process large amounts of information at lower power consumption. The aim of this article is to provide perspectives on the current state and future challenges of neuromorphic technology, and to give readers a concise yet comprehensive introduction and outlook.

NEUROMORPHIC COMPUTING AND ENGINEERING (2022)

Article Engineering, Electrical & Electronic

Precision of bit slicing with in-memory computing based on analog phase-change memory crossbars

Manuel Le Gallo, S. R. Nandakumar, Lazar Ciric, Irem Boybat, Riduan Khaddam-Aljameh, Charles Mackin, Abu Sebastian

Summary: In-memory computing is an efficient approach that utilizes the physical attributes of memory devices to perform computational tasks. However, the computational accuracy of this approach is currently limited due to inter-device variability, inhomogeneity, and randomness in analog memory devices. Bit slicing, a technique for constructing high precision processors, shows promise in overcoming this limitation. This study assesses the computational error in in-memory matrix-vector multiplications using bit slicing, and emphasizes the need to minimize analog matrix representation error through averaging within a given dynamic range.

NEUROMORPHIC COMPUTING AND ENGINEERING (2022)
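
The bit-slicing study summarized above recombines several low-precision matrix-vector multiplications into a higher-precision result. The sketch below illustrates only this recombination principle with unsigned integer weights in NumPy; the function bit_sliced_matvec and all sizes are hypothetical and do not model the analog PCM crossbars or the error analysis of the study.

```python
# Minimal sketch of bit slicing for matrix-vector multiplication with
# unsigned fixed-point weights. Illustrates the recombination principle
# only; parameters and names are illustrative assumptions.

import numpy as np

def bit_sliced_matvec(W_int, x, bits_per_slice=2, n_slices=4):
    """Split an integer matrix into low-precision slices, compute one MVM
    per slice (as a crossbar would), and recombine with powers of two."""
    base = 1 << bits_per_slice             # e.g. 4 conductance levels per slice
    result = np.zeros(W_int.shape[0], dtype=np.int64)
    W = W_int.copy()
    for k in range(n_slices):
        slice_k = W % base                  # least-significant remaining slice
        W //= base
        # Each slice's MVM would run on a separate (or time-multiplexed) array.
        result += (base ** k) * (slice_k @ x)
    return result

# Check against a direct computation with illustrative 8-bit weights.
rng = np.random.default_rng(0)
W_int = rng.integers(0, 256, size=(4, 6))
x = rng.integers(0, 4, size=6)
assert np.array_equal(bit_sliced_matvec(W_int, x), W_int @ x)
```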

Proceedings Paper Engineering, Multidisciplinary

Mushroom-type phase change memory with projection liner: an array-level demonstration of conductance drift and noise mitigation

R. L. Bruce, S. Ghazi Sarwat, I. Boybat, C.-W. Cheng, W. Kim, S. R. Nandakumar, C. Mackin, T. Philip, Z. Liu, K. Brew, N. Gong, I. Ok, P. Adusumilli, K. Spoon, S. Ambrogio, B. Kersting, T. Bohnstingl, M. Le Gallo, A. Simon, N. Li, I. Saraf, J.-P. Han, L. Gignac, J. M. Papalia, T. Yamashita, N. Saulnier, G. W. Burr, H. Tsai, A. Sebastian, V. Narayanan, M. BrightSky

Summary: Phase change memory (PCM) is being considered for non-von Neumann accelerators for deep neural networks based on in-memory computing. Conductance drift and noise are key challenges for reliable storage of synaptic weights in such accelerators. The integration of a projection liner into multilevel mushroom-type PCM devices demonstrates mitigation of conductance drift and noise, with further improvement shown by combining with a low-drift phase-change material. Large-scale experiments confirm lower drift and device-to-device drift variability for devices with projection liner, crucial for in-memory computing accelerators.

2021 IEEE INTERNATIONAL RELIABILITY PHYSICS SYMPOSIUM (IRPS) (2021)
