Article

Evidence for thermally assisted threshold switching behavior in nanoscale phase-change memory cells

Journal

JOURNAL OF APPLIED PHYSICS
Volume 119, Issue 2, Pages -

Publisher

AMER INST PHYSICS
DOI: 10.1063/1.4938532

Keywords

-

Funding

  1. EU


In spite of decades of research, the details of electrical transport in phase-change materials are still debated. In particular, the so-called threshold switching phenomenon, which allows the current density to increase steeply when a sufficiently high voltage is applied, is still not well understood, even though there is wide consensus that threshold switching is solely of electronic origin. However, the high thermal efficiency and fast thermal dynamics of nanoscale phase-change memory (PCM) devices motivate us to reassess a thermally assisted threshold switching mechanism, at least in these devices. The time/temperature dependence of the threshold switching voltage and current in doped Ge2Sb2Te5 nanoscale PCM cells was measured over 6 decades in time at temperatures ranging from 40 °C to 160 °C. We observe a nearly constant threshold switching power across this wide range of operating conditions. We also measured the transient dynamics associated with threshold switching as a function of the applied voltage. By using a field- and temperature-dependent description of the electrical transport combined with thermal feedback, quantitative agreement with the experimental threshold switching dynamics was obtained using realistic physical parameters. (C) 2016 AIP Publishing LLC.
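The essence of the electro-thermal mechanism described in the abstract can be illustrated with a minimal self-consistency loop: a field- and temperature-activated conductance is coupled to Joule heating through a thermal resistance, and threshold switching appears when no stable temperature fixed point exists. The sketch below is illustrative only; all parameter values are hypothetical round numbers, not the paper's fitted values, and the linear field-lowering form is a simplification of the transport model used in the paper.

```python
import math

# Hypothetical parameters (illustrative only, not fitted to the paper's data)
K_B = 8.617e-5     # Boltzmann constant (eV/K)
E_A = 0.3          # activation energy of conduction (eV)
GAMMA = 1.0e-8     # linear field-lowering coefficient (eV*m/V)
SIGMA0 = 1.0e3     # conductivity prefactor (S/m)
LENGTH = 50e-9     # effective amorphous thickness (m)
AREA = 2.5e-15     # effective cross-section (m^2)
R_TH = 1.0e7       # thermal resistance to ambient (K/W)

def conductance(temp_k, voltage):
    """Field- and temperature-activated conductance:
    Arrhenius law with a linear field lowering of the barrier."""
    field = voltage / LENGTH
    sigma = SIGMA0 * math.exp(-(E_A - GAMMA * field) / (K_B * temp_k))
    return sigma * AREA / LENGTH

def steady_state_current(voltage, t_amb=300.0, t_max=2000.0,
                         max_iter=500, tol=1e-9):
    """Self-consistently solve T = T_amb + R_TH * V^2 * G(T, V).

    Returns (current, converged). converged=False marks the absence of a
    stable electro-thermal fixed point, i.e. thermal runaway, which in
    this picture corresponds to the threshold switching event."""
    temp = t_amb
    g = conductance(temp, voltage)
    for _ in range(max_iter):
        g = conductance(temp, voltage)
        new_temp = t_amb + R_TH * voltage ** 2 * g  # Joule-heating feedback
        if new_temp > t_max:
            return voltage * g, False  # runaway: switching triggered
        if abs(new_temp - temp) < tol:
            return voltage * g, True   # stable operating point
        temp = new_temp
    return voltage * g, False
```

At low bias the heating is negligible and the iteration converges immediately; above a critical voltage the positive feedback between temperature and conductance diverges, reproducing the steep current increase described above.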



Recommended

Article Multidisciplinary Sciences

Parallel convolutional processing using an integrated photonic tensor core

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, H. Bhaskaran

Summary: With the advancement of technology, the demand for fast processing of large amounts of data is increasing, making highly parallelized, fast, and scalable hardware crucial. Integrated photonics can serve as the optical analogue of an application-specific integrated circuit, enabling photonic in-memory computing and efficient parallel computational hardware.

NATURE (2021)

Correction Multidisciplinary Sciences

Parallel convolutional processing using an integrated photonic tensor core (vol 589, pg 52, 2021)

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, H. Bhaskaran

NATURE (2021)

Article Chemistry, Multidisciplinary

Projected Mushroom Type Phase-Change Memory

Syed Ghazi Sarwat, Timothy M. Philip, Ching-Tzu Chen, Benedikt Kersting, Robert L. Bruce, Cheng-Wei Cheng, Ning Li, Nicole Saulnier, Matthew BrightSky, Abu Sebastian

Summary: Phase-change memory devices are utilized in in-memory computing to compute without needing to transfer data between memory and processing units. The projection of phase configurations onto stable elements within the device is a promising approach to address nonidealities. By investigating the projection mechanism in prominent phase-change memory device architectures, such as the mushroom-type phase-change memory, the key attributes and operational principles of nanoscale projected Ge2Sb2Te5 devices are understood.

ADVANCED FUNCTIONAL MATERIALS (2021)

Article Chemistry, Multidisciplinary

Measurement of Onset of Structural Relaxation in Melt-Quenched Phase Change Materials

Benedikt Kersting, Syed Ghazi Sarwat, Manuel Le Gallo, Kevin Brew, Sebastian Walfort, Nicole Saulnier, Martin Salinga, Abu Sebastian

Summary: Chalcogenide phase change materials are utilized for non-volatile, low-latency storage-class memory and new forms of computing, but face challenges with temporal drift in electrical resistance. Research shows that the observed onset of drift depends on the timescale of observation, and the drift onset can be measured experimentally via the threshold-switching voltage. This additional feature of the structural relaxation dynamics serves as a new benchmark for evaluating classical models explaining drift.

ADVANCED FUNCTIONAL MATERIALS (2021)

Article Computer Science, Hardware & Architecture

Efficient Pipelined Execution of CNNs Based on In-Memory Computing and Graph Homomorphism Verification

Martino Dazzi, Abu Sebastian, Thomas Parnell, Pier Andrea Francese, Luca Benini, Evangelos Eleftheriou

Summary: In-memory computing is a new computing paradigm that enables deep-learning inference with higher energy-efficiency and lower latency. Communication fabric is a key challenge in this paradigm, and we propose a graph-based communication structure suitable for convolutional neural networks, achieving efficient pipelined execution. Our proposed topology shows lower bandwidth requirements per communication channel compared to established communication topologies, and we demonstrate a hardware implementation mapping ResNet-32 onto an IMC core array interconnected via this communication fabric.

IEEE TRANSACTIONS ON COMPUTERS (2021)

Article Engineering, Electrical & Electronic

A Multi-Memristive Unit-Cell Array With Diagonal Interconnects for In-Memory Computing

Riduan Khaddam-Aljameh, Michele Martemucci, Benedikt Kersting, Manuel Le Gallo, Robert L. Bruce, Matthew BrightSky, Abu Sebastian

Summary: By designing unit-cell arrays and implementing diagonal connections, we have successfully addressed challenges such as parallel writing and computational precision in memristive crossbar arrays.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS (2021)

Review Multidisciplinary Sciences

Memristive technologies for data storage, computation, encryption, and radio-frequency communication

Mario Lanza, Abu Sebastian, Wei D. Lu, Manuel Le Gallo, Meng-Fan Chang, Deji Akinwande, Francesco M. Puglisi, Husam N. Alshareef, Ming Liu, Juan B. Roldan

Summary: Memristive devices, which can change their resistance and memory state, have potential applications in various fields. However, there are still challenges to be addressed, including performance and reliability issues.

SCIENCE (2022)

Article Computer Science, Hardware & Architecture

ML-HW Co-Design of Noise-Robust TinyML Models and Always-On Analog Compute-in-Memory Edge Accelerator

Chuteng Zhou, Fernando Garcia Redondo, Julian Buchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, Paul N. Whatmough

Summary: This article discusses the importance of high energy efficiency in always-on TinyML perception tasks in IoT applications and proposes the use of analog compute-in-memory (CiM) with nonvolatile memory (NVM) to achieve this goal. The authors introduce AnalogNets model architectures and a comprehensive training methodology to maintain accuracy in the presence of analog nonidealities and low-precision data converters. They also present AON-CiM, a programmable phase-change memory (PCM) analog CiM accelerator, designed to reduce the complexity and cost of interconnects. Evaluation results show promising accuracy and efficiency for KWS and VWW tasks.

IEEE MICRO (2022)

Article Multidisciplinary Sciences

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

Malte J. Rasch, Charles Mackin, Manuel Le Gallo, An Chen, Andrea Fasoli, Frederic Odermatt, Ning Li, S. R. Nandakumar, Pritish Narayanan, Hsinyu Tsai, Geoffrey W. Burr, Abu Sebastian, Vijay Narayanan

Summary: In this study, a hardware-aware retraining approach is developed to examine the accuracy of analog in-memory computing across multiple network topologies and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a realistic crossbar model, significant improvement is achieved compared to earlier retraining approaches. The results show that many larger-scale deep neural networks can be successfully retrained to show iso-accuracy with the floating point implementation, and nonidealities that add noise to the inputs or outputs have the largest impact on accuracy.

NATURE COMMUNICATIONS (2023)

Article Engineering, Electrical & Electronic

A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference

Manuel Le Gallo, Riduan Khaddam-Aljameh, Milos Stanisavljevic, Athanasios Vasilopoulos, Benedikt Kersting, Martino Dazzi, Geethan Karunaratne, Matthias Brandli, Abhairaj Singh, Silvia M. Mueller, Julian Buchel, Xavier Timoneda, Vinay Joshi, Malte J. Rasch, Urs Egger, Angelo Garofalo, Anastasios Petropoulos, Theodore Antonakopoulos, Kevin Brew, Samuel Choi, Injo Ok, Timothy Philip, Victor Chan, Claire Silvestre, Ishtiaq Ahsan, Nicole Saulnier, Vijay Narayanan, Pier Andrea Francese, Evangelos Eleftheriou, Abu Sebastian

Summary: A multicore analogue in-memory computing chip designed and fabricated in 14 nm complementary metal-oxide-semiconductor technology with backend-integrated phase-change memory is reported. It can be used to reduce the latency and energy consumption of deep neural network inference tasks by performing computations within memory. The chip features interconnection of 64 AIMC cores via an on-chip communication network and implements digital activation functions and additional processing involved in individual convolutional layers and long short-term memory units.

NATURE ELECTRONICS (2023)

Article Engineering, Electrical & Electronic

2022 roadmap on neuromorphic computing and engineering

Dennis Christensen, Regina Dittmann, Bernabe Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, Ilia Valov, Gianluca Milano, Carlo Ricciardi, Shi-Jun Liang, Feng Miao, Mario Lanza, Tyler J. Quill, Scott T. Keene, Alberto Salleo, Julie Grollier, Danijela Markovic, Alice Mizrahi, Peng Yao, J. Joshua Yang, Giacomo Indiveri, John Paul Strachan, Suman Datta, Elisa Vianello, Alexandre Valentian, Johannes Feldmann, Xuan Li, Wolfram H. P. Pernice, Harish Bhaskaran, Steve Furber, Emre Neftci, Franz Scherr, Wolfgang Maass, Srikanth Ramaswamy, Jonathan Tapson, Priyadarshini Panda, Youngeun Kim, Gouhei Tanaka, Simon Thorpe, Chiara Bartolozzi, Thomas A. Cleland, Christoph Posch, Shihchii Liu, Gabriella Panuccio, Mufti Mahmud, Arnab Neelim Mazumder, Morteza Hosseini, Tinoosh Mohsenin, Elisa Donati, Silvia Tolu, Roberto Galeazzi, Martin Ejsing Christensen, Sune Holm, Daniele Ielmini, N. Pryds

Summary: This article introduces the characteristics and advantages of von Neumann architecture and neuromorphic computing systems. While traditional von Neumann architecture is powerful, it has high power consumption and cannot handle complex data. Neuromorphic computing systems, inspired by biological concepts, can achieve lower power consumption for storing and processing large amounts of digital information. The aim of this article is to provide perspectives on the current state and future challenges in the field of neuromorphic technology, and to provide a concise yet comprehensive introduction and future outlook for readers.

NEUROMORPHIC COMPUTING AND ENGINEERING (2022)

Article Engineering, Electrical & Electronic

Precision of bit slicing with in-memory computing based on analog phase-change memory crossbars

Manuel Le Gallo, S. R. Nandakumar, Lazar Ciric, Irem Boybat, Riduan Khaddam-Aljameh, Charles Mackin, Abu Sebastian

Summary: In-memory computing is an efficient approach that utilizes the physical attributes of memory devices to perform computational tasks. However, the computational accuracy of this approach is currently limited due to inter-device variability, inhomogeneity, and randomness in analog memory devices. Bit slicing, a technique for constructing high precision processors, shows promise in overcoming this limitation. This study assesses the computational error in in-memory matrix-vector multiplications using bit slicing, and emphasizes the need to minimize analog matrix representation error through averaging within a given dynamic range.

NEUROMORPHIC COMPUTING AND ENGINEERING (2022)
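The bit-slicing idea summarized above can be made concrete with a minimal digital sketch: a full-precision integer matrix is decomposed into several low-precision slices, each slice performs its own matrix-vector product (as one crossbar would), and the partial results are recombined with the appropriate powers of the slice radix. This is an exact-arithmetic illustration under assumed conventions (unsigned 8-bit weights, 2 bits per slice); real analog crossbars add device noise and variability, which is the error source the paper quantifies.

```python
import numpy as np

def bit_sliced_matvec(w_int, x, bits_per_slice=2, total_bits=8):
    """Matrix-vector product with the weight matrix split into slices.

    w_int: non-negative integer matrix with entries < 2**total_bits.
    Each slice holds bits_per_slice bits of every weight; slice-level
    products are recombined digitally with powers of the slice radix,
    as in crossbar-based bit slicing.
    """
    assert total_bits % bits_per_slice == 0
    base = 2 ** bits_per_slice
    w = w_int.astype(np.int64).copy()
    result = np.zeros(w_int.shape[0], dtype=np.int64)
    for k in range(total_bits // bits_per_slice):
        slice_k = w % base            # k-th least-significant slice
        w //= base                    # shift remaining bits down
        result += (base ** k) * (slice_k @ x)  # recombine partial products
    return result
```

With noiseless slices the recombined result matches the full-precision product exactly; the paper's point is how that equivalence degrades, and can be recovered by averaging, when each slice is stored in analog phase-change devices.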

Proceedings Paper Engineering, Electrical & Electronic

Accurate weight mapping in a multi-memristive synaptic unit

Michele Martemucci, Benedikt Kersting, Riduan Khaddam-Aljameh, Irem Boybat, S. R. Nandakumar, Urs Egger, Matthew Brightsky, Robert L. Bruce, Manuel Le Gallo, Abu Sebastian

Summary: The proposed weight mapping algorithm efficiently programs a synaptic unit composed of multiple phase change memory devices, showing resilience to device-level non-idealities and limited yield. The algorithm is experimentally validated on a prototype PCM unit cell fabricated in the 90 nm CMOS technology node.

2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS) (2021)

Article Mathematical & Computational Biology

Accelerating Inference of Convolutional Neural Networks Using In-memory Computing

Martino Dazzi, Abu Sebastian, Luca Benini, Evangelos Eleftheriou

Summary: In-memory computing (IMC) is a non-von Neumann paradigm that offers energy-efficient, high throughput hardware for deep learning applications. This approach requires a rethink of architectural design choices due to its different execution pattern compared to previous computational paradigms. When applied to Convolution Neural Networks (CNNs), IMC hardware can achieve throughput and latency beyond current state-of-the-art for image classification tasks.

FRONTIERS IN COMPUTATIONAL NEUROSCIENCE (2021)

Proceedings Paper Engineering, Multidisciplinary

Mushroom-type phase change memory with projection liner: an array-level demonstration of conductance drift and noise mitigation

R. L. Bruce, S. Ghazi Sarwat, I Boybat, C-W Cheng, W. Kim, S. R. Nandakumar, C. Mackin, T. Philip, Z. Liu, K. Brew, N. Gong, I Ok, P. Adusumilli, K. Spoon, S. Ambrogio, B. Kersting, T. Bohnstingl, M. Le Gallo, A. Simon, N. Li, I Saraf, J-P Han, L. Gignac, J. M. Papalia, T. Yamashita, N. Saulnier, G. W. Burr, H. Tsai, A. Sebastian, V Narayanan, M. BrightSky

Summary: Phase change memory (PCM) is being considered for non-von Neumann accelerators for deep neural networks based on in-memory computing. Conductance drift and noise are key challenges for reliable storage of synaptic weights in such accelerators. The integration of a projection liner into multilevel mushroom-type PCM devices demonstrates mitigation of conductance drift and noise, with further improvement shown by combining with a low-drift phase-change material. Large-scale experiments confirm lower drift and device-to-device drift variability for devices with projection liner, crucial for in-memory computing accelerators.

2021 IEEE INTERNATIONAL RELIABILITY PHYSICS SYMPOSIUM (IRPS) (2021)
