King's College London

Research portal

Mixed-Precision Deep Learning Based on Computational Memory

Research output: Contribution to journal › Article

Standard

Mixed-Precision Deep Learning Based on Computational Memory. / Nandakumar, S. R.; Le Gallo, Manuel; Piveteau, Christophe; Joshi, Vinay; Mariani, Giovanni; Boybat, Irem; Karunaratne, Geethan; Khaddam-Aljameh, Riduan; Egger, Urs; Petropoulos, Anastasios; Antonakopoulos, Theodore; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos.

In: Frontiers in Neuroscience, Vol. 14, 406, 12.05.2020.

Research output: Contribution to journal › Article

Harvard

Nandakumar, SR, Le Gallo, M, Piveteau, C, Joshi, V, Mariani, G, Boybat, I, Karunaratne, G, Khaddam-Aljameh, R, Egger, U, Petropoulos, A, Antonakopoulos, T, Rajendran, B, Sebastian, A & Eleftheriou, E 2020, 'Mixed-Precision Deep Learning Based on Computational Memory', Frontiers in Neuroscience, vol. 14, 406. https://doi.org/10.3389/fnins.2020.00406

APA

Nandakumar, S. R., Le Gallo, M., Piveteau, C., Joshi, V., Mariani, G., Boybat, I., Karunaratne, G., Khaddam-Aljameh, R., Egger, U., Petropoulos, A., Antonakopoulos, T., Rajendran, B., Sebastian, A., & Eleftheriou, E. (2020). Mixed-Precision Deep Learning Based on Computational Memory. Frontiers in Neuroscience, 14, [406]. https://doi.org/10.3389/fnins.2020.00406

Vancouver

Nandakumar SR, Le Gallo M, Piveteau C, Joshi V, Mariani G, Boybat I et al. Mixed-Precision Deep Learning Based on Computational Memory. Frontiers in Neuroscience. 2020 May 12;14. 406. https://doi.org/10.3389/fnins.2020.00406

Author

Nandakumar, S. R. ; Le Gallo, Manuel ; Piveteau, Christophe ; Joshi, Vinay ; Mariani, Giovanni ; Boybat, Irem ; Karunaratne, Geethan ; Khaddam-Aljameh, Riduan ; Egger, Urs ; Petropoulos, Anastasios ; Antonakopoulos, Theodore ; Rajendran, Bipin ; Sebastian, Abu ; Eleftheriou, Evangelos. / Mixed-Precision Deep Learning Based on Computational Memory. In: Frontiers in Neuroscience. 2020 ; Vol. 14.

BibTeX

@article{2d9f6293255648a48a592b2995fb78aa,
title = "Mixed-Precision Deep Learning Based on Computational Memory",
abstract = "Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long-short-term-memory networks, and generative-adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates 172 × improvement in energy efficiency of the architecture when used for training a multilayer perceptron compared with a dedicated fully digital 32-bit implementation.",
keywords = "deep learning, in-memory computing, memristive devices, mixed-signal design, phase-change memory",
author = "Nandakumar, {S. R.} and {Le Gallo}, Manuel and Christophe Piveteau and Vinay Joshi and Giovanni Mariani and Irem Boybat and Geethan Karunaratne and Riduan Khaddam-Aljameh and Urs Egger and Anastasios Petropoulos and Theodore Antonakopoulos and Bipin Rajendran and Abu Sebastian and Evangelos Eleftheriou",
year = "2020",
month = may,
day = "12",
doi = "10.3389/fnins.2020.00406",
language = "English",
volume = "14",
journal = "Frontiers in Neuroscience",
issn = "1662-453X",
publisher = "Frontiers Media S.A.",

}
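The abstract above describes a mixed-precision training scheme in which the computational memory performs the weighted summations and receives only imprecise conductance updates, while a digital unit accumulates the weight updates in high precision. Below is a minimal sketch of that idea, assuming a simple pulse-granularity model of the devices; the names (chi, epsilon) and the noise model are illustrative assumptions, not the authors' actual implementation.

import numpy as np

rng = np.random.default_rng(0)

epsilon = 0.01  # assumed device update granularity (smallest reliable conductance change)
W_analog = rng.normal(0.0, 0.1, size=(784, 256))  # synaptic weights stored as PCM conductances
chi = np.zeros_like(W_analog)                     # high-precision digital accumulator

def noisy_pulse_update(weights, n_pulses, eps, rel_noise=0.3):
    # Each programming pulse nominally shifts the conductance by +/- eps,
    # but the realized change is imprecise (modeled here as Gaussian noise).
    nominal = n_pulses * eps
    noise = rng.normal(0.0, rel_noise * eps, size=weights.shape) * (n_pulses != 0)
    return weights + nominal + noise

def mixed_precision_step(grad, lr=0.1):
    # Accumulate the weight update in high precision (digital unit), then
    # program the analog devices only when the accumulated update exceeds
    # the device granularity.
    global W_analog, chi
    chi += -lr * grad
    n_pulses = np.trunc(chi / epsilon)        # signed number of pulses per weight
    W_analog = noisy_pulse_update(W_analog, n_pulses, epsilon)
    chi -= n_pulses * epsilon                 # keep only the residual below one pulse

# Example: one update step with a random stand-in gradient
mixed_precision_step(rng.normal(0.0, 0.05, size=W_analog.shape))

In this sketch the forward and backward weighted summations would run on the analog crossbar; only the update bookkeeping is shown, since that is the part the abstract identifies as requiring high precision.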

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Mixed-Precision Deep Learning Based on Computational Memory

AU - Nandakumar, S. R.

AU - Le Gallo, Manuel

AU - Piveteau, Christophe

AU - Joshi, Vinay

AU - Mariani, Giovanni

AU - Boybat, Irem

AU - Karunaratne, Geethan

AU - Khaddam-Aljameh, Riduan

AU - Egger, Urs

AU - Petropoulos, Anastasios

AU - Antonakopoulos, Theodore

AU - Rajendran, Bipin

AU - Sebastian, Abu

AU - Eleftheriou, Evangelos

PY - 2020/5/12

Y1 - 2020/5/12

N2 - Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long-short-term-memory networks, and generative-adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates 172 × improvement in energy efficiency of the architecture when used for training a multilayer perceptron compared with a dedicated fully digital 32-bit implementation.

AB - Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long-short-term-memory networks, and generative-adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates 172 × improvement in energy efficiency of the architecture when used for training a multilayer perceptron compared with a dedicated fully digital 32-bit implementation.

KW - deep learning

KW - in-memory computing

KW - memristive devices

KW - mixed-signal design

KW - phase-change memory

UR - http://www.scopus.com/inward/record.url?scp=85085060927&partnerID=8YFLogxK

U2 - 10.3389/fnins.2020.00406

DO - 10.3389/fnins.2020.00406

M3 - Article

AN - SCOPUS:85085060927

VL - 14

JO - Frontiers in Neuroscience

JF - Frontiers in Neuroscience

SN - 1662-453X

M1 - 406

ER -

