TY - CONF
T1 - ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning
T2 - 52nd IEEE International Symposium on Circuits and Systems, ISCAS 2020
AU - Joshi, Vinay
AU - Karunaratne, Geethan
AU - Le Gallo, Manuel
AU - Boybat, Irem
AU - Piveteau, Christophe
AU - Sebastian, Abu
AU - Rajendran, Bipin
AU - Eleftheriou, Evangelos
N1 - Funding Information:
This project was partially supported by the Semiconductor Research Corporation.
Publisher Copyright:
© 2020 IEEE
PY - 2020
Y1 - 2020
AB - Deep neural networks (DNNs) have surpassed human-level accuracy in a variety of cognitive tasks, but at the cost of significant memory and time requirements for training. This limits their deployment in energy- and memory-limited applications that require real-time learning. Matrix-vector multiplication (MVM) and the vector-vector outer product (VVOP) are the two most expensive operations in DNN training. Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy; even with these strategies, however, VVOP computation remains a relatively underexplored bottleneck. Stochastic computing (SC) has been proposed to improve the efficiency of VVOP computation, but only for relatively shallow networks with bounded activation functions and floating-point (FP) scaling of activation gradients. In this paper, we propose ESSOP, an efficient and scalable stochastic outer product architecture based on the SC paradigm. We introduce efficient techniques that generalize SC to weight-update computation in DNNs with unbounded activation functions (e.g., ReLU), as required by many state-of-the-art networks. Our architecture reduces computational cost by reusing random numbers and by replacing certain FP multiplications with bit-shift scaling. We show that ResNet-32, with 33 convolution layers and a fully connected layer, can be trained with ESSOP on the CIFAR-10 dataset to achieve accuracy comparable to the baseline. A hardware design of ESSOP at the 14 nm technology node shows that, compared to a highly pipelined FP16 multiplier design, ESSOP is 82.2% and 93.7% more energy- and area-efficient, respectively, for outer product computation.
UR - http://www.scopus.com/inward/record.url?scp=85109315576&partnerID=8YFLogxK
M3 - Conference paper
AN - SCOPUS:85109315576
T3 - Proceedings - IEEE International Symposium on Circuits and Systems
BT - 2020 IEEE International Symposium on Circuits and Systems, ISCAS 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 10 October 2020 through 21 October 2020
ER -