W-AMA: Weight-aware Approximate Multiplication Architecture for neural processing
Created by W.Langdon from gp-bibliography.bib Revision:1.8098
@Article{LIU:2023:compeleceng,
  author =       "Bo Liu2 and Renyuan Zhang and Qiao Shen and Zeju Li
                  and Na Xie and Yuanhao Wang and Chonghang Xie and
                  Hao Cai",
  title =        "{W-AMA:} Weight-aware Approximate Multiplication
                  Architecture for neural processing",
  journal =      "Computers and Electrical Engineering",
  volume =       "111",
  pages =        "108921",
  year =         "2023",
  ISSN =         "0045-7906",
  DOI =          "doi:10.1016/j.compeleceng.2023.108921",
  URL =          "https://www.sciencedirect.com/science/article/pii/S0045790623003452",
  keywords =     "genetic algorithms, genetic programming, Cartesian
                  Genetic Programming, Approximate computing, Deep
                  Neural Network, ANN, Hardware accelerator, Average
                  Hessian trace",
  abstract =     "This paper presents the Weight-aware Approximate
                  Multiplication Architecture (W-AMA) for Deep Neural
                  Networks (DNNs). Considering the Gaussian-like weight
                  distribution, it deploys an accuracy-configurable
                  computing component to improve computational
                  efficiency. Two techniques for effectively
                  integrating the W-AMA into a DNN accelerator are
                  presented: (1) A Cartesian Genetic Programming (CGP)
                  based approximate multiplier is designed and can be
                  selected to compute the Least Significant Bits (LSBs)
                  in a higher-accuracy mode. The
                  Reward-Penalty-Coefficient (RPC) is proposed to
                  achieve internal compensation. (2) The
                  Hessian-Aware-Approximation (HAA) method is used for
                  cross-layer mapping of hybrid approximate modes.
                  Based on the W-AMA, an energy-efficient DNN
                  accelerator is proposed and evaluated in 28 nm
                  technology. It achieves an energy efficiency of
                  9.6 TOPS/W, and computational energy efficiency is
                  improved by 1.5 times compared with standard units,
                  with a 0.52% accuracy loss on CIFAR-10 using
                  ResNet-18.",
}
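The first technique in the abstract, a CGP-designed approximate multiplier for the least significant bits, can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than the paper's setup: the gate library, the genome layout, the 2-bit operand width, the mean-absolute-error fitness, and plain random search standing in for the CGP evolutionary loop.

# A minimal sketch of a CGP genome encoding a small combinational
# circuit, scored as an approximate 2-bit x 2-bit multiplier.
# Gate set, sizes, and fitness are illustrative assumptions.
import itertools
import random

GATES = {
    0: lambda a, b: a & b,        # AND
    1: lambda a, b: a | b,        # OR
    2: lambda a, b: a ^ b,        # XOR
    3: lambda a, b: 1 - (a & b),  # NAND
}

N_IN = 4      # two 2-bit operands -> 4 primary inputs (assumption)
N_NODES = 10  # internal CGP nodes
N_OUT = 4     # 4-bit product

def random_genome(rng):
    """Each node = (gate_id, src_a, src_b); sources index earlier columns."""
    genome = []
    for i in range(N_NODES):
        n_avail = N_IN + i  # feed-forward: only earlier values are legal inputs
        genome.append((rng.randrange(len(GATES)),
                       rng.randrange(n_avail),
                       rng.randrange(n_avail)))
    outputs = [rng.randrange(N_IN + N_NODES) for _ in range(N_OUT)]
    return genome, outputs

def evaluate(genome, outputs, bits):
    """Propagate input bits through the node list, read off output bits."""
    vals = list(bits)
    for gate, a, b in genome:
        vals.append(GATES[gate](vals[a], vals[b]))
    return [vals[o] for o in outputs]

def mean_error(individual):
    """Mean absolute error over all 16 possible 2-bit x 2-bit products."""
    genome, outputs = individual
    total = 0
    for x, y in itertools.product(range(4), repeat=2):
        bits = [(x >> 1) & 1, x & 1, (y >> 1) & 1, y & 1]
        out = evaluate(genome, outputs, bits)
        approx = sum(b << (N_OUT - 1 - i) for i, b in enumerate(out))
        total += abs(approx - x * y)
    return total / 16

rng = random.Random(0)
best = min((random_genome(rng) for _ in range(2000)), key=mean_error)
print("best mean abs error:", mean_error(best))

In a full CGP flow the random-search line would be replaced by a (1+lambda) evolution strategy with point mutation, and the fitness would also weigh hardware cost (area, power) against error.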
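The second technique, Hessian-Aware-Approximation, matches the "Average Hessian trace" keyword: layers where the loss surface is flatter (smaller average Hessian trace) can tolerate a coarser approximate mode. Below is a minimal PyTorch sketch using Hutchinson's trace estimator; the model, data, and the ranking rule at the end are placeholders, not the paper's mapping algorithm.

# Sketch of Hessian-aware layer sensitivity in the spirit of HAA:
# estimate the average Hessian trace of the loss w.r.t. each layer's
# weights, then assume flatter layers can take coarser approximate modes.
import torch
import torch.nn as nn

def avg_hessian_trace(loss, param, n_samples=8):
    """Hutchinson estimate of tr(H) / numel for one parameter tensor."""
    grad = torch.autograd.grad(loss, param, create_graph=True)[0]
    trace = 0.0
    for _ in range(n_samples):
        v = torch.randint_like(param, high=2) * 2.0 - 1.0  # Rademacher +/-1
        hv = torch.autograd.grad(grad, param, grad_outputs=v,
                                 retain_graph=True)[0]    # Hessian-vector product
        trace += (hv * v).sum().item()                    # v^T H v
    return trace / (n_samples * param.numel())

# Placeholder model and data, standing in for a real DNN and dataset.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x, y = torch.randn(32, 8), torch.randint(0, 4, (32,))
loss = nn.functional.cross_entropy(model(x), y)

# Illustrative rule: smaller average trace -> flatter loss -> assign a
# more aggressive approximate multiplication mode to that layer.
for name, p in model.named_parameters():
    if p.dim() > 1:  # weight matrices only
        t = avg_hessian_trace(loss, p)
        print(f"{name}: avg Hessian trace ~ {t:.4e}")

Hutchinson's estimator only needs Hessian-vector products, so each sample costs one extra backward pass instead of materialising the full Hessian.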