Error allocation format selection for hardware implementation of deep neural network
Abstract:
Methods for determining a fixed point number format for one or more layers of a DNN based on the portion of the DNN's output error attributed to the fixed point formats of the different layers. Specifically, in the methods described herein the output error of a DNN attributable to the quantisation of the weights or input data values of each layer is estimated using a Taylor approximation, and the fixed point number format of one or more layers is adjusted based on that attribution. For example, where the fixed point number formats used by a DNN comprise an exponent and a mantissa bit length, the mantissa bit length of the layer allocated the lowest portion of the output error may be reduced, or the mantissa bit length of the layer allocated the highest portion of the output error may be increased. Such a method may be repeated iteratively to determine an optimum set of fixed point number formats for the layers of a DNN, as illustrated in the sketch below.
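The following is a minimal sketch of the bit-length-reduction variant described above, assuming per-layer weight gradients are available so that a first-order Taylor term |dE/dw|·|w − q(w)| can stand in for each layer's share of the output error. The helper names (`quantise`, `attributed_error`, `refine_formats`) and the per-layer dictionary layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

def quantise(x, exponent, mantissa_bits):
    # Hypothetical fixed point format: signed mantissa of `mantissa_bits` bits
    # scaled by 2**exponent.
    scale = 2.0 ** exponent
    max_val = (2 ** (mantissa_bits - 1) - 1) * scale
    return np.clip(np.round(x / scale) * scale, -max_val, max_val)

def attributed_error(weights, grads, exponent, mantissa_bits):
    # First-order Taylor estimate of the output error contributed by quantising
    # this layer's weights: sum over weights of |dE/dw| * |w - quantise(w)|.
    q_err = weights - quantise(weights, exponent, mantissa_bits)
    return float(np.sum(np.abs(grads) * np.abs(q_err)))

def refine_formats(layers, iterations=10, min_bits=2):
    # `layers` is a list of dicts with keys 'weights', 'grads', 'exponent',
    # 'mantissa_bits' (an assumed layout). Each iteration reduces the mantissa
    # bit length of the layer attributed the smallest share of the output error.
    for _ in range(iterations):
        errors = [attributed_error(l['weights'], l['grads'],
                                   l['exponent'], l['mantissa_bits'])
                  for l in layers]
        idx = int(np.argmin(errors))
        if layers[idx]['mantissa_bits'] > min_bits:
            layers[idx]['mantissa_bits'] -= 1
    return layers
```

The complementary adjustment mentioned in the abstract, increasing the mantissa bit length of the layer attributed the largest error, would follow the same pattern with `np.argmax` and an increment instead.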