Abstract:
A method of training a neural network with back propagation includes generating error events representing a gradient of a cost function for the neural network. The error events may be generated based on a forward pass through the neural network resulting from input events, weights of the neural network, and events from a target signal. The method further includes updating the weights of the neural network based on the error events.
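For illustration, a minimal NumPy sketch of one possible reading of this method follows. The single weight matrix W, the ReLU activation, the squared-error cost, and the learning rate lr are illustrative assumptions, not features recited in the abstract.

```python
# A minimal sketch (not the patent's implementation) of event-driven
# weight updates: error "events" are taken to be entries of the cost
# gradient, computed from a forward pass driven by input events and
# compared against events from a target signal.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(16, 4))   # hypothetical single-layer weights
lr = 0.01                              # learning rate (assumed)

def forward(x_events, W):
    """Forward pass driven by input events (here: a dense event vector)."""
    return np.maximum(x_events @ W, 0.0)   # ReLU activation (assumed)

x_events = rng.poisson(1.0, size=16).astype(float)   # toy input events
target = rng.normal(size=4)                          # events from a target signal

y = forward(x_events, W)
# Error events: gradient of a squared-error cost with respect to the output.
error_events = y - target
# Back propagate through the ReLU and update the weights from the error events.
grad_W = np.outer(x_events, error_events * (y > 0))
W -= lr * grad_W
```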
Abstract:
A method for processing temporally redundant data in an artificial neural network (ANN) includes encoding an input signal, received at an initial layer of the ANN, into an encoded signal. The encoded signal comprises the input signal and a rate of change of the input signal. The method also includes quantizing the encoded signal into integer values and computing an activation signal of a neuron in a next layer of the ANN based on the quantized encoded signal. The method further includes computing an activation signal of a neuron at each layer subsequent to the next layer to compute a full forward pass of the ANN. The method also includes back propagating approximated gradients and updating parameters of the ANN based on an approximate derivative of a loss with respect to the activation signal.
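A minimal sketch of one way to realize this encoding follows, assuming a proportional-derivative style encoder (the input scaled by a gain k_p plus its discrete rate of change scaled by k_d) and a sigma-delta quantizer that carries a residual between time steps; the gains, the ReLU activation, and the layer shapes are illustrative assumptions. During training, the integer quantization would typically be bypassed with an approximate (e.g., straight-through) derivative so that gradients can be back propagated, as the abstract describes.

```python
# A minimal sketch, assuming a proportional-derivative encoding of a
# temporally redundant input stream, followed by integer quantization
# with a carried residual and a single dense layer.
import numpy as np

k_p, k_d = 0.1, 1.0   # encoding gains (assumed)

def encode(x_t, x_prev):
    """Encoded signal: the input plus its (discrete) rate of change."""
    return k_p * x_t + k_d * (x_t - x_prev)

def quantize(s, residual):
    """Sigma-delta style quantization to integer values, carrying a residual."""
    total = s + residual
    q = np.round(total)
    return q, total - q

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, size=(8, 8))   # weights into the next layer (assumed)
x_prev, residual = np.zeros(8), np.zeros(8)

for t in range(5):                       # toy temporal stream
    x_t = np.sin(t / 5.0) * np.ones(8)   # temporally redundant input
    q, residual = quantize(encode(x_t, x_prev), residual)
    h = np.maximum(q @ W1, 0.0)          # activation signal in the next layer
    x_prev = x_t
```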
Abstract:
A method of computation in a deep neural network includes discretizing input signals and computing a temporal difference of the discretized input signals to produce a discretized temporal difference. The method also includes applying weights of a first layer of the deep neural network to the discretized temporal difference to create an output of a weight matrix. The output of the weight matrix is temporally summed with a previous output of the weight matrix. An activation function is applied to the temporally summed output to create a next input signal to a next layer of the deep neural network.
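A minimal sketch of this layer computation follows; the fixed-point discretization step, the ReLU activation, and the class structure are illustrative assumptions.

```python
# A minimal sketch of the layer described above: the weight matrix is
# applied only to the discretized temporal difference of the input, and
# the result is temporally summed with the previous weight-matrix output.
import numpy as np

class SigmaDeltaLayer:
    def __init__(self, W):
        self.W = W
        self.x_prev = np.zeros(W.shape[0])   # previous discretized input
        self.acc = np.zeros(W.shape[1])      # running weight-matrix output

    def step(self, x_t):
        x_d = np.round(x_t * 16) / 16        # discretize the input signal (assumed scheme)
        diff = x_d - self.x_prev             # discretized temporal difference
        self.acc += diff @ self.W            # temporally sum with the previous output
        self.x_prev = x_d
        return np.maximum(self.acc, 0.0)     # activation -> next input signal

rng = np.random.default_rng(0)
layer = SigmaDeltaLayer(rng.normal(0, 0.1, size=(8, 4)))
for t in range(5):
    y = layer.step(np.sin(t / 5.0) * np.ones(8))
```

Because the temporal differences telescope, the running sum at time T equals the dense product of the current discretized input with the weight matrix, so the layer reproduces the standard computation while only propagating changes.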