Gradient-based training engine for quaternion-based machine-learning systems
Abstract:
A deep neural network (DNN) includes hidden layers arranged along a forward propagation path between an input layer and an output layer. The input layer accepts training data comprising quaternion values and outputs a quaternion-valued signal along the forward path to at least one of the hidden layers. At least some of the hidden layers include quaternion layers that execute consistent quaternion (QT) forward operations based on one or more variable parameters. A loss function engine produces a loss function representing an error between the DNN result and an expected result. QT backpropagation-based training operations include computing layer-wise QT partial derivatives, consistent with an orthogonal basis of quaternion space, of the loss function with respect to a QT conjugate of the one or more variable parameters and of respective inputs to the quaternion layers.
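The abstract's "consistent QT forward operations" rest on quaternion algebra: each weight and activation is a quaternion, layer outputs are formed with the (non-commutative) Hamilton product, and the training gradients are taken with respect to the quaternion conjugate. The sketch below is a minimal, hypothetical illustration of those primitives (the names `hamilton`, `conjugate`, and `qt_dense_forward` are illustrative, not from the patent): quaternions are stored as length-4 arrays `[w, x, y, z]`, and a toy quaternion dense layer applies one Hamilton product per weight plus a quaternion bias.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions stored as (..., 4) arrays [w, x, y, z].
    Non-commutative: hamilton(p, q) != hamilton(q, p) in general."""
    pw, px, py, pz = p[..., 0], p[..., 1], p[..., 2], p[..., 3]
    qw, qx, qy, qz = q[..., 0], q[..., 1], q[..., 2], q[..., 3]
    return np.stack([
        pw * qw - px * qx - py * qy - pz * qz,   # real part
        pw * qx + px * qw + py * qz - pz * qy,   # i component
        pw * qy - px * qz + py * qw + pz * qx,   # j component
        pw * qz + px * qy - py * qx + pz * qw,   # k component
    ], axis=-1)

def conjugate(q):
    """Quaternion conjugate q* = w - xi - yj - zk; QT gradients in the
    abstract are taken with respect to this conjugate of the parameters."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qt_dense_forward(x, W, b):
    """Forward pass of a hypothetical quaternion dense layer:
    y[j] = sum_i W[j, i] (*) x[i] + b[j], with (*) the Hamilton product.
    x: (in_dim, 4), W: (out_dim, in_dim, 4), b: (out_dim, 4)."""
    out_dim = W.shape[0]
    y = np.empty((out_dim, 4))
    for j in range(out_dim):
        acc = b[j].copy()
        for i in range(x.shape[0]):
            acc = acc + hamilton(W[j, i], x[i])
        y[j] = acc
    return y

# Sanity checks of the algebra underlying the QT layers:
i_q = np.array([0.0, 1.0, 0.0, 0.0])
j_q = np.array([0.0, 0.0, 1.0, 0.0])
print(hamilton(i_q, j_q))  # i * j = k, i.e. [0, 0, 0, 1]
q = np.array([1.0, 2.0, 3.0, 4.0])
print(hamilton(q, conjugate(q)))  # q q* = |q|^2 on the real axis: [30, 0, 0, 0]
```

A real implementation would vectorize the double loop and, for training, differentiate through these operations using the generalized quaternion (HR-style) calculus the abstract alludes to, where layer-wise partials are taken with respect to the conjugated variables rather than the variables themselves.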