Invention Grant
US08700552B2 Exploiting sparseness in training deep neural networks (Granted)

Exploiting sparseness in training deep neural networks
Abstract:
Deep Neural Network (DNN) training technique embodiments are presented that train a DNN while exploiting the sparseness of non-zero hidden layer interconnection weight values. Generally, a fully connected DNN is initially trained by sweeping through a full training set a number of times. Then, for the most part, only the interconnections whose weight magnitudes exceed a minimum weight threshold are considered in further training. This minimum weight threshold can be established as a value that results in only a prescribed maximum number of interconnections being considered when setting interconnection weight values via an error back-propagation procedure during the training. It is noted that the continued DNN training tends to converge much faster than the initial training.
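The core idea in the abstract is to pick a minimum weight threshold such that at most a prescribed number of interconnections remain active, and then restrict further back-propagation updates to that set. Below is a minimal NumPy sketch of that thresholding step, not taken from the patent itself; the function names (sparsify_weights, masked_sgd_step), the layer sizes, and the connection budget are illustrative assumptions.

```python
import numpy as np

def sparsify_weights(W, max_connections):
    """Keep only the `max_connections` largest-magnitude weights of W.

    Returns the pruned weight matrix and a boolean mask marking the
    interconnections that remain active for further training.
    """
    flat = np.abs(W).ravel()
    if max_connections >= flat.size:
        return W, np.ones_like(W, dtype=bool)
    # The minimum weight threshold is the magnitude of the k-th largest
    # weight, so roughly `max_connections` interconnections survive
    # (ties at the threshold may keep a few extra).
    threshold = np.partition(flat, -max_connections)[-max_connections]
    mask = np.abs(W) >= threshold
    return W * mask, mask

def masked_sgd_step(W, grad, mask, lr=0.1):
    """Back-propagation-style update restricted to unpruned interconnections."""
    W -= lr * grad
    W *= mask  # pruned weights stay exactly zero
    return W

# Hypothetical usage: prune a hidden-layer weight matrix to at most 5,000
# non-zero interconnections, then continue training only over that mask.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(512, 512))
W, mask = sparsify_weights(W, max_connections=5_000)
grad = rng.normal(scale=0.01, size=W.shape)  # stand-in for a real gradient
W = masked_sgd_step(W, grad, mask)
print(f"active interconnections: {int(mask.sum())} of {mask.size}")
```

In this reading, the fully connected network is first trained over the whole training set, the mask is then fixed from the weight magnitudes, and subsequent sweeps update only the masked-in weights, which is consistent with the abstract's observation that the continued training converges faster.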