Abstract:
In one embodiment, a depth-first deep convolutional network (DCN) includes a first convolutional layer having a first first-layer kernel and adapted to convolve a first input tensor, and a second convolutional layer having a first second-layer kernel and adapted to convolve a second input tensor. A method for the DCN includes initiating convolution, in the first convolutional layer, of the first input tensor with the first first-layer kernel to generate a value strip for the second input tensor and, prior to completion of the convolution in the first convolutional layer, initiating convolution, in the second convolutional layer, of the second input tensor with the first second-layer kernel to generate a value strip for a third layer.
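The depth-first scheme described above, in which the second layer begins convolving a strip of first-layer outputs before the first layer's convolution has completed, can be pictured with a one-dimensional sketch. The function name, the strip size, and the use of 1-D "valid" convolutions are illustrative assumptions, not the patented implementation.

```python
def depth_first_conv(x, k1, k2, strip=4):
    """Pipeline two 1-D valid convolutions strip by strip.

    As soon as the first layer has produced a strip of output values,
    the second layer convolves whatever it has enough context for,
    before the first layer's convolution is complete. (Hypothetical
    sketch of the depth-first idea, not the patented design.)
    """
    n1 = len(x) - len(k1) + 1        # length of the layer-1 output
    n2 = n1 - len(k2) + 1            # length of the layer-2 output
    mid = [0.0] * n1                 # second input tensor (layer-2 input)
    out = [0.0] * n2                 # value strips for the third layer
    produced = 0                     # layer-1 values computed so far
    consumed = 0                     # layer-2 values computed so far
    while consumed < n2:
        # Layer 1: generate the next value strip for the second input.
        hi = min(produced + strip, n1)
        for i in range(produced, hi):
            mid[i] = sum(x[i + j] * k1[j] for j in range(len(k1)))
        produced = hi
        # Layer 2: consume every position it now has full context for,
        # even though layer 1 may still be mid-convolution.
        while consumed + len(k2) <= produced and consumed < n2:
            out[consumed] = sum(mid[consumed + j] * k2[j]
                                for j in range(len(k2)))
            consumed += 1
    return out
```

Because layer 2 starts as soon as one strip exists, only a strip-sized slice of the intermediate tensor is "hot" at any time, which is the memory-traffic motivation behind depth-first execution.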
Abstract:
Methods and apparatus are provided for training a neural device having an artificial nervous system by modulating at least one training parameter during the training. One example method for training a neural device having an artificial nervous system generally includes observing the neural device in a training environment and modulating at least one training parameter based at least in part on the observing. For example, the training apparatus described herein may modify the neural device's internal learning mechanisms (e.g., spike rate, learning rate, neuromodulators, sensor sensitivity, etc.) and/or the training environment's stimuli (e.g., move a flame closer to the device, make the scene darker, etc.). In this manner, the speed with which the neural device is trained (i.e., the training rate) may be significantly increased compared to conventional neural device training systems.
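The observe-then-modulate loop above can be illustrated with a toy model. The device model, the scalar learning task, and the modulation policy (raise the learning rate while observed error stays high, anneal it as error falls) are all hypothetical stand-ins for the abstract's "internal learning mechanisms" and "training parameter".

```python
class ToyNeuralDevice:
    """Stand-in for a neural device: learns one scalar weight by
    error-driven updates. (Hypothetical model for illustration.)"""
    def __init__(self):
        self.w = 0.0
        self.learning_rate = 0.01   # an internal learning mechanism

    def respond(self, stimulus):
        return self.w * stimulus

    def learn(self, stimulus, target):
        error = target - self.respond(stimulus)
        self.w += self.learning_rate * error * stimulus
        return abs(error)

def train_with_modulation(device, stimulus=1.0, target=5.0, steps=200):
    """Observe the device in a training environment and modulate a
    training parameter (here, its learning rate) based on the
    observation: raise it while error is high, anneal it once low."""
    for _ in range(steps):
        error = device.learn(stimulus, target)            # observe
        if error > 1.0:                                   # modulate
            device.learning_rate = min(device.learning_rate * 1.1, 0.5)
        else:
            device.learning_rate = max(device.learning_rate * 0.99, 0.01)
    return device
```

The same loop structure would apply to modulating the environment's stimuli instead of the device's internals; only the branch bodies change.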
Abstract:
Data compression systems, methods, and computer program products are disclosed. For each successive input word of an input stream, it is determined whether the input word matches an entry in a lookback table. The lookback table is updated in response to the input word. Input words may be of a number of data types, including zero runs and full or partial matches with an entry in the lookback table. A codeword is generated by entropy encoding a data type corresponding to the input word. The lookback table may be indexed by the position of the input word in the input stream.
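The per-word classification step can be sketched as follows. The table size, position-modulo indexing, 32-bit word width, and high-half partial-match rule are assumptions for illustration; the entropy encoding of each data type into a codeword is omitted.

```python
def classify_words(words, table_size=16):
    """Classify each successive input word against a lookback table:
    'zero' for a zero word (zero runs), 'full' for an exact table hit,
    'partial' for a match on the upper 16 bits only, 'literal'
    otherwise. Each data type would then be entropy-encoded into a
    codeword; this sketch just emits (type, payload) pairs.
    """
    table = [0] * table_size
    out = []
    for pos, w in enumerate(words):
        idx = pos % table_size            # indexed by stream position
        if w == 0:
            out.append(("zero", None))
        elif w == table[idx]:
            out.append(("full", idx))     # full match: send index only
        elif (w >> 16) == (table[idx] >> 16):
            out.append(("partial", (idx, w & 0xFFFF)))  # send low half
        else:
            out.append(("literal", w))    # no match: send the word
        table[idx] = w                    # update table with input word
    return out
```

Compression comes from the skewed type distribution: zero and full-match symbols dominate typical memory traffic, so the entropy coder assigns them short codewords.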