Buffer device and convolution operation device and method

    Publication No.: US10162799B2

    Publication Date: 2018-12-25

    Application No.: US15459675

    Filing Date: 2017-03-15

    Applicant: Kneron, Inc.

    Abstract: A buffer device includes input lines, an input buffer unit and a remapping unit. The input lines are coupled to a memory and configured to be inputted with data from the memory in a current clock. The input buffer unit is coupled to the input lines and configured to buffer one part of the inputted data and output the part of the inputted data in a later clock. The remapping unit is coupled to the input lines and the input buffer unit, and configured to generate remap data for a convolution operation according to the data on the input lines and the output of the input buffer unit in the current clock. A convolution operation method for a data stream is also disclosed.
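The clock-by-clock behavior described in the abstract can be sketched as follows. This is a loose software model, not the claimed circuit: the class name `RemapBuffer`, the `keep` parameter, and the list-based "clock" are all illustrative assumptions.

```python
class RemapBuffer:
    """Illustrative model: one part of each clock's input-line data is
    buffered and combined with the next clock's input to form remap data."""

    def __init__(self, keep):
        self.keep = keep      # how many values the input buffer unit retains
        self.buffered = []    # output of the input buffer unit (previous clock)

    def clock(self, input_lines):
        # remapping unit: combine the buffered part of the previous clock's
        # data with the data currently on the input lines
        remap = self.buffered + list(input_lines)
        # input buffer unit: keep one part of this clock's data for later
        self.buffered = list(input_lines)[-self.keep:]
        return remap
```

In the second clock the remap data spans both clocks' inputs, which is what lets a convolution window straddle consecutive memory reads.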

    Convolution operation device and convolution operation method

    Publication No.: US10936937B2

    Publication Date: 2021-03-02

    Application No.: US15801623

    Filing Date: 2017-11-02

    Applicant: Kneron, Inc.

    Abstract: A convolution operation device includes a convolution calculation module, a memory and a buffer device. The convolution calculation module has a plurality of convolution units, and each convolution unit performs a convolution operation according to a filter and a plurality of current data, and leaves a part of the current data after the convolution operation. The buffer device is coupled to the memory and the convolution calculation module for retrieving a plurality of new data from the memory and inputting the new data to each of the convolution units. The new data are not a duplicate of the current data. A convolution operation method is also disclosed.
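The data-reuse idea in this abstract, retaining the overlapping part of the current data so only non-duplicate new data is fetched from memory, can be sketched as a 1-D sliding convolution. The function name, the stride parameter, and the use of a Python iterator to stand in for the memory are assumptions for illustration.

```python
from itertools import islice

def sliding_conv(stream, filt, stride=1):
    """Illustrative 1-D sketch: the window keeps the part of the current
    data that overlaps the next position; only the new, non-duplicated
    samples are read from the stream (standing in for the memory)."""
    k = len(filt)
    it = iter(stream)
    window = [next(it) for _ in range(k)]     # initial fill of the window
    out = [sum(w * f for w, f in zip(window, filt))]
    while True:
        new = list(islice(it, stride))        # fetch only the new data
        if len(new) < stride:
            break
        window = window[stride:] + new        # reuse the retained part
        out.append(sum(w * f for w, f in zip(window, filt)))
    return out
```

With stride 1 and a length-k filter, each step reads one new value instead of k, which is the bandwidth saving the abstract is after.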

    Method of compressing convolution parameters, convolution operation chip and system

    Publication No.: US10516415B2

    Publication Date: 2019-12-24

    Application No.: US15893294

    Filing Date: 2018-02-09

    Applicant: Kneron, Inc.

    Abstract: A method for compressing multiple original convolution parameters into a convolution operation chip includes steps of: determining a range of the original convolution parameters; setting an effective bit number for the range; setting a representative value, wherein the representative value is within the range; calculating differential values between the original convolution parameters and the representative value; quantifying the differential values to a minimum effective bit to obtain a plurality of compressed convolution parameters; and transmitting the effective bit number, the representative value and the compressed convolution parameters to the convolution operation chip.
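The claimed steps can be sketched directly: determine the range, pick a representative value inside it, quantize the differentials to the effective bit number, and ship the bit number, representative value, and codes. The midpoint choice of representative value, the uniform quantization step, and the clamping to the signed code range are assumptions; the patent does not fix those details in the abstract.

```python
def compress_params(params, bits):
    """Sketch of the claimed scheme: store small quantized differences
    from a representative value instead of full-precision parameters."""
    lo, hi = min(params), max(params)            # determine the range
    rep = (lo + hi) / 2.0                        # representative value within the range
    step = (hi - lo) / (2 ** bits - 1) or 1.0    # quantization step (assumed uniform)
    qmax = 2 ** (bits - 1) - 1
    # quantify each differential value to the effective bit number
    codes = [max(-qmax - 1, min(qmax, round((p - rep) / step))) for p in params]
    return bits, rep, step, codes

def decompress_params(rep, step, codes):
    """What the convolution operation chip would do on receipt."""
    return [rep + c * step for c in codes]
```

Reconstruction error is bounded by one quantization step, so narrower parameter ranges or more bits give tighter recovery.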

    Pooling operation device and method for convolutional neural network

    Publication No.: US10943166B2

    Publication Date: 2021-03-09

    Application No.: US15802092

    Filing Date: 2017-11-02

    Applicant: Kneron, Inc.

    Abstract: A pooling operation method for a convolutional neural network includes the following steps of: reading multiple new data in at least one current column of a pooling window; performing a first pooling operation with the new data to generate at least a current column pooling result; storing the current column pooling result in a buffer; and performing a second pooling operation with the current column pooling result and at least a preceding column pooling result stored in the buffer to generate a pooling result of the pooling window. The first pooling operation and the second pooling operation are forward max pooling operations.
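The two-stage scheme, a first pooling over each new column, a buffer of per-column results, and a second pooling across buffered column results, can be sketched as a small streaming class. The class name, the fixed window width, and returning `None` until the window fills are illustrative assumptions.

```python
class ColumnMaxPooler:
    """Illustrative streaming sketch: per-column maxima are buffered so
    each window result reuses the preceding columns' pooling results."""

    def __init__(self, window_w):
        self.window_w = window_w
        self.buffer = []                      # preceding column pooling results

    def push_column(self, col):
        self.buffer.append(max(col))          # first pooling operation (new column)
        if len(self.buffer) > self.window_w:
            self.buffer.pop(0)                # slide the window by one column
        if len(self.buffer) == self.window_w:
            return max(self.buffer)           # second pooling operation
        return None                           # window not yet full
```

Sliding the window by one column only needs one new column max; the rest comes from the buffer, matching the abstract's reuse of preceding column pooling results.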

    Multi-layer neural network
    Invention Grant

    Publication No.: US10552732B2

    Publication Date: 2020-02-04

    Application No.: US15242610

    Filing Date: 2016-08-22

    Applicant: Kneron Inc.

    Abstract: A multi-layer artificial neural network having at least one high-speed communication interface and N computational layers is provided. N is an integer larger than 1. The N computational layers are serially connected via the at least one high-speed communication interface. Each of the N computational layers respectively includes a computation circuit and a local memory. The local memory is configured to store input data and learnable parameters for the computation circuit. The computation circuit in the ith computational layer provides its computation results, via the at least one high-speed communication interface, to the local memory in the (i+1)th computational layer as the input data for the computation circuit in the (i+1)th computational layer, wherein i is an integer index ranging from 1 to (N−1).
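The dataflow in this abstract, layer i writing its results into layer i+1's local memory as that layer's input, can be sketched with plain Python objects. The `Layer` class, the `compute` callable, and `forward` are illustrative stand-ins for the computation circuits and the high-speed communication interface, not the claimed hardware.

```python
class Layer:
    """Illustrative model of one computational layer: a local memory
    holding input data and learnable parameters, plus a computation
    circuit (here just a function)."""

    def __init__(self, compute, params):
        self.compute = compute
        self.params = params          # learnable parameters in local memory
        self.local_memory = None      # input data written by the previous layer

    def run(self):
        return self.compute(self.local_memory, self.params)

def forward(layers, x):
    """Layer i's results go, over the (modeled) high-speed interface,
    into layer (i+1)'s local memory as its input data."""
    layers[0].local_memory = x
    for i in range(len(layers) - 1):
        layers[i + 1].local_memory = layers[i].run()
    return layers[-1].run()
```

Because each layer owns its input in local memory, the serial chain can be pipelined: layer i can start on the next sample once it has handed its result to layer i+1.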

    MULTI-LAYER NEURAL NETWORK
    Invention Application

    Publication No.: US20180053084A1

    Publication Date: 2018-02-22

    Application No.: US15242610

    Filing Date: 2016-08-22

    Applicant: Kneron Inc.

    CPC classification number: G06N3/063 G06N3/0454

    Abstract: A multi-layer artificial neural network having at least one high-speed communication interface and N computational layers is provided. N is an integer larger than 1. The N computational layers are serially connected via the at least one high-speed communication interface. Each of the N computational layers respectively includes a computation circuit and a local memory. The local memory is configured to store input data and learnable parameters for the computation circuit. The computation circuit in the ith computational layer provides its computation results, via the at least one high-speed communication interface, to the local memory in the (i+1)th computational layer as the input data for the computation circuit in the (i+1)th computational layer, wherein i is an integer index ranging from 1 to (N−1).
