-
Publication No.: US10796443B2
Publication Date: 2020-10-06
Application No.: US16162909
Filing Date: 2018-10-17
Applicant: Kneron, Inc.
Inventor: Ming-Zhe Jiang, Yuan Du, Li Du, Jie Wu, Jun-Jie Su
Abstract: An image depth decoder includes an NIR image buffer, a reference image ring buffer and a pattern matching engine. The NIR image buffer stores an NIR image inputted by a stream. The reference image ring buffer stores a reference image inputted by a stream. The pattern matching engine is coupled to the NIR image buffer and the reference image ring buffer, and performs a depth computation according to the NIR image and the reference image to output at least one depth value.
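As a rough software analogue of what the pattern matching engine does, the sketch below block-matches a streamed NIR frame against a reference pattern and converts the best-match disparity into a depth value. The block size, search range, and the focal-length/baseline constants are illustrative assumptions, not values from the patent.

```python
import numpy as np

def depth_from_pattern_matching(nir, ref, block=8, search=32,
                                focal_px=600.0, baseline_mm=40.0):
    """Illustrative block matching between an NIR frame and a reference
    pattern: for each block, find the horizontal shift (disparity) in the
    reference image with the lowest sum of absolute differences, then
    convert disparity to depth. All constants are assumptions."""
    h, w = nir.shape
    depth = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = nir[y:y + block, x:x + block].astype(np.int32)
            best_d, best_cost = 0, np.inf
            for d in range(search):
                if x + d + block > w:
                    break
                cand = ref[y:y + block, x + d:x + d + block].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            # Disparity 0 means "no usable match"; avoid division by zero.
            depth[by, bx] = focal_px * baseline_mm / best_d if best_d else 0.0
    return depth

# Toy usage with random 8-bit images.
nir = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(depth_from_pattern_matching(nir, ref).shape)  # (8, 8)
```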
-
Publication No.: US10162799B2
Publication Date: 2018-12-25
Application No.: US15459675
Filing Date: 2017-03-15
Applicant: Kneron, Inc.
Inventor: Yuan Du, Li Du, Yi-Lei Li, Yen-Cheng Kuan, Chun-Chen Liu
Abstract: A buffer device includes input lines, an input buffer unit and a remapping unit. The input lines are coupled to a memory and configured to be inputted with data from the memory in a current clock. The input buffer unit is coupled to the input lines and configured to buffer one part of the inputted data and output the part of the inputted data in a later clock. The remapping unit is coupled to the input lines and the input buffer unit, and configured to generate remap data for a convolution operation according to the data on the input lines and the output of the input buffer unit in the current clock. A convolution operation method for a data stream is also disclosed.
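A minimal software sketch of the described buffering and remapping, assuming the stream delivers one image column per clock: a small buffer retains part of the previously fetched data, and the remap step stitches it together with the newly arrived column into a full convolution window. The column-per-clock model and window width are assumptions for illustration.

```python
from collections import deque
import numpy as np

def stream_remap_windows(column_stream, kernel_width=3):
    """Illustrative remapping for a streamed convolution input: each clock
    delivers one new column; a small buffer retains the previous
    (kernel_width - 1) columns, and the remap step stitches buffered and
    new columns into a full window so no column is fetched twice."""
    buffer = deque(maxlen=kernel_width - 1)   # plays the role of the input buffer unit
    for column in column_stream:
        if len(buffer) == kernel_width - 1:
            # Remapped data: buffered columns (earlier clocks) + current column.
            window = np.stack(list(buffer) + [column], axis=1)
            yield window
        buffer.append(column)

# Toy usage: a 4-row image streamed column by column.
image = np.arange(4 * 6).reshape(4, 6)
columns = (image[:, c] for c in range(image.shape[1]))
for win in stream_remap_windows(columns):
    pass  # each `win` is a 4x3 block ready for a 3-wide convolution
print(win.shape)  # (4, 3)
```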
-
Publication No.: US10943166B2
Publication Date: 2021-03-09
Application No.: US15802092
Filing Date: 2017-11-02
Applicant: Kneron, Inc.
Inventor: Yuan Du, Li Du, Chun-Chen Liu
Abstract: A pooling operation method for a convolutional neural network includes the following steps of: reading multiple new data in at least one current column of a pooling window; performing a first pooling operation with the new data to generate at least a current column pooling result; storing the current column pooling result in a buffer; and performing a second pooling operation with the current column pooling result and at least a preceding column pooling result stored in the buffer to generate a pooling result of the pooling window. The first pooling operation and the second pooling operation are forward max pooling operations.
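The claimed flow splits max pooling into a column stage and a window stage; the sketch below mirrors that split in plain Python, with the pooling window size and stride as illustrative assumptions.

```python
import numpy as np

def column_wise_max_pool(feature_map, window=2, stride=2):
    """Illustrative two-stage max pooling: first reduce each column of a
    pooling window to a single value (the column pooling result), keep
    those per-column maxima in a small buffer, then take the max over the
    buffered columns to obtain the window result."""
    h, w = feature_map.shape
    out = np.empty((h // stride, w // stride))
    for oy, y in enumerate(range(0, h - window + 1, stride)):
        column_buffer = {}                      # buffer of column pooling results
        for x in range(w):
            # First pooling: max over the current column of the window.
            column_buffer[x] = feature_map[y:y + window, x].max()
        for ox, x in enumerate(range(0, w - window + 1, stride)):
            # Second pooling: max over the buffered column results.
            out[oy, ox] = max(column_buffer[c] for c in range(x, x + window))
    return out

fm = np.arange(16).reshape(4, 4)
print(column_wise_max_pool(fm))
# [[ 5.  7.]
#  [13. 15.]]
```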
-
Publication No.: US10169295B2
Publication Date: 2019-01-01
Application No.: US15459737
Filing Date: 2017-03-15
Applicant: Kneron, Inc.
Inventor: Li Du, Yuan Du, Yi-Lei Li, Yen-Cheng Kuan, Chun-Chen Liu
Abstract: A convolution operation method includes the following steps of: performing convolution operations for data inputted in channels, respectively, so as to output a plurality of convolution results; and alternately summing the convolution results of the channels in order so as to output a sum result. A convolution operation device executing the convolution operation method is also disclosed.
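A minimal sketch of the described two-step flow, assuming small NumPy arrays for the channels and filters: a convolution is run for each channel separately, and the per-channel results are then accumulated in channel order to form the sum result.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2-D correlation with no padding, used as the per-channel step."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def multichannel_conv(channels, filters):
    """Run one convolution per input channel, then accumulate the channel
    results one at a time, in order, to produce the sum result."""
    per_channel = [conv2d_valid(c, f) for c, f in zip(channels, filters)]
    total = np.zeros_like(per_channel[0])
    for result in per_channel:          # take one channel's contribution at a time
        total += result
    return total

channels = [np.random.rand(5, 5) for _ in range(3)]
filters = [np.random.rand(3, 3) for _ in range(3)]
print(multichannel_conv(channels, filters).shape)  # (3, 3)
```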
-
Publication No.: US10936937B2
Publication Date: 2021-03-02
Application No.: US15801623
Filing Date: 2017-11-02
Applicant: Kneron, Inc.
Inventor: Li Du, Yuan Du, Chun-Chen Liu
Abstract: A convolution operation device includes a convolution calculation module, a memory and a buffer device. The convolution calculation module has a plurality of convolution units, and each convolution unit performs a convolution operation according to a filter and a plurality of current data, and leaves a part of the current data after the convolution operation. The buffer device is coupled to the memory and the convolution calculation module for retrieving a plurality of new data from the memory and inputting the new data to each of the convolution units. The new data are not a duplicate of the current data. A convolution operation method is also disclosed.
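The sketch below illustrates the data-reuse idea in software: a convolution unit keeps the overlapping part of its current window, so each step only has to be fed the columns it has not yet seen. The column-major sliding order, window size, and stride are illustrative assumptions.

```python
import numpy as np

class SlidingConvUnit:
    """Illustrative data-reuse scheme: the unit keeps the columns of the
    current window that the next window also needs, so the buffer only has
    to deliver the columns that are genuinely new."""

    def __init__(self, filt, stride=1):
        self.filt = filt
        self.stride = stride
        self.window = None                       # current data held in the unit

    def step(self, new_columns):
        """Accept only the new (non-duplicated) columns and produce one output."""
        if self.window is None:
            self.window = new_columns            # first window arrives in full
        else:
            kept = self.window[:, self.stride:]  # part of the current data left in place
            self.window = np.concatenate([kept, new_columns], axis=1)
        return (self.window * self.filt).sum()

filt = np.ones((3, 3))
image = np.arange(3 * 8).reshape(3, 8).astype(float)
unit = SlidingConvUnit(filt)
outputs = [unit.step(image[:, 0:3])]              # full first window
for c in range(3, 8):
    outputs.append(unit.step(image[:, c:c + 1]))  # only one new column per step
print(outputs)
```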
-
Publication No.: US10516415B2
Publication Date: 2019-12-24
Application No.: US15893294
Filing Date: 2018-02-09
Applicant: Kneron, Inc.
Inventor: Li Du, Yuan Du, Jun-Jie Su, Ming-Zhe Jiang
Abstract: A method for compressing multiple original convolution parameters into a convolution operation chip includes steps of: determining a range of the original convolution parameters; setting an effective bit number for the range; setting a representative value, wherein the representative value is within the range; calculating differential values between the original convolution parameters and the representative value; quantifying the differential values to a minimum effective bit to obtain a plurality of compressed convolution parameters; and transmitting the effective bit number, the representative value and the compressed convolution parameters to the convolution operation chip.
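The listed steps map naturally onto a small encoder/decoder pair. The sketch below follows them with a midpoint representative value, an 8-bit budget, and a linear fixed-point quantizer, all of which are illustrative assumptions rather than choices stated in the patent.

```python
import math
import numpy as np

def compress_conv_params(params, total_bits=8):
    """Illustrative rendition of the claimed steps: find the range of the
    original parameters, pick a representative value inside it (midpoint,
    an assumption), derive an effective fractional-bit count from the range
    and the bit budget, then encode each parameter as its difference from
    the representative value quantized to that precision."""
    lo, hi = float(params.min()), float(params.max())       # range of the parameters
    representative = (lo + hi) / 2.0                         # value within the range
    max_diff = max(hi - representative, representative - lo) or 1.0
    int_bits = max(0, math.ceil(math.log2(max_diff)) + 1)   # sign + magnitude bits
    frac_bits = total_bits - int_bits                        # effective bit number
    diffs = params - representative                          # differential values
    compressed = np.round(diffs * 2 ** frac_bits).astype(np.int16)
    return frac_bits, representative, compressed             # what the chip receives

def decompress_conv_params(frac_bits, representative, compressed):
    return representative + compressed.astype(np.float64) / 2 ** frac_bits

params = np.random.randn(16)
frac_bits, rep, comp = compress_conv_params(params)
restored = decompress_conv_params(frac_bits, rep, comp)
print(np.abs(params - restored).max())   # bounded by half a quantization step
```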
-
Publication No.: US10552732B2
Publication Date: 2020-02-04
Application No.: US15242610
Filing Date: 2016-08-22
Applicant: Kneron Inc.
Inventor: Yilei Li, Yuan Du, Chun-Chen Liu, Li Du
IPC: G06N3/063
Abstract: A multi-layer artificial neural network having at least one high-speed communication interface and N computational layers is provided. N is an integer larger than 1. The N computational layers are serially connected via the at least one high-speed communication interface. Each of the N computational layers respectively includes a computation circuit and a local memory. The local memory is configured to store input data and learnable parameters for the computation circuit. The computation circuit in the ith computational layer provides its computation results, via the at least one high-speed communication interface, to the local memory in the (i+1)th computational layer as the input data for the computation circuit in the (i+1)th computational layer, wherein i is an integer index ranging from 1 to (N−1).
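A toy software model of the described chain, assuming a matrix-multiply-plus-ReLU computation circuit: each layer object holds a local memory for its input data and parameters, and forwarding a layer's result into the next layer's local memory stands in for the high-speed serial link.

```python
import numpy as np

class LayerNode:
    """Illustrative model of one computational layer: a local memory holding
    the input data and learnable parameters, plus a compute step whose
    result is pushed into the next layer's local memory."""

    def __init__(self, weights):
        self.local_memory = {"input": None, "params": weights}

    def receive(self, data):                       # the high-speed interface, simulated
        self.local_memory["input"] = data

    def compute(self):
        x, w = self.local_memory["input"], self.local_memory["params"]
        return np.maximum(x @ w, 0.0)              # computation circuit (assumed op)

def run_pipeline(layers, first_input):
    """Layer i's result becomes layer i+1's input, exactly as a serial chain."""
    layers[0].receive(first_input)
    for i, layer in enumerate(layers):
        result = layer.compute()
        if i + 1 < len(layers):
            layers[i + 1].receive(result)          # forward over the serial link
    return result

layers = [LayerNode(np.random.rand(8, 8)) for _ in range(3)]   # N = 3 layers
print(run_pipeline(layers, np.random.rand(1, 8)).shape)        # (1, 8)
```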
-
Publication No.: US20180053084A1
Publication Date: 2018-02-22
Application No.: US15242610
Filing Date: 2016-08-22
Applicant: Kneron Inc.
Inventor: Yilei Li, Yuan Du, Chun-Chen Liu, Li Du
CPC classification number: G06N3/063, G06N3/0454
Abstract: A multi-layer artificial neural network having at least one high-speed communication interface and N computational layers is provided. N is an integer larger than 1. The N computational layers are serially connected via the at least one high-speed communication interface. Each of the N computational layers respectively includes a computation circuit and a local memory. The local memory is configured to store input data and learnable parameters for the computation circuit. The computation circuit in the ith computational layer provides its computation results, via the at least one high-speed communication interface, to the local memory in the (i+1)th computational layer as the input data for the computation circuit in the (i+1)th computational layer, wherein i is an integer index ranging from 1 to (N−1).
-