-
Publication No.: US20190138567A1
Publication Date: 2019-05-09
Application No.: US16179270
Filing Date: 2018-11-02
Applicant: Imagination Technologies Limited
Inventor: Chris Martin, David Hough, Clifford Gibson, Daniel Barnard
Abstract: Hardware implementations of, and methods for processing, a convolution layer of a Deep Neural Network (DNN) that comprise a plurality of convolution engines, wherein the input data and weights are provided to the convolution engines in an order that allows input data and weights read from memory to be used in at least two filter-window calculations, performed either by the same convolution engine in successive cycles or by different convolution engines in the same cycle. For example, in some hardware implementations of a convolution layer, the convolution engines are configured to process the same weights but different input data each cycle, while the input data for each convolution engine remains the same for at least two cycles, so that the convolution engines use the same input data in at least two consecutive cycles.
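The data-reuse schedule described in this abstract can be illustrated with a small software sketch. The Python below is a behavioural model only, not the patented hardware: the engine count, the 1-D input line, the filter length and the pairing of filters are illustrative assumptions. It shows one weight fetch being shared by several engines in the same cycle, and each engine's input window being held for two consecutive cycles so that one input-data fetch serves two filter-window calculations.

```python
"""Behavioural sketch (not the patented RTL) of the convolution-engine
data-reuse schedule described in the abstract above.

Assumptions for illustration only: 4 engines, a 1-D input line, length-3
filters, and filters issued in pairs."""
import numpy as np

NUM_ENGINES = 4      # parallel convolution engines
FILTER_LEN = 3       # length of each 1-D filter window

input_data = np.arange(16, dtype=np.float32)                 # example input line
filters = np.random.rand(6, FILTER_LEN).astype(np.float32)   # example filters

num_windows = len(input_data) - FILTER_LEN + 1
outputs = np.zeros((len(filters), num_windows), dtype=np.float32)

# Filters are issued in pairs and windows in groups of NUM_ENGINES, so that:
#  * within one cycle, every engine uses the SAME weights (one weight fetch
#    feeds NUM_ENGINES filter-window calculations in parallel), and
#  * each engine holds the SAME input window for two consecutive cycles (one
#    input-data fetch feeds two filter-window calculations in succession).
for f_base in range(0, len(filters), 2):
    for win_base in range(0, num_windows, NUM_ENGINES):
        # Each engine latches one window; it is reused for the next two cycles.
        windows = [input_data[w:w + FILTER_LEN]
                   for w in range(win_base,
                                  min(win_base + NUM_ENGINES, num_windows))]
        for cycle in range(2):                    # two cycles per window latch
            weights = filters[f_base + cycle]     # one weight fetch, broadcast
            for engine, window in enumerate(windows):
                outputs[f_base + cycle, win_base + engine] = \
                    float(np.dot(weights, window))

# Cross-check the scheduled computation against a direct evaluation.
reference = np.array([[np.dot(f, input_data[w:w + FILTER_LEN])
                       for w in range(num_windows)] for f in filters])
assert np.allclose(outputs, reference)
```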
-
Publication No.: US20190087718A1
Publication Date: 2019-03-21
Application No.: US16136553
Filing Date: 2018-09-20
Applicant: Imagination Technologies Limited
Inventor: Chris Martin, David Hough, Paul Brasnett, Cagatay Dikici, James Imber, Clifford Gibson
Abstract: Hardware implementations of Deep Neural Networks (DNNs) and related methods with a variable output data format. Specifically, in the hardware implementations and methods described herein, the hardware implementation is configured to perform one or more hardware passes to implement a DNN, wherein during each hardware pass the hardware implementation receives input data for a particular layer, processes that input data in accordance with the particular layer (and optionally one or more subsequent layers), and outputs the processed data in a desired format based on the layer, or layers, that are processed in the particular hardware pass. In particular, when a hardware implementation receives input data to be processed, it also receives information indicating the desired format for the output data of the hardware pass and is configured to, prior to outputting the processed data, convert the output data to the desired format.
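The variable-output-format behaviour described in this abstract can be sketched in software as follows. This Python model is illustrative only; the OutputFormat descriptor (signed fixed point defined by a bit width and an exponent), the to_fixed_point conversion and the example layer functions are assumptions, not details taken from the patent. The point it demonstrates is that each hardware pass receives a format descriptor together with its input data and converts the processed data to that format before the data leaves the pass.

```python
"""Behavioural sketch (not the patented hardware) of the variable output data
format described in the abstract above.

Assumptions for illustration only: the OutputFormat descriptor (signed fixed
point, value = mantissa * 2**exponent), the conversion routine and the example
layer functions."""
from dataclasses import dataclass
from typing import Callable

import numpy as np


@dataclass
class OutputFormat:
    """Hypothetical per-pass output format descriptor."""
    bit_width: int   # number of bits of the signed mantissa
    exponent: int    # stored value = mantissa * 2**exponent


def to_fixed_point(values: np.ndarray, fmt: OutputFormat) -> np.ndarray:
    """Convert floating-point results to the requested fixed-point format."""
    lo = -(2 ** (fmt.bit_width - 1))
    hi = 2 ** (fmt.bit_width - 1) - 1
    mantissas = np.clip(np.round(values / 2.0 ** fmt.exponent), lo, hi)
    return mantissas.astype(np.int32)


def hardware_pass(input_data: np.ndarray,
                  layer_fn: Callable[[np.ndarray], np.ndarray],
                  out_fmt: OutputFormat) -> np.ndarray:
    """One hardware pass: process the layer(s), then convert the result to the
    format requested for this pass before it is written out."""
    processed = layer_fn(input_data)           # layer processing
    return to_fixed_point(processed, out_fmt)  # per-pass output conversion


# Two passes over the same input, each requesting a different output format.
x = np.array([-1.2, 0.3, 2.75, 7.9], dtype=np.float32)
relu_out = hardware_pass(x, lambda d: np.maximum(d, 0.0),
                         OutputFormat(bit_width=8, exponent=-4))
scaled_out = hardware_pass(x, lambda d: 0.5 * d,
                           OutputFormat(bit_width=16, exponent=-8))
print(relu_out, scaled_out)
```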
-
Publication No.: US12165045B2
Publication Date: 2024-12-10
Application No.: US16136553
Filing Date: 2018-09-20
Applicant: Imagination Technologies Limited
Inventor: Chris Martin, David Hough, Paul Brasnett, Cagatay Dikici, James Imber, Clifford Gibson
Abstract: Hardware implementations of Deep Neural Networks (DNNs) and related methods with a variable output data format. Specifically, in the hardware implementations and methods described herein, the hardware implementation is configured to perform one or more hardware passes to implement a DNN, wherein during each hardware pass the hardware implementation receives input data for a particular layer, processes that input data in accordance with the particular layer (and optionally one or more subsequent layers), and outputs the processed data in a desired format based on the layer, or layers, that are processed in the particular hardware pass. In particular, when a hardware implementation receives input data to be processed, it also receives information indicating the desired format for the output data of the hardware pass and is configured to, prior to outputting the processed data, convert the output data to the desired format.
-
Publication No.: US11868426B2
Publication Date: 2024-01-09
Application No.: US17510633
Filing Date: 2021-10-26
Applicant: Imagination Technologies Limited
Inventor: Chris Martin, David Hough, Clifford Gibson, Daniel Barnard
CPC classification number: G06F17/153, G06F7/5443, G06N3/04, G06N3/063, G06N3/045
Abstract: Hardware implementations of, and methods for processing, a convolution layer of a Deep Neural Network (DNN) that comprise a plurality of convolution engines, wherein the input data and weights are provided to the convolution engines in an order that allows input data and weights read from memory to be used in at least two filter-window calculations, performed either by the same convolution engine in successive cycles or by different convolution engines in the same cycle. For example, in some hardware implementations of a convolution layer, the convolution engines are configured to process the same weights but different input data each cycle, while the input data for each convolution engine remains the same for at least two cycles, so that the convolution engines use the same input data in at least two consecutive cycles.
-
Publication No.: US11157592B2
Publication Date: 2021-10-26
Application No.: US17165014
Filing Date: 2021-02-02
Applicant: Imagination Technologies Limited
Inventor: Chris Martin, David Hough, Clifford Gibson, Daniel Barnard
Abstract: Hardware implementations of, and methods for processing, a convolution layer of a Deep Neural Network (DNN) that comprise a plurality of convolution engines, wherein the input data and weights are provided to the convolution engines in an order that allows input data and weights read from memory to be used in at least two filter-window calculations, performed either by the same convolution engine in successive cycles or by different convolution engines in the same cycle. For example, in some hardware implementations of a convolution layer, the convolution engines are configured to process the same weights but different input data each cycle, while the input data for each convolution engine remains the same for at least two cycles, so that the convolution engines use the same input data in at least two consecutive cycles.
-
Publication No.: US20240412056A1
Publication Date: 2024-12-12
Application No.: US18813214
Filing Date: 2024-08-23
Applicant: Imagination Technologies Limited
Inventor: Chris Martin, David Hough, Paul Brasnett, Cagatay Dikici, James Imber, Clifford Gibson
Abstract: Hardware implementations of Deep Neural Networks (DNNs) and related methods with a variable output data format. Specifically, in the hardware implementations and methods described herein, the hardware implementation is configured to perform one or more hardware passes to implement a DNN, wherein during each hardware pass the hardware implementation receives input data for a particular layer, processes that input data in accordance with the particular layer (and optionally one or more subsequent layers), and outputs the processed data in a desired format based on the layer, or layers, that are processed in the particular hardware pass. In particular, when a hardware implementation receives input data to be processed, it also receives information indicating the desired format for the output data of the hardware pass and is configured to, prior to outputting the processed data, convert the output data to the desired format.
-
Publication No.: US20210157876A1
Publication Date: 2021-05-27
Application No.: US17165014
Filing Date: 2021-02-02
Applicant: Imagination Technologies Limited
Inventor: Chris Martin, David Hough, Clifford Gibson, Daniel Barnard
Abstract: Hardware implementations of, and methods for processing, a convolution layer of a Deep Neural Network (DNN) that comprise a plurality of convolution engines, wherein the input data and weights are provided to the convolution engines in an order that allows input data and weights read from memory to be used in at least two filter-window calculations, performed either by the same convolution engine in successive cycles or by different convolution engines in the same cycle. For example, in some hardware implementations of a convolution layer, the convolution engines are configured to process the same weights but different input data each cycle, while the input data for each convolution engine remains the same for at least two cycles, so that the convolution engines use the same input data in at least two consecutive cycles.
-
Publication No.: US10942986B2
Publication Date: 2021-03-09
Application No.: US16179270
Filing Date: 2018-11-02
Applicant: Imagination Technologies Limited
Inventor: Chris Martin, David Hough, Clifford Gibson, Daniel Barnard
Abstract: Hardware implementations of, and methods for processing, a convolution layer of a Deep Neural Network (DNN) that comprise a plurality of convolution engines, wherein the input data and weights are provided to the convolution engines in an order that allows input data and weights read from memory to be used in at least two filter-window calculations, performed either by the same convolution engine in successive cycles or by different convolution engines in the same cycle. For example, in some hardware implementations of a convolution layer, the convolution engines are configured to process the same weights but different input data each cycle, while the input data for each convolution engine remains the same for at least two cycles, so that the convolution engines use the same input data in at least two consecutive cycles.
-