Invention Grant
- Patent Title: Accelerate deep neural network in an FPGA
- Application No.: US15299626
- Application Date: 2016-10-21
- Publication No.: US10656962B2
- Publication Date: 2020-05-19
- Inventor: Yonghua Lin, Jianbin Tang, Junsong Wang
- Applicant: International Business Machines Corporation
- Applicant Address: US NY Armonk
- Assignee: International Business Machines Corporation
- Current Assignee: International Business Machines Corporation
- Current Assignee Address: US NY Armonk
- Agency: Scully, Scott, Murphy & Presser, P.C.
- Agent: Joseph Petrokaitis, Esq.
- Main IPC: G06F9/46
- IPC: G06F9/46 ; G06N3/063 ; G06N3/10

Abstract:
A method, system and computer program product for accelerating a deep neural network (DNN) in a field-programmable gate array (FPGA) are disclosed. The method includes receiving a DNN net file and weights, converting the received DNN net file to one or more source files, generating an executable FPGA bit file using the one or more source files, and downloading the executable FPGA bit file from the DNN conversion platform to the FPGA. Converting the received DNN net file and the weights to the one or more source files can further include analyzing the DNN net file to identify a plurality of neural layers, decomposing one or more neural layers of the plurality of neural layers into one or more operation blocks, and instantiating the one or more source files based on the one or more operation blocks.
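The claimed flow can be pictured as a small pipeline: analyze the net file into neural layers, decompose each layer into operation blocks, and instantiate a source file per block before synthesis into a bit file. The Python sketch below illustrates that flow on a toy net-file format; the function names, the dict-based layer description, and the pseudo-HDL output are illustrative assumptions, not the patent's actual implementation.

```python
# Minimal sketch of the abstract's conversion flow, assuming a toy net-file
# format (a list of layer dicts). All names here are illustrative only.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class OperationBlock:
    kind: str              # e.g. "matmul", "bias_add", "relu"
    params: Dict[str, int]  # shapes and widths for the block


def analyze_net_file(net: List[dict]) -> List[dict]:
    """Step 1: identify the neural layers in the (toy) DNN net description."""
    return [layer for layer in net if layer.get("type") in {"fc", "conv"}]


def decompose_layer(layer: dict) -> List[OperationBlock]:
    """Step 2: decompose a neural layer into hardware-friendly operation blocks."""
    blocks = [OperationBlock("matmul", {"in": layer["in"], "out": layer["out"]}),
              OperationBlock("bias_add", {"width": layer["out"]})]
    if layer.get("activation") == "relu":
        blocks.append(OperationBlock("relu", {"width": layer["out"]}))
    return blocks


def instantiate_sources(blocks: List[OperationBlock]) -> List[str]:
    """Step 3: instantiate one source-file stub per operation block."""
    return [f"// module {b.kind} with params {b.params}" for b in blocks]


if __name__ == "__main__":
    toy_net = [{"type": "fc", "in": 784, "out": 128, "activation": "relu"},
               {"type": "fc", "in": 128, "out": 10}]
    layers = analyze_net_file(toy_net)
    blocks = [b for layer in layers for b in decompose_layer(layer)]
    for src in instantiate_sources(blocks):
        print(src)
    # A vendor toolchain would then synthesize these sources into an executable
    # FPGA bit file, which is downloaded to the device (not shown here).
```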
Public/Granted literature
- US20180114117A1 ACCELERATE DEEP NEURAL NETWORK IN AN FPGA Public/Granted day: 2018-04-26