Accelerating a deep neural network in an FPGA
Abstract:
A method, system, and computer program product for accelerating a deep neural network (DNN) in a field-programmable gate array (FPGA) are disclosed. The method includes receiving a DNN net file and weights, converting the received DNN net file to one or more source files, generating an executable FPGA bit file from the one or more source files, and downloading the executable FPGA bit file from the DNN conversion platform to the FPGA. Converting the received DNN net file and the weights to the one or more source files can further include analyzing the DNN net file to identify a plurality of neural layers, decomposing one or more of the neural layers into one or more operation blocks, and instantiating the one or more source files based on the one or more operation blocks.
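The conversion step described above (analyze the net file, decompose layers into operation blocks, instantiate source files) can be sketched as a small pipeline. Everything below is an illustrative assumption, not the patented implementation: the JSON net-file schema, the layer-to-operation decomposition table, and the function names are all hypothetical.

```python
import json

# Assumed decomposition of layer types into operation blocks;
# a real conversion platform would derive this from the net file
# and the target FPGA architecture.
LAYER_TO_OPS = {
    "conv": ["multiply_accumulate", "bias_add"],
    "pool": ["window_max"],
    "fc": ["matrix_vector_multiply", "bias_add"],
    "relu": ["threshold"],
}

def analyze_net(net_json: str) -> list[dict]:
    """Identify the neural layers declared in a (hypothetical) JSON net file."""
    return json.loads(net_json)["layers"]

def decompose(layers: list[dict]) -> list[dict]:
    """Decompose each neural layer into its operation blocks."""
    blocks = []
    for layer in layers:
        for op in LAYER_TO_OPS[layer["type"]]:
            blocks.append({"layer": layer["name"], "op": op})
    return blocks

def emit_sources(blocks: list[dict]) -> dict[str, str]:
    """Instantiate one source-file stub per operation block (here, HLS-style C++ stubs)."""
    sources = {}
    for b in blocks:
        fname = f"{b['layer']}_{b['op']}.cpp"
        sources[fname] = f"// auto-generated block: {b['op']} for layer {b['layer']}\n"
    return sources

# Example: a two-layer net file produces three operation-block source stubs.
net_file = json.dumps({"layers": [
    {"name": "conv1", "type": "conv"},
    {"name": "relu1", "type": "relu"},
]})
sources = emit_sources(decompose(analyze_net(net_file)))
print(sorted(sources))
```

The emitted source files would then feed a synthesis toolchain that produces the executable FPGA bit file, which is downloaded to the device.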