Abstract:
In an embodiment, a model is sliced into a plurality of slices. A slice in the plurality of slices is selected. A portion of code that corresponds to the selected slice is identified from code generated from the model. The identified code is verified to be equivalent to the selected slice. Equivalence may include equivalent functionality, equivalent data types, equivalent performance, and/or other forms of equivalence between the selected slice and the identified generated code.
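The following is a minimal, hypothetical sketch (in Python, with invented names; it is not the claimed method) of the flow the abstract describes: a model represented as a dependency graph is sliced backward from a selected output, the slice is mapped to a stand-in for its generated code, and functional equivalence is checked on sample inputs.

# Minimal sketch (hypothetical names; not the claimed method): slice a model,
# map the slice to a stand-in for its generated code, and check equivalence.

def slice_model(blocks, dependencies, output_block):
    """Backward slice: keep only the blocks the chosen output depends on."""
    keep, stack = set(), [output_block]
    while stack:
        block = stack.pop()
        if block not in keep:
            keep.add(block)
            stack.extend(dependencies.get(block, []))
    return {name: fn for name, fn in blocks.items() if name in keep}

def check_equivalence(slice_fn, generated_fn, test_inputs, tol=1e-9):
    """Functional equivalence: slice and generated code agree on all test inputs."""
    return all(abs(slice_fn(x) - generated_fn(x)) <= tol for x in test_inputs)

# Toy model: gain feeds offset; an unrelated block is dropped by the slice.
blocks = {"gain": lambda x: 2.0 * x, "offset": lambda x: x + 1.0, "unused": lambda x: -x}
dependencies = {"offset": ["gain"], "gain": [], "unused": []}
sliced = slice_model(blocks, dependencies, "offset")

def slice_behavior(x):          # behavior of the selected slice
    return blocks["offset"](blocks["gain"](x))

def generated_portion(x):       # stand-in for the identified generated code
    return 2.0 * x + 1.0

print(sorted(sliced))                                                          # ['gain', 'offset']
print(check_equivalence(slice_behavior, generated_portion, [0.0, 1.5, -3.0]))  # True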
Abstract:
Systems and methods generate code from a source program where the generated code may be compiled and executed on a Graphics Processing Unit (GPU). A parallel loop analysis check may be performed on regions of the source program identified for parallelization. One or more optimizations that convert mathematical operations into a parallel form may also be applied to the source program. The source program may be partitioned into segments for execution on a host and a device. Kernels may be created for the segments to be executed on the device. The size of the kernels may be determined, and memory transfers between the host and the device may be optimized.
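As an illustration only (hypothetical helper functions in Python; not the described system), the sketch below shows two of these steps in miniature: a simplified parallel-loop analysis that rejects loops with a loop-carried dependence, and a size-based decision that partitions segments between the host and the device.

# Illustration only (hypothetical helpers; not the described system): a crude
# parallel-loop check plus a size-based host/device partitioning decision.

def is_parallelizable(accesses):
    """Each access is (array, offset, mode), with the offset relative to the
    loop index. Under this simplified model, a loop-carried dependence exists
    when an array written at one offset is also touched at a different offset."""
    writes = {(array, offset) for array, offset, mode in accesses if mode == "write"}
    for array, offset in writes:
        for other_array, other_offset, _ in accesses:
            if other_array == array and other_offset != offset:
                return False
    return True

def partition(segments, min_device_iterations=10_000):
    """Map large parallel-safe segments to the device; keep the rest on the host."""
    plan = {}
    for name, (iterations, parallel_ok) in segments.items():
        on_device = parallel_ok and iterations >= min_device_iterations
        plan[name] = "device" if on_device else "host"
    return plan

loop_a = [("b", 0, "write"), ("a", 0, "read")]    # b[i] = f(a[i]): independent iterations
loop_b = [("b", 0, "write"), ("b", -1, "read")]   # b[i] = f(b[i-1]): loop-carried dependence

segments = {
    "loop_a": (1_000_000, is_parallelizable(loop_a)),
    "loop_b": (1_000_000, is_parallelizable(loop_b)),
    "setup":  (10, True),
}
print(partition(segments))   # {'loop_a': 'device', 'loop_b': 'host', 'setup': 'host'}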
Abstract:
Systems and methods may automatically generate code for deep learning networks. The systems and methods may provide a code generation framework for generating target-specific code. The code generation framework may include one or more predefined class hierarchies for constructing objects of the generated code. The objects of the class hierarchies may provide an interface to predefined libraries of deep learning functions optimized for use on a target platform. The systems and methods may perform one or more optimizations on the code being generated.
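The sketch below illustrates, with hypothetical class and method names in Python, the kind of class hierarchy the abstract refers to: layer objects constructed by the generated code dispatch through a backend interface, where a concrete backend stands in for an optimized, target-specific deep learning library.

# Sketch with hypothetical names: generated layer objects dispatch through a
# backend interface; a concrete backend stands in for an optimized library.

class Backend:
    """Interface a target-specific library wrapper would implement."""
    def conv2d(self, x, weights):
        raise NotImplementedError
    def relu(self, x):
        raise NotImplementedError

class ReferenceBackend(Backend):
    """Portable fallback; a real target would call a vendor DNN library instead."""
    def conv2d(self, x, weights):
        return [sum(w * v for w, v in zip(weights, x))]   # toy dot-product "convolution"
    def relu(self, x):
        return [max(0.0, v) for v in x]

class Layer:
    def __init__(self, backend):
        self.backend = backend
    def forward(self, x):
        raise NotImplementedError

class ConvLayer(Layer):
    def __init__(self, backend, weights):
        super().__init__(backend)
        self.weights = weights
    def forward(self, x):
        return self.backend.conv2d(x, self.weights)

class ReluLayer(Layer):
    def forward(self, x):
        return self.backend.relu(x)

# The generated network is an ordered list of layer objects bound to one backend.
backend = ReferenceBackend()
network = [ConvLayer(backend, weights=[0.5, -1.0]), ReluLayer(backend)]
x = [2.0, 3.0]
for layer in network:
    x = layer.forward(x)
print(x)   # conv gives [-2.0]; relu clamps it to [0.0]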
Abstract:
Systems and methods quantize an application having a trained Deep Neural Network (DNN) for deployment on target hardware. The application may be instrumented to observe data values generated during execution of the application. Statistics may be generated for the observed data values and presented in a visualization tool. The application may be quantized through a rules-based approach. The quantization may be based on the statistics and on constraints imposed by resources available at the target hardware. The systems and methods may present the proposed data types resulting from the quantization and may create a quantized version of the application incorporating the proposed data types. The systems and methods may generate performance data to validate the quantized version of the application. Changes to the rules may be made and the quantization process repeated if the performance is not satisfactory.
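The following Python sketch illustrates the general shape of such a flow under simplified assumptions (hypothetical names and a single invented rule; not the claimed quantization scheme): min/max statistics are collected from observed values, a fixed-point data type is proposed within a word-length budget, and the error introduced by the proposed type is measured as a validation metric.

# Hedged sketch (hypothetical names and a single invented rule; not the claimed
# quantization scheme): collect min/max statistics, propose a fixed-point type
# within a word-length budget, and measure the error it introduces.

import math

def collect_stats(values):
    return {"min": min(values), "max": max(values)}

def propose_fixed_point(stats, word_length=16):
    """Rule: spend enough integer bits to cover the observed range (plus a sign
    bit), and give the remaining bits to the fraction."""
    magnitude = max(abs(stats["min"]), abs(stats["max"]), 1e-12)
    integer_bits = max(0, math.ceil(math.log2(magnitude + 1)))
    fraction_bits = word_length - 1 - integer_bits
    return {"word_length": word_length, "fraction_length": fraction_bits}

def quantize(values, dtype):
    scale = 2.0 ** dtype["fraction_length"]
    return [round(v * scale) / scale for v in values]

# Values observed while the instrumented application runs (e.g. one layer's activations).
observed = [-3.2, 0.05, 7.9, 1.25]
stats = collect_stats(observed)
dtype = propose_fixed_point(stats, word_length=16)
quantized = quantize(observed, dtype)
max_error = max(abs(q - v) for q, v in zip(quantized, observed))
print(dtype)       # {'word_length': 16, 'fraction_length': 11}
print(max_error)   # validation metric for the proposed data type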