-
1.
Publication number: EP3234906A4
Publication date: 2018-06-13
Application number: EP15870522
Filing date: 2015-10-21
Applicant: INTEL CORP
Inventor: RAO JAYANTH N , LANKA PAVAN K
CPC classification number: G06T1/20 , G06F9/46 , G06F9/5038 , G06T2200/28 , H03K99/00
Abstract: A mechanism is described for facilitating dynamic pipelining of workload executions at graphics processing units on computing devices. A method of embodiments, as described herein, includes generating a command buffer having a plurality of kernels relating to a plurality of workloads to be executed at a graphics processing unit (GPU), and pipelining the workloads to be processed at the GPU, where pipelining includes scheduling each kernel to be executed on the GPU based on at least one of availability of resource threads and status of one or more dependency events relating to each kernel in relation to other kernels of the plurality of kernels.
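The pipelining idea in the abstract — dispatch each kernel from the command buffer once its dependency events have signaled and a resource thread is free — can be sketched as a simple wave scheduler. This is an illustrative model only; the function name, data shapes, and the `free_threads` parameter are assumptions, not taken from the patent.

```python
def pipeline_kernels(kernels, dependencies, free_threads=2):
    """Sketch: schedule command-buffer kernels in dispatch waves.

    kernels:      kernel names in command-buffer order.
    dependencies: dict mapping a kernel to the set of kernels whose
                  completion events it waits on.
    free_threads: number of resource threads available per wave.
    Returns the list of waves in dispatch order.
    """
    completed, schedule = set(), []
    remaining = list(kernels)
    while remaining:
        # A kernel is ready once all of its dependency events have signaled.
        ready = [k for k in remaining if dependencies.get(k, set()) <= completed]
        if not ready:
            raise RuntimeError("cyclic dependencies in command buffer")
        wave = ready[:free_threads]  # dispatch bounded by available threads
        schedule.append(wave)
        completed.update(wave)
        remaining = [k for k in remaining if k not in completed]
    return schedule
```

For example, with kernels A–D where C waits on A, and D waits on B and C, two resource threads yield the waves `[A, B]`, `[C]`, `[D]` — A and B overlap, while C and D serialize behind their events.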
-
2.
Publication number: EP3198551A4
Publication date: 2018-03-28
Application number: EP15843173
Filing date: 2015-09-10
Applicant: INTEL CORP
Inventor: RAO JAYANTH N , LANKA PAVAN K , MROZEK MICHAL
Abstract: An apparatus and method are described for executing workloads without host intervention. For example, one embodiment of an apparatus comprises: a host processor; and a graphics processing unit (GPU) to execute a hierarchical workload responsive to one or more commands issued by the host processor, the hierarchical workload comprising a parent workload and a plurality of child workloads interconnected in a logical graph structure; and a scheduler kernel implemented by the GPU to schedule execution of the plurality of child workloads without host intervention, the scheduler kernel to evaluate conditions required for execution of the child workloads and determine an order in which to execute the child workloads on the GPU based on the evaluated conditions; the GPU to execute the child workloads in the order determined by the scheduler kernel and to provide results of parent and child workloads to the host processor following execution of all of the child workloads.
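The scheduler kernel described here can be modeled as a walk over the parent/child workload graph: run the parent, then repeatedly dispatch any child whose prerequisite children have finished, returning results to the host only once every child completes. A minimal sketch follows; the function signature, `edges` representation, and `execute` callback are hypothetical, standing in for the GPU-resident scheduling logic.

```python
def run_hierarchical_workload(parent, children, edges, execute):
    """Sketch of a GPU-resident scheduler kernel.

    parent:   the parent workload.
    children: the child workloads of the logical graph.
    edges:    dict mapping a child to the set of children it depends on.
    execute:  callback standing in for running a workload on the GPU.
    Returns all workload results, delivered to the host only after
    every child has executed (no host intervention in between).
    """
    results = {parent: execute(parent)}  # parent runs first
    done = set()
    while len(done) < len(children):
        # Evaluate conditions: a child is runnable once the children
        # it depends on have all finished.
        runnable = [c for c in children
                    if c not in done and edges.get(c, set()) <= done]
        if not runnable:
            raise RuntimeError("cycle in child-workload graph")
        for c in runnable:
            results[c] = execute(c)
            done.add(c)
    return results
```

The key design point the abstract claims is that this loop lives on the GPU itself, so the host issues one command batch and receives results only at the end, rather than mediating each child dispatch.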
-