-
Publication Number: US20240177264A1
Publication Date: 2024-05-30
Application Number: US18536581
Application Date: 2023-12-12
Applicant: Intel Corporation
Inventor: Joydeep RAY , Abhishek R. APPU , Altug KOKER , Balaji VEMBU
IPC: G06T1/20 , G06F12/0811 , G06F12/0815 , G06F12/0831 , G06F12/0875 , G06F12/0888 , G06T1/60
CPC classification number: G06T1/20 , G06F12/0811 , G06F12/0815 , G06F12/0831 , G06F12/0875 , G06F12/0888 , G06T1/60 , G06F2212/1024 , G06F2212/302 , G06F2212/455 , G06F2212/621
Abstract: An apparatus and method are described for managing data which is biased towards a processor or a GPU. For example, an apparatus comprises a processor comprising one or more cores, one or more cache levels, and cache coherence controllers to maintain coherent data in the one or more cache levels; a graphics processing unit (GPU) to execute graphics instructions and process graphics data, wherein the GPU and processor cores are to share a virtual address space for accessing a system memory; a GPU memory addressable through the virtual address space shared by the processor cores and GPU; and bias management circuitry to store an indication for whether the data has a processor bias or a GPU bias, wherein if the data has a GPU bias, the data is to be accessed by the GPU without necessarily accessing the processor's cache coherence controllers.
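A minimal C sketch of the bias-tracking idea in this abstract: a per-page bias table records whether each page of GPU memory is in processor (host) bias or GPU bias, and the GPU's access path consults it to decide whether the processor's cache coherence controllers can be bypassed. All names and the one-bit-per-page encoding are illustrative assumptions, not details from the patent.

```c
/* Illustrative sketch only: names (bias_table_t, gpu_access_page, ...) and the
 * one-bit-per-page encoding are assumptions, not details from the patent. */
#include <stddef.h>
#include <stdint.h>

typedef enum { HOST_BIAS = 0, GPU_BIAS = 1 } page_bias_t;

/* One bias bit per page of GPU-attached memory. */
typedef struct {
    uint8_t *bits;       /* packed bias bits, 1 bit per page */
    size_t   num_pages;
} bias_table_t;

static page_bias_t bias_lookup(const bias_table_t *t, size_t page)
{
    return ((t->bits[page / 8] >> (page % 8)) & 1u) ? GPU_BIAS : HOST_BIAS;
}

/* GPU-side access: GPU-biased pages take a local, non-coherent path and skip
 * the host's cache coherence controllers; host-biased pages are routed
 * through the coherent path. */
void gpu_access_page(const bias_table_t *t, size_t page,
                     void (*local_access)(size_t),
                     void (*coherent_access)(size_t))
{
    if (bias_lookup(t, page) == GPU_BIAS)
        local_access(page);      /* fast path: no coherence traffic */
    else
        coherent_access(page);   /* goes through host coherence controllers */
}
```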
-
Publication Number: US20220405877A1
Publication Date: 2022-12-22
Application Number: US17828411
Application Date: 2022-05-31
Applicant: INTEL CORPORATION
Inventor: Abhishek R. APPU , Joydeep RAY , Altug KOKER , Balaji VEMBU , Pattabhiraman K , Matthew B. CALLAWAY
Abstract: An apparatus and method for dynamic provisioning, quality of service, and prioritization in a graphics processor. For example, one embodiment of an apparatus comprises a graphics processing unit (GPU) comprising a plurality of graphics processing resources; slice configuration hardware logic to logically subdivide the graphics processing resources into a plurality of slices; and slice allocation hardware logic to allocate a designated number of slices to each virtual machine (VM) of a plurality of VMs running in a virtualized execution environment, the slice allocation hardware logic to allocate different numbers of slices to different VMs based on graphics processing requirements and/or priorities of each of the VMs.
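A rough sketch of the slice-allocation idea: a fixed pool of GPU slices is divided among VMs according to their priorities. The structure names, the slice count, and the proportional policy are assumptions for illustration; the abstract only states that different numbers of slices are allocated based on each VM's requirements and/or priorities.

```c
/* Illustrative sketch; the proportional policy is an assumption. */
#include <stdio.h>

#define TOTAL_SLICES 8

typedef struct {
    int id;
    int priority;   /* higher value = larger share of GPU slices */
    int slices;     /* filled in by allocate_slices() */
} vm_t;

static void allocate_slices(vm_t *vms, int num_vms)
{
    int total_priority = 0;
    for (int i = 0; i < num_vms; i++)
        total_priority += vms[i].priority;

    int remaining = TOTAL_SLICES;
    for (int i = 0; i < num_vms; i++) {
        /* proportional share, at least one slice per VM */
        int share = (TOTAL_SLICES * vms[i].priority) / total_priority;
        if (share < 1) share = 1;
        if (share > remaining) share = remaining;
        vms[i].slices = share;
        remaining -= share;
    }
}

int main(void)
{
    vm_t vms[] = { {0, 4, 0}, {1, 2, 0}, {2, 1, 0} };
    allocate_slices(vms, 3);
    for (int i = 0; i < 3; i++)
        printf("VM %d -> %d slices\n", vms[i].id, vms[i].slices);
    return 0;
}
```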
-
Publication Number: US20220284539A1
Publication Date: 2022-09-08
Application Number: US17578125
Application Date: 2022-01-18
Applicant: Intel Corporation
Inventor: Hema Chand NALLURI , Balaji VEMBU , Peter DOYLE , Michael APODACA
Abstract: Various embodiments enable loop processing in a command processing block of the graphics hardware. Such hardware may include a processor including a command buffer, and a graphics command parser. The graphics command parser to load graphics commands from the command buffer, parse a first graphics command, store a loop count value associated with the first graphics command, parse a second graphics command and store a loop wrap address based on the second graphics command. The graphics command parser may execute a command sequence identified by the second graphics command, parse a third graphics command, the third graphics command identifying an end of the command sequence, set a new loop count value, and iteratively execute the command sequence using the loop wrap address based on the new loop count value.
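A sketch of how a command parser might implement the described loop handling: one command stores the loop count, another stores the loop wrap address (the start of the command sequence), and the end-of-sequence command decrements the count and jumps back to the wrap address. Opcode names and the buffer layout are assumptions, not taken from the patent; the command buffer is assumed to be well formed.

```c
/* Illustrative sketch; opcodes and layout are assumptions. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

enum { CMD_SET_LOOP_COUNT, CMD_LOOP_START, CMD_LOOP_END, CMD_HALT, CMD_DRAW };

typedef struct { uint32_t opcode; uint32_t operand; } gfx_cmd_t;

void parse_commands(const gfx_cmd_t *buf)
{
    uint32_t loop_count = 0;
    size_t loop_wrap = 0;   /* index of the first command in the loop body */

    for (size_t pc = 0; ; pc++) {
        const gfx_cmd_t *cmd = &buf[pc];
        switch (cmd->opcode) {
        case CMD_SET_LOOP_COUNT:
            loop_count = cmd->operand;   /* store loop count value */
            break;
        case CMD_LOOP_START:
            loop_wrap = pc + 1;          /* store loop wrap address */
            break;
        case CMD_LOOP_END:
            if (loop_count > 0) {
                loop_count--;            /* set new loop count value */
                pc = loop_wrap - 1;      /* wrap back; pc++ re-enters the body */
            }
            break;
        case CMD_HALT:
            return;
        default:
            printf("execute cmd %u\n", cmd->opcode);  /* body command */
            break;
        }
    }
}
```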
-
Publication Number: US20220277412A1
Publication Date: 2022-09-01
Application Number: US17695591
Application Date: 2022-03-15
Applicant: Intel Corporation
Inventor: Joydeep RAY , Abhishek R. APPU , Altug KOKER , Balaji VEMBU
IPC: G06T1/20 , G06F12/0811 , G06F12/0815 , G06F12/0831 , G06F12/0888 , G06F12/0875 , G06T1/60
Abstract: An apparatus and method are described for managing data which is biased towards a processor or a GPU. For example, an apparatus comprises a processor comprising one or more cores, one or more cache levels, and cache coherence controllers to maintain coherent data in the one or more cache levels; a graphics processing unit (GPU) to execute graphics instructions and process graphics data, wherein the GPU and processor cores are to share a virtual address space for accessing a system memory; a GPU memory addressable through the virtual address space shared by the processor cores and GPU; and bias management circuitry to store an indication for whether the data has a processor bias or a GPU bias, wherein if the data has a GPU bias, the data is to be accessed by the GPU without necessarily accessing the processor's cache coherence controllers.
-
Publication Number: US20210271539A1
Publication Date: 2021-09-02
Application Number: US17171790
Application Date: 2021-02-09
Applicant: Intel Corporation
Inventor: Balaji VEMBU , Bryan WHITE , Ankur SHAH , Murali RAMADOSS , David PUFFER , Altug KOKER , Aditya NAVALE , Mahesh NATU
IPC: G06F11/07
Abstract: Apparatus and method for scalable error reporting. For example, one embodiment of an apparatus comprises error detection circuitry to detect an error in a component of a first tile within a tile-based hierarchy of a processing device; error classification circuitry to classify the error and record first error data based on the classification; a first tile interface to combine the first error data with second error data received from one or more other components associated with the first tile to generate first accumulated error data; and a master tile interface to combine the first accumulated error data with second accumulated error data received from at least one other tile interface to generate second accumulated error data and to provide the second accumulated error data to a host executing an application to process the second accumulated error data.
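A simplified sketch of the tile-based error accumulation: each tile interface folds its components' classified error records into an accumulated log, and a master tile interface merges the per-tile logs into the report handed to the host. Struct layout and function names are illustrative assumptions, not details from the patent.

```c
/* Illustrative sketch; record layout and limits are assumptions. */
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRORS 32

typedef struct {
    uint32_t component_id;
    uint32_t error_class;   /* classification recorded with the error */
} error_record_t;

typedef struct {
    error_record_t records[MAX_ERRORS];
    size_t count;
} error_log_t;

/* Tile interface: fold one component's error data into the tile's accumulated log. */
void tile_accumulate(error_log_t *tile_log, const error_record_t *err)
{
    if (tile_log->count < MAX_ERRORS)
        tile_log->records[tile_log->count++] = *err;
}

/* Master tile interface: merge a tile's accumulated log into the host report. */
void master_accumulate(error_log_t *master_log, const error_log_t *tile_log)
{
    for (size_t i = 0; i < tile_log->count && master_log->count < MAX_ERRORS; i++)
        master_log->records[master_log->count++] = tile_log->records[i];
}
```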