-
Publication Number: US20230206383A1
Publication Date: 2023-06-29
Application Number: US17561666
Application Date: 2021-12-23
Applicant: Intel Corporation
Inventor: Karol A. SZERSZEN , Prasoonkumar SURTI , Vidhya KRISHNAN , Aditya NAVALE , Abhishek R. APPU , Altug KOKER , Ronald W. SILVAS
IPC: G06T1/60 , G06T1/20 , G06F12/084
CPC classification number: G06T1/60 , G06T1/20 , G06F12/084 , G06F2212/401
Abstract: A system includes a compression engine that stores compression format information embedded in the compressed data. The compression format information can be included in a header that includes compression control surface (CCS) information. The system includes a shared memory to store compressed data for multiple hardware pipelines, where blocks of the compressed data have a common memory footprint and include the compression header. The compression engine can compress data for storage in the shared memory, including generating the header. It can also decompress data read from the shared memory, identifying the compression format from the header.
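As a rough illustration of the header-in-band idea (a sketch, not the filing's actual layout), the C++ below packs a hypothetical CompressionHeader in front of each payload inside a common fixed footprint, so a reader can recover the format from the block itself; every type name, field width, and format value here is assumed:

#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical format codes; the real set of formats is not given in the abstract.
enum class CompressionFormat : uint8_t { None = 0, DeltaColor = 1, Lossless2to1 = 2 };

struct CompressionHeader {
    CompressionFormat format;   // which codec produced the payload
    uint8_t  ccsBits;           // compression control surface (CCS) state for the block
    uint16_t compressedBytes;   // payload size within the fixed footprint
};

// Pack a compressed block: header first, payload after, padded so every
// block occupies the same common memory footprint.
std::vector<uint8_t> packBlock(CompressionHeader h,
                               const std::vector<uint8_t>& payload,
                               std::size_t footprint) {
    assert(sizeof(h) + payload.size() <= footprint);
    h.compressedBytes = static_cast<uint16_t>(payload.size());
    std::vector<uint8_t> block(footprint, 0);
    std::memcpy(block.data(), &h, sizeof(h));
    std::memcpy(block.data() + sizeof(h), payload.data(), payload.size());
    return block;
}

// On a read, the decompressor recovers the format from the header itself
// rather than from out-of-band metadata.
CompressionHeader readHeader(const std::vector<uint8_t>& block) {
    CompressionHeader h;
    std::memcpy(&h, block.data(), sizeof(h));
    return h;
}

Keeping the format in the header means the decompressor needs no separate metadata lookup: any pipeline that can read the block can also decode it.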
-
Publication Number: US20230099093A1
Publication Date: 2023-03-30
Application Number: US17484782
Application Date: 2021-09-24
Applicant: Intel Corporation
Inventor: Karol A. SZERSZEN , Prasoonkumar SURTI , Abhishek R. APPU
Abstract: A graphics processing apparatus includes graphics processors connected by a network connection, over which the graphics processors pass compressed data. A first graphics processor stores data blocks as compressed data in a memory. The compressed data has blocks of variable size, where the size of a block depends on its compression ratio. A second graphics processor likewise stores data blocks as compressed data. The first graphics processor concatenates a variable number of compressed blocks into a packet of fixed size to send to the second graphics processor; the number of blocks in the packet varies with the compression ratios of those blocks.
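A minimal sketch of the concatenation step, assuming each compressed block is already a self-describing byte vector (kPacketSize and all names are illustrative, not taken from the filing):

#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kPacketSize = 4096;   // fixed packet size on the link (assumed)

// Greedily concatenate variable-size compressed blocks into one
// fixed-size packet; returns the index one past the last block packed.
// Better-compressed (smaller) blocks mean more blocks fit per packet.
std::size_t packBlocks(const std::vector<std::vector<uint8_t>>& blocks,
                       std::size_t first, std::vector<uint8_t>& packet) {
    packet.clear();
    std::size_t i = first;
    while (i < blocks.size() && packet.size() + blocks[i].size() <= kPacketSize) {
        packet.insert(packet.end(), blocks[i].begin(), blocks[i].end());
        ++i;
    }
    packet.resize(kPacketSize, 0);   // pad so every packet is the same size
    return i;
}

A real link format would also need per-block framing (for instance, compression headers like the sketch above) so the receiver can split the packet back into blocks; this sketch only shows the variable-count packing.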
-
Publication Number: US20220405877A1
Publication Date: 2022-12-22
Application Number: US17828411
Application Date: 2022-05-31
Applicant: Intel Corporation
Inventor: Abhishek R. APPU , Joydeep RAY , Altug KOKER , Balaji VEMBU , Pattabhiraman K , Matthew B. CALLAWAY
Abstract: An apparatus and method for dynamic provisioning, quality of service, and prioritization in a graphics processor. For example, one embodiment of an apparatus comprises a graphics processing unit (GPU) comprising a plurality of graphics processing resources; slice configuration hardware logic to logically subdivide the graphics processing resources into a plurality of slices; and slice allocation hardware logic to allocate a designated number of slices to each virtual machine (VM) of a plurality of VMs running in a virtualized execution environment, the slice allocation hardware logic to allocate different numbers of slices to different VMs based on graphics processing requirements and/or priorities of each of the VMs.
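One way to read "different numbers of slices based on requirements and/or priorities" is a weighted split; the sketch below is an assumed software analogy of what the slice allocation logic could compute, with all names and the weighting scheme hypothetical:

#include <cstddef>
#include <numeric>
#include <vector>

// Split 'totalSlices' among VMs in proportion to per-VM weights
// (a stand-in for graphics processing requirements and/or priority).
// Leftover slices from rounding go to the highest-weight VM.
std::vector<std::size_t> allocateSlices(std::size_t totalSlices,
                                        const std::vector<unsigned>& weights) {
    unsigned sum = std::accumulate(weights.begin(), weights.end(), 0u);
    std::vector<std::size_t> slices(weights.size(), 0);
    if (sum == 0) return slices;                      // no runnable VMs
    std::size_t given = 0, best = 0;
    for (std::size_t v = 0; v < weights.size(); ++v) {
        slices[v] = totalSlices * weights[v] / sum;   // floor division
        given += slices[v];
        if (weights[v] > weights[best]) best = v;
    }
    slices[best] += totalSlices - given;              // hand out the remainder
    return slices;
}

For instance, allocateSlices(8, {3, 1}) yields 6 slices for the first VM and 2 for the second.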
-
Publication Number: US20220277412A1
Publication Date: 2022-09-01
Application Number: US17695591
Application Date: 2022-03-15
Applicant: Intel Corporation
Inventor: Joydeep RAY , Abhishek R. APPU , Altug KOKER , Balaji VEMBU
IPC: G06T1/20 , G06F12/0811 , G06F12/0815 , G06F12/0831 , G06F12/0888 , G06F12/0875 , G06T1/60
Abstract: An apparatus and method are described for managing data that is biased towards a processor or a GPU. For example, an apparatus comprises a processor comprising one or more cores, one or more cache levels, and cache coherence controllers to maintain coherent data in the one or more cache levels; a graphics processing unit (GPU) to execute graphics instructions and process graphics data, wherein the GPU and processor cores share a virtual address space for accessing a system memory; a GPU memory addressable through the virtual address space shared by the processor cores and GPU; and bias management circuitry to store an indication of whether the data has a processor bias or a GPU bias. If the data has a GPU bias, the GPU can access it without necessarily involving the processor's cache coherence controllers.
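A bias table is one plausible form for the stored indication; the following sketch assumes one bias entry per page and a simple lookup on the GPU access path (class and function names are hypothetical, not from the filing):

#include <cstddef>
#include <cstdint>
#include <vector>

enum class Bias : uint8_t { Processor = 0, Gpu = 1 };

// One bias entry per page of GPU memory. GPU-biased pages may be read
// and written by the GPU directly, without a round trip through the
// host's cache coherence controllers; processor-biased pages take the
// coherent path.
class BiasTable {
public:
    explicit BiasTable(std::size_t pages) : bias_(pages, Bias::Processor) {}
    Bias get(std::size_t page) const { return bias_[page]; }
    void set(std::size_t page, Bias b) { bias_[page] = b; }   // bias transition
private:
    std::vector<Bias> bias_;
};

// A GPU-side access consults the table first: only GPU-biased pages
// skip the coherence flow.
bool gpuMayBypassCoherence(const BiasTable& t, std::size_t page) {
    return t.get(page) == Bias::Gpu;
}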
-
Publication Number: US20220129323A1
Publication Date: 2022-04-28
Application Number: US17339184
Application Date: 2021-06-04
Applicant: Intel Corporation
Inventor: James VALERIO , Vasanth RANGANATHAN , Joydeep RAY , Rahul A. KULKARNI , Abhishek R. APPU , Jeffery S. BOLES , Hema C. NALLURI
Abstract: Examples are described here that can be used to allocate commands from multiple sources for execution by one or more segments of a processing device. For example, a processing device can be segmented into multiple portions, with each portion allocated to process commands from a particular source. When a single source provides commands, the entire processing device (all segments) can be allocated to process commands from that source. When a second source provides commands, some segments can be allocated to commands from the first source and the remaining segments to commands from the second source. Accordingly, commands from multiple applications can be executed by a processing unit at the same time.
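As an assumed software analogy of the segmentation (the names and the round-robin split are illustrative, not the patent's mechanism), the sketch maps each segment to a command source:

#include <cstddef>
#include <vector>

// Map each device segment to a command source. With one source every
// segment serves it; with more sources the segments are split
// round-robin so all sources execute concurrently.
std::vector<int> assignSegments(std::size_t numSegments, std::size_t numSources) {
    std::vector<int> owner(numSegments, 0);   // default: all segments to source 0
    for (std::size_t s = 0; numSources > 1 && s < numSegments; ++s)
        owner[s] = static_cast<int>(s % numSources);
    return owner;
}

For example, assignSegments(4, 1) gives {0, 0, 0, 0} (all segments to the single source), while assignSegments(4, 2) gives {0, 1, 0, 1}.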
-
Publication Number: US20220058765A1
Publication Date: 2022-02-24
Application Number: US17466591
Application Date: 2021-09-03
Applicant: Intel Corporation
Inventor: Abhishek R. APPU , Eric G. LISKAY , Prasoonkumar SURTI , Sudhakar KAMMA , Karthik VAIDYANATHAN , Rajasekhar PANTANGI , Altug KOKER , Abhishek RHISHEEKESAN , Shashank LAKSHMINARAYANA , Priyanka LADDA , Karol A. SZERSZEN
Abstract: Examples described herein relate to a decompression engine that can request compressed data to be transferred over a memory bus. In some cases, the memory bus width requires multiple transfers to deliver the requested data. When the requested data must be presented in order to the decompression engine, a re-order buffer can store entries of data as they arrive. When the head-of-line entry is received, it can be provided to the decompression engine; when the last entry in a group of one or more entries is received, all entries in the group are presented in order. In some examples, the decompression engine can borrow memory resources allocated to another memory client to expand the size of the re-order buffer available for use. For example, the memory client with excess capacity and the slowest growth rate can be chosen as the one to borrow memory resources from.
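The in-order release rule can be shown directly; a minimal sketch of such a re-order buffer follows, with hypothetical names and a std::map standing in for the hardware storage:

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Returns from the memory bus arrive tagged with a sequence number but
// possibly out of order; the buffer releases entries to the decompressor
// strictly in order, draining a contiguous run whenever the current
// head-of-line entry lands.
class ReorderBuffer {
public:
    // Record an arrival; returns every entry that is now deliverable in order.
    std::vector<std::vector<uint8_t>> arrive(uint64_t seq, std::vector<uint8_t> data) {
        pending_[seq] = std::move(data);
        std::vector<std::vector<uint8_t>> ready;
        while (!pending_.empty() && pending_.begin()->first == head_) {
            ready.push_back(std::move(pending_.begin()->second));
            pending_.erase(pending_.begin());
            ++head_;
        }
        return ready;
    }
private:
    uint64_t head_ = 0;                                   // next sequence to deliver
    std::map<uint64_t, std::vector<uint8_t>> pending_;    // out-of-order arrivals
};

Here arrive(1, b) followed by arrive(0, a) returns nothing on the first call, then both entries in order on the second, matching the head-of-line rule described above.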