SCALE UP AND OUT COMPRESSION
    Invention Application

Publication Number: US20230099093A1

Publication Date: 2023-03-30

Application Number: US17484782

Filing Date: 2021-09-24

    Abstract: A graphics processing apparatus includes graphics processors connected by a network connection, where the graphics processors pass compressed data. A first graphics processor stores data blocks as compressed data in a memory. The compressed data has data blocks of variable size, where a size of a block of compressed data depends on a compression ratio of the block of compressed data. A second graphics processor also stores data blocks as compressed data. The first graphics processor concatenates a variable number of blocks of compressed data into a packet of fixed size to send to the second graphics processor. The packet has a variable number of blocks of compressed data depending on the compression ratios of the multiple blocks of compressed data.
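
    The packing scheme this abstract describes can be illustrated with a short sketch: compressed blocks of variable size are concatenated into a fixed-size packet until the next block no longer fits, so the block count per packet varies with compression ratio. The names below (CompressedBlock, pack, kPacketSize) and the greedy fill policy are assumptions made for illustration, not details taken from the patent.

        #include <cstddef>
        #include <cstdint>
        #include <utility>
        #include <vector>

        constexpr std::size_t kPacketSize = 4096;  // fixed wire size (assumed)

        struct CompressedBlock {
            std::vector<std::uint8_t> bytes;  // size depends on the block's compression ratio
        };

        // Greedily concatenate whole blocks until the next one no longer fits,
        // then pad up to the fixed packet size. Returns the packet payload and
        // the index of the first block that did not fit (start of the next packet).
        std::pair<std::vector<std::uint8_t>, std::size_t>
        pack(const std::vector<CompressedBlock>& blocks, std::size_t start) {
            std::vector<std::uint8_t> payload;
            payload.reserve(kPacketSize);
            std::size_t i = start;
            while (i < blocks.size() &&
                   payload.size() + blocks[i].bytes.size() <= kPacketSize) {
                payload.insert(payload.end(),
                               blocks[i].bytes.begin(), blocks[i].bytes.end());
                ++i;
            }
            payload.resize(kPacketSize);  // pad to the fixed packet size
            return {payload, i};
        }

    Highly compressible blocks are small, so more of them fit per packet; poorly compressible blocks are large, so fewer fit, which is the variable-count behavior the abstract claims.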

    APPARATUS AND METHOD FOR MANAGING DATA BIAS IN A GRAPHICS PROCESSING ARCHITECTURE

Publication Number: US20220277412A1

Publication Date: 2022-09-01

Application Number: US17695591

Filing Date: 2022-03-15

    Abstract: An apparatus and method are described for managing data which is biased towards a processor or a GPU. For example, an apparatus comprises a processor comprising one or more cores, one or more cache levels, and cache coherence controllers to maintain coherent data in the one or more cache levels; a graphics processing unit (GPU) to execute graphics instructions and process graphics data, wherein the GPU and processor cores are to share a virtual address space for accessing a system memory; a GPU memory addressable through the virtual address space shared by the processor cores and GPU; and bias management circuitry to store an indication for whether the data has a processor bias or a GPU bias, wherein if the data has a GPU bias, the data is to be accessed by the GPU without necessarily accessing the processor's cache coherence controllers.
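
    A minimal sketch of the bias check described above, assuming a per-page bias table consulted on each GPU access: GPU-biased pages are read directly from GPU memory, while host-biased pages go through the processor's cache coherence controllers. The names (BiasTable, gpu_can_skip_host_coherence) and the page-granular lookup are illustrative assumptions, not details from the patent.

        #include <cstdint>
        #include <unordered_map>

        enum class Bias { Host, Gpu };

        // Per-page bias table; pages default to host (processor) bias when absent.
        class BiasTable {
        public:
            Bias lookup(std::uint64_t page) const {
                auto it = table_.find(page);
                return it == table_.end() ? Bias::Host : it->second;
            }
            void set(std::uint64_t page, Bias b) { table_[page] = b; }
        private:
            std::unordered_map<std::uint64_t, Bias> table_;
        };

        // GPU-side access decision: a GPU-biased page can be accessed without
        // consulting the host's cache coherence controllers; a host-biased page
        // must take the coherent path first.
        bool gpu_can_skip_host_coherence(const BiasTable& bias, std::uint64_t page) {
            return bias.lookup(page) == Bias::Gpu;
        }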
