Cooperative workgroup scheduling and context prefetching based on predicted modification of signal values

    Publication No.: US11481250B2

    Publication Date: 2022-10-25

    Application No.: US16024244

    Application Date: 2018-06-29

    Abstract: A first workgroup is preempted in response to threads in the first workgroup executing a first wait instruction including a first value of a signal and a first hint indicating a type of modification for the signal. The first workgroup is scheduled for execution on a processor core based on a first context after preemption in response to the signal having the first value. A second workgroup is scheduled for execution on the processor core based on a second context in response to preempting the first workgroup and in response to the signal having a second value. A third context is prefetched into registers of the processor core based on the first hint and the second value. The first context is stored in a first portion of the registers and the second context is prefetched into a second portion of the registers prior to preempting the first workgroup.
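
    As a rough illustration of the flow the abstract describes (and not the patented hardware mechanism), the C++ sketch below models a scheduler that reads the wait instruction's signal value and modification hint, predicts the signal's next value, and prefetches the matching context into a spare register partition. Every type and function name here (Hint, Context, predictAndPrefetch, and so on) is hypothetical.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical modification hints carried by the wait instruction.
enum class Hint { Increment, Decrement, SetValue };

struct Context { int id; };

struct RegisterFile {
    Context slotA{-1};   // holds the waiting workgroup's saved context
    Context slotB{-1};   // holds the context prefetched while it waits
};

// Predict the next signal value from the hint, then prefetch the context
// that will become runnable when the signal reaches that value.
Context predictAndPrefetch(Hint hint, std::uint32_t current, std::uint32_t target) {
    std::uint32_t predicted = current;
    switch (hint) {
        case Hint::Increment: predicted = current + 1; break;
        case Hint::Decrement: predicted = current - 1; break;
        case Hint::SetValue:  predicted = target;      break;
    }
    // In a real scheduler this would look up which preempted workgroup is
    // waiting on `predicted` and stream its context into spare registers.
    return Context{static_cast<int>(predicted)};
}

int main() {
    RegisterFile regs;
    regs.slotA = Context{1};   // first workgroup's context stays resident
    regs.slotB = Context{2};   // second workgroup prefetched before preemption
    Context third = predictAndPrefetch(Hint::Increment, /*current=*/41, /*target=*/0);
    std::cout << "prefetching context for predicted signal " << third.id << "\n";
}
```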

    PARALLEL PROCESSING FOR SPARSE MATRIX LINEAR ALGEBRA

    Publication No.: US20250123846A1

    Publication Date: 2025-04-17

    Application No.: US18485502

    Application Date: 2023-10-12

    Abstract: A processing unit includes a plurality of processing cores and is configured to arrange a sparse matrix for parallel processing by the cores on different rows of the matrix at least in part by calculating a respective quantity of non-zero elements in each row and assigning each row to a respective collection according to the respective quantity of non-zero elements for the row, wherein the processing unit is configured to assign at least one first row of the sparse matrix to a respective collection in parallel with assigning at least one second row of the sparse matrix to a respective collection, and performing at least one mathematical operation on at least a first collection of the plurality of collections in parallel with performing the at least one mathematical operation on at least a second collection of the plurality of collections.
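
    The row-assignment step resembles the familiar technique of binning CSR rows by their non-zero counts so rows of similar length can be processed together. The sketch below is a host-side C++ approximation rather than the claimed parallel implementation; CsrMatrix, bucketFor, and binRows are illustrative names.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal CSR representation: rowPtr has numRows + 1 entries.
struct CsrMatrix {
    std::vector<std::size_t> rowPtr;
    std::vector<std::size_t> colIdx;
    std::vector<double> values;
};

// Bucket index chosen from the row's non-zero count (powers of two here).
std::size_t bucketFor(std::size_t nnz) {
    std::size_t b = 0;
    while ((std::size_t{1} << b) < nnz) ++b;
    return b;
}

// Assign each row to a collection according to its non-zero count.
// On the processing unit this loop would itself run in parallel across rows.
std::vector<std::vector<std::size_t>> binRows(const CsrMatrix& m, std::size_t numBuckets) {
    std::vector<std::vector<std::size_t>> buckets(numBuckets);
    const std::size_t numRows = m.rowPtr.size() - 1;
    for (std::size_t row = 0; row < numRows; ++row) {
        std::size_t nnz = m.rowPtr[row + 1] - m.rowPtr[row];
        std::size_t b = std::min(bucketFor(nnz), numBuckets - 1);
        buckets[b].push_back(row);
    }
    // Each bucket can then be handed to a different group of cores, and the
    // mathematical operation (e.g. SpMV) runs on the buckets in parallel.
    return buckets;
}
```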

    SYSTEMS AND METHODS FOR DYNAMIC RESOURCE MANAGEMENT

    Publication No.: US20250103395A1

    Publication Date: 2025-03-27

    Application No.: US18476071

    Application Date: 2023-09-27

    Abstract: A computer-implemented method for dynamic resource management can include evaluating, by at least one processor, whether a priority of one or more processes associated with a request for one or more shared resources meets a threshold condition. The method can additionally include determining, by the at least one processor and in response to an evaluation that the priority meets the threshold condition, whether the one or more shared resources is available to meet the request. The method can further include completing, by the at least one processor and in response to a determination that the one or more shared resources is available, execution of the one or more processes. Various other methods, systems, and computer-readable media are also disclosed.
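
    The decision flow in the abstract, a priority gate followed by an availability check and then execution, can be summarized in a few lines of C++. The sketch below is only a schematic reading of that flow; the Request, SharedPool, and tryRun names and the unit-counting resource model are assumptions.

```cpp
#include <functional>
#include <iostream>

struct Request {
    int priority;       // priority of the requesting process
    int unitsNeeded;    // shared-resource units the request asks for
};

struct SharedPool {
    int unitsFree;
    bool reserve(int units) {
        if (units > unitsFree) return false;
        unitsFree -= units;
        return true;
    }
};

// Returns true if the process was run, false if it was deferred.
bool tryRun(const Request& req, SharedPool& pool, int priorityThreshold,
            const std::function<void()>& process) {
    if (req.priority < priorityThreshold) return false;   // priority gate
    if (!pool.reserve(req.unitsNeeded))   return false;   // availability gate
    process();                                            // complete execution
    pool.unitsFree += req.unitsNeeded;                     // release the resources
    return true;
}

int main() {
    SharedPool pool{4};
    bool ran = tryRun(Request{7, 2}, pool, /*priorityThreshold=*/5,
                      [] { std::cout << "process completed\n"; });
    std::cout << (ran ? "granted" : "deferred") << "\n";
}
```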

    Method and apparatus for inter-lane thread migration

    Publication No.: US10409610B2

    Publication Date: 2019-09-10

    Application No.: US15010093

    Application Date: 2016-01-29

    Abstract: Briefly, methods and apparatus migrate a software thread from one wavefront executing on one execution unit to another wavefront executing on another execution unit, where both execution units are associated with a compute unit of a processing device such as, for example, a GPU. The methods and apparatus may execute compiled dynamic thread migration swizzle buffer instructions that, when executed, allow access to a dynamic thread migration swizzle buffer used to migrate register context information when migrating software threads. The register context information may be located in one or more locations of a register file prior to being stored into the dynamic thread migration swizzle buffer. The methods and apparatus may also return the register context information from the dynamic thread migration swizzle buffer to one or more different register file locations of the register file.
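
    The swizzle buffer can be pictured as a staging area that decouples where register context is written from which lanes it is read back into. The C++ model below captures only that idea; the patent concerns compiled GPU instructions rather than host code, and the lane count, LaneContext layout, and migrate routine are all hypothetical.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kLanes = 8;   // lanes per wavefront in this toy model

// Per-lane register context for a single software thread.
struct LaneContext {
    std::uint32_t vgpr0;
    std::uint32_t vgpr1;
};

using RegisterFile  = std::array<LaneContext, kLanes>;
using SwizzleBuffer = std::array<LaneContext, kLanes>;

// Stage each lane's context into the swizzle buffer (the "store" step),
// then pull it back into a different lane of the destination register file
// according to a migration map (the "load" step).
void migrate(const RegisterFile& src, RegisterFile& dst,
             const std::array<std::size_t, kLanes>& dstLaneFor) {
    SwizzleBuffer buf{};
    for (std::size_t lane = 0; lane < kLanes; ++lane)
        buf[lane] = src[lane];                    // store context into the buffer
    for (std::size_t lane = 0; lane < kLanes; ++lane)
        dst[dstLaneFor[lane]] = buf[lane];        // return it to new lane locations
}
```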

    DATA REMAPPING FOR HETEROGENEOUS PROCESSOR

    Publication No.: US20150106587A1

    Publication Date: 2015-04-16

    Application No.: US14055221

    Application Date: 2013-10-16

    Abstract: A processor remaps stored data and the corresponding memory addresses of the data for different processing units of a heterogeneous processor. The processor includes a data remap engine that changes the format of the data (that is, how the data is physically arranged in segments of memory) in response to a transfer of the data from system memory to a local memory hierarchy of an accelerated processing module (APM) of the processor. The APM's local memory hierarchy includes an address remap engine that remaps the memory addresses of the data at the local memory hierarchy so that the data can be accessed by routines at the APM that are unaware of the data remapping. By remapping the data, and the corresponding memory addresses, the APM can perform operations on the data more efficiently.

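    One concrete example of such a format change, offered only as an analogy for the remap engines above, is copying a row-major matrix into a tiled layout in the accelerator's local memory while an address-remap function hides the new layout from the consuming routine. The tile size and function names below are assumptions.

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t kTile = 4;   // tile edge, chosen for illustration

// "Data remap engine": copy a row-major matrix into a tiled layout as it
// moves into the accelerator's local memory. For brevity this assumes rows
// and cols are multiples of kTile.
std::vector<float> remapToTiled(const std::vector<float>& rowMajor,
                                std::size_t rows, std::size_t cols) {
    std::vector<float> tiled(rows * cols);
    std::size_t tilesPerRow = cols / kTile;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c) {
            std::size_t tile   = (r / kTile) * tilesPerRow + (c / kTile);
            std::size_t within = (r % kTile) * kTile + (c % kTile);
            tiled[tile * kTile * kTile + within] = rowMajor[r * cols + c];
        }
    return tiled;
}

// "Address remap engine": translate the logical (row, col) address a routine
// uses into the tiled location, so callers stay unaware of the remapping.
std::size_t remapAddress(std::size_t r, std::size_t c, std::size_t cols) {
    std::size_t tilesPerRow = cols / kTile;
    std::size_t tile   = (r / kTile) * tilesPerRow + (c / kTile);
    std::size_t within = (r % kTile) * kTile + (c % kTile);
    return tile * kTile * kTile + within;
}
```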

    Register based SIMD lookup table operations

    Publication No.: US12299445B2

    Publication Date: 2025-05-13

    Application No.: US17833504

    Application Date: 2022-06-06

    Abstract: An approach is provided for implementing register based single instruction, multiple data (SIMD) lookup table operations. According to the approach, an instruction set architecture (ISA) can support one or more SIMD instructions that enable vectors or multiple values in source data registers to be processed in parallel using a lookup table or truth table stored in one or more function registers. The SIMD instructions can be flexibly configured to support functions with inputs and outputs of various sizes and data formats. Various approaches are also described for supporting very large lookup tables that span multiple registers.
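
    The effect of holding a small lookup or truth table in a function register and indexing it per SIMD element can be mimicked on the host as shown below; a real ISA would do this with a single instruction over vector registers, and the 16-entry table width and lane count chosen here are arbitrary assumptions.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kVectorLanes = 16;   // elements processed "in parallel"

// A 16-entry, 8-bit lookup table packed as if it lived in a function register.
using FunctionRegister = std::array<std::uint8_t, 16>;
using Vector           = std::array<std::uint8_t, kVectorLanes>;

// Per-lane table lookup: each source element supplies a 4-bit index into the
// table, mirroring what a register-based SIMD lookup instruction would do.
Vector lookup(const Vector& indices, const FunctionRegister& table) {
    Vector out{};
    for (std::size_t lane = 0; lane < kVectorLanes; ++lane)
        out[lane] = table[indices[lane] & 0x0F];   // mask to the table's index width
    return out;
}

// Example function register: a 4-input truth table for "at least two bits set".
// lookup(indices, kMajority) applies the truth table to all lanes at once.
constexpr FunctionRegister kMajority = {0,0,0,1, 0,1,1,1, 0,1,1,1, 1,1,1,1};
```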
