METHODS AND SYSTEMS FOR INTER-PIPELINE DATA HAZARD AVOIDANCE

    Publication Number: US20250021340A1

    Publication Date: 2025-01-16

    Application Number: US18439700

    Application Date: 2024-02-12

    Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards identified at compile time. For each identified inter-pipeline data hazard, the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. When a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines, the value of the counter associated therewith is adjusted to indicate that the hazard related to the primary instruction has been resolved. When a secondary instruction is output by the decoder for execution, the secondary instruction is stalled in a queue associated with the appropriate instruction pipeline if at least one counter associated with the primary instructions on which it depends indicates that there is a hazard related to the primary instruction.
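
    As a rough software illustration of the counter mechanism described in this abstract (a minimal sketch only; the names HazardCounterFile, on_primary_issued, on_primary_resolved and secondary_may_proceed are invented for the example and do not come from the patent), one counter can be kept per compile-time-identified hazard: issuing a primary instruction marks the hazard as outstanding, resolution clears it, and a secondary instruction is allowed to proceed only when every counter it is linked to reads zero.

```python
# Minimal software model of compile-time-linked hazard counters.
# All names here are illustrative only, not taken from the patent.

class HazardCounterFile:
    """One counter per inter-pipeline data hazard identified at compile time."""

    def __init__(self, num_counters: int):
        self.counters = [0] * num_counters

    def on_primary_issued(self, counter_id: int) -> None:
        # Decoder outputs a primary instruction: mark its hazard as outstanding.
        self.counters[counter_id] += 1

    def on_primary_resolved(self, counter_id: int) -> None:
        # A pipeline signals that the primary instruction has completed.
        assert self.counters[counter_id] > 0, "resolve without matching issue"
        self.counters[counter_id] -= 1

    def secondary_may_proceed(self, linked_counter_ids) -> bool:
        # A secondary instruction is released only when every counter it is
        # linked to indicates that the corresponding hazard has been resolved.
        return all(self.counters[c] == 0 for c in linked_counter_ids)


if __name__ == "__main__":
    hazards = HazardCounterFile(num_counters=4)
    hazards.on_primary_issued(counter_id=2)       # primary enters a pipeline
    print(hazards.secondary_may_proceed([2]))     # False: secondary must stall
    hazards.on_primary_resolved(counter_id=2)     # primary completes
    print(hazards.secondary_may_proceed([2]))     # True: secondary may issue
```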

    Scheduling tasks using targeted pipelines

    Publication Number: US11366691B2

    Publication Date: 2022-06-21

    Application Number: US17108526

    Application Date: 2020-12-01

    Abstract: A method of scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state, and checking, by an instruction controller, if an ALU targeted by the decoded instruction is a primary instruction pipeline. If the targeted ALU is a primary instruction pipeline, a list associated with the primary instruction pipeline is checked to determine whether the scheduled task is already included in the list. If the scheduled task is already included in the list, the decoded instruction is sent to the primary instruction pipeline.
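
    The list check described in this abstract can be pictured with a small model (an illustrative sketch, not the patented scheduler; PrimaryPipeline, task_list and dispatch are invented names): a decoded instruction is forwarded straight to a primary instruction pipeline only when its scheduled task already appears in the list associated with that pipeline.

```python
# Illustrative model of the per-pipeline task list check; names are invented.

from collections import deque


class PrimaryPipeline:
    """A primary instruction pipeline with a list of tasks it is serving."""

    def __init__(self, name: str):
        self.name = name
        self.task_list = []          # scheduled tasks associated with this pipeline
        self.issue_queue = deque()   # decoded instructions accepted for execution

    def dispatch(self, task_id: int, instruction: str) -> bool:
        # Only instructions from tasks already on the pipeline's list are sent
        # straight to the pipeline; otherwise the caller must handle the task
        # (e.g. add it to the list or reschedule it) before issuing.
        if task_id in self.task_list:
            self.issue_queue.append((task_id, instruction))
            return True
        return False


if __name__ == "__main__":
    alu = PrimaryPipeline("main_alu")
    alu.task_list.append(7)                    # task 7 already targets this pipeline
    print(alu.dispatch(7, "fmul r0, r1, r2"))  # True: sent to the primary pipeline
    print(alu.dispatch(9, "fadd r3, r4, r5"))  # False: task 9 not yet on the list
```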

    Allocation of Memory Resources to SIMD Workgroups

    Publication Number: US20190087229A1

    Publication Date: 2019-03-21

    Application Number: US16132703

    Application Date: 2018-09-17

    Abstract: A memory subsystem for use with a single-instruction multiple-data (SIMD) processor comprising a plurality of processing units configured for processing one or more workgroups each comprising a plurality of SIMD tasks, the memory subsystem comprising: a shared memory partitioned into a plurality of memory portions for allocation to tasks that are to be processed by the processor; and a resource allocator configured to, in response to receiving a memory resource request for first memory resources in respect of a first-received task of a workgroup, allocate to the workgroup a block of memory portions sufficient in size for each task of the workgroup to receive memory resources in the block equivalent to the first memory resources.
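
    The allocation policy in this abstract, in which the first-received task's request sizes a block for the whole workgroup, can be sketched as follows (an assumption-laden illustration; WorkgroupAllocator, its fields and the ceiling-division rounding are invented for the example and do not reflect the actual hardware).

```python
# Sketch of "allocate for the whole workgroup on the first request".
# WorkgroupAllocator and its fields are illustrative names only.

class WorkgroupAllocator:
    def __init__(self, total_portions: int, portion_size: int):
        self.portion_size = portion_size      # bytes per memory portion
        self.free_portions = total_portions   # portions still unallocated
        self.blocks = {}                      # workgroup_id -> portions reserved

    def request(self, workgroup_id: int, task_count: int, bytes_needed: int) -> bool:
        """Handle a memory resource request from the first-received task of a workgroup."""
        if workgroup_id in self.blocks:
            return True                        # block already reserved for this workgroup
        # Round the first task's request up to whole portions, then reserve the
        # same amount for every task of the workgroup as a single block.
        portions_per_task = -(-bytes_needed // self.portion_size)  # ceiling division
        block = portions_per_task * task_count
        if block > self.free_portions:
            return False                       # not enough shared memory available
        self.free_portions -= block
        self.blocks[workgroup_id] = block
        return True


if __name__ == "__main__":
    alloc = WorkgroupAllocator(total_portions=64, portion_size=1024)
    print(alloc.request(workgroup_id=0, task_count=8, bytes_needed=3000))  # True: 8 * 3 portions reserved
    print(alloc.free_portions)                                             # 40 portions remain
```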

    SORTING MEMORY ADDRESS REQUESTS FOR PARALLEL MEMORY ACCESS USING INPUT ADDRESS MATCH MASKS

    Publication Number: US20250156342A1

    Publication Date: 2025-05-15

    Application Number: US19022210

    Application Date: 2025-01-15

    Abstract: An apparatus identifies a set of M output memory addresses from a larger set of N input memory addresses containing a non-unique memory address. A comparator block performs comparisons of memory addresses from the set of N input memory addresses to generate a binary classification dataset that identifies a subset of addresses, where each address in the subset identified by the binary classification dataset is unique within that subset. Each combination logic unit receives a selection of bits of the binary classification dataset and sorts its received selection of bits into an intermediary binary string in which the bits are ordered into a first group identifying addresses belonging to the identified subset and a second group identifying addresses not belonging to the identified subset. Output-generating logic selects between bits belonging to different intermediary binary strings to generate a binary output identifying a set of output memory addresses containing at least one address in the identified subset.
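
    A high-level software analogue of the comparator and sorting stages in this abstract is sketched below (the function names, and the reduction of the combination logic units to a simple partition followed by truncation to M entries, are assumptions made for illustration; the patent describes dedicated hardware logic).

```python
# High-level software analogue of the comparator + sort flow; names are invented.

def classify_first_occurrences(addresses):
    """Comparator stage: mark the first occurrence of each address with a 1 bit.

    Every address marked 1 is unique within the marked subset, even if the
    full input set contains duplicates.
    """
    seen = set()
    mask = []
    for addr in addresses:
        mask.append(1 if addr not in seen else 0)
        seen.add(addr)
    return mask


def select_outputs(addresses, mask, m):
    """Order marked (unique) addresses first, then take M output addresses."""
    marked = [a for a, bit in zip(addresses, mask) if bit == 1]
    unmarked = [a for a, bit in zip(addresses, mask) if bit == 0]
    ordered = marked + unmarked        # analogue of the intermediary binary strings
    return ordered[:m]


if __name__ == "__main__":
    inputs = [0x100, 0x140, 0x100, 0x180, 0x140, 0x1C0, 0x200, 0x240]  # N = 8
    mask = classify_first_occurrences(inputs)
    print(mask)                               # [1, 1, 0, 1, 0, 1, 1, 1]
    print(select_outputs(inputs, mask, m=4))  # four addresses, all unique
```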

    Queues for inter-pipeline data hazard avoidance

    Publication Number: US11698790B2

    Publication Date: 2023-07-11

    Application Number: US17523633

    Application Date: 2021-11-10

    Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards identified at compile time. For each identified inter-pipeline data hazard, the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. When a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines, the value of the counter associated therewith is adjusted to indicate that the hazard related to the primary instruction has been resolved. When a secondary instruction is output by the decoder for execution, the secondary instruction is stalled in a queue associated with the appropriate instruction pipeline if at least one counter associated with the primary instructions on which it depends indicates that there is a hazard related to the primary instruction.
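
    Complementing the counter sketch given for the related application above, the queue behaviour in this abstract can be modelled as a per-pipeline stall queue (PipelineQueue, enqueue and release_ready are invented names, and in-order release from the queue is an assumption of the sketch): a secondary instruction waits at the head of its pipeline's queue while any counter it is linked to still indicates an unresolved hazard.

```python
# Illustrative per-pipeline stall queue for secondary instructions; it reuses
# the counter idea modelled earlier. All names are invented for the example.

from collections import deque


class PipelineQueue:
    """Holds decoded secondary instructions for one execution pipeline."""

    def __init__(self, counters):
        self.counters = counters    # shared list of hazard counters
        self.waiting = deque()      # (instruction, linked_counter_ids) pairs

    def enqueue(self, instruction: str, linked_counter_ids) -> None:
        # Secondary instructions pass through the queue so that they can be
        # stalled while any linked hazard is still outstanding.
        self.waiting.append((instruction, tuple(linked_counter_ids)))

    def release_ready(self):
        # Release instructions in order, stopping at the first one whose
        # linked counters still indicate an unresolved hazard.
        released = []
        while self.waiting:
            instruction, links = self.waiting[0]
            if any(self.counters[c] != 0 for c in links):
                break               # head of queue must stall; order is preserved
            released.append(instruction)
            self.waiting.popleft()
        return released


if __name__ == "__main__":
    counters = [0, 1, 0]            # counter 1 tracks an outstanding primary
    queue = PipelineQueue(counters)
    queue.enqueue("fadd r2, r0, r1", linked_counter_ids=[1])
    print(queue.release_ready())    # []  -> stalled while the hazard is live
    counters[1] -= 1                # primary instruction resolves
    print(queue.release_ready())    # ['fadd r2, r0, r1']
```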
