Quality of service techniques in distributed graphics processor

    Publication Number: US12265844B2

    Publication Date: 2025-04-01

    Application Number: US17468312

    Filing Date: 2021-09-07

    Applicant: Apple Inc.

    Abstract: Disclosed techniques relate to circuitry configured to aggregate and report usage information in a distributed processor (e.g., a GPU). In some embodiments, graphics processor circuitry includes at least first and second portions that are respectively configured to execute sets of graphics work. First utilization circuitry may track execution time for sets of graphics work on the first portion of the graphics processor circuitry and second utilization circuitry may track execution time for sets of graphics work on the second portion of the graphics processor circuitry. Command queue circuitry may store multiple different command queues. Control circuitry may access the first and second utilization circuitry and aggregate utilization data on a per-command-queue basis, where for a given command queue, the aggregated utilization data indicates respective utilization of the first and second portions of the graphics processor circuitry. The control circuitry may provide the aggregated per-command-queue utilization data in software-accessible registers.
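
    The aggregation described in this abstract can be pictured with a small model. The sketch below is illustrative only; the names (UtilizationAggregator, CommandQueueStats) and two-portion layout are assumptions, not the patented circuitry. It rolls per-portion busy time up per command queue and reads it back the way software would read the reported registers.

```cpp
// Minimal sketch (assumed names, not the patented circuitry): two per-portion
// execution-time counters whose busy cycles are aggregated per command queue.
#include <array>
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct UtilizationCounter {           // per-portion tracking circuitry
    uint64_t busy_cycles = 0;
    void track(uint64_t cycles) { busy_cycles += cycles; }
};

struct CommandQueueStats {            // aggregated, software-accessible view
    std::array<uint64_t, 2> per_portion_busy{};  // index = GPU portion
};

class UtilizationAggregator {
public:
    // Record that `cycles` of work from `queue_id` ran on `portion` (0 or 1).
    void record(uint32_t queue_id, unsigned portion, uint64_t cycles) {
        counters_[portion].track(cycles);
        stats_[queue_id].per_portion_busy[portion] += cycles;
    }
    // Read back the aggregated utilization for one command queue,
    // analogous to reading the software-accessible registers.
    CommandQueueStats read(uint32_t queue_id) const {
        auto it = stats_.find(queue_id);
        return it == stats_.end() ? CommandQueueStats{} : it->second;
    }
private:
    std::array<UtilizationCounter, 2> counters_;             // per portion
    std::unordered_map<uint32_t, CommandQueueStats> stats_;  // per queue
};

int main() {
    UtilizationAggregator agg;
    agg.record(/*queue*/7, /*portion*/0, 1200);
    agg.record(7, 1, 800);
    auto s = agg.read(7);
    std::printf("queue 7: portion0=%llu portion1=%llu\n",
                (unsigned long long)s.per_portion_busy[0],
                (unsigned long long)s.per_portion_busy[1]);
}
```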

    Quality of Service Techniques in Distributed Graphics Processor

    Publication Number: US20230075531A1

    Publication Date: 2023-03-09

    Application Number: US17468312

    Filing Date: 2021-09-07

    Applicant: Apple Inc.

    Abstract: Disclosed techniques relate to circuitry configured to aggregate and report usage information in a distributed processor (e.g., a GPU). In some embodiments, graphics processor circuitry includes at least first and second portions that are respectively configured to execute sets of graphics work. First utilization circuitry may track execution time for sets of graphics work on the first portion of the graphics processor circuitry and second utilization circuitry may track execution time for sets of graphics work on the second portion of the graphics processor circuitry. Command queue circuitry may store multiple different command queues. Control circuitry may access the first and second utilization circuitry and aggregate utilization data on a per-command-queue basis, where for a given command queue, the aggregated utilization data indicates respective utilization of the first and second portions of the graphics processor circuitry. The control circuitry may provide the aggregated per-command-queue utilization data in software-accessible registers.

    Graphics Hardware Driven Pause for Quality of Service Adjustment

    Publication Number: US20200104180A1

    Publication Date: 2020-04-02

    Application Number: US16145573

    Filing Date: 2018-09-28

    Applicant: Apple Inc.

    Abstract: In general, embodiments are disclosed for tracking and allocating graphics processor hardware resources. More particularly, a graphics hardware resource allocation system is able to generate a priority list for a plurality of data masters for a graphics processor based on a comparison between current utilizations for the data masters and target utilizations for the data masters. The graphics hardware resource allocation system designates, based on the priority list, a first data master with a higher priority to submit work to the graphics processor compared to a second data master. The graphics hardware resource allocation system determines a stall counter value for the first data master and generates a notification to pause work for the second data master based on the stall counter value.
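
    As a rough illustration of the scheme in this abstract, the following sketch ranks data masters by how far their current utilization falls below their target and pauses the lower-priority data master once a stall counter crosses a threshold. The names, structures, and the threshold value are assumptions for illustration, not the patent's actual interface.

```cpp
// Minimal sketch (illustrative assumptions): priority list from current vs.
// target utilization; a stall counter triggers pausing the lower-priority one.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct DataMaster {
    std::string name;
    double current_util;   // fraction of GPU time currently used
    double target_util;    // fraction of GPU time it should receive
    int    stall_count = 0;
};

// Higher priority = furthest below its target utilization.
std::vector<DataMaster*> build_priority_list(std::vector<DataMaster>& dms) {
    std::vector<DataMaster*> order;
    for (auto& dm : dms) order.push_back(&dm);
    std::sort(order.begin(), order.end(),
              [](const DataMaster* a, const DataMaster* b) {
                  return (a->target_util - a->current_util) >
                         (b->target_util - b->current_util);
              });
    return order;
}

int main() {
    std::vector<DataMaster> dms = {
        {"render",  0.70, 0.50},
        {"compute", 0.10, 0.40},
    };
    auto order = build_priority_list(dms);
    DataMaster* high = order.front();   // allowed to submit work first
    DataMaster* low  = order.back();

    // If the higher-priority data master keeps stalling, request a pause of
    // the lower-priority one (the "notification" in the abstract).
    const int kStallThreshold = 3;      // assumed value
    high->stall_count = 4;
    if (high->stall_count >= kStallThreshold)
        std::printf("pause %s so %s can make progress\n",
                    low->name.c_str(), high->name.c_str());
}
```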

    Starvation free scheduling of prioritized workloads on the GPU

    Publication Number: US09747659B2

    Publication Date: 2017-08-29

    Application Number: US14851629

    Filing Date: 2015-09-11

    Applicant: Apple Inc.

    Abstract: Embodiments are directed toward systems and methods for scheduling resources of a graphics processing unit (GPU). For a number of applications having commands to be issued to the GPU, the systems and methods determine a static priority level and a dynamic priority level of each application, then work iteratively across static priority levels, starting with the highest, until a resource budget of the GPU is consumed. For each static priority level in turn, they identify the applications in the present static priority level, assign a processing budget of the GPU to each of those applications according to their dynamic priority levels, admit to a queue commands from those applications according to their processing budgets, and release the queue to the GPU.
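
    The scheduling loop in this abstract can be sketched as below; the data model (App, the priority fields, and the budget units) is assumed for illustration. The sketch walks static priority levels from highest to lowest, splits the remaining GPU budget within a level in proportion to dynamic priorities, and admits commands to a queue until the budget runs out.

```cpp
// Minimal sketch (assumed data model): static levels highest-first, dynamic
// priorities as weights within a level, budget-limited admission to a queue.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

struct App {
    std::string name;
    int static_priority;    // higher value = scheduled earlier
    int dynamic_priority;   // weight within its static level
    int pending_commands;
};

int main() {
    std::vector<App> apps = {
        {"ui",      2, 3, 10},
        {"game",    2, 1, 10},
        {"updater", 1, 1, 10},
    };
    double remaining = 12.0;            // GPU resource budget, arbitrary units
    std::vector<std::string> queue;     // commands admitted this cycle

    // Group applications by static priority, highest level first.
    std::map<int, std::vector<App*>, std::greater<int>> levels;
    for (auto& a : apps) levels[a.static_priority].push_back(&a);

    for (const auto& entry : levels) {
        if (remaining <= 0) break;      // overall budget consumed
        const std::vector<App*>& group = entry.second;
        int weight_sum = 0;
        for (const App* a : group) weight_sum += a->dynamic_priority;
        double level_budget = remaining;
        for (const App* a : group) {
            // Per-application budget proportional to its dynamic priority.
            double share = level_budget * a->dynamic_priority / weight_sum;
            int admit = std::min(a->pending_commands, (int)share);
            remaining -= admit;
            for (int i = 0; i < admit; ++i) queue.push_back(a->name);
        }
    }
    std::printf("released %zu commands to the GPU\n", queue.size());
}
```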

    Priority inversion mitigation techniques

    Publication Number: US12039368B2

    Publication Date: 2024-07-16

    Application Number: US17468328

    Filing Date: 2021-09-07

    Applicant: Apple Inc.

    Abstract: Disclosed techniques relate to distributing graphics work based on priority. In some embodiments, circuitry implements a plurality of tracking slots for sets of graphics work. A set of graphics processor sub-units may each implement multiple distributed hardware slots. Control circuitry may attempt to assign a first set of graphics work having a first priority to a graphics processor sub-unit that is currently executing graphics work having an equal or higher priority than the first priority, where the first set of graphics work is from a first tracking slot. The control circuitry may, in response to a failure of the attempt, generate a signal to graphics software that indicates the failure, wherein the signal indicates the first tracking slot. Disclosed techniques may reduce or avoid problems relating to higher priority work being scheduled behind lower priority work.
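
    A minimal sketch of the assignment attempt follows, with assumed structures (SubUnit, TrackingSlot): work from a tracking slot is placed only on a sub-unit whose current work has equal or higher priority, and a failure is reported to software along with the tracking slot that could not be placed.

```cpp
// Minimal sketch (illustrative assumptions): attempt priority-respecting
// placement; on failure, signal software and identify the tracking slot.
#include <cstdio>
#include <optional>
#include <vector>

struct SubUnit {
    int current_priority;   // priority of the work it is executing
    bool has_free_slot;     // a distributed hardware slot is available
};

struct TrackingSlot {
    int id;
    int priority;
};

// Returns the index of a suitable sub-unit, or nothing on failure.
std::optional<size_t> try_assign(const std::vector<SubUnit>& units,
                                 const TrackingSlot& slot) {
    for (size_t i = 0; i < units.size(); ++i)
        if (units[i].has_free_slot && units[i].current_priority >= slot.priority)
            return i;
    return std::nullopt;
}

int main() {
    std::vector<SubUnit> units = {{/*priority*/1, true}, {2, false}};
    TrackingSlot slot{/*id*/5, /*priority*/3};

    if (auto idx = try_assign(units, slot)) {
        std::printf("assigned tracking slot %d to sub-unit %zu\n", slot.id, *idx);
    } else {
        // Signal to graphics software, identifying the tracking slot that failed.
        std::printf("assignment failed; notify software about tracking slot %d\n",
                    slot.id);
    }
}
```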

    Execution graph acceleration

    Publication Number: US11436055B2

    Publication Date: 2022-09-06

    Application Number: US16688487

    Filing Date: 2019-11-19

    Applicant: Apple Inc.

    Abstract: A first command is fetched for execution on a GPU. Dependency information for the first command, which indicates a number of parent commands that the first command depends on, is determined. The first command is inserted into an execution graph based on the dependency information. The execution graph defines an order of execution for plural commands including the first command. The parent commands are configured to be executed on the GPU before the first command is executed. A wait count for the first command, which indicates the number of parent commands of the first command, is determined based on the execution graph. The first command is inserted into cache memory in response to determining that the wait count for the first command is zero or that each of the parent commands that the first command depends on has already been inserted into the cache memory.
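
    The wait-count bookkeeping can be sketched as follows, under an assumed in-memory representation (Command, ExecutionGraph): each command's wait count equals its number of parents, and a command becomes eligible for the cache once that count is zero or all of its parents are already cached.

```cpp
// Minimal sketch (assumed representation): execution graph with wait counts
// and cache admission when parents are satisfied.
#include <cstdio>
#include <unordered_set>
#include <vector>

struct Command {
    int id;
    std::vector<int> parents;   // ids of commands this one depends on
    int wait_count() const { return (int)parents.size(); }
};

struct ExecutionGraph {
    std::vector<Command> commands;          // in insertion (fetch) order
    std::unordered_set<int> cached;         // commands already in cache memory

    void insert(const Command& c) {
        commands.push_back(c);
        bool all_parents_cached = true;
        for (int p : c.parents)
            if (!cached.count(p)) { all_parents_cached = false; break; }
        if (c.wait_count() == 0 || all_parents_cached)
            cached.insert(c.id);            // eligible for the cache now
    }
};

int main() {
    ExecutionGraph g;
    g.insert({1, {}});          // no parents: cached immediately
    g.insert({2, {1}});         // parent 1 already cached: cached too
    g.insert({3, {4}});         // parent 4 not yet seen: must wait
    std::printf("cached commands: %zu of %zu\n", g.cached.size(), g.commands.size());
}
```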

    Graphics hardware driven pause for quality of service adjustment

    Publication Number: US10795730B2

    Publication Date: 2020-10-06

    Application Number: US16145573

    Filing Date: 2018-09-28

    Applicant: Apple Inc.

    Abstract: In general, embodiments are disclosed for tracking and allocating graphics processor hardware resources. More particularly, a graphics hardware resource allocation system is able to generate a priority list for a plurality of data masters for a graphics processor based on a comparison between current utilizations for the data masters and target utilizations for the data masters. The graphics hardware resource allocation system designates, based on the priority list, a first data master with a higher priority to submit work to the graphics processor compared to a second data master. The graphics hardware resource allocation system determines a stall counter value for the first data master and generates a notification to pause work for the second data master based on the stall counter value.
