POWER-AWARE, HISTORY-BASED GRAPHICS POWER OPTIMIZATION

    Publication Number: US20240211014A1

    Publication Date: 2024-06-27

    Application Number: US18146733

    Filing Date: 2022-12-27

    CPC classification number: G06F1/324 G06F1/3218

    Abstract: Systems, apparatuses, and methods for implementing efficient power optimization in a computing system are disclosed. A system management unit records operating frequencies required for a computing component to execute a first task. The system management unit stores the recorded operating frequencies in a data array or any other predetermined memory location of a computing system. The system management unit uses the recorded operating frequencies to determine operating frequencies for execution of one or more other tasks.
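
    The abstract describes a record-and-reuse scheme. As a rough illustration, the sketch below keeps a table of operating frequencies observed per task and reuses them on later runs; the names (FrequencyHistory, record, suggest), the MHz values, and the max-based reuse policy are assumptions for illustration only, not details taken from the patent.

```python
# Minimal sketch of a history-based frequency selector (illustrative only).
from collections import defaultdict


class FrequencyHistory:
    """Stores operating frequencies recorded while executing tasks."""

    def __init__(self):
        # Maps a task identifier to frequencies (MHz) observed for that task.
        self._history = defaultdict(list)

    def record(self, task_id, frequency_mhz):
        """Record an operating frequency used while executing a task."""
        self._history[task_id].append(frequency_mhz)

    def suggest(self, task_id, default_mhz=800):
        """Suggest a frequency for a task from recorded history.

        Falls back to a default when no history exists for the task.
        Here the highest previously required frequency is reused so a
        repeat run is not starved; other policies are possible.
        """
        samples = self._history.get(task_id)
        if not samples:
            return default_mhz
        return max(samples)


history = FrequencyHistory()
history.record("render_pass_A", 1100)
history.record("render_pass_A", 1250)
print(history.suggest("render_pass_A"))  # 1250
print(history.suggest("render_pass_B"))  # 800 (no history yet)
```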

    SHADER COMPILER AND SHADER PROGRAM CENTRIC MITIGATION OF CURRENT TRANSIENTS THAT CAUSE VOLTAGE TRANSIENTS ON A POWER RAIL

    Publication Number: US20240393861A1

    Publication Date: 2024-11-28

    Application Number: US18540703

    Filing Date: 2023-12-14

    Abstract: An apparatus and method for efficiently managing voltage transients on a power rail caused by current transients of an integrated circuit. In various implementations, a computing system includes a processing circuit that executes instructions of a compiler that includes a current transients mitigator. When executing the instructions of the current transients mitigator, the processing circuit generates an estimate of a time rate of current flow being drawn from or returned to the power rail based on instruction types of a first sequence of instructions. When the estimate exceeds a threshold, the processing circuit replaces the first sequence of instructions with a second sequence of instructions that provides a smaller estimate. The second sequence, rather than the first, is issued to one or more compute circuits that utilize the power rail.
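
    To make the mechanism concrete, the sketch below screens an instruction sequence using per-instruction-type current estimates and swaps in a smoother sequence when the estimated step exceeds a threshold. The instruction names, current values, threshold, and padding-based rewrite are invented for illustration; they show one possible mitigation under stated assumptions, not the compiler pass claimed by the patent.

```python
# Rough sketch of instruction-sequence screening for current transients
# (instruction types, current values, and the rewrite are illustrative).

# Assumed relative current draw per instruction type (arbitrary units).
CURRENT_PER_TYPE = {"fma": 4.0, "mem_load": 2.5, "mov": 0.5, "nop": 0.1}


def estimate_di_dt(instructions):
    """Estimate the largest step change in current between adjacent instructions."""
    draws = [CURRENT_PER_TYPE.get(op, 1.0) for op in instructions]
    return max((abs(b - a) for a, b in zip(draws, draws[1:])), default=0.0)


def rewrite_sequence(instructions):
    """Hypothetical transform: insert low-current padding between instructions
    whose current draws differ sharply, smoothing the ramp."""
    smoothed = []
    for op in instructions:
        if smoothed and abs(CURRENT_PER_TYPE.get(op, 1.0) -
                            CURRENT_PER_TYPE.get(smoothed[-1], 1.0)) > 2.0:
            smoothed.append("mov")  # padding step reduces the current jump
        smoothed.append(op)
    return smoothed


def mitigate(instructions, threshold=3.0):
    """Replace the sequence only when its estimated transient is too large
    and the rewritten sequence actually yields a smaller estimate."""
    if estimate_di_dt(instructions) <= threshold:
        return instructions
    candidate = rewrite_sequence(instructions)
    if estimate_di_dt(candidate) < estimate_di_dt(instructions):
        return candidate
    return instructions


seq = ["nop", "fma", "fma", "nop", "fma"]
print(estimate_di_dt(seq))  # 3.9
print(mitigate(seq))        # padded sequence with a smaller estimate
```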

    RUNTIME-LEARNING GRAPHICS POWER OPTIMIZATION

    Publication Number: US20240211019A1

    Publication Date: 2024-06-27

    Application Number: US18146776

    Filing Date: 2022-12-27

    CPC classification number: G06F1/329

    Abstract: Systems, apparatuses, and methods for implementing runtime-learning graphics power optimization are illustrated. A system management unit monitors tasks queued for a computing component, such as a central processing unit (CPU) or a graphics processing unit (GPU). The system management unit computes a total number of clock cycles consumed to execute a first task. The system management unit then determines a second task for execution and modifies a current operating frequency by a given percentage while executing the second task. The system management unit determines the number of clock cycles that execution of the second task consumed and compares this to the number of clock cycles for the first task. Based at least in part on the comparison, the system management unit computes a performance sensitivity of tasks similar to the first and second tasks.
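
    A minimal numeric sketch of the comparison step follows, assuming the cycle counts for two similar tasks and the applied frequency step are already known; the sensitivity metric below is one illustrative choice, not necessarily the formula used by the patent.

```python
# Illustrative sensitivity metric built from the cycle-count comparison.

def performance_sensitivity(cycles_task1, cycles_task2, freq_change_pct):
    """Ratio of the relative change in consumed clock cycles to the
    relative change in operating frequency between the two runs."""
    cycle_change_pct = 100.0 * (cycles_task2 - cycles_task1) / cycles_task1
    return cycle_change_pct / freq_change_pct


# Task 1 ran at the baseline frequency; a similar task 2 ran with the
# frequency lowered by 10% and consumed 2% fewer cycles.
print(performance_sensitivity(1_000_000, 980_000, -10.0))  # 0.2
```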

    System and method for identifying graphics workloads for dynamic allocation of resources among GPU shaders

    Publication Number: US10311626B2

    Publication Date: 2019-06-04

    Application Number: US15297611

    Filing Date: 2016-10-19

    Abstract: A GPU filters graphics workloads to identify candidates for profiling. In response to receiving a graphics workload for the first time, the GPU determines if the graphics workload would require the GPU shaders to use fewer resources than would be spent profiling and determining a resource allocation for subsequent receipts of the same or a similar graphics workload. The GPU can further determine if the shaders are processing more than one graphics workload at the same time, such that the performance characteristics of each individual graphics workload cannot be effectively isolated. The GPU then profiles and stores resource allocations for a plurality of shaders for processing the filtered graphics workloads, and applies those stored resource allocations when the same or a similar graphics workload is received subsequently by the GPU.
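
    The filtering decision can be sketched as below, assuming rough cost estimates are available; the names (WorkloadCache, should_profile), the cost units, and the allocation fields are hypothetical stand-ins rather than the patent's actual mechanisms.

```python
# Sketch of filtering workloads before profiling (names and costs are
# illustrative assumptions).

class WorkloadCache:
    """Caches shader resource allocations keyed by a workload signature."""

    def __init__(self):
        self._allocations = {}

    def lookup(self, signature):
        return self._allocations.get(signature)

    def store(self, signature, allocation):
        self._allocations[signature] = allocation


def should_profile(estimated_workload_cost, profiling_cost, concurrent_workloads):
    """Filter out workloads that are not worth profiling.

    Skip profiling when the workload is cheaper than the profiling overhead,
    or when other workloads run concurrently so this workload's performance
    characteristics cannot be isolated.
    """
    if estimated_workload_cost < profiling_cost:
        return False
    if concurrent_workloads > 1:
        return False
    return True


cache = WorkloadCache()
if cache.lookup("shadow_pass") is None and should_profile(5_000, 1_200, 1):
    # A profiling run would happen here; store the resulting allocation.
    cache.store("shadow_pass", {"vector_registers": 64, "lds_bytes": 8192})
print(cache.lookup("shadow_pass"))
```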

    SYSTEM AND METHOD FOR IDENTIFYING GRAPHICS WORKLOADS FOR DYNAMIC ALLOCATION OF RESOURCES AMONG GPU SHADERS

    Publication Number: US20180108166A1

    Publication Date: 2018-04-19

    Application Number: US15297611

    Filing Date: 2016-10-19

    CPC classification number: G06T15/005 G06F9/38

    Abstract: A GPU filters graphics workloads to identify candidates for profiling. In response to receiving a graphics workload for the first time, the GPU determines if the graphics workload would require the GPU shaders to use fewer resources than would be spent profiling and determining a resource allocation for subsequent receipts of the same or a similar graphics workload. The GPU can further determine if the shaders are processing more than one graphics workload at the same time, such that the performance characteristics of each individual graphics workload cannot be effectively isolated. The GPU then profiles and stores resource allocations for a plurality of shaders for processing the filtered graphics workloads, and applies those stored resource allocations when the same or a similar graphics workload is received subsequently by the GPU.

    LOAD BALANCING AT A GRAPHICS PROCESSING UNIT

    Publication Number: US20160180487A1

    Publication Date: 2016-06-23

    Application Number: US14576828

    Filing Date: 2014-12-19

    CPC classification number: G06T1/20 G06F9/44 G06F9/5083

    Abstract: A GPU of a processor performs load balancing by enabling and disabling compute units (CUs) based on the GPU's processing load. A power control module identifies a current processing load of the GPU based on, for example, an activity level of one or more modules of the GPU. The power control module also identifies an expected future processing load of the GPU based on, for example, a number of threads (wavefronts) scheduled to be executed at the GPU. Based on a combination of the current processing load and the expected future processing load, the power control module sets the number of CUs of the GPU that are enabled and the number that are disabled (e.g., clock-gated or power-gated). By changing the number of enabled CUs based on processing load, the power control module maintains performance at the GPU while conserving power.
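
    As a toy illustration of the decision, the sketch below combines normalized current and expected loads into an enabled-CU count; the weighting, rounding, and normalization are illustrative assumptions, not taken from the patent.

```python
# Toy sketch of choosing how many compute units to leave enabled
# (weights and load normalization are assumptions).

def enabled_cu_count(current_load, expected_load, total_cus, min_cus=1,
                     current_weight=0.5):
    """Choose how many compute units to leave enabled.

    Combines the current processing load (e.g., module activity) with the
    expected future load (e.g., queued wavefronts), both normalized to
    [0, 1], and scales the enabled CU count accordingly; the remaining CUs
    can be clock- or power-gated.
    """
    combined = current_weight * current_load + (1.0 - current_weight) * expected_load
    wanted = round(combined * total_cus)
    return max(min_cus, min(total_cus, wanted))


# Light current activity but a large queue of pending wavefronts.
print(enabled_cu_count(current_load=0.2, expected_load=0.9, total_cus=64))  # 35
```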

