Invention Grant
- Patent Title: Using per memory bank load caches for reducing power use in a system on a chip
- Application No.: US18069722
- Application Date: 2022-12-21
- Publication No.: US12093539B2
- Publication Date: 2024-09-17
- Inventors: Ching-Yu Hung, Ravi P. Singh, Jagadeesh Sankaran, Yen-Te Shih, Ahmad Itani
- Applicant: NVIDIA Corporation
- Applicant Address: Santa Clara, CA, US
- Assignee: NVIDIA Corporation
- Current Assignee: NVIDIA Corporation
- Current Assignee Address: Santa Clara, CA, US
- Agency: Taylor English Duma L.L.P.
- Main IPC: G06F12/00
- IPC: G06F12/00; G06F3/06; G06F12/0802

Abstract:
In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, transposed load/store functionality with a stride parameter, load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two point lookups, and per-memory-bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller for performing dynamic region-based data movement operations.
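The following is a minimal conceptual sketch, not taken from the patent itself, of the per-memory-bank load caching idea named in the title: each memory bank keeps a small cache of the last row it read, so repeated loads to the same row can be served without re-reading the bank, reducing bank activity and therefore power. The bank count, row size, interleaving, and counters are all illustrative assumptions.

```cpp
// Illustrative model (assumptions, not the patented design): a banked memory in
// which each bank holds a one-entry cache of its most recently read row.
// Loads that hit the cached row skip the physical bank access.
#include <array>
#include <cstdint>
#include <iostream>

constexpr std::size_t kNumBanks = 8;   // assumed number of memory banks
constexpr std::size_t kRowBytes = 64;  // assumed row size per bank, in bytes

struct BankLoadCache {
    bool     valid = false;
    uint64_t row   = 0;  // row index currently held by this bank's load cache
};

struct BankedMemoryModel {
    std::array<BankLoadCache, kNumBanks> caches{};
    std::size_t bank_reads = 0;  // physical bank accesses performed
    std::size_t cache_hits = 0;  // loads served from a per-bank load cache

    // Model a load; banks are assumed to be interleaved at row granularity.
    void load(uint64_t address) {
        const uint64_t    row  = address / kRowBytes;
        const std::size_t bank = row % kNumBanks;
        BankLoadCache& c = caches[bank];
        if (c.valid && c.row == row) {
            ++cache_hits;   // same row as the previous read of this bank: no access
        } else {
            ++bank_reads;   // different row: read the bank and refill its cache
            c.valid = true;
            c.row   = row;
        }
    }
};

int main() {
    BankedMemoryModel mem;
    // Streaming access with short-range reuse: nearby loads often fall in the
    // same row of the same bank, so the per-bank caches absorb many accesses.
    for (uint64_t addr = 0; addr < 4096; addr += 16) {
        mem.load(addr);
    }
    std::cout << "bank reads: " << mem.bank_reads
              << ", cache hits: " << mem.cache_hits << "\n";
}
```

In this toy access pattern, four consecutive 16-byte loads land in the same 64-byte row, so roughly three out of every four loads are served from the per-bank caches; the power benefit in the abstract comes from avoiding those redundant bank reads.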
Public/Granted literature
- US20230124604A1 USING PER MEMORY BANK LOAD CACHES FOR REDUCING POWER USE IN A SYSTEM ON A CHIP (Publication Date: 2023-04-20)