-
1.
Publication No.: US20210287327A1
Publication Date: 2021-09-16
Application No.: US17182256
Filing Date: 2021-02-23
Applicant: INTEL CORPORATION
Inventor: Abhishek R. APPU , Joydeep RAY , Altug KOKER , Balaji VEMBU , Pattabhiraman K , Matthew B. CALLAWAY
Abstract: An apparatus and method for dynamic provisioning, quality of service, and prioritization in a graphics processor. For example, one embodiment of an apparatus comprises a graphics processing unit (GPU) comprising a plurality of graphics processing resources; slice configuration hardware logic to logically subdivide the graphics processing resources into a plurality of slices; and slice allocation hardware logic to allocate a designated number of slices to each virtual machine (VM) of a plurality of VMs running in a virtualized execution environment, the slice allocation hardware logic to allocate different numbers of slices to different VMs based on graphics processing requirements and/or priorities of each of the VMs.
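The slice-allocation idea above can be sketched in software. This is a minimal illustrative model, not the patented hardware logic; the function name and the proportional-weighting heuristic are assumptions:

```python
def allocate_slices(total_slices, vm_priorities):
    """Allocate GPU slices to VMs in proportion to priority weights.

    Hypothetical illustration of the abstract's idea: higher-priority
    VMs receive more slices out of a fixed pool.
    """
    total_weight = sum(vm_priorities.values())
    alloc = {vm: (w * total_slices) // total_weight
             for vm, w in vm_priorities.items()}
    # Hand out slices lost to integer division, highest priority first.
    leftover = total_slices - sum(alloc.values())
    for vm in sorted(vm_priorities, key=vm_priorities.get, reverse=True):
        if leftover == 0:
            break
        alloc[vm] += 1
        leftover -= 1
    return alloc

print(allocate_slices(8, {"vm0": 3, "vm1": 1}))  # {'vm0': 6, 'vm1': 2}
```

A real slice allocator would also honor the per-VM "graphics processing requirements" the claim mentions; here only relative priority is modeled.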
-
2.
Publication No.: US20200320177A1
Publication Date: 2020-10-08
Application No.: US16792822
Filing Date: 2020-02-17
Applicant: INTEL CORPORATION
Inventor: Joydeep RAY , Abhishek R. APPU , Pattabhiraman K , Balaji VEMBU , Altug KOKER
IPC: G06F21/10 , G06F12/14 , G06F12/0895 , G06F9/455 , G06F12/0815 , G06T15/00 , H04N19/00 , H04N21/4405
Abstract: An apparatus and method for protecting content in a graphics processor. For example, one embodiment of an apparatus comprises: encode/decode circuitry to decode protected audio and/or video content to generate decoded audio and/or video content; a graphics cache of a graphics processing unit (GPU) to store the decoded audio and/or video content; first protection circuitry to set a protection attribute for each cache line containing the decoded audio and/or video data in the graphics cache; a cache coherency controller to generate a coherent read request to the graphics cache; second protection circuitry to read the protection attribute to determine whether the cache line identified in the read request is protected, wherein if it is protected, the second protection circuitry to refrain from including at least some of the data from the cache line in a response.
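A software model of the per-cache-line protection check can clarify the flow. This is a hypothetical sketch (the `CacheLine` type and redaction-to-empty behavior are assumptions), not the claimed circuitry:

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    data: bytes
    protected: bool = False  # protection attribute set per cache line

def coherent_read(cache, addr):
    """Serve a coherent read request; redact the payload if the
    identified line carries the protection attribute.

    Hypothetical model of the abstract's second protection circuitry.
    """
    line = cache.get(addr)
    if line is None:
        return None
    if line.protected:
        return b""  # refrain from including protected content
    return line.data

cache = {0x40: CacheLine(b"frame", protected=True),
         0x80: CacheLine(b"meta")}
print(coherent_read(cache, 0x40))  # b''
print(coherent_read(cache, 0x80))  # b'meta'
```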
-
3.
Publication No.: US20170228160A1
Publication Date: 2017-08-10
Application No.: US15408984
Filing Date: 2017-01-18
Applicant: Intel Corporation
Inventor: Balaji VEMBU , Murali RAMADOSS
IPC: G06F3/06 , G11C14/00 , G06F13/16 , G06F13/40 , G06F12/1027 , G06F12/1009
CPC classification number: G06F3/0604 , G06F3/0631 , G06F3/0638 , G06F3/0656 , G06F3/0683 , G06F12/0292 , G06F12/1009 , G06F12/1027 , G06F13/1668 , G06F13/4072 , G06F2212/205 , G06F2212/68 , G06T1/60 , G11C14/0045 , G11C14/0081 , G11C14/009 , Y02D10/13
Abstract: A method, device, and system to distribute code and data stores between volatile and non-volatile memory are described. In one embodiment, the method includes storing one or more static code segments of a software application in a phase change memory with switch (PCMS) device, storing one or more static data segments of the software application in the PCMS device, and storing one or more volatile data segments of the software application in a volatile memory device. The method then allocates an address mapping table with at least a first address pointer to point to each of the one or more static code segments, at least a second address pointer to point to each of the one or more static data segments, and at least a third address pointer to point to each of the one or more volatile data segments.
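The address-mapping step can be sketched as a table keyed by segment class and index, pointing each entry at its backing store. The store names `"pcms"` and `"dram"` and the segment labels are hypothetical:

```python
def build_mapping_table(static_code, static_data, volatile_data):
    """Build an address-mapping table: static code and static data
    segments point into the PCMS device, volatile data segments point
    into volatile memory. Illustrative only.
    """
    table = {}
    for i, seg in enumerate(static_code):
        table[("code", i)] = ("pcms", seg)
    for i, seg in enumerate(static_data):
        table[("static_data", i)] = ("pcms", seg)
    for i, seg in enumerate(volatile_data):
        table[("volatile_data", i)] = ("dram", seg)
    return table

t = build_mapping_table([".text"], [".rodata"], [".bss", ".heap"])
print(t[("code", 0)])           # ('pcms', '.text')
print(t[("volatile_data", 1)])  # ('dram', '.heap')
```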
-
4.
Publication No.: US20240004713A1
Publication Date: 2024-01-04
Application No.: US18363339
Filing Date: 2023-08-01
Applicant: Intel Corporation
Inventor: Abhishek R. APPU , Altug KOKER , Balaji VEMBU , Joydeep RAY , Kamal SINHA , Prasoonkumar SURTI , Kiran C. VEERNAPU , Subramaniam MAIYURAN , Sanjeev S. Jahagirdar , Eric J. Asperheim , Guei-Yuan Lueh , David Puffer , Wenyin Fu , Nikos Kaburlasos , Bhushan M. Borole , Josh B. Mastronarde , Linda L. Hurd , Travis T. Schluessler , Tomasz Janczak , Abhishek Venkatesh , Kai Xiao , Slawomir Grajewski
CPC classification number: G06F9/5016 , G06F9/5044 , G06F1/329 , G06F9/4893 , G06T1/20 , G06T1/60 , G06T15/005 , Y02D10/00 , G06T2200/28
Abstract: In an example, an apparatus comprises a plurality of execution units comprising at least a first type of execution unit and a second type of execution unit and logic, at least partially including hardware logic, to analyze a workload and assign the workload to one of the first type of execution unit or the second type of execution unit. Other embodiments are also disclosed and claimed.
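The workload-analysis-and-assignment step can be sketched as a routing function. The threshold and the trait names are hypothetical heuristics, not the claimed hardware logic:

```python
def assign_workload(workload):
    """Analyze a workload and assign it to one of two execution unit
    types. Hypothetical heuristic: compute-heavy or highly parallel
    work goes to the first (high-throughput) unit type, light work to
    the second (power-optimized) unit type.
    """
    if workload["flops"] > 1e6 or workload.get("parallel", False):
        return "eu_type_1"
    return "eu_type_2"

print(assign_workload({"flops": 5e6}))                      # eu_type_1
print(assign_workload({"flops": 100, "parallel": False}))   # eu_type_2
```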
-
5.
Publication No.: US20220398147A1
Publication Date: 2022-12-15
Application No.: US17849356
Filing Date: 2022-06-24
Applicant: Intel Corporation
Inventor: Balaji VEMBU , Bryan WHITE , Ankur SHAH , Murali RAMADOSS , David PUFFER , Altug KOKER , Aditya NAVALE , Mahesh NATU
IPC: G06F11/07
Abstract: Apparatus and method for scalable error reporting. For example, one embodiment of an apparatus comprises error detection circuitry to detect an error in a component of a first tile within a tile-based hierarchy of a processing device; error classification circuitry to classify the error and record first error data based on the classification; a first tile interface to combine the first error data with second error data received from one or more other components associated with the first tile to generate first accumulated error data; and a master tile interface to combine the first accumulated error data with second accumulated error data received from at least one other tile interface to generate second accumulated error data and to provide the second accumulated error data to a host executing an application to process the second accumulated error data.
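The two-stage accumulation (per-tile, then master) can be modeled in a few lines. This flattens the tile hierarchy into a dict and is purely illustrative:

```python
def accumulate_tile_errors(tile_errors):
    """Fold per-component error records up a tile hierarchy.

    tile_errors: {tile_id: [error record, ...]} — a flat stand-in for
    the tile-based hierarchy in the abstract. Each tile interface
    combines its components' error data; the master interface merges
    all tiles into one report for the host.
    """
    accumulated = {tile: sorted(errors)
                   for tile, errors in tile_errors.items()}
    # Master tile interface: one combined report, ordered by tile.
    return [(tile, err) for tile in sorted(accumulated)
            for err in accumulated[tile]]

print(accumulate_tile_errors({"t1": ["ecc", "parity"], "t0": ["timeout"]}))
# [('t0', 'timeout'), ('t1', 'ecc'), ('t1', 'parity')]
```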
-
6.
Publication No.: US20200278938A1
Publication Date: 2020-09-03
Application No.: US16700853
Filing Date: 2019-12-02
Applicant: INTEL CORPORATION
Inventor: Balaji VEMBU , Altug KOKER , Joydeep RAY , Abhishek R. APPU , Pattabhiraman K , Niranjan L. COORAY
Abstract: An apparatus and method for dynamic provisioning and traffic control on a memory fabric. For example, one embodiment of an apparatus comprises: a graphics processing unit (GPU) comprising a plurality of graphics processing resources; slice configuration hardware logic to logically subdivide the graphics processing resources into a plurality of slices; and slice allocation hardware logic to allocate a designated set of slices to each virtual machine (VM) of a plurality of VMs running in a virtualized execution environment; and a plurality of queues associated with each VM at different levels of a memory interconnection fabric, the queues for a first VM to store memory traffic for that VM at the different levels of the memory interconnection fabric; arbitration hardware logic coupled to the plurality of queues and distributed across the different levels of the memory interconnection fabric, the arbitration hardware logic to cause memory traffic to be blocked from one or more upstream queues of the first VM upon detecting that a downstream queue associated with the first VM is full or at a specified threshold.
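The queue-threshold backpressure mechanism can be sketched as follows. The class names, capacities, and thresholds are hypothetical; the point is that traffic is held upstream once a downstream queue reaches its threshold:

```python
from collections import deque

class FabricQueue:
    """Per-VM queue at one level of a memory interconnection fabric,
    with a fill threshold that triggers upstream blocking.
    Hypothetical software model of the abstract's queues.
    """
    def __init__(self, capacity, threshold):
        self.q = deque()
        self.capacity = capacity
        self.threshold = threshold

    def is_backpressured(self):
        return len(self.q) >= self.threshold

    def push(self, item):
        if len(self.q) >= self.capacity:
            raise OverflowError("queue full")
        self.q.append(item)

def try_send(upstream, downstream, item):
    """Arbitration step: hold traffic in the upstream queue when the
    downstream queue is full or at its threshold."""
    if downstream.is_backpressured():
        upstream.push(item)
        return False
    downstream.push(item)
    return True

up, down = FabricQueue(8, 6), FabricQueue(4, 2)
down.push("a"); down.push("b")      # downstream reaches its threshold
print(try_send(up, down, "c"))      # False — traffic blocked upstream
```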
-
7.
Publication No.: US20200005427A1
Publication Date: 2020-01-02
Application No.: US16505555
Filing Date: 2019-07-08
Applicant: INTEL CORPORATION
Inventor: Abhishek R. APPU , Joydeep RAY , Altug KOKER , Balaji VEMBU , Pattabhiraman K. , Matthew B. CALLAWAY
Abstract: An apparatus and method for dynamic provisioning, quality of service, and prioritization in a graphics processor. For example, one embodiment of an apparatus comprises a graphics processing unit (GPU) comprising a plurality of graphics processing resources; slice configuration hardware logic to logically subdivide the graphics processing resources into a plurality of slices; and slice allocation hardware logic to allocate a designated number of slices to each virtual machine (VM) of a plurality of VMs running in a virtualized execution environment, the slice allocation hardware logic to allocate different numbers of slices to different VMs based on graphics processing requirements and/or priorities of each of the VMs.
-
8.
Publication No.: US20160098148A1
Publication Date: 2016-04-07
Application No.: US14129427
Filing Date: 2013-06-28
Applicant: INTEL CORPORATION
Inventor: Chaitanya R. GANDRA , Balaji VEMBU , Arvind A. KUMAR , Nilesh V. SHAH
CPC classification number: G06F3/0418 , G06F3/0416 , G06F3/0488 , G06F2203/04104 , G06T1/20
Abstract: Technologies for touch point detection include a computing device configured to receive input frames from a touch screen, identify touch point centroids and cluster boundaries, and track touch points. The computing device may group cells of the input frame into blocks. Using a processor graphics, the computing device may dispatch one thread per block to identify local maxima of the input frame and merge centroids within a touch distance threshold. The computing device may dispatch one thread per centroid to detect cluster boundaries. The computing device may dispatch one thread per previously identified touch point to assign an identifier of a previously tracked touch point to a touch point within a tracking distance threshold, remove duplicate identifiers, and assign unassigned identifiers to closest touch points. The computing device may dispatch one thread per block to assign unique identifiers to each unassigned touch point. Other embodiments are described and claimed.
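The centroid-merge step can be shown serially (the abstract dispatches this work as GPU threads per block; a sequential version conveys the same merge rule). The averaging strategy is an assumption:

```python
import math

def merge_centroids(centroids, touch_dist):
    """Merge touch-point centroids that lie within a distance
    threshold, keeping the midpoint of each merged pair. Simplified
    serial stand-in for the per-block GPU dispatch in the abstract.
    """
    merged = []
    for c in centroids:
        for i, m in enumerate(merged):
            if math.dist(c, m) <= touch_dist:
                merged[i] = ((c[0] + m[0]) / 2, (c[1] + m[1]) / 2)
                break
        else:
            merged.append(c)
    return merged

print(merge_centroids([(0, 0), (1, 0), (10, 10)], touch_dist=2.0))
# [(0.5, 0.0), (10, 10)]
```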
-
9.
Publication No.: US20220309731A1
Publication Date: 2022-09-29
Application No.: US17839303
Filing Date: 2022-06-13
Applicant: Intel Corporation
Inventor: Joydeep RAY , Abhishek R. APPU , Pattabhiraman K , Balaji VEMBU , Altug KOKER , Niranjan L. COORAY , Josh B. MASTRONARDE
Abstract: An apparatus and method are described for allocating local memories to virtual machines. For example, one embodiment of an apparatus comprises: a command streamer to queue commands from a plurality of virtual machines (VMs) or applications, the commands to be distributed from the command streamer and executed by graphics processing resources of a graphics processing unit (GPU); a tile cache to store graphics data associated with the plurality of VMs or applications as the commands are executed by the graphics processing resources; and tile cache allocation hardware logic to allocate a first portion of the tile cache to a first VM or application and a second portion of the tile cache to a second VM or application; the tile cache allocation hardware logic to further allocate a first region in system memory to store spill-over data when the first portion of the tile cache and/or the second portion of the tile cache becomes full.
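The partitioned-cache-with-spill-over behavior can be modeled briefly. Portion sizes and the dict-backed stores are hypothetical stand-ins for the hardware tile cache and the system-memory spill region:

```python
class TileCache:
    """Tile cache partitioned between VMs, spilling writes to a
    system-memory region once a VM's portion fills.
    Hypothetical software model of the abstract's allocation logic.
    """
    def __init__(self, portions):
        self.portions = portions                   # vm -> max entries
        self.cache = {vm: {} for vm in portions}
        self.spill = {vm: {} for vm in portions}   # system-memory region

    def write(self, vm, key, value):
        if len(self.cache[vm]) < self.portions[vm]:
            self.cache[vm][key] = value
            return "cache"
        self.spill[vm][key] = value                # spill-over
        return "spill"

tc = TileCache({"vm_a": 2, "vm_b": 4})
print(tc.write("vm_a", "t0", b"x"))  # cache
print(tc.write("vm_a", "t1", b"y"))  # cache
print(tc.write("vm_a", "t2", b"z"))  # spill
```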
-
10.
Publication No.: US20220084329A1
Publication Date: 2022-03-17
Application No.: US17539083
Filing Date: 2021-11-30
Applicant: Intel Corporation
Inventor: Barath LAKSHAMANAN , Linda L. HURD , Ben J. ASHBAUGH , Elmoustapha OULD-AHMED-VALL , Liwei MA , Jingyi JIN , Justin E. GOTTSCHLICH , Chandrasekaran SAKTHIVEL , Michael S. STRICKLAND , Brian T. LEWIS , Lindsey KUPER , Altug KOKER , Abhishek R. APPU , Prasoonkumar SURTI , Joydeep RAY , Balaji VEMBU , Javier S. TUREK , Naila FAROOQUI
IPC: G07C5/00 , G05D1/00 , G08G1/01 , H04W28/08 , H04L29/08 , G06N20/00 , G06F9/50 , G01C21/34 , B60W30/00 , G06N3/04 , G06N3/063 , G06N3/08 , G06N20/10
Abstract: An autonomous vehicle is provided that includes one or more processors configured to provide a local compute manager to manage execution of compute workloads associated with the autonomous vehicle. The local compute manager can perform various compute operations, including receiving compute operations offloaded from other compute nodes and offloading compute operations to other compute nodes, where the other compute nodes can be other autonomous vehicles. The local compute manager can also facilitate autonomous navigation functionality.
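One way to picture the local compute manager's offload decision is a simple capacity-based policy. The policy below (offload to the least-loaded peer once local capacity is exceeded) is an invented heuristic for illustration only:

```python
def plan_offload(local_load, peers, capacity=1.0):
    """Decide where queued workloads run: locally while capacity
    remains, otherwise offloaded to the least-loaded peer compute
    node (e.g. a nearby vehicle). Hypothetical policy.
    """
    plan = []
    load = local_load["used"]
    for job in local_load["queued"]:
        if load + job["cost"] <= capacity:
            load += job["cost"]
            plan.append((job["id"], "local"))
        else:
            peer = min(peers, key=peers.get)  # least-loaded peer
            peers[peer] += job["cost"]
            plan.append((job["id"], peer))
    return plan

print(plan_offload(
    {"used": 0.8, "queued": [{"id": "nav", "cost": 0.1},
                             {"id": "vision", "cost": 0.5}]},
    {"car2": 0.2, "car3": 0.6}))
# [('nav', 'local'), ('vision', 'car2')]
```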
-