-
Publication No.: US20230205606A1
Publication Date: 2023-06-29
Application No.: US17922277
Filing Date: 2021-03-26
Applicant: Intel Corporation
Inventor: Stephen Palermo , Neelam Chandwani , Kshitij Doshi , Chetan Hiremath , Rajesh Gadiyar , Udayan Mukherjee , Daniel Towner , Valerie Parker , Shubha Bommalingaiahnapallya , Rany ElSayed
IPC: G06F9/50
CPC classification number: G06F9/5094 , G06F9/505 , G06F9/5044
Abstract: Systems, apparatus, and methods to workload-optimize hardware are disclosed herein. An example apparatus includes power control circuitry to determine an application ratio based on an instruction to be executed by one or more cores of a processor to execute a workload, and configure, before the execution of the workload, at least one of (i) the one or more cores of the processor based on the application ratio or (ii) uncore logic of the processor based on the application ratio, and execution circuitry to execute the workload with the at least one of the one or more cores or the uncore logic.
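The configuration flow the abstract describes can be pictured in a short Python sketch. The function names (estimate_application_ratio, configure_before_run), the instruction categories, and the frequency values below are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class CoreConfig:
    core_freq_mhz: int
    uncore_freq_mhz: int

def estimate_application_ratio(instruction_mix: dict) -> float:
    """Hypothetical: fraction of heavy (e.g. wide-vector) instructions in the workload."""
    heavy = instruction_mix.get("avx512", 0) + instruction_mix.get("amx", 0)
    total = sum(instruction_mix.values()) or 1
    return heavy / total

def configure_before_run(instruction_mix: dict) -> CoreConfig:
    """Pick core/uncore operating points from the application ratio before the workload runs."""
    ratio = estimate_application_ratio(instruction_mix)
    if ratio > 0.5:          # vector-heavy: trade core clock for uncore bandwidth
        return CoreConfig(core_freq_mhz=2200, uncore_freq_mhz=2400)
    return CoreConfig(core_freq_mhz=3000, uncore_freq_mhz=1800)

if __name__ == "__main__":
    print(configure_before_run({"scalar": 700, "avx512": 300}))
```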
-
Publication No.: US11243817B2
Publication Date: 2022-02-08
Application No.: US16369036
Filing Date: 2019-03-29
Applicant: Intel Corporation
Inventor: Evan Custodio , Francesc Guim Bernat , Suraj Prabhakaran , Trevor Cooper , Ned M. Smith , Kshitij Doshi , Petar Torre
Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
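A minimal Python sketch of the orchestration steps in this abstract follows; the EdgeLocation type, the migration trigger, and the "transformed workload data" placeholder are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeLocation:
    name: str
    accelerators: list = field(default_factory=list)

def dispatch_workload(workload: str, edge: EdgeLocation) -> list:
    """Pick accelerator(s) on the present edge location to process the workload."""
    return edge.accelerators[:1]          # simplistic: first available accelerator

def should_migrate(device_moving_away: bool, load: float) -> bool:
    """Decide whether workload state should follow the requester to another edge location."""
    return device_moving_away or load > 0.9

def migrate(workload: str, src_accels: list, dst_edge: EdgeLocation) -> str:
    """Ask the source accelerators for transformed workload data, then hand it to the new edge."""
    transformed = f"state-of({workload})-from-{src_accels}"
    return f"sent {transformed} to {dst_edge.name}"

if __name__ == "__main__":
    present = EdgeLocation("edge-A", ["fpga0"])
    accels = dispatch_workload("video-analytics", present)
    if should_migrate(device_moving_away=True, load=0.2):
        print(migrate("video-analytics", accels, EdgeLocation("edge-B", ["fpga3"])))
```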
-
Publication No.: US20220014591A1
Publication Date: 2022-01-13
Application No.: US17353142
Filing Date: 2021-06-21
Applicant: Intel Corporation
Inventor: Tao Zhong , Gang Deng , Zhongyan Lu , Kshitij Doshi
Abstract: Methods and apparatus to adaptively manage data collection devices in distributed computing systems are disclosed. Example disclosed methods involve instructing a first data collection device to operate according to a first rule. The example first rule specifies a first operating mode and defines a first event of interest. Example disclosed methods also involve obtaining first data from the first data collection device while operating according to the first rule. Example disclosed methods also involve, in response to determining that the first event of interest has occurred based on the first data, providing a second rule based on the first data to the first data collection device, and providing a third rule to a second data collection device. The example second rule specifies a second operating mode and defines a second event of interest, and the example third rule specifies a third operating mode.
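The rule-driven escalation the abstract describes might look like the following Python sketch; the Rule fields, the threshold of 40.0, and the FakeSensor stand-in are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    operating_mode: str               # e.g. "low-power", "high-rate"
    event_of_interest: Optional[str]  # condition that escalates monitoring, if any

def event_occurred(rule: Rule, sample: float) -> bool:
    """Hypothetical event check: the first rule watches for readings above a threshold."""
    return rule.event_of_interest == "above-threshold" and sample > 40.0

class FakeSensor:
    """Stand-in for a data collection device that accepts rules and returns readings."""
    def __init__(self, value): self.value = value
    def apply(self, rule): print("applying", rule)
    def read(self): return self.value

def manage(first_device, second_device):
    first_rule = Rule("low-power", "above-threshold")
    first_device.apply(first_rule)
    sample = first_device.read()                       # first data
    if event_occurred(first_rule, sample):
        first_device.apply(Rule("high-rate", "sustained-above-threshold"))  # second rule
        second_device.apply(Rule("high-rate", None))                        # third rule

if __name__ == "__main__":
    manage(FakeSensor(42.0), FakeSensor(20.0))
```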
-
Publication No.: US11157642B2
Publication Date: 2021-10-26
Application No.: US16143724
Filing Date: 2018-09-27
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Mark Schmisseur , Kshitij Doshi , Kapil Sood , Tarun Viswanathan
Abstract: An embodiment of a semiconductor apparatus may include technology to receive data with a unique identifier, and bypass encryption logic of a media controller based on the unique identifier. Other embodiments are disclosed and claimed.
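As a rough illustration of the bypass decision, the Python sketch below keys encryption on a set of bypass identifiers; the XOR "encryption" and the identifier names are placeholders, not the claimed media-controller logic.

```python
class MediaController:
    """Sketch: route writes around the encryption engine when the ID is on a bypass list."""
    def __init__(self, bypass_ids):
        self.bypass_ids = set(bypass_ids)

    def encrypt(self, data: bytes) -> bytes:
        return bytes(b ^ 0x5A for b in data)   # stand-in for the real encryption logic

    def write(self, unique_id: str, data: bytes) -> bytes:
        if unique_id in self.bypass_ids:       # bypass encryption for this identifier
            return data
        return self.encrypt(data)

if __name__ == "__main__":
    mc = MediaController(bypass_ids={"already-encrypted-tenant"})
    print(mc.write("already-encrypted-tenant", b"ciphertext"))
    print(mc.write("plain-tenant", b"secret"))
```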
-
Publication No.: US20210328886A1
Publication Date: 2021-10-21
Application No.: US17359349
Filing Date: 2021-06-25
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Kshitij Doshi , Marcos Carranza , Thijs Metsch , Adrian Hoban
Abstract: Example methods, apparatus, and systems to facilitate service proxying are disclosed. An example apparatus includes interface circuitry to access a service request intercepted by an infrastructure processing unit, the service request corresponding to a first node; instructions in the apparatus; and infrastructure sidecar circuitry to execute the instructions to: identify an active service instance corresponding to the service request; compare first telemetry data corresponding to the active service instance to a service quality metric; select a second node to service the service request based on the comparison and further telemetry data; and cause transmission of the service request to the second node.
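The node-selection step could be sketched as follows in Python; the latency-based service quality metric and the ServiceInstance fields are assumptions chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    node: str
    latency_ms: float     # telemetry for this instance

def select_node(request: str, active: ServiceInstance,
                candidates: list, latency_slo_ms: float) -> str:
    """Keep the active instance if it meets the quality metric; otherwise pick the best candidate."""
    if active.latency_ms <= latency_slo_ms:
        return active.node
    best = min(candidates, key=lambda inst: inst.latency_ms)
    return best.node

if __name__ == "__main__":
    active = ServiceInstance("node-1", latency_ms=35.0)
    candidates = [ServiceInstance("node-2", 12.0), ServiceInstance("node-3", 20.0)]
    target = select_node("GET /inference", active, candidates, latency_slo_ms=25.0)
    print("forwarding request to", target)     # transmission to the second node
```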
-
Publication No.: US11093398B2
Publication Date: 2021-08-17
Application No.: US16457826
Filing Date: 2019-06-28
Applicant: Intel Corporation
Inventor: Kshitij Doshi , Harald Servat , Francesc Guim Bernat
IPC: G06F12/0831 , G06F9/54 , G06F12/0842
Abstract: Embodiments may include systems and methods for performing remote memory operations in a shared memory address space. An apparatus includes a first network controller coupled to a first processor core. The first network controller processes a remote memory operation request, which is generated by a first memory coherency agent based on a first memory operation for an application operating on the first processor core. The remote memory operation request is associated with a remote memory address that is local to a second processor core coupled to the first processor core. The first network controller forwards the remote memory operation request to a second network controller coupled to the second processor core. The second processor core and the second network controller are to carry out a second memory operation to extend the first memory operation as a remote memory operation. Other embodiments may be described and/or claimed.
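A toy Python model of the forwarding behavior described above; real hardware would use memory coherency agents and fabric messages rather than Python dictionaries, so treat the addresses and the handle() interface as illustrative only.

```python
class NetworkController:
    """Sketch: forward a memory operation whose address is homed on a remote core's memory."""
    def __init__(self, name, local_memory, peer=None):
        self.name = name
        self.local_memory = local_memory     # dict: address -> value
        self.peer = peer                     # second network controller

    def handle(self, op, addr, value=None):
        if addr in self.local_memory:        # address is local: carry out the operation here
            if op == "store":
                self.local_memory[addr] = value
            return self.local_memory[addr]
        return self.peer.handle(op, addr, value)   # otherwise forward to the remote controller

if __name__ == "__main__":
    remote = NetworkController("nc-2", {0x2000: 0})
    local = NetworkController("nc-1", {0x1000: 7}, peer=remote)
    local.handle("store", 0x2000, 99)        # remote memory operation, completed by nc-2
    print(local.handle("load", 0x2000))      # -> 99
```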
-
Publication No.: US20200320003A1
Publication Date: 2020-10-08
Application No.: US16907729
Filing Date: 2020-06-22
Applicant: Intel Corporation
Inventor: Vadim Sukhomlinov , Kshitij Doshi
IPC: G06F12/0804 , G06F12/0875 , G06F12/0891
Abstract: The present disclosure is directed to systems and methods that include cache operation storage circuitry that selectively enables/disables the Cache Line Flush (CLFLUSH) operation. The cache operation storage circuitry may also selectively replace the CLFLUSH operation with one or more replacement operations that provide similar functionality but beneficially and advantageously prevent an attacker from placing processor cache circuitry in a known state during a timing-based side-channel attack such as Spectre or Meltdown. The cache operation storage circuitry includes model specific registers (MSRs) that contain information used to determine whether to enable/disable CLFLUSH functionality. The MSRs may also contain information used to select appropriate replacement operations, such as Cache Line Demote (CLDEMOTE) and/or Cache Line Write Back (CLWB), to selectively replace CLFLUSH operations.
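One way to picture the selection logic is the Python sketch below; the MSR bit layout (bit 0 disables CLFLUSH, bit 1 prefers CLWB) is a hypothetical encoding, not the one defined by the patent.

```python
from enum import Enum

class CacheOp(Enum):
    CLFLUSH = "clflush"
    CLDEMOTE = "cldemote"
    CLWB = "clwb"

# Hypothetical MSR bit layout: bit 0 disables CLFLUSH, bit 1 prefers CLWB over CLDEMOTE.
def resolve_cache_op(requested: CacheOp, msr_value: int) -> CacheOp:
    """Return the operation actually issued when software requests a CLFLUSH."""
    if requested is not CacheOp.CLFLUSH:
        return requested
    if msr_value & 0b01:                       # CLFLUSH disabled: substitute a replacement op
        return CacheOp.CLWB if msr_value & 0b10 else CacheOp.CLDEMOTE
    return CacheOp.CLFLUSH

if __name__ == "__main__":
    print(resolve_cache_op(CacheOp.CLFLUSH, msr_value=0b01))   # -> CLDEMOTE
    print(resolve_cache_op(CacheOp.CLFLUSH, msr_value=0b11))   # -> CLWB
```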
-
Publication No.: US20200302301A1
Publication Date: 2020-09-24
Application No.: US16894535
Filing Date: 2020-06-05
Applicant: Intel Corporation
Inventor: Glen J. Anderson , Rajesh Poornachandran , Ignacio Alvarez , Giuseppe Raffa , Jill Boyce , Ankur Agrawal , Kshitij Doshi
Abstract: Logic may determine a specific performance of a neural network based on an event and may present the specific performance to provide a user with an explanation of the inference made by a machine learning model such as a neural network. Logic may determine a first activation profile associated with the event, the first activation profile based on activation of nodes in one or more layers of the neural network during inference to generate an output. Logic may correlate the first activation profile against a second activation profile associated with a first training sample of training data. Logic may determine that the first training sample is associated with the event based on the correlation. Logic may output an indicator to identify the first training sample as being associated with the event.
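The profile-correlation idea can be sketched with NumPy as shown below; the flattening of activations into one vector and the use of Pearson correlation are simplifying assumptions rather than the patent's exact method.

```python
import numpy as np

def activation_profile(activations: list) -> np.ndarray:
    """Flatten per-layer activations into a single profile vector."""
    return np.concatenate([a.ravel() for a in activations])

def best_matching_sample(event_profile: np.ndarray, training_profiles: dict) -> str:
    """Return the training sample whose stored profile correlates most with the event profile."""
    scores = {name: float(np.corrcoef(event_profile, prof)[0, 1])
              for name, prof in training_profiles.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    event = activation_profile([rng.normal(size=(4,)), rng.normal(size=(3,))])
    stored = {"sample_a": event + rng.normal(scale=0.1, size=7),   # similar profile
              "sample_b": rng.normal(size=7)}
    print("inference explained by:", best_matching_sample(event, stored))
```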
-
Publication No.: US10713173B2
Publication Date: 2020-07-14
Application No.: US16123818
Filing Date: 2018-09-06
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Doshi
IPC: G06F12/12 , G06F12/0862
Abstract: Embodiments of the present disclosure relate to a controller that includes a monitor to determine an access pattern for a range of memory of a first computer memory device, and a pre-loader to pre-load a second computer memory device with a copy of a subset of the range of memory based at least in part on the access pattern, wherein the subset includes a plurality of cache lines. In some embodiments, the controller includes a specifier and the monitor determines the access pattern based at least in part on one or more configuration elements in the specifier. Other embodiments may be described and/or claimed.
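A compact Python sketch of a monitor/pre-loader pair in the spirit of this abstract; the stride-detection heuristic, line count, and line size are illustrative choices, not values from the patent.

```python
from collections import Counter

class Monitor:
    """Track accesses within a monitored address range and report the dominant stride."""
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.accesses = []

    def record(self, addr):
        if self.start <= addr < self.end:
            self.accesses.append(addr)

    def dominant_stride(self):
        strides = [b - a for a, b in zip(self.accesses, self.accesses[1:])]
        return Counter(strides).most_common(1)[0][0] if strides else None

def preload(monitor, fast_memory, slow_memory, lines=4, line_size=64):
    """Copy the next few cache lines along the detected stride into the faster device."""
    stride = monitor.dominant_stride()
    if stride is None:
        return
    last = monitor.accesses[-1]
    for i in range(1, lines + 1):
        addr = last + i * stride
        for off in range(0, line_size, 8):
            fast_memory[addr + off] = slow_memory.get(addr + off, 0)

if __name__ == "__main__":
    mon = Monitor(0x1000, 0x9000)
    for a in (0x1000, 0x1100, 0x1200, 0x1300):
        mon.record(a)
    fast, slow = {}, {a: a for a in range(0x1000, 0x9000, 8)}
    preload(mon, fast, slow)
    print(len(fast), "words pre-loaded, starting at", hex(min(fast)))
```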
-
Publication No.: US10534710B2
Publication Date: 2020-01-14
Application No.: US16015880
Filing Date: 2018-06-22
Applicant: Intel Corporation
Inventor: Kshitij Doshi , Bhanu Shankar
IPC: G06F12/0897 , G06F12/0804 , G06F12/084 , G06F12/126 , G06F12/0868 , G06F12/0873
Abstract: In embodiments, an apparatus may include a cache controller (CC) and a last-level cache (LLC) coupled to the CC, the CC to reserve a defined portion of the LLC where data objects whose home location is in a non-volatile memory (NVM) are given placement priority. In embodiments, the apparatus may be further coupled to at least one lower level cache and a second LLC, wherein the CC may further identify modified data objects in the at least one lower level cache whose home location is in a second NVM, and in response to the identification, cause the modified data objects to be written from the lower level cache to the second LLC, the second LLC located in a same socket as the second NVM.
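The placement-priority idea can be illustrated with the Python sketch below; the way counts and the reserved/general split are arbitrary assumptions used only to show the reservation behavior.

```python
class CacheController:
    """Sketch: give placement priority in a reserved LLC region to NVM-homed data objects."""
    def __init__(self, llc_ways=16, reserved_for_nvm=4):
        self.reserved = reserved_for_nvm
        self.general = llc_ways - reserved_for_nvm
        self.reserved_used = 0
        self.general_used = 0

    def place(self, obj_id: str, home_is_nvm: bool) -> str:
        # NVM-homed objects fill the reserved portion first; others never use it.
        if home_is_nvm and self.reserved_used < self.reserved:
            self.reserved_used += 1
            return f"{obj_id}: reserved LLC way"
        if self.general_used < self.general:
            self.general_used += 1
            return f"{obj_id}: general LLC way"
        return f"{obj_id}: evict and retry"

if __name__ == "__main__":
    cc = CacheController()
    print(cc.place("obj0", home_is_nvm=True))
    print(cc.place("obj1", home_is_nvm=False))
```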