-
Publication No.: US20210349840A1
Publication Date: 2021-11-11
Application No.: US17443379
Application Date: 2021-07-26
Applicant: Intel Corporation
Inventor: Karthik Kumar , Francesc Guim Bernat
Abstract: In one embodiment, an apparatus includes: an interface to couple a plurality of devices of a system and enable communication according to a Compute Express Link (CXL) protocol. The interface may receive a consistent memory request having a type indicator to indicate a type of consistency to be applied to the consistent memory request. A request scheduler coupled to the interface may receive the consistent memory request and schedule it for execution according to the type of consistency, based at least in part on a priority of the consistent memory request and one or more pending consistent memory requests. Other embodiments are described and claimed.
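The scheduling behavior this abstract describes can be sketched minimally in Python. The class and field names below are illustrative assumptions, not the claimed apparatus: requests carry a consistency-type indicator and a priority, and the scheduler orders them by priority with arrival order as the tie-breaker.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class ConsistentMemRequest:
    priority: int                                 # lower value = more urgent
    seq: int                                      # arrival order tie-breaker
    consistency_type: str = field(compare=False)  # e.g. "strong" / "eventual"
    payload: object = field(compare=False)

class RequestScheduler:
    """Orders consistent memory requests by priority, then arrival order."""
    def __init__(self):
        self._heap = []
        self._seq = count()

    def submit(self, priority, consistency_type, payload):
        heapq.heappush(self._heap, ConsistentMemRequest(
            priority, next(self._seq), consistency_type, payload))

    def next_request(self):
        return heapq.heappop(self._heap) if self._heap else None
```

A real CXL interface would schedule in hardware; the heap here only models the "priority plus pending requests" ordering the abstract names.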
-
Publication No.: US20210328934A1
Publication Date: 2021-10-21
Application No.: US17359204
Application Date: 2021-06-25
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Marcos Carranza , Rita Wouhaybi , Cesar Martinez-Spessot
IPC: H04L12/851 , H04L29/06
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for edge data prioritization. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of execute or instantiate the instructions to identify an association of a data packet with a data stream based on one or more data stream parameters included in the data packet corresponding to the data stream, the data packet associated with a first priority, execute a model based on the one or more data stream parameters to generate a model output, determine a second priority of at least one of the data packet or the data stream based on the model output, the model output indicative of an adjustment of the first priority to the second priority, and cause transmission of at least one of the data packet or the data stream based on the second priority.
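The two steps in this abstract — associating a packet with a stream via embedded stream parameters, then running a model whose output adjusts the priority — can be sketched as below. The stream table, port numbers, and the lambda standing in for the model are all hypothetical:

```python
def associate_stream(packet, streams):
    """Match a packet to a known data stream by the stream parameters it carries."""
    for stream_id, params in streams.items():
        if all(packet.get(key) == value for key, value in params.items()):
            return stream_id
    return None

def reprioritize(packet, model):
    """Run a model over the packet; its output adjusts the first priority."""
    adjustment = model(packet)          # model output: signed priority delta
    return packet["priority"] + adjustment

# Illustrative data: two streams keyed by source port, and a toy "model"
# that boosts (lowers the numeric value of) video-stream priority.
streams = {"telemetry": {"src_port": 9000}, "video": {"src_port": 5004}}
packet = {"src_port": 5004, "priority": 3}
urgent_video = lambda p: -2 if p.get("src_port") == 5004 else 0
```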
-
Publication No.: US11093287B2
Publication Date: 2021-08-17
Application No.: US16422905
Application Date: 2019-05-24
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Ramanathan Sethuraman , Karthik Kumar , Mark A. Schmisseur , Brinda Ganesh
Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria includes a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
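The removal criteria enumerated here (temporal, physical, and event-based hints, with a QoS condition that modifies the temporal hint) map naturally onto a single predicate. The record keys below are invented for illustration; the abstract does not specify an encoding:

```python
def should_remove(record, now, current_node, capacity_pressure=False):
    """Apply the abstract's criteria: temporal, physical, and event hints,
    with a QoS condition that shortens the temporal hint under pressure."""
    expiry = record.get("temporal_hint")        # time after which data may be removed
    if expiry is not None and capacity_pressure:
        expiry -= record.get("qos_ttl_reduction", 0)
    if expiry is not None and now > expiry:
        return True
    allowed = record.get("physical_hint")       # edge resources the data may live in
    if allowed is not None and current_node not in allowed:
        return True
    return bool(record.get("event_fired"))      # event-based hint already triggered
```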
-
Publication No.: US20210194821A1
Publication Date: 2021-06-24
Application No.: US17195409
Application Date: 2021-03-08
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark A. Schmisseur , Timothy Verrall
IPC: H04L12/919 , H04L12/911 , G06F9/50 , G06N20/00
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
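The predict-then-tune loop described here can be sketched with a trivial stand-in for the AI circuit. An exponentially weighted average over the flow telemetry substitutes for the claimed model, and the tuned "service parameter" is an invented replica count:

```python
def predict_demand(samples, alpha=0.5):
    """Stand-in for the AI circuit: exponentially weighted average of telemetry."""
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

def tune_edge_node(node, prediction, high_watermark=100.0):
    """Adjust a service parameter (replica count, here) from predicted demand."""
    node["replicas"] = 2 if prediction > high_watermark else 1
    return node
```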
-
Publication No.: US10992556B2
Publication Date: 2021-04-27
Application No.: US15613944
Application Date: 2017-06-05
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Susanne M. Balle , Daniel Rivas Barragan , Rahul Khanna
IPC: H04L12/26 , H04L29/08 , H04L12/70 , G06F16/9535
Abstract: Particular embodiments described herein provide for a network element that can be configured to receive a request related to one or more disaggregated resources, link the one or more disaggregated resources to a local counter, receive performance related data from each of the one or more disaggregated resources, and store the performance related data in the local counter. In an example, the one or more disaggregated resources comprise a software defined infrastructure composite node.
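The link-resources-to-counters flow in this abstract reduces to a small aggregation structure. The class and resource names are illustrative only:

```python
class DisaggregatedCounters:
    """Links disaggregated resources of a composite node to local counters
    and accumulates the performance data they report."""
    def __init__(self, resources):
        self.counters = {r: 0 for r in resources}  # one local counter per resource

    def record(self, resource, value):
        self.counters[resource] += value

    def read(self, resource):
        return self.counters[resource]
```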
-
Publication No.: US10936039B2
Publication Date: 2021-03-02
Application No.: US16011842
Application Date: 2018-06-19
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Suraj Prabhakaran , Timothy Verrall , Karthik Kumar , Mark A. Schmisseur
IPC: G06F1/3234 , H04L29/08 , G06F9/50 , H04L12/24 , G06F1/3293 , G06F1/3206 , H04L12/26
Abstract: In one embodiment, an apparatus of an edge computing system includes memory that includes instructions and processing circuitry coupled to the memory. The processing circuitry implements the instructions to process a request to execute at least a portion of a workflow on pooled computing resources, the workflow being associated with a particular tenant, determine an amount of power to be allocated to particular resources of the pooled computing resources for execution of the portion of the workflow based on a power budget associated with the tenant and a current power cost, and control allocation of the determined amount of power to the particular resources of the pooled computing resources during execution of the portion of the workflow.
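The budget-and-cost calculation this abstract gestures at — power granted to a tenant's workload is bounded by its budget at the current power cost — can be sketched in one function. The units and formula are assumptions, not the claimed method:

```python
def allocate_power(requested_watts, tenant_budget, cost_per_watt):
    """Grant the requested power, capped by what the tenant's power budget
    affords at the current power cost."""
    affordable_watts = tenant_budget / cost_per_watt
    return min(requested_watts, affordable_watts)
```

For example, a tenant with a budget of 50 units can draw at most 25 W when power costs 2 units per watt.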
-
Publication No.: US10915791B2
Publication Date: 2021-02-09
Application No.: US15855891
Application Date: 2017-12-27
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Mark A. Schmisseur , Thomas Willhalm
Abstract: Technology for a memory controller is described. The memory controller can receive a request to store training data. The request can include a model identifier (ID) that identifies a model that is associated with the training data. The memory controller can send a write request to store the training data associated with the model ID in a memory region in a pooled memory that is allocated for the model ID. The training data that is stored in the memory region in the pooled memory can be addressable based on the model ID.
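Addressing pooled memory by model ID rather than by raw address, as this abstract describes, can be sketched as a keyed region map. The class shape is an assumption; the real controller operates on memory regions, not Python lists:

```python
class PooledMemoryController:
    """Keeps a per-model region in a pooled memory; training data is
    addressed by model ID rather than by raw address."""
    def __init__(self):
        self._regions = {}                       # model_id -> region contents

    def write(self, model_id, training_data):
        self._regions.setdefault(model_id, []).append(training_data)

    def read(self, model_id):
        return list(self._regions.get(model_id, []))
```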
-
Publication No.: US10649813B2
Publication Date: 2020-05-12
Application No.: US15929005
Application Date: 2018-03-29
Applicant: Intel Corporation
Inventor: Mark A. Schmisseur , Francesc Guim Bernat , Andrew J. Herdrich , Karthik Kumar
Abstract: Technology for a memory pool arbitration apparatus is described. The apparatus can include a memory pool controller (MPC) communicatively coupled between a shared memory pool of disaggregated memory devices and a plurality of compute resources. The MPC can receive a plurality of data requests from the plurality of compute resources. The MPC can assign each compute resource to one of a set of compute resource priorities. The MPC can send memory access commands to the shared memory pool to perform each data request prioritized according to the set of compute resource priorities. The apparatus can include a priority arbitration unit (PAU) communicatively coupled to the MPC. The PAU can arbitrate the plurality of data requests as a function of the corresponding compute resource priorities.
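The priority arbitration unit (PAU) described here orders data requests by the priority class assigned to each issuing compute resource; a minimal sketch, with invented resource names and a sort standing in for hardware arbitration:

```python
class PriorityArbitrationUnit:
    """Orders pending data requests by the priority class assigned to the
    compute resource that issued each request."""
    def __init__(self, resource_priority):
        # resource -> priority class (0 = highest)
        self.resource_priority = resource_priority

    def arbitrate(self, requests):
        return sorted(requests,
                      key=lambda req: self.resource_priority[req["resource"]])
```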
-
Publication No.: US10608956B2
Publication Date: 2020-03-31
Application No.: US14973155
Application Date: 2015-12-17
Applicant: Intel Corporation
Inventor: Francesc Cesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Raj K. Ramanujan , Narayan Ranganathan
IPC: H04L12/931 , H04L1/00 , H04L12/18 , H04L12/26 , H04L12/927 , H04L12/24
Abstract: Described herein are devices and techniques for distributing application data. A device can communicate with one or more hardware switches. The device can receive, from a software stack, a multicast message including a constraint that indicates how application data is to be distributed. The constraint including a listing of the set of nodes and a number of nodes to which the application data is to be distributed. The device may receive, from the software stack, the application data for distribution to a plurality of nodes. The plurality of nodes being a subset of the set of nodes equaling the number of nodes. The device may select the plurality of nodes from the set of nodes. The device also may distribute a copy of the application data to the plurality of nodes based on the constraint. Also described are other embodiments.
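The constraint this abstract describes — a candidate node set plus the number of copies to place — can be sketched as a selection-and-fan-out step. The abstract does not specify a selection policy, so the slice below is a deliberate placeholder:

```python
def distribute(app_data, node_set, copy_count):
    """Pick `copy_count` nodes from the constraint's node set and hand each
    a copy of the application data (selection policy is a placeholder)."""
    if copy_count > len(node_set):
        raise ValueError("constraint asks for more copies than candidate nodes")
    chosen = node_set[:copy_count]     # any policy could stand here
    return {node: app_data for node in chosen}
```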
-
Publication No.: US10509728B2
Publication Date: 2019-12-17
Application No.: US15719618
Application Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Mark Schmisseur , Thomas Willhalm
Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a request from a core, the request associated with a memory operation to read or write data, and the request comprising a first address and an offset, the first address to identify a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the computing resource or perform a second iteration of the memory indirection operation.
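The iterated indirection in this abstract — read the pointer at the first address, apply the offset, and either land on the target memory resource or perform another iteration — can be modeled over a flat dictionary. The `("ptr", next)` tagging is an invented encoding for "perform a second iteration":

```python
def resolve_indirect(memory, address, offset, max_depth=8):
    """One iteration reads the pointer at `address`, adds `offset`, and lands
    on either the target resource or a cell requesting another iteration."""
    for _ in range(max_depth):
        second_address = memory[address]           # first read: stored pointer
        cell = memory[second_address + offset]
        if isinstance(cell, tuple) and cell[0] == "ptr":
            address = cell[1]                      # next indirection iteration
        else:
            return cell                            # resource holding the data
    raise RuntimeError("indirection chain exceeded max_depth")
```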