-
Publication No.: US10728311B2
Publication Date: 2020-07-28
Application No.: US15434726
Application Date: 2017-02-16
Applicant: Intel Corporation
Inventor: Karthik Kumar , Francesc Guim Bernat , Thomas Willhalm , Nicolas A. Salhuana , Daniel Rivas Barragan
Abstract: A computing device, method, and system to implement an adaptive compression scheme in a network fabric. The computing device may include a memory device and a fabric controller coupled to the memory device. The fabric controller may include processing circuitry having logic to communicate with a plurality of peer computing devices in the network fabric. The logic may be configured to implement the adaptive compression scheme to select, based on static and dynamic information relating to a peer computing device of the plurality of peer computing devices, a compression algorithm to compress a data payload destined for that peer, and to compress the data payload using the selected algorithm. The static information may include the data payload decompression methods supported by the peer computing device, and the dynamic information may include the link load at the peer computing device. The compression may further take into consideration the QoS requirements of the data payload. The computing device may send the data payload to the peer computing device after compressing it.
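A minimal Python sketch of the selection logic this abstract describes, assuming an illustrative set of algorithms and a simple load/QoS policy (none of these names or thresholds come from the patent):

```python
# Hypothetical sketch of the adaptive compression scheme: the fabric
# controller picks a compression algorithm per peer using static info
# (supported decompression methods), dynamic info (link load), and the
# payload's QoS requirement. All names and thresholds are illustrative.

def select_algorithm(supported, link_load, latency_sensitive):
    """Pick a compression algorithm for a peer.

    supported: set of algorithms the peer can decompress (static info).
    link_load: peer link utilization in [0.0, 1.0] (dynamic info).
    latency_sensitive: QoS flag; latency-sensitive payloads avoid
    expensive compression.
    """
    # A heavily loaded link justifies stronger compression to save
    # bandwidth; a lightly loaded link favors cheap (or no) compression.
    if link_load > 0.8:
        preference = ["deflate", "zstd", "lz4", "none"]
    elif link_load > 0.4:
        preference = ["zstd", "lz4", "none"]
    else:
        preference = ["lz4", "none"]
    if latency_sensitive:
        # QoS: cap the choice at the cheapest real algorithm.
        preference = [a for a in preference if a in ("lz4", "none")]
    for algo in preference:
        if algo in supported:
            return algo
    return "none"  # uncompressed is always a valid fallback
```

With a heavily loaded link the strongest mutually supported algorithm wins; a latency-sensitive QoS flag caps the choice at cheap compression.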
-
Publication No.: US10691345B2
Publication Date: 2020-06-23
Application No.: US15719729
Application Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark Schmisseur
Abstract: A memory controller method and apparatus that modify at least one of a first timing scheme or a second timing scheme based on information about one or more data requests to be included in at least one of a first queue scheduler or a second queue scheduler. The first timing scheme indicates when requests in the first queue scheduler are to be issued to the first memory set via a first memory set interface and over a channel; the second timing scheme indicates when requests in the second queue scheduler are to be issued to the second memory set via a second memory set interface and over the same channel. A request may then be issued to at least one of the first memory set in accordance with the modified first timing scheme or the second memory set in accordance with the modified second timing scheme.
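One way to picture a timing-scheme modification based on queued requests is a proportional split of issue slots on the shared channel. This is purely an illustrative policy, not the patented mechanism:

```python
# Illustrative sketch: two queue schedulers share one channel, and their
# timing schemes (here, counts of issue slots per scheduling window) are
# modified based on the requests pending in each queue.

def rebalance_slots(total_slots, queue_a, queue_b):
    """Split issue slots on the shared channel proportionally to queue depth.

    Returns (slots_for_a, slots_for_b); each queue keeps at least one slot
    while it has pending requests.
    """
    depth_a, depth_b = len(queue_a), len(queue_b)
    if depth_a == 0 and depth_b == 0:
        return 0, 0
    if depth_a == 0:
        return 0, total_slots
    if depth_b == 0:
        return total_slots, 0
    slots_a = max(1, round(total_slots * depth_a / (depth_a + depth_b)))
    slots_a = min(slots_a, total_slots - 1)  # leave queue b at least one slot
    return slots_a, total_slots - slots_a
```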
-
Publication No.: US10547680B2
Publication Date: 2020-01-28
Application No.: US14983087
Application Date: 2015-12-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Raj K. Ramanujan , Robert G. Blankenship
Abstract: Systems, methods, and apparatuses for range protection. In some embodiments, an apparatus comprises at least one monitoring circuit to monitor for memory accesses to an address space and take action upon a violation of the address space, wherein the action is one of: generating a notification to the node that requested the monitor, generating a wrong-request response, generating a notification in a specific context of the home node, or generating a notification in a node that has ownership of the address space; at least one protection table to store an identifier of the address space; and at least one hardware core to execute an instruction to enable the monitoring circuit.
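A hedged sketch of the protection-table and monitoring idea; the table layout, ownership rule, and chosen violation action are illustrative assumptions, not the claimed hardware:

```python
# A protection table stores an identifier for a protected address space;
# a monitoring routine checks each access against it and takes one of the
# actions listed in the abstract (here: notify the owning node).

class ProtectionTable:
    def __init__(self):
        self.ranges = {}  # space_id -> (base, limit, owner_node)

    def protect(self, space_id, base, limit, owner_node):
        self.ranges[space_id] = (base, limit, owner_node)

def monitor_access(table, space_id, addr, requesting_node):
    """Return None if the access is allowed, else a violation action."""
    base, limit, owner = table.ranges[space_id]
    if base <= addr < limit and requesting_node != owner:
        # One of the actions from the abstract: notify the node that
        # has ownership of the address space.
        return {"action": "notify_owner", "node": owner, "addr": addr}
    return None
```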
-
Publication No.: US20200007460A1
Publication Date: 2020-01-02
Application No.: US16024465
Application Date: 2018-06-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark A. Schmisseur , Timothy Verrall
IPC: H04L12/919 , H04L12/911 , G06F9/50 , G06F15/18
Abstract: There is disclosed in one example a communication apparatus, including: a telemetry interface; a management interface; and an edge gateway configured to: identify diverted traffic, wherein the diverted traffic includes traffic to be serviced by an edge microcloud configured to provide a plurality of services; receive telemetry via the telemetry interface; use the telemetry to anticipate a future per-service demand within the edge microcloud; compute a scale for a resource to meet the future per-service demand; and operate the management interface to instruct the edge microcloud to perform the scale before the future per-service demand occurs.
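A minimal sketch of the anticipate-then-scale loop this abstract describes. The linear extrapolation from telemetry samples and the per-instance capacity model are assumptions for the example only:

```python
# The edge gateway uses telemetry samples to anticipate future per-service
# demand, then computes how many resource instances to scale to before the
# demand arrives. Both policies below are illustrative stand-ins.

def anticipate_demand(samples):
    """Extrapolate the next demand value from recent telemetry samples."""
    if len(samples) < 2:
        return samples[-1] if samples else 0
    trend = samples[-1] - samples[-2]
    return max(0, samples[-1] + trend)

def compute_scale(predicted_demand, capacity_per_instance):
    """Instances needed to meet the predicted demand (at least one)."""
    instances = -(-predicted_demand // capacity_per_instance)  # ceiling division
    return max(1, instances)
```

The gateway would then instruct the edge microcloud to scale to `compute_scale(...)` instances ahead of the predicted demand.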
-
Publication No.: US20190384516A1
Publication Date: 2019-12-19
Application No.: US16529533
Application Date: 2019-08-01
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , John Chun Kwok LEUNG , Mark Schmisseur , Thomas Willhalm
Abstract: The present disclosure relates to a dynamically composable computing system comprising a computing fabric with a plurality of different disaggregated computing hardware resources having respective hardware characteristics. A resource manager has access to the respective hardware characteristics of the different disaggregated computing hardware resources and is configured to assemble a composite computing node by selecting one or more disaggregated computing hardware resources with respective hardware characteristics meeting requirements of an application to be executed on the composite computing node. An orchestrator is configured to schedule the application using the assembled composite computing node.
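A hedged sketch of the resource-manager step in this abstract: resources carry characteristics, and a composite node is assembled from those meeting the application's requirements. The matching rule (every required key at least the required value) is an assumption for illustration:

```python
# Disaggregated hardware resources in the computing fabric are described by
# characteristic dictionaries; the resource manager picks one resource per
# role whose characteristics meet the application's requirements.

def meets(characteristics, requirements):
    """True if every required characteristic is at least the required value."""
    return all(characteristics.get(k, 0) >= v for k, v in requirements.items())

def assemble_composite_node(pool, requirements_per_role):
    """Pick one resource per role (e.g. 'cpu', 'memory') from the pool."""
    node, used = {}, set()
    for role, reqs in requirements_per_role.items():
        for name, chars in pool.items():
            if name not in used and chars.get("type") == role and meets(chars, reqs):
                node[role] = name
                used.add(name)
                break
        else:
            raise RuntimeError(f"no resource satisfies role {role!r}")
    return node
```

An orchestrator could then schedule the application on the returned composite node.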
-
Publication No.: US10509738B2
Publication Date: 2019-12-17
Application No.: US15201373
Application Date: 2016-07-01
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Narayan Ranganathan , Pete D. Vogt
IPC: G06F13/16 , G06F13/40 , G06F13/42 , H04L12/803 , H04L29/08 , G06F15/173 , H04L12/933
Abstract: An extension of node architecture and proxy requests enables a node to expose memory computation capability to remote nodes. A remote node can request execution of an operation by a remote memory computation resource, and the remote memory computation resource can execute the request locally and return the results of the computation. The node includes processing resources, a fabric interface, and a memory subsystem including a memory computation resource. The local execution of the request by the memory computation resource can reduce latency and bandwidth concerns typical with remote requests.
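A minimal sketch of the proxy-request idea: a remote node asks for an operation to run near the memory, and only the small result crosses the fabric. The operation set and message shape are illustrative assumptions:

```python
# A node exposes memory-side computation to remote peers: instead of
# shipping the raw memory contents, it executes the requested operation
# locally and returns only the result, saving latency and bandwidth.

class MemoryComputeNode:
    """A node whose memory subsystem can execute simple operations locally."""

    OPS = {"sum": sum, "max": max, "min": min}  # illustrative operation set

    def __init__(self, memory):
        self.memory = memory  # address -> value

    def handle_proxy_request(self, op, addresses):
        """Execute op near-memory over the given addresses; return the result."""
        values = [self.memory[a] for a in addresses]
        return self.OPS[op](values)
```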
-
Publication No.: US20190228326A1
Publication Date: 2019-07-25
Application No.: US16367480
Application Date: 2019-03-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar
Abstract: The disclosure is generally directed to systems in which numerous devices arranged to provide data are deployed. The system includes a source processing device arranged to receive data from the data provider devices. The source processing device is arranged to process and/or store all or part of the data based on whether that part can be used to infer the rest of the data. The received data can be identified as either prediction data or response data. A data processing model can be used to generate inferred response data from the prediction data. Where the inferred response data is within an error threshold of the response data, only the prediction data need be stored; the response data can then be reproduced using the data processing model.
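The storage decision above can be sketched in a few lines; the model interface and the absolute-error criterion are assumptions for the example:

```python
# If a data processing model can infer the response data from the
# prediction data within an error threshold, only the prediction data is
# stored and the response data is reproducible on demand.

def decide_storage(prediction, response, model, threshold):
    """Store prediction only if the model reproduces the response."""
    inferred = model(prediction)
    if abs(inferred - response) <= threshold:
        return {"stored": "prediction", "value": prediction}
    # Inference too far off: keep both so nothing is lost.
    return {"stored": "both", "value": (prediction, response)}
```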
-
Publication No.: US20190045005A1
Publication Date: 2019-02-07
Application No.: US15951211
Application Date: 2018-04-12
Applicant: Intel Corporation
Inventor: Timothy Verrall , Mark Schmisseur , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar
IPC: H04L29/08
Abstract: A method for replicating data to one or more distributed network nodes of a network is proposed. A movement of a moving entity whose associated data is stored on a first node of the network is estimated; the moving entity is physically moving between nodes of the network. According to the method, at least a second node of the network is chosen depending on the estimated movement. The method includes replicating the associated data of the first node to the second node or a group of nodes, and managing how data is stored at those nodes based on the moving entity.
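A hedged sketch of the choose-a-node step: estimate where the entity is heading and replicate to the node nearest that position. The dead-reckoning prediction and nearest-node rule are illustrative assumptions:

```python
# Estimate the moving entity's next position from its current position and
# velocity, then pick the network node closest to that prediction as the
# replication target for the entity's associated data.

def predict_position(position, velocity, dt=1.0):
    """Dead-reckoning estimate of the entity's next (x, y) position."""
    return (position[0] + velocity[0] * dt, position[1] + velocity[1] * dt)

def choose_replica_node(nodes, predicted):
    """Pick the network node closest to the predicted position."""
    def dist2(p):
        return (p[0] - predicted[0]) ** 2 + (p[1] - predicted[1]) ** 2
    return min(nodes, key=lambda name: dist2(nodes[name]))
```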
-
Publication No.: US20190044886A1
Publication Date: 2019-02-07
Application No.: US15941943
Application Date: 2018-03-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Anil Rao , Suraj Prabhakaran , Mohan Kumar , Karthik Kumar
IPC: H04L12/947 , H04L12/931 , H04L12/801 , H04L12/66
Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations, and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed, and identify compute requirements for that AFaaS operation. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform. Other embodiments are described and claimed.
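The routing decision above can be sketched as follows; the request fields and the single-number capacity model are illustrative assumptions, not the claimed protocol:

```python
# The network computing device checks whether a request asks for an
# accelerated FaaS (AFaaS) operation, identifies its compute requirements,
# and selects an accelerator platform that can meet them; plain FaaS
# requests stay on the processor platform.

def route_request(request, accelerator_platforms):
    """Return ('cpu', None) for plain FaaS, else the chosen accelerator."""
    if not request.get("accelerated"):
        return ("cpu", None)
    required = request["compute_requirements"]
    for name, capacity in accelerator_platforms.items():
        if capacity >= required:
            return ("accelerator", name)
    raise RuntimeError("no accelerator platform meets the requirements")
```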
-
Publication No.: US20190042490A1
Publication Date: 2019-02-07
Application No.: US15949095
Application Date: 2018-04-10
Applicant: Intel Corporation
Inventor: Mark Schmisseur , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar
Abstract: Examples provide a memory device, a dual inline memory module, a storage device, an apparatus for storing, a method for storing, a computer program, a machine readable storage, and a machine readable medium. A memory device is configured to store data and comprises one or more interfaces configured to receive and to provide data. The memory device further comprises a memory module configured to store the data, and a memory logic component configured to control the one or more interfaces and the memory module. The memory logic component is further configured to receive information on a specific memory region with one or more model identifications, to receive information on an instruction to perform an acceleration function for one or more certain model identifications, and to perform the acceleration function on data in a specific memory region with the one or more certain model identifications.
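A hedged sketch of the memory-logic behavior in this abstract: regions are tagged with model identifications, and an acceleration function runs only on matching regions. The region layout and the stand-in function are illustrative assumptions:

```python
# The memory logic component tracks which memory regions are tagged with
# which model identifications, and applies an acceleration function only
# to data in regions matching the instruction's model IDs.

class MemoryLogic:
    def __init__(self):
        self.regions = {}  # region name -> {"model_ids": set, "data": list}

    def register_region(self, region, model_ids, data):
        self.regions[region] = {"model_ids": set(model_ids), "data": data}

    def accelerate(self, model_ids, fn):
        """Apply fn to every region tagged with any of the given model IDs."""
        touched = []
        for region, info in self.regions.items():
            if info["model_ids"] & set(model_ids):
                info["data"] = [fn(x) for x in info["data"]]
                touched.append(region)
        return touched
```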