Device, system and method for adaptive payload compression in a network fabric

    Publication Number: US10728311B2

    Publication Date: 2020-07-28

    Application Number: US15434726

    Application Date: 2017-02-16

    Abstract: A computing device, method and system to implement an adaptive compression scheme in a network fabric. The computing device may include a memory device and a fabric controller coupled to the memory device. The fabric controller may include processing circuitry having logic to communicate with a plurality of peer computing devices in the network fabric. The logic may be configured to implement the adaptive compression scheme to select, based on static information and on dynamic information relating to a peer computing device of the plurality of peer computing devices, a compression algorithm to compress a data payload destined for the peer computing device, and to compress the data payload based on the compression algorithm. The static information may include the data payload decompression methods supported by the peer computing device, and the dynamic information may include information on link load at the peer computing device. The compression may further take into consideration QoS requirements of the data payload. The computing device may send the data payload to the peer computing device after compression.
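The selection step the abstract describes can be illustrated with a minimal Python sketch. The function names, the link-load threshold, and the use of `zlib` levels as stand-ins for distinct compression algorithms are illustrative assumptions, not details from the patent:

```python
import zlib

def select_algorithm(supported, link_load, latency_sensitive):
    """Pick a compression setting from static peer capabilities (supported
    methods) and dynamic state (link load), honoring payload QoS."""
    if "zlib" not in supported:
        return None          # peer cannot decompress; send uncompressed
    if latency_sensitive:
        return 1             # QoS: fastest setting for latency-bound payloads
    # Congested peer link: spend more CPU to shrink the payload further.
    return 9 if link_load > 0.7 else 6

def compress_payload(payload, supported, link_load, latency_sensitive):
    level = select_algorithm(supported, link_load, latency_sensitive)
    if level is None:
        return payload
    return zlib.compress(payload, level)
```

The key point is that the sender consults both kinds of information before every send, so the choice adapts as link load at the peer changes.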

    Systems, methods and apparatus for memory access and scheduling

    Publication Number: US10691345B2

    Publication Date: 2020-06-23

    Application Number: US15719729

    Application Date: 2017-09-29

    Abstract: A memory controller method and apparatus include modifying at least one of a first timing scheme or a second timing scheme based on information about one or more data requests included in at least one of a first queue scheduler or a second queue scheduler. The first timing scheme indicates when requests in the first queue scheduler are to be issued to the first memory set via a first memory set interface and over a channel; the second timing scheme indicates when requests in the second queue scheduler are to be issued to the second memory set via a second memory set interface and over the same channel. A request may then be issued to at least one of the first memory set in accordance with the modified first timing scheme or the second memory set in accordance with the modified second timing scheme.
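A toy software model can make the two-queue arrangement concrete. The class below is an illustrative sketch only: the rule that a deeper queue gets a shorter issue interval is an assumed policy, not the patent's actual timing scheme:

```python
from collections import deque

class DualQueueScheduler:
    """Two per-memory-set request queues sharing one channel; each
    queue's issue interval (its timing scheme) is modified based on
    information about the requests pending in it."""

    def __init__(self, base_interval=4):
        self.queues = {"set0": deque(), "set1": deque()}
        self.intervals = {"set0": base_interval, "set1": base_interval}
        self.base = base_interval

    def enqueue(self, memory_set, request):
        self.queues[memory_set].append(request)
        # Assumed policy: a deeper queue gets a shorter interval so its
        # requests win the shared channel more often.
        depth = len(self.queues[memory_set])
        self.intervals[memory_set] = max(1, self.base - depth)

    def issue(self, memory_set):
        """Issue the oldest pending request for the given memory set."""
        q = self.queues[memory_set]
        return q.popleft() if q else None
```

In hardware the modification would be driven by request attributes (type, bank, timing constraints) rather than raw depth; the sketch only shows the feedback loop from queue state to timing.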

    Systems, methods, and apparatuses for range protection

    Publication Number: US10547680B2

    Publication Date: 2020-01-28

    Application Number: US14983087

    Application Date: 2015-12-29

    Abstract: Systems, methods, and apparatuses for range protection. In some embodiments, an apparatus comprises at least one monitoring circuit to monitor for memory accesses to an address space and take action upon a violation to the address space, wherein the action is one of generating a notification to a node that requested the monitor, generating a wrong request, generating a notification in a specific context of the home node, and generating a notification in a node that has ownership of the address space; at least one protection table to store an identifier of the address space; and at least one hardware core to execute an instruction to enable the monitoring circuit.
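A software analogue shows the check the monitoring circuit performs against the protection table. The table layout and the "notify the requesting node" action are the only behaviors modeled; names are illustrative:

```python
class RangeMonitor:
    """Software analogue of the monitoring circuit: a protection table
    stores protected address ranges, and every access is checked
    against it; a violation triggers a notification."""

    def __init__(self):
        self.protection_table = []   # entries of (base, limit, owner)
        self.notifications = []

    def protect(self, base, limit, owner):
        """Enable monitoring of [base, limit) on behalf of `owner`."""
        self.protection_table.append((base, limit, owner))

    def access(self, addr, requester):
        """Return True if the access is allowed; on a violation, record
        a notification (one of the actions listed in the abstract)."""
        for base, limit, owner in self.protection_table:
            if base <= addr < limit and requester != owner:
                self.notifications.append((requester, addr, owner))
                return False
        return True
```

In the hardware described, the check happens on the memory-access path and the table holds compact range identifiers; the sketch only captures the lookup-and-act control flow.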

    SCALABLE EDGE COMPUTING

    Publication Number: US20200007460A1

    Publication Date: 2020-01-02

    Application Number: US16024465

    Application Date: 2018-06-29

    Abstract: There is disclosed in one example a communication apparatus, including: a telemetry interface; a management interface; and an edge gateway configured to: identify diverted traffic, wherein the diverted traffic includes traffic to be serviced by an edge microcloud configured to provide a plurality of services; receive telemetry via the telemetry interface; use the telemetry to anticipate a future per-service demand within the edge microcloud; compute a scale for a resource to meet the future per-service demand; and operate the management interface to instruct the edge microcloud to perform the scale before the future per-service demand occurs.
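The anticipate-then-scale loop can be sketched in a few lines. The linear-trend predictor and the per-instance capacity model are illustrative assumptions; the patent's telemetry-based anticipation is not specified at this level:

```python
import math

def anticipate_demand(telemetry, window=3):
    """Anticipate next-interval per-service demand from recent telemetry
    samples using a simple linear trend (assumed predictor)."""
    recent = telemetry[-window:]
    trend = (recent[-1] - recent[0]) / max(1, len(recent) - 1)
    return recent[-1] + trend

def compute_scale(predicted_demand, capacity_per_instance):
    """Number of service instances needed to absorb the predicted demand."""
    return math.ceil(predicted_demand / capacity_per_instance)
```

The edge gateway would then use its management interface to bring the service to the computed scale before the demand arrives, rather than reacting after queues build up.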

    DEEP LEARNING DATA MANIPULATION FOR MULTI-VARIABLE DATA PROVIDERS

    Publication Number: US20190228326A1

    Publication Date: 2019-07-25

    Application Number: US16367480

    Application Date: 2019-03-28

    Abstract: The disclosure is generally directed to systems in which numerous devices arranged to provide data are deployed. The system includes a source processing device arranged to receive data from the data provider devices. The source processing device is arranged to process and/or store all or a part of the data based on whether that part of the data can be used to infer the rest of the data. The received data can be identified as either prediction data or response data. A data processing model can be used to generate inferred response data from the prediction data. Where the inferred response data is within an error threshold of the response data, the prediction data can be stored. As such, the response data can be reproduced using the data processing model.
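The storage decision reduces to one comparison, sketched below. The example model (a linear relation between the two variables) is purely hypothetical; the patent assumes a trained data processing model:

```python
def store_prediction_only(model, prediction_data, response_data, error_threshold):
    """Return True when only the prediction data needs storing: the
    model's inferred response is within the error threshold of the real
    response, so the response can be reproduced later instead of kept."""
    inferred = model(prediction_data)
    return abs(inferred - response_data) <= error_threshold

# Hypothetical trained model: the response variable tracks twice the
# prediction variable (illustrative stand-in for a learned relation).
model = lambda prediction: 2.0 * prediction
```

When the check fails, both the prediction data and the response data are stored, since the model cannot reproduce the response accurately enough.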

    Method for replicating data in a network and a network component

    Publication Number: US20190045005A1

    Publication Date: 2019-02-07

    Application Number: US15951211

    Application Date: 2018-04-12

    Abstract: A method for replicating data to one or more distributed network nodes of a network is proposed. A movement of a moving entity having associated data stored on a first node of the network is estimated. The moving entity physically moves between nodes of the network. According to the method, at least a second node of the network is chosen depending on the estimated movement. The method includes replicating the associated data of the first node to the second node or a group of nodes, and managing how data is stored at those nodes based on the moving entity.
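The estimate-choose-replicate sequence can be sketched as follows. The linear extrapolation of the entity's last step and the nearest-node rule are assumed estimators, not the method claimed in the patent:

```python
def predict_next_node(trajectory, topology):
    """Linearly extrapolate the entity's last movement step and choose
    the network node closest to the predicted position (assumed rule)."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    px, py = 2 * x1 - x0, 2 * y1 - y0      # one step ahead
    return min(topology,
               key=lambda n: (topology[n][0] - px) ** 2
                           + (topology[n][1] - py) ** 2)

def replicate(store, first_node, second_node, entity):
    """Copy the entity's associated data from the first node to the
    chosen second node."""
    store.setdefault(second_node, {})[entity] = store[first_node][entity]
```

Replicating ahead of the movement means the data is already local when the entity (for example, a vehicle) attaches to the second node.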

    TECHNOLOGIES FOR ACCELERATING EDGE DEVICE WORKLOADS

    Publication Number: US20190044886A1

    Publication Date: 2019-02-07

    Application Number: US15941943

    Application Date: 2018-03-30

    Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
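The dispatch decision the abstract walks through can be sketched as a routing function. The request fields, capacity numbers, and smallest-fit selection rule are illustrative assumptions:

```python
def route_request(request, accelerators):
    """Route a FaaS request: requests flagged for AFaaS go to an
    accelerator platform meeting their compute requirement; everything
    else (or an unsatisfiable requirement) runs on the processor platform."""
    if not request.get("afaas"):
        return "processor"
    need = request["compute_req"]
    fits = [name for name, capacity in accelerators.items() if capacity >= need]
    if not fits:
        return "processor"            # fall back to non-accelerated FaaS
    # Assumed policy: smallest platform that satisfies the requirement,
    # leaving larger accelerators free for heavier operations.
    return min(fits, key=lambda name: accelerators[name])
```

The request is then forwarded to the returned platform, which performs the identified AFaaS operation.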
