System, Apparatus And Methods For Handling Consistent Memory Transactions According To A CXL Protocol

    Publication number: US20210349840A1

    Publication date: 2021-11-11

    Application number: US17443379

    Application date: 2021-07-26

    Abstract: In one embodiment, an apparatus includes: an interface to couple a plurality of devices of a system and enable communication according to a Compute Express Link (CXL) protocol. The interface may receive a consistent memory request having a type indicator to indicate a type of consistency to be applied to the consistent memory request. A request scheduler coupled to the interface may receive the consistent memory request and schedule it for execution according to the type of consistency, based at least in part on a priority of the consistent memory request and one or more pending consistent memory requests. Other embodiments are described and claimed.
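    The scheduling behavior described in this abstract can be illustrated with a small sketch. All class, field, and method names here are hypothetical; the patent does not disclose an implementation. Each request carries a consistency-type indicator and a priority, and the scheduler orders pending requests by priority, then by arrival order.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class ConsistentMemoryRequest:
    priority: int       # lower value = higher priority
    seq: int            # tie-breaker preserving arrival order
    consistency_type: str = field(compare=False)  # type indicator, e.g. "strong"
    address: int = field(compare=False)

class RequestScheduler:
    """Orders pending consistent memory requests by priority, then arrival."""
    def __init__(self):
        self._pending = []
        self._seq = count()

    def submit(self, consistency_type, address, priority):
        req = ConsistentMemoryRequest(priority, next(self._seq),
                                      consistency_type, address)
        heapq.heappush(self._pending, req)

    def next_request(self):
        """Pop the request to execute next."""
        return heapq.heappop(self._pending)

sched = RequestScheduler()
sched.submit("relaxed", 0x1000, priority=2)
sched.submit("strong", 0x2000, priority=0)  # higher priority, scheduled first
first = sched.next_request()
```

    A real implementation would also weigh the pending requests' consistency types against each other, as the claim suggests; the heap above captures only the priority ordering.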

    SYSTEMS, APPARATUS, AND METHODS FOR EDGE DATA PRIORITIZATION

    Publication number: US20210328934A1

    Publication date: 2021-10-21

    Application number: US17359204

    Application date: 2021-06-25

    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for edge data prioritization. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of execute or instantiate the instructions to identify an association of a data packet with a data stream based on one or more data stream parameters included in the data packet corresponding to the data stream, the data packet associated with a first priority, execute a model based on the one or more data stream parameters to generate a model output, determine a second priority of at least one of the data packet or the data stream based on the model output, the model output indicative of an adjustment of the first priority to the second priority, and cause transmission of at least one of the data packet or the data stream based on the second priority.
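    The flow in this abstract (match packet to stream, run a model on the stream parameters, adjust the priority, transmit accordingly) can be sketched as follows. The model here is a stand-in stub with an assumed rule, not the patented circuitry, and all names are hypothetical.

```python
def identify_stream(packet, streams):
    """Match a packet to a known data stream via its stream parameters."""
    for stream_id, params in streams.items():
        if packet["stream_params"] == params:
            return stream_id
    return None

def model(stream_params):
    """Stub model: promote latency-sensitive streams by one level (assumed rule)."""
    return -1 if stream_params.get("latency_sensitive") else 0

def reprioritize(packet, streams):
    stream_id = identify_stream(packet, streams)
    adjustment = model(streams[stream_id])   # model output drives the adjustment
    # Second priority = first priority adjusted by the model output
    # (lower value = higher priority in this sketch).
    return max(0, packet["priority"] + adjustment)

streams = {"video": {"latency_sensitive": True}}
packet = {"stream_params": {"latency_sensitive": True}, "priority": 3}
new_priority = reprioritize(packet, streams)
```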

    Data management for edge architectures

    Publication number: US11093287B2

    Publication date: 2021-08-17

    Application number: US16422905

    Application date: 2019-05-24

    Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria includes a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
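    The three hint types can be sketched as a removal-criteria check. This is a toy illustration with hypothetical field names; the quality-of-service adjustment of the temporal hint is omitted for brevity.

```python
def can_remove(record, now, current_resource, observed_events):
    """Data becomes removable once any of its hints is no longer satisfied."""
    hints = record["hints"]
    # Temporal hint: a time after which the data can be removed.
    if "expires_at" in hints and now > hints["expires_at"]:
        return True
    # Physical hint: a list of edge resources outside of which removal is allowed.
    if "keep_within" in hints and current_resource not in hints["keep_within"]:
        return True
    # Event-based hint: an event after which the data can be removed.
    if "remove_after_event" in hints and hints["remove_after_event"] in observed_events:
        return True
    return False

record = {"hints": {"expires_at": 100, "keep_within": {"edge-a", "edge-b"}}}
still_valid = can_remove(record, now=50, current_resource="edge-a", observed_events=set())
expired = can_remove(record, now=150, current_resource="edge-a", observed_events=set())
```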

    SCALABLE EDGE COMPUTING
    Invention Application

    Publication number: US20210194821A1

    Publication date: 2021-06-24

    Application number: US17195409

    Application date: 2021-03-08

    Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
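    The control loop described here (telemetry in, predicted demand out, edge-node parameter tuned) can be sketched in software. The predictor below is a simple trend extrapolation standing in for the AI circuit, and every name and the headroom factor are assumptions.

```python
def predict_demand(telemetry_window):
    """Stand-in predictor: linear trend over the last few flow-level samples."""
    samples = telemetry_window[-3:]
    trend = samples[-1] - samples[0]
    return samples[-1] + trend

def tune_edge_node(node, telemetry_window, headroom=1.2):
    """Tune a service parameter of the edge node according to the prediction."""
    demand = predict_demand(telemetry_window)
    node["provisioned_capacity"] = demand * headroom
    return node

node = {"provisioned_capacity": 100}
tuned = tune_edge_node(node, [80, 90, 100])  # rising demand -> scale up
```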

    Arbitration across shared memory pools of disaggregated memory devices

    Publication number: US10649813B2

    Publication date: 2020-05-12

    Application number: US15929005

    Application date: 2018-03-29

    Abstract: Technology for a memory pool arbitration apparatus is described. The apparatus can include a memory pool controller (MPC) communicatively coupled between a shared memory pool of disaggregated memory devices and a plurality of compute resources. The MPC can receive a plurality of data requests from the plurality of compute resources. The MPC can assign each compute resource to one of a set of compute resource priorities. The MPC can send memory access commands to the shared memory pool to perform each data request prioritized according to the set of compute resource priorities. The apparatus can include a priority arbitration unit (PAU) communicatively coupled to the MPC. The PAU can arbitrate the plurality of data requests as a function of the corresponding compute resource priorities.
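    The MPC/PAU split described above can be sketched as two cooperating classes. Class and field names are hypothetical; the sketch only shows the arbitration-by-source-priority idea.

```python
class PriorityArbitrationUnit:
    """Arbitrates data requests as a function of their source's priority class."""
    def order(self, requests, priorities):
        # Lower priority class = served earlier; stable sort keeps arrival
        # order within a class.
        return sorted(requests, key=lambda r: priorities[r["source"]])

class MemoryPoolController:
    """Sits between compute resources and the shared memory pool."""
    def __init__(self):
        self.priorities = {}   # compute resource -> priority class
        self.pau = PriorityArbitrationUnit()

    def assign_priority(self, resource, priority_class):
        self.priorities[resource] = priority_class

    def service(self, requests):
        """Emit memory access commands in arbitrated order."""
        ordered = self.pau.order(requests, self.priorities)
        return [r["op"] for r in ordered]

mpc = MemoryPoolController()
mpc.assign_priority("cpu0", 0)     # highest priority class
mpc.assign_priority("accel1", 2)
commands = mpc.service([{"source": "accel1", "op": "read A"},
                        {"source": "cpu0", "op": "write B"}])
```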

    Adaptive fabric multicast schemes
    Invention Grant

    Publication number: US10608956B2

    Publication date: 2020-03-31

    Application number: US14973155

    Application date: 2015-12-17

    Abstract: Described herein are devices and techniques for distributing application data. A device can communicate with one or more hardware switches. The device can receive, from a software stack, a multicast message including a constraint that indicates how application data is to be distributed, the constraint including a listing of a set of nodes and a number of nodes to which the application data is to be distributed. The device may receive, from the software stack, the application data for distribution to a plurality of nodes, the plurality of nodes being a subset of the set of nodes equal in size to that number. The device may select the plurality of nodes from the set of nodes and distribute a copy of the application data to them based on the constraint. Other embodiments are also described.
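    The constraint-driven distribution can be sketched as below. The abstract does not say how the subset is selected, so the take-the-first-N policy here is an assumption, as are all names.

```python
def multicast(app_data, constraint, send):
    """Select `count` nodes from the constraint's node set and send each a copy."""
    candidates = constraint["nodes"]   # listing of the set of nodes
    count = constraint["count"]        # number of nodes to distribute to
    selected = candidates[:count]      # selection policy is an assumption
    for node in selected:
        send(node, app_data)           # distribute a copy to each selected node
    return selected

delivered = []
targets = multicast(b"payload",
                    {"nodes": ["n1", "n2", "n3", "n4"], "count": 2},
                    lambda node, data: delivered.append(node))
```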

    Techniques to perform memory indirection for memory architectures

    Publication number: US10509728B2

    Publication date: 2019-12-17

    Application number: US15719618

    Application date: 2017-09-29

    Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a request from a core, the request associated with a memory operation to read or write data, and the request comprising a first address and an offset, the first address to identify a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the computing resource or perform a second iteration of the memory indirection operation.
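    A single iteration of the memory indirection operation (dereference the first address, then resolve a memory resource from the second address plus the offset) can be sketched as follows. The modulo mapping from address to resource is an illustrative assumption, as are all names.

```python
def memory_indirection(memory, first_address, offset, resources):
    """One iteration: read the pointer stored at first_address, then pick
    the memory resource that will perform the operation."""
    second_address = memory[first_address]   # read memory to get second address
    # Resource selection from (second address, offset) is an assumed mapping.
    resource = resources[(second_address + offset) % len(resources)]
    return second_address, resource

memory = {0x10: 0x80}   # location 0x10 holds a pointer to 0x80
second, resource = memory_indirection(memory, 0x10, offset=4,
                                      resources=["DRAM", "pooled", "persistent"])
```

    A further iteration, as the abstract describes, would simply feed `second_address` back in as the next `first_address`.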
