METHODS AND APPARATUS TO COORDINATE EDGE PLATFORMS

    Publication No.: US20210014133A1

    Publication Date: 2021-01-14

    Application No.: US17032993

    Application Date: 2020-09-25

    Abstract: Methods and apparatus to coordinate edge platforms are disclosed. A disclosed example apparatus to control processing of data associated with edges includes an orchestrator analyzer to determine a first performance requirement of a first microservice of an application and a second performance requirement of a second microservice of the application. The apparatus also includes an orchestrator controller to assign the first microservice and the second microservice across first and second edge nodes between a source network and a destination network by: assigning the first microservice to the first edge node based on a first capability of the first edge node satisfying the first performance requirement of the first microservice, and assigning the second microservice to the second edge node based on a second capability of the second edge node satisfying the second performance requirement of the second microservice.
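    The capability-matching assignment the abstract describes can be sketched as follows. This is an illustrative model only, not the claimed implementation; the names (`Microservice`, `EdgeNode`, `assign_microservices`) and the use of a single throughput number as the "performance requirement" are assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Microservice:
        name: str
        required_mips: int  # performance requirement (assumed: compute throughput)

    @dataclass
    class EdgeNode:
        name: str
        capability_mips: int  # advertised capability of the edge node

    def assign_microservices(services, nodes):
        """Assign each microservice to the first edge node whose
        capability satisfies that service's performance requirement."""
        placement = {}
        for svc in services:
            for node in nodes:
                if node.capability_mips >= svc.required_mips:
                    placement[svc.name] = node.name
                    break
            else:
                placement[svc.name] = None  # no node satisfies the requirement
        return placement
    ```

    Under this sketch, two microservices of one application can land on different edge nodes along the source-to-destination path, each matched to a node that meets its own requirement.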

    PERFORMANCE MANAGEMENT UNIT (PMU) AIDED TIER SELECTION IN HETEROGENEOUS MEMORY

    Publication No.: US20200310957A1

    Publication Date: 2020-10-01

    Application No.: US16370543

    Application Date: 2019-03-29

    Abstract: A processor including a processing core to execute an instruction prior to executing a memory allocation call; one or more last branch record (LBR) registers to store one or more recently retired branch instructions; and a performance monitoring unit (PMU) comprising a logic circuit to: retrieve the one or more recently retired branch instructions from the one or more LBR registers; identify, based on the retired branch instructions, a signature of the memory allocation call; and provide the signature to software to determine a memory tier in which to allocate memory for the memory allocation call.
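    The signature-then-lookup flow can be modeled in software as below. This is a hedged sketch: hashing the retired-branch records into a call-site signature and consulting a signature-to-tier table are plausible readings of the abstract, but the actual signature construction and tier policy are not specified here, and all names are illustrative.

    ```python
    import hashlib

    def lbr_signature(retired_branches):
        """Collapse (from_ip, to_ip) retired-branch records, as an LBR
        stack would hold them, into a stable call-site signature."""
        h = hashlib.sha256()
        for from_ip, to_ip in retired_branches:
            h.update(from_ip.to_bytes(8, "little"))
            h.update(to_ip.to_bytes(8, "little"))
        return h.hexdigest()[:16]

    def pick_tier(signature, tier_table, default="dram"):
        """Software-side policy: map a known allocation-site signature
        to a memory tier; fall back to a default tier otherwise."""
        return tier_table.get(signature, default)
    ```

    The point of the signature is that it identifies *where* the allocation came from without the software having to walk the stack itself; the PMU-side branch records supply that context.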

    TECHNOLOGIES FOR MATCHING SECURITY REQUIREMENTS OF FUNCTION-AS-A-SERVICE SERVICES IN EDGE CLOUDS

    Publication No.: US20190230154A1

    Publication Date: 2019-07-25

    Application No.: US16369413

    Application Date: 2019-03-29

    Abstract: Technologies for matching security requirements for a function-as-a-service (FaaS) function request to an edge resource having security features matching the security requirements are disclosed. According to one embodiment of the present disclosure, an edge gateway device receives, from an edge device, a request to execute an accelerated function. The edge gateway device selects, as a function of one or more security requirements requested by the edge device, an edge resource to fulfill the request. The edge gateway device transmits the request to the edge resource to fulfill the request of the edge device, according to the one or more security requirements.
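    The gateway's selection step reduces to a set-cover check: pick an edge resource whose security features satisfy every requirement the edge device requested. A minimal sketch, with feature names and the first-fit policy as assumptions:

    ```python
    def select_resource(requirements, resources):
        """Return the name of the first edge resource whose security
        features cover all requested requirements, or None if no
        resource qualifies."""
        required = set(requirements)
        for name, features in resources.items():
            if required <= set(features):
                return name
        return None
    ```

    A real gateway would likely rank qualifying resources by load or locality rather than taking the first match; the subset test is the matching criterion the abstract implies.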

    TECHNOLOGIES FOR DATA MIGRATION BETWEEN EDGE ACCELERATORS HOSTED ON DIFFERENT EDGE LOCATIONS

    Publication No.: US20190227843A1

    Publication Date: 2019-07-25

    Application No.: US16369036

    Application Date: 2019-03-29

    Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
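    One natural trigger for the migration decision is requester mobility: migrate accelerator state when a different edge location is now closer to the requesting device. The abstract does not name its decision criterion, so the distance-based policy below is purely an assumed illustration.

    ```python
    def should_migrate(current_edge, candidate_edge, device_location):
        """Assumed policy: migrate when the candidate edge location is
        closer to the requesting device than the present one.
        Locations are (x, y) coordinates for illustration."""
        def dist(edge):
            (x1, y1), (x2, y2) = edge, device_location
            return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        return dist(candidate_edge) < dist(current_edge)
    ```

    On a True result, the present edge's accelerators would be asked for the transformed workload data, which is then handed to the accelerators at the new location, matching the request flow in the abstract.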

    Performing search and replace operations in a memory device using command parameters and a storage controller without transferring data to a processor

    Publication No.: US10261688B2

    Publication Date: 2019-04-16

    Application No.: US15089503

    Application Date: 2016-04-02

    Abstract: An apparatus and method for performing search and replace operations at a storage controller of a storage device are disclosed. The storage controller can receive a search command with one or more parameters that instructs the storage controller to search for a data pattern in data stored in a memory of the apparatus. The storage controller can locally search the data in the memory for the data pattern according to the parameters without transferring the data to a processor to perform the search. The parameters can include, but are not limited to, the data pattern or template to be searched, a data pattern length, a bit-mask, a logical block address (LBA) range, a byte offset, and an alignment parameter. Verdict bits can be provided to indicate data chunks in the memory that match the data pattern. Flags may define potential outputs to provide after searching, such as location and number of matches. A replace command with a set of parameters, including a write mask, can instruct the storage controller to replace the data pattern with a replacement or substitute pattern.
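    The masked search and verdict-bit mechanism can be modeled in a few lines. Parameter names mirror the abstract (pattern, bit-mask, verdict bits); the chunking policy and the match-at-chunk-start simplification are assumptions of this sketch, not the controller's actual behavior.

    ```python
    def search_chunks(data, pattern, mask, chunk_size):
        """Produce one verdict bit per chunk: 1 if the pattern matches
        the start of the chunk under the bit-mask, else 0."""
        verdicts = []
        for off in range(0, len(data), chunk_size):
            chunk = data[off:off + chunk_size]
            match = len(chunk) >= len(pattern) and all(
                (chunk[i] & mask[i]) == (pattern[i] & mask[i])
                for i in range(len(pattern))
            )
            verdicts.append(1 if match else 0)
        return verdicts

    def replace_in_chunks(data, verdicts, replacement, chunk_size):
        """Replace-command analogue: overwrite the pattern position in
        every chunk whose verdict bit is set."""
        out = bytearray(data)
        for idx, bit in enumerate(verdicts):
            if bit:
                off = idx * chunk_size
                out[off:off + len(replacement)] = replacement
        return bytes(out)
    ```

    The benefit claimed in the abstract is that both loops run inside the storage controller, so the data never crosses the bus to the host processor; only the parameters go in and the verdict bits (or match locations and counts) come out.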

    MEMORY CONTROLLER WITH PRE-LOADER
    Type: Invention Application

    Publication No.: US20190042437A1

    Publication Date: 2019-02-07

    Application No.: US16123818

    Application Date: 2018-09-06

    Abstract: Embodiments of the present disclosure relate to a controller that includes a monitor to determine an access pattern for a range of memory of a first computer memory device, and a pre-loader to pre-load a second computer memory device with a copy of a subset of the range of memory based at least in part on the access pattern, wherein the subset includes a plurality of cache lines. In some embodiments, the controller includes a specifier and the monitor determines the access pattern based at least in part on one or more configuration elements in the specifier. Other embodiments may be described and/or claimed.
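    The monitor/pre-loader pairing can be sketched as an access counter over a watched range that nominates hot cache lines for copying into the faster device. The 64-byte line size is conventional; the hot-count threshold and all names are assumptions of this illustration.

    ```python
    from collections import Counter

    LINE = 64  # assumed cache-line size in bytes

    class PreLoader:
        def __init__(self, range_start, range_len, hot_threshold=2):
            self.range_start = range_start
            self.range_end = range_start + range_len
            self.hot_threshold = hot_threshold
            self.counts = Counter()

        def record_access(self, addr):
            """Monitor: tally accesses per cache line in the watched range."""
            if self.range_start <= addr < self.range_end:
                self.counts[addr // LINE * LINE] += 1

        def hot_lines(self):
            """Pre-load candidates: lines touched at least hot_threshold
            times; a plurality of these would be copied to the second
            memory device."""
            return sorted(a for a, n in self.counts.items()
                          if n >= self.hot_threshold)
    ```

    The "specifier" in the abstract corresponds here to the configuration passed at construction (range and threshold); the controller's monitor consults it to decide what counts as an access pattern worth pre-loading.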

    SELECTIVE EXECUTION OF CACHE LINE FLUSH OPERATIONS

    Publication No.: US20190042417A1

    Publication Date: 2019-02-07

    Application No.: US16023717

    Application Date: 2018-06-29

    Abstract: The present disclosure is directed to systems and methods that include cache operation storage circuitry that selectively enables/disables the Cache Line Flush (CLFLUSH) operation. The cache operation storage circuitry may also selectively replace the CLFLUSH operation with one or more replacement operations that provide similar functionality but beneficially and advantageously prevent an attacker from placing processor cache circuitry in a known state during a timing-based, side channel attack such as Spectre or Meltdown. The cache operation storage circuitry includes model specific registers (MSRs) that contain information used to determine whether to enable/disable CLFLUSH functionality. The cache operation storage circuitry may include model specific registers (MSRs) that contain information used to select appropriate replacement operations such as Cache Line Demote (CLDEMOTE) and/or Cache Line Write Back (CLWB) to selectively replace CLFLUSH operations.
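    The decision the MSRs drive can be written out as a small decode. The bit layout below is invented for illustration; only the instruction names (CLFLUSH, CLDEMOTE, CLWB) come from the abstract.

    ```python
    # Assumed control-word layout; not the actual MSR encoding.
    CLFLUSH_ENABLE = 1 << 0  # allow CLFLUSH to execute as issued
    USE_CLDEMOTE   = 1 << 1  # substitute Cache Line Demote
    USE_CLWB       = 1 << 2  # substitute Cache Line Write Back

    def resolve_clflush(msr_value):
        """Return the operation actually executed when software issues
        CLFLUSH, given the cache-operation control word."""
        if msr_value & CLFLUSH_ENABLE:
            return "CLFLUSH"
        if msr_value & USE_CLDEMOTE:
            return "CLDEMOTE"
        if msr_value & USE_CLWB:
            return "CLWB"
        return "NOP"  # CLFLUSH disabled, no replacement configured
    ```

    The security rationale is that CLDEMOTE and CLWB preserve the software-visible effect (the line leaves the upper cache levels or is written back) without letting an attacker force the cache into the precisely known evicted state that Spectre/Meltdown-style timing attacks rely on.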

    SELECTIVE ACCESS TO PARTITIONED BRANCH TRANSFER BUFFER (BTB) CONTENT

    Publication No.: US20190042263A1

    Publication Date: 2019-02-07

    Application No.: US16023201

    Application Date: 2018-06-29

    Abstract: The present disclosure is directed to systems and methods for mitigating or eliminating the effectiveness of a side channel attack, such as a Spectre type attack, by limiting the ability of a user-level branch prediction inquiry to access system-level branch prediction data. The branch prediction data stored in the BTB may be apportioned into a plurality of BTB data portions. BTB control circuitry identifies the initiator of a received branch prediction inquiry. Based on the identity of the branch prediction inquiry initiator, the BTB control circuitry causes BTB look-up circuitry to selectively search one or more of the plurality of BTB data portions.
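    The selective-search idea can be sketched with two partitions keyed by privilege domain: a user-level inquiry may only search the user partition, so system-level branch prediction data never leaks into user-observable predictions. The two-domain split and all names are illustrative simplifications of the "plurality of BTB data portions" in the abstract.

    ```python
    class PartitionedBTB:
        def __init__(self):
            self.partitions = {"user": {}, "system": {}}

        def insert(self, domain, branch_ip, target_ip):
            """Record a predicted target in the given domain's portion."""
            self.partitions[domain][branch_ip] = target_ip

        def lookup(self, initiator, branch_ip):
            """BTB look-up circuitry analogue: user initiators search
            only the user portion; system initiators may search both."""
            domains = ["user"] if initiator == "user" else ["user", "system"]
            for d in domains:
                if branch_ip in self.partitions[d]:
                    return self.partitions[d][branch_ip]
            return None
    ```

    A user-mode Spectre-style probe of a kernel branch address simply misses in this model, which is the mitigation the abstract describes.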

    TECHNOLOGIES FOR PROVIDING STREAMLINED PROVISIONING OF ACCELERATED FUNCTIONS IN A DISAGGREGATED ARCHITECTURE

    Publication No.: US20190042234A1

    Publication Date: 2019-02-07

    Application No.: US15912733

    Application Date: 2018-03-06

    Abstract: Technologies for providing streamlined provisioning of accelerated functions in a disaggregated architecture include a compute sled. The compute sled includes a network interface controller and circuitry to determine whether to accelerate a function of a workload executed by the compute sled, and send, to a memory sled and in response to a determination to accelerate the function, a data set on which the function is to operate. The circuitry is also to receive, from the memory sled, a service identifier indicative of a memory location independent handle for data associated with the function, send, to a compute device, a request to schedule acceleration of the function on the data set, receive a notification of completion of the acceleration of the function, and obtain, in response to receipt of the notification and using the service identifier, a resultant data set from the memory sled. The resultant data set was produced by an accelerator device during acceleration of the function on the data set. Other embodiments are also described and claimed.
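    The key moving part is the memory-location-independent service identifier: the compute sled never learns where the data lives, only the handle it can later redeem for the result. A toy model of that flow, with every class and method name an assumption:

    ```python
    import uuid

    class MemorySled:
        def __init__(self):
            self.store = {}

        def put(self, data):
            """Store a data set; return a location-independent handle."""
            sid = uuid.uuid4().hex
            self.store[sid] = data
            return sid

        def get(self, sid):
            """Redeem a handle for the (possibly transformed) data set."""
            return self.store[sid]

    def accelerate(mem, sid, fn):
        """Stand-in for the accelerator device: apply the accelerated
        function to the stored data set and overwrite it with the
        resultant data set, reachable via the same handle."""
        mem.store[sid] = fn(mem.get(sid))
        return sid
    ```

    Because the handle survives the acceleration step unchanged, the compute sled can request acceleration, wait for the completion notification, and then fetch the result with the identifier it already holds, as the abstract describes.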
