Offload of acknowledgements to a network device

    Publication Number: US12218843B2

    Publication Date: 2025-02-04

    Application Number: US18405746

    Application Date: 2024-01-05

    Abstract: Examples described herein relate to a network device apparatus that includes a network interface card to process a received packet. In some examples, based on the received packet including only one or more frames for which acknowledgement of receipt is offloaded to the network interface card, the network interface card is to generate an acknowledgement (ACK) message to acknowledge receipt of the received packet. In some examples, a frame for which acknowledgement of receipt is offloaded to the network interface card comprises a STREAM frame compatible with Quick User Datagram Protocol (UDP) Internet Connections (QUIC). In some examples, a computing platform is coupled to the network interface card. In some examples, based on the received packet including any frame for which acknowledgement of receipt is not offloaded to the network interface card, the computing platform is to generate an ACK message for the received packet.
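
    The per-packet decision described in the abstract can be sketched in Python. This is a minimal illustration under assumed names (the frame-type strings, OFFLOADED_FRAME_TYPES, and handle_received_packet are hypothetical, not the patent's interface): if every frame in the received packet is of an offloaded type such as a QUIC STREAM frame, the network interface card generates the ACK itself; if any non-offloaded frame is present, the packet is handed to the host computing platform, which generates the ACK.

```python
# Minimal sketch of the ACK-offload decision. Frame-type names and the
# offload policy below are illustrative assumptions, not the patent's.

OFFLOADED_FRAME_TYPES = {"STREAM"}  # frame types whose ACKs the NIC may generate


def handle_received_packet(frames):
    """Return which component generates the ACK for a received QUIC packet.

    frames -- the frame types carried in the packet, e.g. ["STREAM", "STREAM"].
    """
    if frames and all(f in OFFLOADED_FRAME_TYPES for f in frames):
        # Packet contains only offloaded frame types: the NIC builds and
        # transmits the ACK without involving the host.
        return "nic_generates_ack"
    # Packet contains at least one frame the NIC does not acknowledge:
    # deliver it to the computing platform, which generates the ACK.
    return "host_generates_ack"


print(handle_received_packet(["STREAM"]))            # nic_generates_ack
print(handle_received_packet(["STREAM", "CRYPTO"]))  # host_generates_ack
```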

    Hierarchical reinforcement learning algorithm for NFV server power management

    Publication Number: US12001932B2

    Publication Date: 2024-06-04

    Application Number: US16939237

    Application Date: 2020-07-27

    CPC classification number: G06N3/006 G06F1/3287 G06N5/04 G06N20/00

    Abstract: Methods and apparatus for a hierarchical reinforcement learning (RL) algorithm for network function virtualization (NFV) server power management. A first RL model at a first layer is trained by adjusting a frequency of a processor core while performing a workload to obtain a first trained RL model. The first trained RL model is operated in an inference mode while training a second RL model at a second level in the RL hierarchy by adjusting a frequency of the core and a frequency of processor circuitry external to the core to obtain a second trained RL model. Training may be performed online or offline. The first and second RL models are operated in inference modes during online operations to adjust the frequency of the core and the frequency of the circuitry external to the core while executing software on the plurality of cores to perform a workload, such as an NFV workload.
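
    A rough sketch of the two-level training flow appears below, written in Python with a toy tabular agent and a stand-in power/performance reward. The frequency lists, reward function, and epsilon-greedy agent are illustrative assumptions; the patent describes the layering (train the core-frequency model first, then freeze it in inference mode while the uncore-frequency model trains), not these specific choices.

```python
# Illustrative sketch of hierarchical RL training for frequency scaling.
# The environment, reward, and tabular agents are placeholders.
import random

CORE_FREQS = [1.0, 1.5, 2.0, 2.5]     # GHz, hypothetical core frequencies
UNCORE_FREQS = [0.8, 1.2, 1.6]        # GHz, hypothetical uncore frequencies


def reward(core_f, uncore_f):
    # Stand-in reward: favor meeting a performance target at low power.
    perf = min(core_f, 1.8) + 0.3 * uncore_f
    power = core_f ** 2 + 0.5 * uncore_f ** 2
    return perf - 0.2 * power


class TabularAgent:
    def __init__(self, actions):
        self.actions = actions
        self.q = {a: 0.0 for a in actions}

    def act(self, explore=True):
        if explore and random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.q, key=self.q.get)

    def update(self, action, r, lr=0.1):
        self.q[action] += lr * (r - self.q[action])


# Level 1: train the core-frequency agent on its own.
core_agent = TabularAgent(CORE_FREQS)
for _ in range(2000):
    f = core_agent.act()
    core_agent.update(f, reward(f, UNCORE_FREQS[0]))

# Level 2: run the trained core agent in inference mode while the uncore
# agent learns to set the frequency of circuitry external to the core.
uncore_agent = TabularAgent(UNCORE_FREQS)
for _ in range(2000):
    core_f = core_agent.act(explore=False)       # inference only
    uncore_f = uncore_agent.act()
    uncore_agent.update(uncore_f, reward(core_f, uncore_f))

print("chosen core freq:", core_agent.act(explore=False))
print("chosen uncore freq:", uncore_agent.act(explore=False))
```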

    MEMORY RING-BASED JOB DISTRIBUTION FOR PROCESSOR CORES AND CO-PROCESSORS

    Publication Number: US20180285154A1

    Publication Date: 2018-10-04

    Application Number: US15473885

    Application Date: 2017-03-30

    Abstract: An apparatus includes a processor, a co-processor and a memory ring. The memory ring includes a plurality of slots that are associated with a plurality of jobs. The processor is to apply a set of rules and, based on the application of the set of rules, selectively access a first slot of the plurality of slots to read first data stored in the first slot representing a first job of the plurality of jobs and process the first job based on the first data. The co-processor is to apply the set of rules and, based on the application of the set of rules, access a second slot of the plurality of slots other than the first slot to read second data representing a second job of the plurality of jobs and process the second job based on the second data.
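
    One way to picture the shared-ring arbitration is sketched below in Python, with an invented rule that dispatches jobs by type so the processor and co-processor claim disjoint slots. The job types, slot layout, and claim_jobs helper are illustrative assumptions; the abstract does not specify the actual rule set or synchronization.

```python
# Sketch of a shared job ring where a processor core and a co-processor apply
# the same rule set to pick non-overlapping slots. The "job type" rule is an
# illustrative assumption, not the patent's rule set.

RING = [
    {"slot": 0, "type": "crypto",  "payload": "job-A"},
    {"slot": 1, "type": "general", "payload": "job-B"},
    {"slot": 2, "type": "crypto",  "payload": "job-C"},
    {"slot": 3, "type": "general", "payload": "job-D"},
]


def claim_jobs(ring, accepted_types):
    """Apply the shared rule set: a consumer reads only slots whose job type
    it accepts, so the processor and co-processor never take the same slot."""
    return [slot for slot in ring if slot["type"] in accepted_types]


processor_jobs = claim_jobs(RING, {"general"})     # core handles general jobs
coprocessor_jobs = claim_jobs(RING, {"crypto"})    # co-processor handles crypto jobs

print("processor:", [j["payload"] for j in processor_jobs])
print("co-processor:", [j["payload"] for j in coprocessor_jobs])
```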
