GENERATING, AT LEAST IN PART, AND/OR RECEIVING, AT LEAST IN PART, AT LEAST ONE REQUEST

    Publication No.: US20190268280A1

    Publication Date: 2019-08-29

    Application No.: US16410442

    Filing Date: 2019-05-13

    Abstract: In an embodiment, an apparatus is provided that may include circuitry to generate, at least in part, and/or receive, at least in part, at least one request that at least one network node generate, at least in part, information. The information may be to permit selection, at least in part, of (1) at least one power consumption state of the at least one network node, and (2) at least one time period. The at least one time period may be to elapse, after receipt by at least one other network node of at least one packet, prior to requesting at least one change in the at least one power consumption state. The at least one packet may be to be transmitted to the at least one network node. Of course, many alternatives, modifications, and variations are possible without departing from this embodiment.
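
    As a rough illustration of the exchange the abstract describes, the following C sketch models a hypothetical request and response: one node asks another to report the information needed to pick a power consumption state and the time that may elapse, after a packet is buffered on its behalf, before a power-state change is requested. Every field name and size (requester_id, supported_states, resume_latency_us, buffer_time_us) is an assumption chosen for illustration, not the patent's wire format.

    /* Minimal sketch (not the patent's actual message format). */
    #include <stdint.h>
    #include <stdio.h>

    /* Request: ask a node to generate power-state / latency information. */
    struct ps_info_request {
        uint32_t requester_id;   /* node issuing the request        */
        uint32_t target_id;      /* node asked to generate the info */
    };

    /* Response: permits selection of a power state and of the time period
     * that must elapse, after a packet is received on the node's behalf,
     * before a change in power state is requested. */
    struct ps_info_response {
        uint32_t target_id;
        uint8_t  supported_states;   /* bitmask of low-power states        */
        uint32_t resume_latency_us;  /* worst-case wake-up latency         */
        uint32_t buffer_time_us;     /* time the other node can hold packets */
    };

    int main(void)
    {
        struct ps_info_request  req = { .requester_id = 1, .target_id = 2 };
        struct ps_info_response rsp = { .target_id = 2, .supported_states = 0x3,
                                        .resume_latency_us = 50, .buffer_time_us = 500 };

        /* A deeper sleep state only makes sense if packets can be buffered
         * for at least the resume latency of that state. */
        printf("req %u->%u: deeper sleep %s\n",
               (unsigned)req.requester_id, (unsigned)req.target_id,
               rsp.buffer_time_us >= rsp.resume_latency_us ? "allowed" : "denied");
        return 0;
    }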

    Technologies for management of lookup tables

    Publication No.: US10394784B2

    Publication Date: 2019-08-27

    Application No.: US15389218

    Filing Date: 2016-12-22

    Abstract: Technologies for managing lookup tables are described. The lookup tables may be used for a two-level lookup scheme for packet processing. When the tables need to be updated with a new key for packet processing, information about the new key may be added to a first-level lookup table and a second-level lookup table. The first-level lookup table may be used to identify a handling node for an obtained packet, and the handling node may perform a second-level table lookup to obtain information for further packet processing. The first lookup table may be replicated on all the nodes in a cluster, and the second-level lookup table may be unique to each node in the cluster. Other embodiments are described herein and claimed.
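
    A minimal C sketch of the two-level scheme the abstract describes: a replicated first-level table maps a hashed flow key to a handling node, and each node's own second-level table holds the full packet-processing entry. The hash functions, table sizes, and node-selection rule below are assumptions chosen for brevity, not details from the patent.

    #include <stdint.h>
    #include <stdio.h>

    #define NODES       4
    #define L2_BUCKETS 16

    struct l2_entry {              /* node-local, full packet-processing info */
        uint32_t key;
        uint32_t action;           /* e.g. output port */
        int      used;
    };

    static uint8_t         l1[256];                /* replicated: key hash -> node id */
    static struct l2_entry l2[NODES][L2_BUCKETS];  /* per-node second-level tables    */

    static unsigned h1(uint32_t key) { return key % 256; }
    static unsigned h2(uint32_t key) { return key % L2_BUCKETS; }

    /* Adding a new key updates both levels, as the abstract describes. */
    static void add_key(uint32_t key, uint32_t action)
    {
        uint8_t node = key % NODES;                /* pick a handling node            */
        l1[h1(key)] = node;                        /* first level, replicated cluster-wide */
        l2[node][h2(key)] = (struct l2_entry){ .key = key, .action = action, .used = 1 };
    }

    /* Lookup: level one picks the handling node, level two (on that node)
     * yields the information for further packet processing. */
    static int lookup(uint32_t key, uint32_t *action)
    {
        uint8_t node = l1[h1(key)];
        struct l2_entry *e = &l2[node][h2(key)];
        if (e->used && e->key == key) { *action = e->action; return 0; }
        return -1;
    }

    int main(void)
    {
        uint32_t action;
        add_key(0xdeadbeef, 7);
        if (lookup(0xdeadbeef, &action) == 0)
            printf("key 0xdeadbeef handled by node %u, action %u\n",
                   0xdeadbeefu % NODES, (unsigned)action);
        return 0;
    }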

    Techniques for routing packets among virtual machines

    Publication No.: US10356012B2

    Publication Date: 2019-07-16

    Application No.: US14830856

    Filing Date: 2015-08-20

    Abstract: Various embodiments are generally directed to techniques for improving the efficiency of exchanging packets among multiple VMs within a communications server, and between the communications server and other devices in a communications system. An apparatus may include a virtual switch to analyze contents of at least one packet of a set of packets to be exchanged between endpoint devices through a network, and to correlate the contents to a pathway to extend through one or more of the VMs that are each configured as virtual servers of multiple virtual servers; and an interface control component to select at least one virtual network interface of each of the one or more virtual servers along the pathway to operate in a polling mode, and to select a virtual network interface of at least one virtual server of the multiple virtual servers not along the pathway to operate in a non-polling mode.
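
    The following C sketch illustrates only the interface-control step the abstract describes: virtual network interfaces of servers on the selected pathway are put into polling mode, and those of servers off the pathway into a non-polling (interrupt) mode. The structure names and the boolean pathway membership are illustrative assumptions, not the patent's implementation.

    #include <stdbool.h>
    #include <stdio.h>

    enum vnic_mode { VNIC_INTERRUPT, VNIC_POLLING };

    struct vserver {
        const char    *name;
        enum vnic_mode mode;
    };

    /* Interface control: polling only where the pathway goes, non-polling elsewhere. */
    static void apply_pathway(struct vserver *vs, int n, const bool *on_pathway)
    {
        for (int i = 0; i < n; i++)
            vs[i].mode = on_pathway[i] ? VNIC_POLLING : VNIC_INTERRUPT;
    }

    int main(void)
    {
        struct vserver vs[] = {
            { "firewall",   VNIC_INTERRUPT },
            { "nat",        VNIC_INTERRUPT },
            { "idle-cache", VNIC_INTERRUPT },
        };
        bool on_path[] = { true, true, false };   /* decided by the virtual switch */

        apply_pathway(vs, 3, on_path);
        for (int i = 0; i < 3; i++)
            printf("%-10s -> %s\n", vs[i].name,
                   vs[i].mode == VNIC_POLLING ? "polling" : "interrupt");
        return 0;
    }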

    Efficient QoS support for software packet processing on general purpose servers

    Publication No.: US10237171B2

    Publication Date: 2019-03-19

    Application No.: US15270377

    Filing Date: 2016-09-20

    Abstract: Methods and apparatus for facilitating efficient Quality of Service (QoS) support for software-based packet processing by offloading QoS rate-limiting to NIC hardware. Software-based packet processing is performed on packet flows received at a compute platform, such as a general purpose server, and/or packet flows generated by local applications running on the compute platform. The packet processing includes packet classification that associates packets with packet flows using flow IDs, and identifying a QoS class for the packet and packet flow. NIC Tx queues are dynamically configured or pre-configured to effect rate limiting for forwarding packets enqueued in the NIC Tx queues. New packet flows are detected, and mapping data is created to map flow IDs associated with flows to the NIC Tx queues used to forward the packets associated with the flows.
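
    A minimal C sketch of the flow-to-queue mapping the abstract outlines, under the assumption of one pre-configured, rate-limited NIC Tx queue per QoS class; the table sizes, class rates, and the txq_for_class helper are hypothetical and stand in for whatever queue configuration the NIC hardware exposes.

    #include <stdint.h>
    #include <stdio.h>

    #define QOS_CLASSES 3
    #define FLOWS_MAX  64

    static const uint32_t class_rate_mbps[QOS_CLASSES] = { 100, 1000, 10000 };

    /* flow_to_txq[flow_id] holds the rate-limited NIC Tx queue for that flow;
     * -1 means the flow has not been seen yet. */
    static int flow_to_txq[FLOWS_MAX];

    static int txq_for_class(uint8_t qos_class)
    {
        /* Assume one pre-configured Tx queue per QoS class. */
        return qos_class;
    }

    /* Called by the software classifier: a newly detected flow gets a mapping
     * from its flow ID to the Tx queue that enforces its class's rate limit. */
    static int classify(uint32_t flow_id, uint8_t qos_class)
    {
        if (flow_to_txq[flow_id] < 0)
            flow_to_txq[flow_id] = txq_for_class(qos_class);
        return flow_to_txq[flow_id];
    }

    int main(void)
    {
        for (int i = 0; i < FLOWS_MAX; i++) flow_to_txq[i] = -1;

        int q = classify(7, 1);   /* flow 7 belongs to the 1 Gb/s class */
        printf("flow 7 -> NIC Tx queue %d (%u Mb/s limit)\n",
               q, (unsigned)class_rate_mbps[q]);
        return 0;
    }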

    TECHNOLOGIES FOR MANAGING SINGLE-PRODUCER AND SINGLE-CONSUMER RINGS

    Publication No.: US20190044871A1

    Publication Date: 2019-02-07

    Application No.: US16144384

    Filing Date: 2018-09-27

    Abstract: Technologies for managing a single-producer and single-consumer ring include a producer of a compute node that is configured to allocate data buffers, produce work, and indicate that work has been produced. The compute node is configured to insert reference information for each of the allocated data buffers into respective elements of the ring and store the produced work into the data buffers. The compute node includes a consumer configured to request the produced work from the ring. The compute node is further configured to dequeue the reference information from each of the elements of the ring that correspond to the portion of data buffers in which the produced work has been stored, and set each of the elements of the ring for which the reference information has been dequeued to an empty (i.e., NULL) value. Other embodiments are described herein.
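
    A minimal, single-threaded C sketch of the ring semantics the abstract describes: each slot holds a reference to a data buffer, a NULL slot means empty, the producer stores buffer references into empty slots, and the consumer sets slots back to NULL after dequeuing. The ring size is an assumption, and a real cross-thread implementation would additionally need acquire/release ordering on the slot accesses, which is omitted here.

    #include <stddef.h>
    #include <stdio.h>

    #define RING_SIZE 8   /* power of two */

    struct spsc_ring {
        void    *slot[RING_SIZE];   /* NULL means empty */
        unsigned head;              /* producer index   */
        unsigned tail;              /* consumer index   */
    };

    /* Producer: insert reference information for a data buffer into the next slot. */
    static int ring_enqueue(struct spsc_ring *r, void *buf)
    {
        unsigned i = r->head & (RING_SIZE - 1);
        if (r->slot[i] != NULL) return -1;      /* ring full */
        r->slot[i] = buf;
        r->head++;
        return 0;
    }

    /* Consumer: dequeue the buffer reference and set the slot to NULL (empty). */
    static void *ring_dequeue(struct spsc_ring *r)
    {
        unsigned i = r->tail & (RING_SIZE - 1);
        void *buf = r->slot[i];
        if (buf == NULL) return NULL;           /* ring empty */
        r->slot[i] = NULL;
        r->tail++;
        return buf;
    }

    int main(void)
    {
        static struct spsc_ring ring;           /* zero-initialized: all slots NULL */
        int work = 42;

        ring_enqueue(&ring, &work);
        int *got = ring_dequeue(&ring);
        printf("consumed work: %d\n", got ? *got : -1);
        return 0;
    }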

    ARCHITECTURE AND MECHANISMS TO ACCELERATE TUPLE-SPACE SEARCH WITH INTEGRATED GPU

    Publication No.: US20190042304A1

    Publication Date: 2019-02-07

    Application No.: US15829938

    Filing Date: 2017-12-03

    Abstract: Methods, apparatus, systems, and software for architectures and mechanisms to accelerate tuple-space search with integrated GPUs (Graphic Processor Units). One of the architectures employs GPU-side lookup table sorting, under which local and global hit count histograms are maintained for work groups, and sub-tables containing rules for tuple matching are re-sorted based on the relative hit rates of the different sub-tables. Under a second architecture, two levels of parallelism are implemented: packet-level parallelism and lookup table-parallelism. Under a third architecture, dynamic two-level parallel processing with pre-screen is implemented. Adaptive decision making mechanisms are also disclosed to select which architecture is optimal in view of multiple considerations, including application preferences, offered throughput, and available GPU resources. The architectures leverage utilization of both processor cores and GPU processing elements to accelerate tuple-space searches, including searches using wildcard masks.
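
    The following C sketch illustrates only the lookup-table sorting idea from the first architecture: a hit counter per sub-table stands in for the per-work-group histograms, and a periodic qsort re-orders the probe sequence so the hottest sub-table is searched first. The masks, rules, and single-rule sub-tables are simplifications for brevity, and no GPU offload is shown.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define SUBTABLES 3

    struct subtable {
        uint32_t mask;       /* tuple/wildcard mask for this sub-table */
        uint32_t key;        /* single rule per sub-table, for brevity */
        uint32_t action;
        uint64_t hits;       /* hit-count histogram entry              */
    };

    static struct subtable tables[SUBTABLES] = {
        { 0xffffff00u, 0x0a000100u, 1, 0 },
        { 0xffff0000u, 0x0a010000u, 2, 0 },
        { 0xff000000u, 0x0a000000u, 3, 0 },
    };

    /* Probe sub-tables in their current order; first masked match wins. */
    static int tuple_lookup(uint32_t key, uint32_t *action)
    {
        for (int i = 0; i < SUBTABLES; i++) {
            if ((key & tables[i].mask) == tables[i].key) {
                tables[i].hits++;
                *action = tables[i].action;
                return 0;
            }
        }
        return -1;
    }

    /* Sort sub-tables by descending hit count. */
    static int by_hits_desc(const void *a, const void *b)
    {
        const struct subtable *x = a, *y = b;
        return (x->hits < y->hits) - (x->hits > y->hits);
    }

    int main(void)
    {
        uint32_t action;
        for (int i = 0; i < 1000; i++)
            tuple_lookup(0x0a0000ffu, &action);   /* repeatedly hits the /8 sub-table */

        /* Periodic re-sort: the hottest sub-table moves to the front. */
        qsort(tables, SUBTABLES, sizeof(tables[0]), by_hits_desc);
        printf("hottest sub-table action after re-sort: %u\n", (unsigned)tables[0].action);
        return 0;
    }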
