Abstract:
In one embodiment, a method is provided. The method of this embodiment provides for storing a packet header in a set of at least one page of memory allocated to storing packet headers, and storing the packet header and a packet payload at a location not in that set of pages.
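As a concrete illustration only (not the claimed method), the sketch below shows one way such header-split storage could look in C: headers are copied into pages reserved solely for headers, while the full packet (header plus payload) is kept in a separate buffer. The structure and function names (header_pool, rx_packet, store_packet) and the fixed header slot size are assumptions made for this example.

/* Illustrative sketch, not the patented method: a header-split receive
 * scheme where headers live in dedicated pages and the full packet
 * (header + payload) lives elsewhere. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096
#define HDR_SLOT  128   /* assumed fixed slot size per stored header */

struct header_pool {
    uint8_t *pages;      /* pages allocated only for packet headers */
    size_t   num_pages;
    size_t   next_slot;
};

struct rx_packet {
    uint8_t *full_buf;   /* header + payload, outside the header pool */
    size_t   len;
    uint8_t *hdr_copy;   /* points into the header pool */
};

static int store_packet(struct header_pool *hp, struct rx_packet *pkt,
                        const uint8_t *wire_data, size_t len, size_t hdr_len)
{
    size_t slots = (hp->num_pages * PAGE_SIZE) / HDR_SLOT;
    if (hdr_len > HDR_SLOT || hp->next_slot >= slots)
        return -1;

    /* Store the header in the pages reserved for packet headers. */
    pkt->hdr_copy = hp->pages + hp->next_slot * HDR_SLOT;
    memcpy(pkt->hdr_copy, wire_data, hdr_len);
    hp->next_slot++;

    /* Store header + payload together at a location outside that set. */
    pkt->full_buf = malloc(len);
    if (!pkt->full_buf)
        return -1;
    memcpy(pkt->full_buf, wire_data, len);
    pkt->len = len;
    return 0;
}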
Abstract:
A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is described herein. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to, and ownership of, shared data. Loose transaction ordering is provided for while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device local memory in a host address space, as well as caching of system memory in a device local memory address space, is provided for to improve bandwidth and latency of memory accesses.
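A hedged sketch of the caching/prefetching-hint idea follows. The descriptor fields and hint values are illustrative assumptions made for the example; they do not reflect the actual PCIe TLP encoding described in the disclosure.

/* Hypothetical sketch of a transaction descriptor carrying caching and
 * prefetching hints; field names and values are assumptions. */
#include <stdint.h>
#include <stdio.h>

enum cache_hint { HINT_NONE = 0, HINT_TEMPORAL = 1, HINT_NON_TEMPORAL = 2 };

struct txn_desc {
    uint64_t addr;          /* target memory address */
    uint32_t len;           /* transfer length in bytes */
    uint8_t  cache_hint;    /* temporal/locality caching hint */
    uint8_t  prefetch_hint; /* how many subsequent lines are worth prefetching */
};

int main(void)
{
    /* A device might mark a descriptor as temporal so the host caches it
     * and prefetches the next two lines. */
    struct txn_desc d = { .addr = 0x1000, .len = 256,
                          .cache_hint = HINT_TEMPORAL, .prefetch_hint = 2 };
    printf("addr=0x%llx hint=%u prefetch=%u\n",
           (unsigned long long)d.addr, d.cache_hint, d.prefetch_hint);
    return 0;
}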
Abstract:
Generally, this disclosure relates to adaptive interrupt moderation. A method may include determining, by a host device, a number of connections between the host device and one or more link partners based, at least in part, on a connection identifier associated with each connection; determining, by the host device, a new interrupt rate based at least in part on the number of connections; updating, by the host device, an interrupt moderation timer with a value related to the new interrupt rate; and configuring the interrupt moderation timer to allow interrupts to occur at the new interrupt rate.
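The following is a minimal sketch of that flow, assuming a simple inverse scaling between connection count and interrupt rate; the constants, scaling policy, and register name are illustrative and not taken from the disclosure.

/* Minimal sketch of adaptive interrupt moderation under assumed policy. */
#include <stdint.h>

#define BASE_RATE_HZ   8000u    /* assumed base interrupt rate */
#define MIN_RATE_HZ    1000u    /* assumed lower bound */
#define TIMER_CLOCK_HZ 1000000u /* assumed moderation timer clock */

/* Derive a new interrupt rate from the current connection count. */
static uint32_t rate_for_connections(uint32_t num_connections)
{
    uint32_t rate = BASE_RATE_HZ / (num_connections ? num_connections : 1);
    return rate < MIN_RATE_HZ ? MIN_RATE_HZ : rate;
}

/* Update the (hypothetical) interrupt moderation timer register so that
 * interrupts occur at the new rate. */
static void update_moderation_timer(volatile uint32_t *timer_reg,
                                    uint32_t num_connections)
{
    uint32_t rate = rate_for_connections(num_connections);
    *timer_reg = TIMER_CLOCK_HZ / rate;   /* timer ticks between interrupts */
}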
Abstract:
There is disclosed in one example a network interface card (NIC), comprising: an ingress interface to receive incoming traffic; a plurality of queues to queue incoming traffic; an egress interface to direct incoming traffic to a plurality of server applications; and a queuing engine, including logic to: uniquely associate a queue with a selected server application; receive an incoming network packet; determine that the selected server application may process the incoming network packet; and assign the incoming network packet to the queue.
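A small sketch of that queuing behavior follows. Matching packets to server applications by destination port, and the names bind_queue and enqueue_packet, are assumptions made purely for illustration.

/* Sketch: each queue is uniquely bound to one server application, and an
 * incoming packet is steered to the queue of the application that can
 * process it. */
#include <stdint.h>

#define NUM_QUEUES  8
#define QUEUE_DEPTH 64

struct pkt { uint16_t dst_port; uint16_t len; uint8_t data[1500]; };

struct app_queue {
    uint16_t   app_port;             /* application uniquely bound to queue */
    struct pkt slots[QUEUE_DEPTH];
    uint32_t   head, tail;
};

static struct app_queue queues[NUM_QUEUES];

/* Uniquely associate a queue with a selected server application. */
static void bind_queue(unsigned q, uint16_t app_port)
{
    queues[q].app_port = app_port;
}

/* Assign an incoming packet to the queue of the application that owns it. */
static int enqueue_packet(const struct pkt *p)
{
    for (unsigned q = 0; q < NUM_QUEUES; q++) {
        struct app_queue *aq = &queues[q];
        if (aq->app_port != p->dst_port)
            continue;
        if (aq->tail - aq->head == QUEUE_DEPTH)
            return -1;                       /* queue full */
        aq->slots[aq->tail % QUEUE_DEPTH] = *p;
        aq->tail++;
        return (int)q;
    }
    return -1;                               /* no queue bound to this app */
}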
Abstract:
A device is provided with two or more uplink ports to connect the device via two or more links to one or more sockets, where each of the sockets includes one or more processing cores, and each of the two or more links is compliant with a particular interconnect protocol. The device further includes I/O logic to identify data to be sent to the one or more processing cores for processing, determine an affinity attribute associated with the data, and determine which of the two or more links to use to send the data to the one or more processing cores based on the affinity attribute.
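One possible reading of that affinity-based link selection is sketched below, assuming the affinity attribute is simply the identifier of the socket whose cores will process the data; the structures and names are illustrative.

/* Illustrative sketch of affinity-based uplink selection. */
#define NUM_LINKS 2

struct uplink { int link_id; int attached_socket; };

static const struct uplink links[NUM_LINKS] = {
    { .link_id = 0, .attached_socket = 0 },
    { .link_id = 1, .attached_socket = 1 },
};

/* Pick the link attached to the socket named by the data's affinity
 * attribute, so the data lands close to the cores that will process it. */
static int select_link(int affinity_socket)
{
    for (int i = 0; i < NUM_LINKS; i++)
        if (links[i].attached_socket == affinity_socket)
            return links[i].link_id;
    return links[0].link_id;   /* fallback: default link */
}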
Abstract:
A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit, as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
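The intended semantics can be modeled in software as below. This is not real ISA behavior, only an illustration (using MESI-style state names as an assumption) of a copy that leaves the source cache line resident and its coherency state untouched.

/* Software model of the copy semantics: the source line stays resident and
 * its coherency state does not change after the copy. */
#include <stdint.h>
#include <string.h>

enum coherency { INVALID, SHARED, EXCLUSIVE, MODIFIED };

struct cache_line {
    uint64_t       addr;
    uint8_t        data[64];
    enum coherency state;
};

/* Copy from the first unit's cache line to the second unit's memory
 * location without demoting or evicting the source line. */
static void move_data(const struct cache_line *src, uint8_t *dst)
{
    memcpy(dst, src->data, sizeof src->data);
    /* Intentionally no state transition: src->state is left as-is, unlike a
     * conventional read that might force MODIFIED -> SHARED. */
}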
Abstract:
This disclosure describes enhancements to Ethernet for use in higher-performance applications such as storage, HPC, and Ethernet-based fabric interconnects. This disclosure provides various mechanisms for lossless fabric enhancements with error detection and retransmission to improve link reliability, frame pre-emption to allow higher-priority traffic to proceed ahead of lower-priority traffic, virtual channel support for deadlock avoidance by enhancing the Class of Service functionality defined in IEEE 802.1Q, a new header format for efficient forwarding/routing in the fabric interconnect, and a header CRC for reliable cut-through forwarding in the fabric interconnect. The enhancements described herein, when added to standard and/or proprietary Ethernet protocols, broaden the applicability of Ethernet to newer usage models and fabric interconnects that are currently served by alternative fabric technologies such as InfiniBand, Fibre Channel, and other proprietary technologies.
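As an illustration of the header-CRC idea only, the sketch below defines a hypothetical fabric header whose CRC covers just the header, so a cut-through switch can validate the forwarding fields before the rest of the frame has arrived. Field names, widths, and the CRC polynomial are assumptions, not the header format defined in this disclosure or in IEEE 802.1Q.

/* Hypothetical fabric header with a header-only CRC for cut-through
 * forwarding; layout is an assumption for illustration. */
#include <stdint.h>

struct fabric_hdr {
    uint16_t dst_id;      /* fabric destination for forwarding/routing */
    uint16_t src_id;
    uint8_t  vchannel;    /* virtual channel, for deadlock avoidance */
    uint8_t  priority;    /* class of service / pre-emption priority */
    uint16_t hdr_crc;     /* covers the header only, checked before the
                             remainder of the frame arrives */
} __attribute__((packed));

/* CRC-16-CCITT over the header bytes, excluding the CRC field itself. */
static uint16_t hdr_crc16(const uint8_t *p, unsigned len)
{
    uint16_t crc = 0xFFFF;
    for (unsigned i = 0; i < len; i++) {
        crc ^= (uint16_t)p[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}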
Abstract:
Generally, this disclosure provides devices, methods and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation.
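A minimal model of that interrupt condition follows, assuming a countdown timer and a driver-writable delay register; the structure and function names are illustrative, not the device's actual register interface.

/* Sketch: an interrupt fires only when the delay timer has expired AND the
 * descriptor queue is non-empty; the driver can postpone it by rewriting
 * the delay register. */
#include <stdbool.h>
#include <stdint.h>

struct intr_state {
    uint32_t delay_timer;    /* counts down each device tick */
    uint32_t delay_reload;   /* value written via the interrupt delay register */
    uint32_t queue_depth;    /* number of pending data descriptors */
};

/* Driver writes the interrupt delay register, resetting (postponing) the timer. */
static void write_delay_register(struct intr_state *s, uint32_t value)
{
    s->delay_reload = value;
    s->delay_timer  = value;
}

/* Called once per device tick; returns true when an interrupt is raised. */
static bool tick(struct intr_state *s)
{
    if (s->delay_timer > 0)
        s->delay_timer--;
    if (s->delay_timer == 0 && s->queue_depth > 0) {
        s->delay_timer = s->delay_reload;   /* re-arm for the next batch */
        return true;                        /* raise interrupt to the driver */
    }
    return false;
}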