Abstract:
Systems and methods for managing a shared cache in a multi-core processor. An example processing system comprises: a plurality of processing cores, each processing core communicatively coupled to a last level cache (LLC) slice; and a cache control logic coupled to the plurality of processing cores, the cache control logic configured to perform one of: making the LLC slice of an inactive processing core available to an active processing core, or power gating the LLC slice, based on an estimate of the cache requirements of the active processing cores.
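As a rough illustration of the decision this abstract describes, here is a minimal Python sketch. The demand estimator, the capacity figures, and the action names are hypothetical; the abstract does not specify how cache requirements are estimated.

    # Sketch of the LLC slice decision. The estimate/capacity inputs and
    # thresholding are assumptions; the abstract only states that the choice
    # between remapping and power gating is based on estimated cache
    # requirements of the active cores.

    def manage_llc_slice(active_demand_kb, active_capacity_kb):
        """Decide what to do with the LLC slice of an inactive core.

        Returns "remap" if active cores are estimated to need more cache
        than they currently own, else "power_gate".
        """
        if active_demand_kb > active_capacity_kb:
            return "remap"       # make the idle slice available to active cores
        return "power_gate"      # active cores fit; save leakage power instead

    # Example: active cores demand 3 MB but own only 2 MB of LLC,
    # so the idle core's slice is remapped rather than gated.
    print(manage_llc_slice(active_demand_kb=3072, active_capacity_kb=2048))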
Abstract:
Technologies for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache based on a determination that the network packet is not needed subsequent to processing the network packet, and provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
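A minimal sketch of the eviction hint, assuming a boolean "needed later" flag produced by packet processing and a single policy bit received from the network controller; both are hypothetical simplifications of what the abstract describes.

    # Sketch of the cache-line eviction hint. The two flags are assumed
    # stand-ins for the packet-need determination and the controller's
    # eviction policy named in the abstract.

    def eviction_hint(packet_needed_later, policy_allows_eviction):
        """Return True if the packet's cache line should be flagged for eviction."""
        return (not packet_needed_later) and policy_allows_eviction

    # A forwarded-and-done packet under a permissive policy is evicted early,
    # freeing cache capacity for packets that will be touched again.
    print(eviction_hint(packet_needed_later=False, policy_allows_eviction=True))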
Abstract:
Methods and systems may provide for determining whether a runtime disablement condition is met with respect to a sleep state and disabling the sleep state if the runtime disablement condition is met. Additionally, the sleep state may be enabled if a runtime reinstatement condition is met. In one example, determining whether the runtime disablement condition is met includes determining a false entry rate for the sleep state, and comparing the false entry rate to an energy-based threshold, wherein the sleep state is disabled if the false entry rate exceeds the energy-based threshold.
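The abstract names the key quantities (a false entry rate compared to an energy-based threshold) without giving their formulas. The sketch below fills in one plausible break-even model as an explicit assumption.

    # Sketch of the runtime disablement check. The break-even formula for the
    # energy-based threshold is an assumed model, not taken from the abstract.

    def sleep_state_enabled(false_entries, total_entries,
                            entry_exit_energy, residency_savings):
        """Keep the sleep state enabled only while false entries (entries too
        short to amortize the entry/exit cost) waste less energy than real
        entries save. The threshold is the rate at which expected savings
        and expected waste break even.
        """
        false_rate = false_entries / total_entries
        threshold = residency_savings / (residency_savings + entry_exit_energy)
        return false_rate <= threshold   # above the threshold: disable

    # 40% of entries are false; with these (hypothetical) energy values the
    # break-even rate is ~33%, so the sleep state is disabled at runtime.
    print(sleep_state_enabled(40, 100, entry_exit_energy=2.0, residency_savings=1.0))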
Abstract:
Methods and systems may provide for determining a status of a mobile platform, wherein the status indicates whether the mobile platform is stationary, and adapting a detection schedule of one or more location sensors on the mobile platform based at least in part on whether the mobile platform is stationary. Additionally, one or more location updates may be generated based at least in part on information from the one or more location sensors. In one example, a location request is received, wherein the detection schedule is adapted further based on quality of service (QoS) information associated with the location request, and wherein the one or more location updates are generated in response to the location request.
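A minimal sketch of the schedule adaptation, assuming concrete polling intervals and a QoS latency bound carried by the location request; the interval values and the QoS mapping are hypothetical.

    # Sketch of the detection-schedule adaptation. The intervals and the use
    # of a latency bound as the QoS input are assumptions; the abstract says
    # only that the schedule depends on stationarity and, when a location
    # request arrives, on its QoS information.

    def detection_interval_s(stationary, qos_latency_s=None):
        """Return how often the location sensors should be polled, in seconds."""
        interval = 300 if stationary else 10   # back off heavily while stationary
        if qos_latency_s is not None:          # a pending request tightens the schedule
            interval = min(interval, qos_latency_s)
        return interval

    print(detection_interval_s(stationary=True))                   # 300: platform parked
    print(detection_interval_s(stationary=True, qos_latency_s=5))  # 5: urgent request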
Abstract:
In one embodiment, a processor includes: a plurality of cores each to independently execute instructions; a shared cache memory coupled to the plurality of cores and having a plurality of clusters each associated with one or more of the plurality of cores; a plurality of cache activity monitors each associated with one of the plurality of clusters, where each cache activity monitor is to monitor one or more performance metrics of the corresponding cluster and to output cache metric information; a plurality of thermal sensors each associated with one of the plurality of clusters and to output thermal information; and a logic coupled to the plurality of cores to receive the cache metric information from the plurality of cache activity monitors and the thermal information and to schedule one or more threads to a selected core based at least in part on the cache metric information and the thermal information for the cluster associated with the selected core. Other embodiments are described and claimed.
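The scheduling input per cluster (cache metric information plus thermal information) suggests a scoring function over candidate cores. The sketch below uses an assumed score; the metric choice and weights are hypothetical, since the abstract only says scheduling is based at least in part on both inputs.

    # Sketch of the thread-placement decision. Miss rate as the cache metric,
    # the 0.01 thermal weight, and "lower score wins" are all assumptions.

    def select_core(clusters):
        """Pick the core whose cluster best balances cache behavior and temperature.

        `clusters` maps core id -> (cache_miss_rate, temperature_c).
        """
        def score(metrics):
            miss_rate, temp_c = metrics
            return miss_rate + 0.01 * temp_c   # lower is better

        return min(clusters, key=lambda core: score(clusters[core]))

    clusters = {0: (0.30, 70.0), 1: (0.10, 55.0), 2: (0.12, 90.0)}
    print(select_core(clusters))  # core 1: low miss rate and a cool cluster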
Abstract:
A network interface device (NID) may determine whether data units received from the computer system are to be compressed before being transmitted. The NID may determine the compression energy value consumed to compress the first K1 data units and a second transmission energy value consumed to transmit the compressed first K1 data units. The NID may then estimate, using the second transmission energy value, a first transmission energy value that would be consumed by the NID to transmit the first K1 data units uncompressed. The NID may then use the first and second transmission energy values and the compression energy value to determine whether the remaining (N−K1) data units of the first data stream are to be compressed.
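A minimal sketch of the energy comparison, with hypothetical energy numbers: the first K1 units are compressed as a probe, and the measured energies decide whether compressing the remaining (N−K1) units is worthwhile. Estimating the uncompressed transmission energy from the compression ratio is an assumed model.

    # Sketch of the compress-vs-transmit decision for the rest of the stream.
    # All joule figures and the ratio-based estimate are assumptions.

    def compress_remaining(k1_compression_j, k1_compressed_tx_j, compression_ratio):
        """Return True if the remaining (N - K1) data units should be compressed.

        The uncompressed transmission energy for the first K1 units is
        estimated from the compressed transmission energy and the achieved
        compression ratio.
        """
        k1_uncompressed_tx_j = k1_compressed_tx_j / compression_ratio
        return k1_compression_j + k1_compressed_tx_j < k1_uncompressed_tx_j

    # Compressing to 40% of the original size cuts transmit energy enough to
    # pay for the compression itself, so the remaining units are compressed.
    print(compress_remaining(k1_compression_j=0.5,
                             k1_compressed_tx_j=2.0,
                             compression_ratio=0.4))  # True (2.5 J < 5.0 J)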
Abstract:
Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies the predefined condition. The batch size is indicative of a threshold number of packets required to be present in the queue before the queued packets can be processed by the network device.
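A minimal sketch of the adjustment loop. The abstract does not specify the condition or the direction of the adjustment; treating "throughput below a target" as the condition, and doubling/halving as the step, are assumptions.

    # Sketch of batch-size adjustment. Target, step sizes, and bounds are
    # hypothetical; the abstract states only that the batch size changes when
    # the measured throughput satisfies a predefined condition.

    def adjust_batch_size(batch_size, throughput_mpps, target_mpps,
                          min_size=1, max_size=64):
        """Grow the batch when throughput lags the target, shrink it otherwise."""
        if throughput_mpps < target_mpps:
            return min(batch_size * 2, max_size)   # amortize per-batch overhead
        return max(batch_size // 2, min_size)      # reduce queueing latency

    size = 8
    size = adjust_batch_size(size, throughput_mpps=3.0, target_mpps=5.0)
    print(size)  # 16: throughput lagged, so more packets are batched per pass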
Abstract:
Apparatus, methods, and systems for tuple space search-based flow classification using cuckoo hash tables and unmasked packet headers are described herein. A device can communicate with one or more hardware switches. The device can include memory to store hash table entries of a hash table. The device can include processing circuitry to perform a hash lookup in the hash table. The lookup can be based on an unmasked key included in a packet header corresponding to a received data packet. The processing circuitry can retrieve an index pointing to a sub-table, the sub-table including a set of rules for handling the data packet. Other embodiments are also described.
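A minimal sketch of the lookup path, using Python dicts in place of hardware cuckoo hash tables. The key field layout and the rule format are hypothetical; the point is the two-step flow the abstract describes: unmasked header key, index to a sub-table, rule lookup in that sub-table.

    # First-level table: maps an unmasked header key to a sub-table index.
    key_to_subtable = {("10.0.0.1", "10.0.0.2", 80): 0}

    # Sub-tables, each holding the rules for one mask/tuple combination.
    subtables = [
        {("10.0.0.1", "10.0.0.2", 80): "forward to port 3"},
    ]

    def classify(unmasked_key):
        """Return the rule for a packet, or None if no flow entry matches."""
        index = key_to_subtable.get(unmasked_key)
        if index is None:
            return None
        return subtables[index].get(unmasked_key)

    print(classify(("10.0.0.1", "10.0.0.2", 80)))  # "forward to port 3"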
Abstract:
A central processing unit can offload table lookup or tree traversal to an offload engine. The offload engine can provide hardware-accelerated operations such as instruction queueing, bit masking, hashing functions, data comparisons, a results queue, and progress tracking. The offload engine can be associated with a last level cache. In the case of a hash table lookup, the offload engine can apply a hashing function to a key to generate a signature, apply a comparator to compare stored signatures against the generated signature, retrieve the key associated with a matching signature, and apply the comparator to compare the lookup key against the retrieved key. Accordingly, a data pointer associated with the key can be provided in the results queue. Acceleration of operations in tree traversal and tuple search can also occur.
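A minimal sketch of the hash-lookup sequence in software. The signature width, bucket layout, and hash choice are hypothetical; the steps mirror the abstract: hash the key to a signature, compare signatures, then compare full keys, and emit the matching entry's data pointer to a results queue.

    from collections import deque

    def signature(key):
        return hash(key) & 0xFFFF            # short signature, assumed 16 bits

    bucket = [                               # (signature, key, data pointer)
        (signature(b"flow-a"), b"flow-a", 0x1000),
        (signature(b"flow-b"), b"flow-b", 0x2000),
    ]
    result_queue = deque()

    def offload_lookup(key):
        """Signature compare first, then full-key compare; publish hits."""
        sig = signature(key)
        for entry_sig, entry_key, data_ptr in bucket:
            if entry_sig == sig and entry_key == key:
                result_queue.append(data_ptr)   # publish to the results queue
                return True
        return False

    offload_lookup(b"flow-b")
    print(hex(result_queue.popleft()))  # 0x2000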