Abstract:
Methods, apparatuses, systems, and storage media for creating, discovering, and/or resolving spells using a wand are provided. In embodiments, a computing device or a wand may detect one or more gestures, and sensors in the wand may generate sensor data representative of the one or more gestures. The one or more gestures may be movements performed using the wand. The sensor data representative of the one or more gestures may be converted into a spell sequence. The wand may transmit the spell sequence to a computing device, and receive, from the computing device, an authentication spell output when the spell sequence corresponds with an authentication spell sequence or an inactivation spell output when the spell sequence does not correspond with the authentication spell sequence. Other embodiments may be described and/or claimed.
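A minimal Python sketch of the flow described above, under assumed names: the gesture vocabulary, symbol mapping, and output strings are illustrative, not taken from the abstract.

```python
# Hypothetical sketch: detected gestures -> spell sequence -> authentication check.
from typing import Sequence

# Assumed gesture vocabulary; the abstract does not define concrete gestures or symbols.
GESTURE_TO_SYMBOL = {"flick_up": "A", "circle": "B", "zigzag": "C", "flick_down": "D"}

def to_spell_sequence(gestures: Sequence[str]) -> str:
    """Convert gestures (derived from the wand's sensor data) into a spell sequence."""
    return "".join(GESTURE_TO_SYMBOL[g] for g in gestures)

def resolve_spell(spell_sequence: str, authentication_sequence: str) -> str:
    """Return an authentication output when the sequence matches, else an inactivation output."""
    if spell_sequence == authentication_sequence:
        return "authentication_spell_output"
    return "inactivation_spell_output"

if __name__ == "__main__":
    detected = ["flick_up", "circle", "zigzag"]            # gestures inferred from sensor data
    print(resolve_spell(to_spell_sequence(detected), "ABC"))  # -> authentication_spell_output
```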
Abstract:
A heterogeneous processor architecture is described. For example, a processor according to one embodiment of the invention comprises: a set of large physical processor cores; a set of small physical processor cores having lower performance processing capabilities and lower power usage relative to the large physical processor cores; and virtual-to-physical (V-P) mapping logic to expose the set of large physical processor cores to software through a corresponding set of virtual cores and to hide the set of small physical processor cores from the software.
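A minimal Python sketch of the V-P mapping idea, assuming a flat list of physical cores with a large/small flag; the data model is illustrative only.

```python
# Hypothetical sketch of virtual-to-physical (V-P) core mapping: software sees only
# virtual cores backed by large physical cores; small physical cores stay hidden.
from dataclasses import dataclass

@dataclass(frozen=True)
class PhysicalCore:
    core_id: int
    is_large: bool

def build_vp_map(cores: list[PhysicalCore]) -> dict[int, int]:
    """Map virtual core IDs to large physical core IDs; small cores are not exposed."""
    large = [c.core_id for c in cores if c.is_large]
    return {virtual_id: phys_id for virtual_id, phys_id in enumerate(large)}

if __name__ == "__main__":
    cores = [PhysicalCore(0, True), PhysicalCore(1, True),
             PhysicalCore(2, False), PhysicalCore(3, False)]
    print(build_vp_map(cores))  # {0: 0, 1: 1} -- software only addresses virtual cores 0 and 1
```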
Abstract:
A battery includes integrated circuitry. The battery may include, for example, a substrate with a battery cell including an anode and a cathode. One or more electrical devices may be integrated on or within the substrate and configured to receive power from the anode and cathode. A package containing the substrate and the one or more electrical devices may include a first battery terminal electrically coupled to the anode and a second battery terminal electrically coupled to the cathode. The one or more electrical devices may include sensing circuitry to generate sensor data, and communication circuitry to provide the sensor data external to the package.
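A minimal Python sketch of the package's data path, assuming the sensed quantities and field names; the abstract only states that sensing circuitry generates sensor data and communication circuitry provides it outside the package.

```python
# Hypothetical sketch: a battery cell powers integrated sensing circuitry, and
# communication circuitry reports the resulting sensor data external to the package.
from dataclasses import dataclass

@dataclass
class BatteryCell:
    voltage_v: float        # measured across anode and cathode (assumed quantity)
    temperature_c: float    # assumed quantity

def sense(cell: BatteryCell) -> dict:
    """Sensing circuitry: generate sensor data from the cell."""
    return {"voltage_v": cell.voltage_v, "temperature_c": cell.temperature_c}

def communicate(sensor_data: dict) -> str:
    """Communication circuitry: serialize sensor data for a reader outside the package."""
    return ",".join(f"{k}={v}" for k, v in sensor_data.items())

if __name__ == "__main__":
    print(communicate(sense(BatteryCell(voltage_v=3.7, temperature_c=28.5))))
```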
Abstract:
Systems and methods for cache allocation with code and data prioritization. An example system may comprise: a cache; a processing core, operatively coupled to the cache; and a cache control logic, responsive to receiving a cache fill request comprising an identifier of a request type and an identifier of a class of service, to identify a subset of the cache corresponding to a capacity bit mask associated with the request type and the class of service.
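A minimal Python sketch of the lookup the cache control logic performs, assuming an 8-way cache and illustrative bit masks; the (request type, class of service) pairing is from the abstract, the specific mask values are not.

```python
# Hypothetical sketch of code/data-prioritized cache allocation: a fill request carries a
# request type and a class of service (CLOS), and the pair selects a capacity bit mask
# that restricts which cache ways the fill may occupy.
CAPACITY_BIT_MASKS = {
    # (request_type, class_of_service) -> way bit mask (8-way cache assumed)
    ("code", 0): 0b11110000,
    ("data", 0): 0b00001111,
    ("code", 1): 0b11000000,
    ("data", 1): 0b00111111,
}

def allowed_ways(request_type: str, clos: int) -> list[int]:
    """Return the cache way indices the fill request is allowed to use."""
    mask = CAPACITY_BIT_MASKS[(request_type, clos)]
    return [way for way in range(8) if mask & (1 << way)]

if __name__ == "__main__":
    print(allowed_ways("code", 0))  # ways 4-7
    print(allowed_ways("data", 0))  # ways 0-3
```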
Abstract:
In one embodiment, the present invention includes a method for receiving an interrupt from an accelerator, sending a resume signal directly to a first small core responsive to the interrupt, providing a subset of an execution state of a large core to the first small core, determining whether the first small core can handle a request associated with the interrupt, performing an operation corresponding to the request in the first small core if the determination is in the affirmative, and otherwise providing the large core execution state and the resume signal to the large core. Other embodiments are described and claimed.
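A minimal Python sketch of that decision flow; the core model, the choice of which state fields form the "subset", and the operation names are all assumptions for illustration.

```python
# Hypothetical sketch of the interrupt flow: resume a small core with a subset of the
# large core's execution state, and fall back to the large core when the small core
# cannot handle the request.
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    handled_ops: set    # operations this core can service

    def resume(self, state: dict) -> None:
        print(f"{self.name} resumed with state keys {sorted(state)}")

    def can_handle(self, op: str) -> bool:
        return op in self.handled_ops

    def perform(self, op: str) -> str:
        return f"{op} performed on {self.name}"

def on_accelerator_interrupt(op: str, small: Core, large: Core, large_state: dict) -> str:
    subset = {"pc": large_state["pc"]}      # subset of the large core's execution state
    small.resume(subset)                    # resume signal goes to the small core first
    if small.can_handle(op):
        return small.perform(op)
    large.resume(large_state)               # otherwise hand the full state back
    return large.perform(op)

if __name__ == "__main__":
    small = Core("small", {"page_fault"})
    large = Core("large", {"page_fault", "fp_exception"})
    state = {"pc": 0x4000, "regs": [0] * 16}
    print(on_accelerator_interrupt("fp_exception", small, large, state))
```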
Abstract:
Systems and methods may provide for capturing a user input by emulating a touch screen mechanism. In one example, the method may include identifying a point of interest on a front facing display of the device based on gaze information associated with a user of the device, identifying a hand action based on gesture information associated with the user of the device, and initiating a device action with respect to the front facing display based on the point of interest and the hand action.
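A minimal Python sketch of combining the two inputs, assuming a recognized hand-action label and an illustrative action mapping; the gaze and gesture recognition stages themselves are out of scope here.

```python
# Hypothetical sketch: combine a gaze-derived point of interest with a recognized hand
# action to emit a touch-like device action on the front-facing display.
from dataclasses import dataclass

@dataclass(frozen=True)
class PointOfInterest:
    x: int
    y: int

def device_action(poi: PointOfInterest, hand_action: str) -> str:
    """Map (point of interest, hand action) to an emulated touch-screen action."""
    actions = {"pinch": "tap", "spread": "zoom_in", "swipe_left": "scroll_left"}
    emulated = actions.get(hand_action, "none")
    return f"{emulated} at ({poi.x}, {poi.y})"

if __name__ == "__main__":
    gaze_point = PointOfInterest(x=120, y=340)     # from gaze information
    print(device_action(gaze_point, "pinch"))      # -> "tap at (120, 340)"
```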
Abstract:
Systems and methods may provide for determining whether a memory access request is error-tolerant, and routing the memory access request to a reliable memory region if the memory access request is not error-tolerant. Conversely, the memory access request may be routed to an unreliable memory region if the memory access request is error-tolerant. In one example, use of the unreliable memory region enables a reduction in the minimum operating voltage level for a die containing the reliable and unreliable memory regions.
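A minimal Python sketch of the routing rule, assuming error tolerance is tracked per page; the page tags and addresses are illustrative.

```python
# Hypothetical sketch: error-intolerant requests go to the reliable region, error-tolerant
# requests go to the unreliable (low minimum-voltage) region.
RELIABLE_REGION = "reliable"
UNRELIABLE_REGION = "unreliable"

# Assumed policy: pages tagged as holding error-tolerant data (e.g. image buffers).
ERROR_TOLERANT_PAGES = {0x2000, 0x3000}

def route(page: int) -> str:
    """Pick a memory region for an access to the given page."""
    return UNRELIABLE_REGION if page in ERROR_TOLERANT_PAGES else RELIABLE_REGION

if __name__ == "__main__":
    print(route(0x1000))  # -> reliable
    print(route(0x2000))  # -> unreliable
```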
Abstract:
An apparatus comprises a plurality of cores and a controller coupled to the cores. The controller is to lower an operating point of a first core if a first number based on processor clock cycles per instruction (CPI) associated with a second core is higher than a first threshold. The controller is operable to increase the operating point of the first core if the first number is lower than a second threshold.
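A minimal Python sketch of the controller's two-threshold rule; the threshold values, step size, and the meaning of an "operating point" level are assumptions.

```python
# Hypothetical sketch: lower the first core's operating point when a CPI-based metric
# from the second core exceeds a high threshold, and raise it again when the metric
# drops below a low threshold.
def adjust_operating_point(op_point: int, cpi_metric: float,
                           high_threshold: float = 2.0,
                           low_threshold: float = 1.0) -> int:
    if cpi_metric > high_threshold:
        return max(op_point - 1, 0)     # lower the first core's operating point
    if cpi_metric < low_threshold:
        return op_point + 1             # increase it again
    return op_point                     # in between: leave it unchanged

if __name__ == "__main__":
    op = 4
    for metric in (2.5, 1.5, 0.8):
        op = adjust_operating_point(op, metric)
        print(metric, "->", op)
```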
Abstract:
An apparatus to facilitate inferred object shading is disclosed. The apparatus comprises one or more processors to receive rasterized pixel data and hierarchical data associated with one or more objects and perform an inferred shading operation on the rasterized pixel data, including using one or more trained neural networks to perform texture and lighting on the rasterized pixel data to generate a pixel output, wherein the one or more trained neural networks use the hierarchical data to learn a three-dimensional (3D) geometry, latent space and representation of the one or more objects.
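A minimal sketch, assuming PyTorch: a small network takes per-pixel rasterized features concatenated with hierarchical/latent object features and predicts shaded RGB, standing in for the texture-and-lighting inference described above. The architecture, feature widths, and shapes are illustrative assumptions, not the disclosed network.

```python
# Hypothetical sketch of an inferred-shading network (PyTorch assumed).
import torch
import torch.nn as nn

class InferredShader(nn.Module):
    def __init__(self, pixel_feats: int = 8, hier_feats: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pixel_feats + hier_feats, 64),
            nn.ReLU(),
            nn.Linear(64, 3),      # RGB per pixel
            nn.Sigmoid(),
        )

    def forward(self, pixel_data: torch.Tensor, hier_data: torch.Tensor) -> torch.Tensor:
        # pixel_data: (N, pixel_feats) rasterized per-pixel attributes
        # hier_data:  (N, hier_feats) hierarchical / latent object representation
        return self.net(torch.cat([pixel_data, hier_data], dim=-1))

if __name__ == "__main__":
    shader = InferredShader()
    rgb = shader(torch.rand(4, 8), torch.rand(4, 16))
    print(rgb.shape)  # torch.Size([4, 3])
```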
Abstract:
An apparatus and method for closed loop dynamic resource allocation. For example, one embodiment of a method comprises: collecting data related to usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including priority workloads associated with one or more guaranteed performance levels and best effort workloads not associated with guaranteed performance levels; analyzing the data to identify resource reallocations from one or more of the priority workloads to one or more of the best effort workloads in one or more subsequent time periods while still maintaining the guaranteed performance levels; reallocating the resources from the priority workloads to the best effort workloads for the subsequent time periods; monitoring execution of the priority workloads with respect to the guaranteed performance levels during the subsequent time periods; and, responsive to detecting that a guaranteed performance level is in danger of being breached, preemptively reallocating resources from the best effort workloads to the priority workloads during the subsequent time periods to ensure compliance with that guaranteed performance level.
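A minimal Python sketch of one iteration of that closed loop, under assumptions: resources are counted as abstract units, performance guarantees are scalar targets, and a 10% safety margin decides when a guarantee is "in danger". All names and numbers are illustrative.

```python
# Hypothetical sketch of the closed control loop: unused headroom from priority
# workloads is lent to a best-effort pool, and is preemptively reclaimed when a
# priority workload's measured performance drifts toward its guarantee.
def rebalance(priority_alloc: dict[str, int], priority_usage: dict[str, int],
              measured_perf: dict[str, float], guarantees: dict[str, float],
              best_effort_alloc: int, margin: float = 0.10) -> tuple[dict[str, int], int]:
    new_alloc = dict(priority_alloc)
    for wl, guarantee in guarantees.items():
        if measured_perf[wl] < guarantee * (1 + margin):
            # Guarantee in danger of being breached: reclaim everything previously lent.
            new_alloc[wl] += best_effort_alloc
            best_effort_alloc = 0
        else:
            # Lend unused headroom (observed over the last period) to best-effort work.
            headroom = max(priority_alloc[wl] - priority_usage[wl], 0)
            new_alloc[wl] -= headroom
            best_effort_alloc += headroom
    return new_alloc, best_effort_alloc

if __name__ == "__main__":
    alloc, be = rebalance({"db": 10}, {"db": 6}, {"db": 1.3}, {"db": 1.0}, best_effort_alloc=2)
    print(alloc, be)   # db lends 4 units of headroom: {'db': 6} 6
```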