Abstract:
A method and apparatus suited for use with a decoder receiving data serially from a network are disclosed, providing synchronization throughout reception and decoding of packets of symbols. An appropriately delayed read-pointer initialization strobe, used by an elastic buffer portion of the receiver, provides the sequence of synchronization signals and avoids deletion of bits of the packet preamble.
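For illustration, a minimal C sketch of an elastic buffer whose read pointer is initialized by a delayed strobe follows. The structure and names (elastic_buf, DELAY_BITS, write_bit, read_bit) and the fixed four-bit delay are assumptions made for the example, not elements recited by the abstract.

```c
#include <stdio.h>

#define BUF_BITS   16   /* depth of the elastic buffer, in bits                 */
#define DELAY_BITS  4   /* delay before the read-pointer initialization strobe  */

typedef struct {
    unsigned char bits[BUF_BITS];
    int wr;             /* write pointer, advances each received bit            */
    int rd;             /* read pointer, becomes valid after the strobe         */
    int rd_valid;
} elastic_buf;

/* Receive one bit; DELAY_BITS bit times after the first write, the delayed
 * strobe initializes the read pointer at the start of the buffer so the
 * earliest preamble bits are still available to the decoder. */
static void write_bit(elastic_buf *b, unsigned char bit, int bit_time)
{
    b->bits[b->wr] = bit;
    b->wr = (b->wr + 1) % BUF_BITS;
    if (!b->rd_valid && bit_time == DELAY_BITS) {
        b->rd = 0;
        b->rd_valid = 1;
    }
}

/* Return the next bit for the decoder, or -1 before synchronization. */
static int read_bit(elastic_buf *b)
{
    if (!b->rd_valid)
        return -1;
    int bit = b->bits[b->rd];
    b->rd = (b->rd + 1) % BUF_BITS;
    return bit;
}

int main(void)
{
    elastic_buf b = {0};
    unsigned char preamble[8] = {1, 0, 1, 0, 1, 0, 1, 1};
    for (int t = 0; t < 8; t++) {
        write_bit(&b, preamble[t], t);
        int r = read_bit(&b);
        if (r >= 0)
            printf("decoder reads bit %d\n", r);
    }
    return 0;
}
```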
Abstract:
In accordance with the invention, the reference portion of a primitive current switch used in emitter coupled logic or current mode logic is modified by introducing a slow device as the reference element in order to enhance the speed of turn-on and turn-off of the input elements. In particular, the reference transistor of a conventional ECL inverter gate or conventional CML inverter gate is replaced with a slow transistor or slow diode in order to bypass the emitter dynamic resistance. The emitter time constant of the reference element Q_R is thereby increased so that the voltage on the common current source node (node 3) does not change substantially when the bases of the input elements change transiently. As a consequence, the collector output of the input element, such as transistor Q_A, is switched on or off significantly faster.
Abstract:
A method for collapsing operations into super operations in a computing system includes dispatching a super operation corresponding to a collapsible sequence of operations to a scheduler, performing a lookup in a super operation table for the collapsible sequence of operations in response to the super operation being picked from the scheduler, and multi-pumping the collapsible sequence of operations to a pipe operationally coupled to the scheduler. For example, the multi-pumped collapsible sequence of operations may then be sequentially executed by an execution unit. The collapsible sequence of operations may be identified as collapsible according to a set of rules.
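As a rough illustration, the C sketch below shows one way such a lookup and multi-pump could behave: a super operation picked from the scheduler indexes a small table holding its collapsible sequence of operations, and the constituent operations are then issued to the pipe on successive cycles. The table contents, names (super_op_table, multi_pump), and two-entry sequences are assumptions for the example only.

```c
#include <stdio.h>

#define MAX_SEQ 4

typedef struct {
    const char *name;
    int         num_ops;
    const char *ops[MAX_SEQ];  /* the collapsed sequence of simple operations */
} super_op_entry;

/* Super operation table consulted when a super op is picked from the scheduler. */
static const super_op_entry super_op_table[] = {
    { "LOAD_ADD",  2, { "LOAD", "ADD" } },
    { "MUL_ACCUM", 2, { "MUL",  "ADD" } },
};

/* "Multi-pump" the sequence: issue one constituent op to the pipe per cycle,
 * so the execution unit can run them sequentially. */
static void multi_pump(const super_op_entry *e)
{
    for (int cycle = 0; cycle < e->num_ops; cycle++)
        printf("cycle %d: issue %s to pipe\n", cycle, e->ops[cycle]);
}

int main(void)
{
    /* The scheduler picks super op 0 ("LOAD_ADD"); look it up and pump it. */
    multi_pump(&super_op_table[0]);
    return 0;
}
```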
Abstract:
A processing system includes one or more DMA engines that load data from memory or another cache location without storing the data after loading it. As the data propagates past caches located between the memory or other cache location that stores the requested data (“intermediate caches”), the data is selectively copied to the intermediate caches based on a cache replacement policy. Rather than the DMA engine manually storing the data into the intermediate caches, the cache replacement policies of the intermediate caches determine whether the data is copied into each respective cache and, if so, at what replacement priority. By not storing the data itself, the DMA engine effectuates prefetching to the intermediate caches without expending unnecessary bandwidth or searching for a memory location to store the data, thus reducing latency and saving energy.
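The C sketch below illustrates the division of responsibility this implies, under the assumption that each intermediate cache exposes a replacement-policy hook consulted as the data passes by; the cache names, the should_copy interface, and the priority values are hypothetical, not the claimed implementation.

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    const char *name;
    bool (*should_copy)(unsigned long addr, int *priority_out);
} intermediate_cache;

static bool l2_policy(unsigned long addr, int *prio)
{
    (void)addr;
    *prio = 1;                 /* keep the line, but at a low replacement priority */
    return true;
}

static bool l3_policy(unsigned long addr, int *prio)
{
    (void)addr;
    (void)prio;
    return false;              /* this cache's policy chooses not to keep the line */
}

/* DMA "load without store": the engine discards the data itself; any copy
 * into an intermediate cache is made by that cache's own replacement policy. */
static void dma_touch_line(unsigned long addr,
                           const intermediate_cache *caches, int n)
{
    for (int i = 0; i < n; i++) {
        int prio = 0;
        if (caches[i].should_copy(addr, &prio))
            printf("%s copies line 0x%lx at priority %d\n",
                   caches[i].name, addr, prio);
        else
            printf("%s passes line 0x%lx through\n", caches[i].name, addr);
    }
    /* the DMA engine itself stores nothing */
}

int main(void)
{
    intermediate_cache caches[] = {
        { "L2", l2_policy },
        { "L3", l3_policy },
    };
    dma_touch_line(0x1000, caches, 2);
    return 0;
}
```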
Abstract:
A multipurpose wordline underdrive circuit includes a wordline driver and a pulldown network. The pulldown network includes a first current-carrying terminal electrically coupled to the wordline driver and a second current-carrying terminal electrically coupled to a control signal. The pulldown network also includes a current-regulation terminal electrically coupled to an additional control signal. Various other devices, systems, and methods are also disclosed.
Abstract:
A memory accessing circuit includes a memory controller for scheduling accesses to a memory, and a physical interface circuit that drives signals to the memory according to the scheduled accesses and that has configuration data. The memory controller comprises a memory and is responsive to a low power mode entry signal to save the configuration data in this memory. In response to the memory controller completing the save operation, the physical interface circuit removes operating power from the circuitry in the physical interface circuit that stores the configuration data.
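A minimal C sketch of the save-then-power-down ordering is given below; the register count, structure names (phy_interface, memory_controller), and the memcpy-based save are illustrative assumptions, not the circuit described in the abstract.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define PHY_CFG_WORDS 8

typedef struct {
    uint32_t cfg[PHY_CFG_WORDS];        /* configuration data held in the PHY */
    int powered;
} phy_interface;

typedef struct {
    uint32_t saved_cfg[PHY_CFG_WORDS];  /* controller-side save memory        */
    int save_done;
} memory_controller;

static void low_power_entry(memory_controller *mc, phy_interface *phy)
{
    /* Controller saves the PHY configuration data in its own memory... */
    memcpy(mc->saved_cfg, phy->cfg, sizeof phy->cfg);
    mc->save_done = 1;
    /* ...and only after the save completes is PHY power removed. */
    if (mc->save_done)
        phy->powered = 0;
}

static void low_power_exit(memory_controller *mc, phy_interface *phy)
{
    phy->powered = 1;
    memcpy(phy->cfg, mc->saved_cfg, sizeof phy->cfg);   /* restore on wake */
}

int main(void)
{
    phy_interface phy = { .cfg = {1, 2, 3, 4, 5, 6, 7, 8}, .powered = 1 };
    memory_controller mc = {0};

    low_power_entry(&mc, &phy);
    printf("PHY powered: %d, saved word 0: %u\n", phy.powered, (unsigned)mc.saved_cfg[0]);
    low_power_exit(&mc, &phy);
    printf("PHY powered: %d, restored word 0: %u\n", phy.powered, (unsigned)phy.cfg[0]);
    return 0;
}
```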
Abstract:
Platform power management includes boosting performance in a platform power boost mode, or restricting performance to keep power or temperature under a desired threshold in a platform power cap mode. Platform power management exploits the mutually exclusive nature of activities, and the associated headroom created in the temperature and/or power budget of a server platform, to boost performance of a particular component while also keeping temperature and/or power below a threshold or budget.
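For a concrete feel of the headroom calculation, the C sketch below assumes a fixed 100 W platform budget shared by two components; the budget, the component powers, and the boosted_limit function are illustrative numbers and names, not values from the abstract.

```c
#include <stdio.h>

#define PLATFORM_BUDGET_W 100.0

/* Boost one component with the headroom freed by the other while keeping the
 * total platform power at or under the budget (the power cap). */
static double boosted_limit(double own_nominal_w, double other_draw_w)
{
    double headroom = PLATFORM_BUDGET_W - own_nominal_w - other_draw_w;
    double limit = own_nominal_w + (headroom > 0 ? headroom : 0);
    /* Power cap mode: never allow the sum to exceed the platform budget. */
    if (limit + other_draw_w > PLATFORM_BUDGET_W)
        limit = PLATFORM_BUDGET_W - other_draw_w;
    return limit;
}

int main(void)
{
    /* One component nominally gets 60 W; when the other is nearly idle at
     * 10 W, the first may boost into the unused headroom of the 100 W budget. */
    printf("limit with idle peer:  %.1f W\n", boosted_limit(60.0, 10.0));
    printf("limit with busy peer:  %.1f W\n", boosted_limit(60.0, 45.0));
    return 0;
}
```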
Abstract:
Disclosed herein are chip packages and electronic devices that utilize an active silicon bridge having a memory controller to interface between a logic device having at least one compute die and one or more memory stacks within a singular chip package. In one example, a chip package is provided that includes a substrate, a logic device, a memory stack, and an active silicon bridge. The logic device is disposed over the substrate. The logic device includes one or more compute dies. The memory stack is disposed over the substrate adjacent the logic device. The active silicon bridge has a first portion and a second portion. The first portion is disposed between the substrate and the logic device, while the second portion is disposed between the substrate and the memory stack.
Abstract:
A processing system preconditions block-compressed texture blocks by separately streaming color components and index components for lossless compression. The processing system preconditions the color components for linear compression, such that color component data for adjacent compressed blocks in a row are further compressed using lossless compression. Lossless compression is performed for color components spanning multiple rows to leverage patterns in color components that extend vertically across a frame. The processing system further divides input color component and index component data into pages of memory (e.g., 64 kB pages), such that each page can be independently losslessly compressed and decompressed. The processing system applies delta encoding to color component data so that a single instance of color data and differences from the stored color data are stored for each page.
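A minimal C sketch of this preconditioning idea follows, assuming a simplified 8-byte, BC1-style block (two 16-bit endpoint colors plus 32 bits of indices); the block layout, the per-row delta over color0 only, and all names are assumptions for the example and omit the paging and the lossless coder itself.

```c
#include <stdio.h>
#include <stdint.h>

#define NUM_BLOCKS 4

/* Simplified BC1-style block: two 16-bit endpoint colors and 2-bit indices. */
typedef struct {
    uint16_t color0, color1;
    uint32_t indices;
} bc_block;

int main(void)
{
    /* Adjacent blocks in a row often have nearly identical endpoint colors. */
    bc_block blocks[NUM_BLOCKS] = {
        { 0xF800, 0x001F, 0xAAAAAAAAu },
        { 0xF810, 0x001F, 0xAAAAAAAAu },
        { 0xF820, 0x0020, 0x55555555u },
        { 0xF830, 0x0021, 0x55555555u },
    };

    uint16_t color0_stream[NUM_BLOCKS], color1_stream[NUM_BLOCKS];
    uint32_t index_stream[NUM_BLOCKS];

    /* Precondition: separate the color components from the index components
     * so each stream can be handed to a lossless coder on its own. */
    for (int i = 0; i < NUM_BLOCKS; i++) {
        color0_stream[i] = blocks[i].color0;
        color1_stream[i] = blocks[i].color1;
        index_stream[i]  = blocks[i].indices;
    }

    /* Delta-encode the color0 stream: keep the first value, then store only
     * the (small) differences between adjacent blocks in the row. */
    printf("color0 deltas:");
    uint16_t prev = 0;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        printf(" %d", (int)(int16_t)(uint16_t)(color0_stream[i] - prev));
        prev = color0_stream[i];
    }
    printf("\ncolor1 stream:");
    for (int i = 0; i < NUM_BLOCKS; i++)
        printf(" 0x%04X", (unsigned)color1_stream[i]);
    printf("\nindex stream: ");
    for (int i = 0; i < NUM_BLOCKS; i++)
        printf(" 0x%08X", (unsigned)index_stream[i]);
    printf("\n");
    return 0;
}
```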
Abstract:
A method and apparatus of managing memory includes storing a first memory page at a shared memory location in response to the first memory page including data shared between a first virtual machine and a second virtual machine. A second memory page is stored at a memory location unique to the first virtual machine in response to the second memory page including data unique to the first virtual machine. The first memory page is accessed by the first virtual machine and the second virtual machine, and the second memory page is accessed by the first virtual machine and not the second virtual machine.
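A minimal C sketch of the placement and access rule follows; the two-region model, the page_mapping structure, and the VM identifiers are assumptions made for the example rather than the claimed memory-management mechanism.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_PAGES 2

typedef struct {
    int  page_id;
    bool shared;       /* is the page's data shared between VM1 and VM2?  */
    int  location;     /* 0 = shared region, 1 = region unique to VM1     */
} page_mapping;

static void place_page(page_mapping *p)
{
    /* Shared data goes to a shared memory location; data unique to VM1
     * goes to a location only VM1 maps. */
    p->location = p->shared ? 0 : 1;
}

static bool vm_can_access(int vm_id, const page_mapping *p)
{
    if (p->location == 0)
        return true;       /* both VMs map the shared region          */
    return vm_id == 1;     /* the unique region is visible to VM1 only */
}

int main(void)
{
    page_mapping pages[NUM_PAGES] = {
        { .page_id = 0, .shared = true  },   /* e.g. data common to both VMs */
        { .page_id = 1, .shared = false },   /* VM1-private data             */
    };
    for (int i = 0; i < NUM_PAGES; i++) {
        place_page(&pages[i]);
        printf("page %d: VM1 access=%d, VM2 access=%d\n",
               pages[i].page_id,
               vm_can_access(1, &pages[i]),
               vm_can_access(2, &pages[i]));
    }
    return 0;
}
```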