Abstract:
A digital line delay architecture is provided that requires a minimum of chip space, has low power requirements, is variable or programmable in length, and is flexible to permit changes in aspect ratio. The digital line delay architecture is self-multiplexing and therefore requires no external addressing for the multiplexing function, and is particularly suited for use as a video line delay in a single chip digital image processing device. In particular, a pointer unit (10) is employed to sequentially address a plurality of word storage locations provided in a storage unit (12). The pointer unit (10) includes a number of shift-registers (18) that sequentially shift a logic '1' along the length of the pointer unit to accomplish the addressing.
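A minimal software sketch of the one-hot pointer idea, assuming a C model in which an array stands in for the shift registers and a single logic '1' selects which word location is read and written each cycle; the names delay-related names (storage, pointer, LINE_LEN) are illustrative and not taken from the abstract.

    #include <stdio.h>

    #define LINE_LEN 8                 /* number of word storage locations */

    int main(void) {
        int storage[LINE_LEN] = {0};   /* storage unit (12): word locations          */
        int pointer[LINE_LEN] = {0};   /* pointer unit (10): one-hot shift register  */
        pointer[0] = 1;                /* a single logic '1' does the addressing     */

        for (int t = 0; t < 20; t++) {
            /* find which location the '1' currently selects */
            int sel = 0;
            while (!pointer[sel]) sel++;

            int out = storage[sel];    /* read the word written LINE_LEN cycles ago  */
            storage[sel] = t;          /* write the current input word               */
            printf("t=%2d  in=%2d  out=%2d\n", t, t, out);

            /* shift the '1' one position, wrapping at the end of the line */
            pointer[sel] = 0;
            pointer[(sel + 1) % LINE_LEN] = 1;
        }
        return 0;
    }

Because the '1' wraps around on its own, no external address generation is needed: the position of the '1' is the address, which is the self-multiplexing property the abstract describes.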
Abstract:
A data transmission system for transmitting data streams at high speed between a sending clock domain and a receiving clock domain that operate at mutually different clock speeds includes two system part circuits (202, 206). One (202) of these system part circuits is designed to receive from the first clock domain a data stream (d1) at the clock speed of the first clock domain and, controlled by this clock speed (c11), to serial/parallel convert the data stream into parallel data streams, each with a clock speed that is a certain fraction of the clock speed of the first clock domain. The other system part circuit (206) is designed to receive the parallel data streams (du, d1) and, controlled by the clock speed (c12) of the other clock domain, to parallel/serial convert them into an output data stream (d2), which is sent to the second clock domain at the clock speed of the second clock domain.
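As a rough software illustration of the rate-matching idea only (not of the hardware circuits or of the clock-domain synchronization itself), the sketch below serial/parallel converts an incoming stream into FACTOR parallel lanes at 1/FACTOR of the input rate and then parallel/serial converts them back into an output stream; FACTOR, the lane layout, and all names are assumptions made for the example.

    #include <stdio.h>

    #define FACTOR 4   /* each parallel lane runs at 1/FACTOR of the serial rate */
    #define NBITS  16

    int main(void) {
        int d1[NBITS];                 /* serial stream from the first clock domain */
        for (int i = 0; i < NBITS; i++) d1[i] = i & 1;

        /* serial/parallel conversion: clocked by the first domain (c11) */
        int lanes[FACTOR][NBITS / FACTOR];
        for (int i = 0; i < NBITS; i++)
            lanes[i % FACTOR][i / FACTOR] = d1[i];

        /* parallel/serial conversion: clocked by the second domain (c12) */
        int d2[NBITS];
        for (int i = 0; i < NBITS; i++)
            d2[i] = lanes[i % FACTOR][i / FACTOR];

        for (int i = 0; i < NBITS; i++)
            printf("%d", d2[i]);       /* d2 reproduces d1 bit for bit */
        printf("\n");
        return 0;
    }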
Abstract:
Binary data is transmitted to a network physical layer from a media access controller (10) as a series of multibit nibbles, encoded into a multi-level data stream (178), and split among a number of transmission channels (12). The multi-level signal is then translated back into a binary data stream at a receiver. In a specific embodiment, the symbol transmission frequency on each of the transmission channels (12) is the same as the nibble transfer rate between the media access controller (10) and the physical layer.
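A hedged sketch of the mapping step only, assuming for illustration that each 4-bit nibble is split into two 2-bit symbols, one per channel, and each 2-bit symbol is sent as one of four signal levels; the channel count, level mapping, and names are assumptions, not details from the abstract. Note that one symbol goes out per channel per nibble, matching the equal-rate embodiment described above.

    #include <stdio.h>
    #include <stdint.h>

    #define NCHAN 2   /* assumed number of transmission channels for the example */

    int main(void) {
        uint8_t nibbles[] = {0x3, 0xA, 0x5, 0xC};   /* 4-bit values from the MAC */
        int n = sizeof nibbles / sizeof nibbles[0];

        for (int i = 0; i < n; i++) {
            /* transmitter: split one nibble into NCHAN 2-bit symbols, one per
               channel; each 2-bit symbol becomes one of four levels (0..3)     */
            int levels[NCHAN];
            levels[0] = (nibbles[i] >> 2) & 0x3;
            levels[1] = nibbles[i] & 0x3;

            /* receiver: translate the multi-level symbols back into binary */
            uint8_t decoded = (uint8_t)((levels[0] << 2) | levels[1]);
            printf("nibble 0x%X -> levels {%d,%d} -> 0x%X\n",
                   nibbles[i], levels[0], levels[1], decoded);
        }
        return 0;
    }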
Abstract:
A system and method for storing data in a linked list memory architecture maintains several key list parameters. When data to be stored is received, a memory manager determines the list in which the data belongs and retrieves several of the parameters. The parameters retrieved indicate the address of the current location at which the received data is to be stored and the address of the next location that is to be linked to the current list. The memory manager writes the data to the current location pointed to by the first address and writes the second address into a pointer field in that current location. Because the address of the next location in the list is determined before data is written to the current location, this next address can be written in the same cycle in which the data is written.
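A small sketch of the single-cycle write idea, assuming a C model in which each memory location holds a data field plus a pointer field and free locations are allocated sequentially for simplicity (the real memory manager may track free locations differently); the names mem, free_head, and store are illustrative.

    #include <stdio.h>

    #define MEM_SIZE 8
    #define NIL     -1

    struct cell { int data; int next; };   /* each location: data + pointer field */

    struct cell mem[MEM_SIZE];
    int free_head = 0;                      /* next free location                  */
    int list_head = NIL, list_tail = NIL;   /* list parameters kept by the manager */

    void store(int value) {
        int cur = free_head;                /* address where this data will go      */
        int nxt = cur + 1;                  /* next free address, known in advance  */

        /* because nxt is determined before the write, the data and the link to
           the next location can be written to 'cur' in the same cycle           */
        mem[cur].data = value;
        mem[cur].next = nxt;

        if (list_head == NIL) list_head = cur;
        list_tail = cur;
        free_head = nxt;
    }

    int main(void) {
        store(10); store(20); store(30);
        mem[list_tail].next = NIL;          /* terminate the list */
        for (int p = list_head; p != NIL; p = mem[p].next)
            printf("%d\n", mem[p].data);
        return 0;
    }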
Abstract:
The invention relates mainly to a device and a process for writing into a stack-type memory device, in particular the use of stacks (1) of the first-in, first-out (FIFO) type to unscramble television images. In order to write into such a stack (1) starting from a desired address, irrelevant information is first written so as to increment the internal counter of the stack, to which there is no direct access. Relevant information is then written from the desired counter reading onward. It is possible to rewrite relevant information over the irrelevant data, for instance at the start of the stack. The invention is applicable in particular to special memories used for purposes not intended by the manufacturer, and more especially to the use of first-in, first-out stacks to unscramble television images.
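A sketch of the padding idea, modelled in C with a software array standing in for the hardware stack; the internal write counter is shown explicitly here for clarity, whereas on the real device it is not accessible and only advances as a side effect of each write.

    #include <stdio.h>

    #define STACK_SIZE 16

    int fifo[STACK_SIZE];
    int wr_count = 0;              /* internal counter: no direct access on the device */

    void fifo_write(int value) {   /* the only way to advance the counter is to write  */
        fifo[wr_count++] = value;
    }

    int main(void) {
        int desired_addr = 5;      /* where the relevant data must start */

        /* write irrelevant (dummy) data only to step the counter forward */
        while (wr_count < desired_addr)
            fifo_write(0);

        /* now write the relevant information from the desired counter reading */
        for (int i = 0; i < 4; i++)
            fifo_write(100 + i);

        for (int i = 0; i < wr_count; i++)
            printf("addr %2d : %d\n", i, fifo[i]);
        return 0;
    }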
Abstract:
The present invention is directed to a display FIFO module for use in a DRAM interface that includes a DRAM controller sequencer which prioritizes requests for DRAM access received from various modules, such as a CPU, a blit engine module, and a half frame buffer logic module. The display FIFO module is connected between the DRAM controller sequencer and a display pipeline which is connected to a display device. The display FIFO module issues low and high priority requests for DRAM access to the DRAM controller sequencer for loading the FIFO with display data to be transferred to the display device. The low priority request is issued at the earliest time at which the display FIFO is capable of accepting new data without overwriting unread data. This is determined by comparing the FIFO data level against a predetermined low threshold value; the low priority request is issued when the FIFO data level falls below or is equal to the low threshold value. A high priority request is issued when the FIFO must receive new data or a FIFO underrun will occur. This is determined by comparing the FIFO data level against a predetermined high threshold value; the high priority request is issued when the FIFO data level falls below or is equal to the high threshold value. After a predetermined number of addresses have been latched to the DRAM by the DRAM controller sequencer for transferring data to the FIFO because of the low or high priority request, or both, the display FIFO module reevaluates the FIFO data level to determine whether it is still below or equal to either the low or high threshold value. If the FIFO data level is still below or equal to the low threshold value, the low priority request remains active; otherwise, the low priority request is removed by the display FIFO module. Similarly, if the FIFO data level is still below or equal to the high threshold value, the high priority request remains active; otherwise, the high priority request is removed by the display FIFO module. The low and high priority requests are issued independently of each other.
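The request logic can be pictured as two independent threshold comparisons that are reevaluated after each burst of DRAM transfers; the sketch below is a software model only, and the threshold values, the level variable, and the function name are assumptions made for illustration.

    #include <stdio.h>
    #include <stdbool.h>

    #define LOW_THRESHOLD  12   /* at or below: FIFO can accept data without overwriting unread data */
    #define HIGH_THRESHOLD  4   /* at or below: FIFO underrun is imminent                            */

    struct requests { bool low; bool high; };

    /* reevaluated after a predetermined number of addresses have been latched to the DRAM */
    struct requests evaluate(int fifo_level) {
        struct requests r;
        r.low  = (fifo_level <= LOW_THRESHOLD);    /* issued independently ... */
        r.high = (fifo_level <= HIGH_THRESHOLD);   /* ... of each other        */
        return r;
    }

    int main(void) {
        int levels[] = {16, 12, 8, 4, 10, 14};
        for (int i = 0; i < 6; i++) {
            struct requests r = evaluate(levels[i]);
            printf("level %2d : low=%d high=%d\n", levels[i], r.low, r.high);
        }
        return 0;
    }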
Abstract:
A large FIFO memory device has its total available memory capacity partitioned into memory sections. The partitions take the form of programmable delimiters so that the size of the memory sections can be determined flexibly.
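One way to picture the programmable delimiters, sketched as an array of section boundaries over a single address space; the capacity, number of sections, and delimiter values are illustrative assumptions only.

    #include <stdio.h>

    #define TOTAL_WORDS 1024
    #define NSECTIONS   3

    int main(void) {
        /* programmable delimiters: end address (exclusive) of each section */
        int delimiter[NSECTIONS] = {256, 640, TOTAL_WORDS};

        int start = 0;
        for (int s = 0; s < NSECTIONS; s++) {
            printf("section %d : words %4d .. %4d (%d words)\n",
                   s, start, delimiter[s] - 1, delimiter[s] - start);
            start = delimiter[s];   /* reprogramming delimiter[] resizes the sections */
        }
        return 0;
    }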
Abstract:
The invention relates to an arrangement, and a corresponding method, for handling or gaining access to a digital buffer in a digital buffer memory (JBUM), wherein a set of pointers is arranged in a reference memory (REFM) for each digital buffer. The arrangement comprises a register arrangement (JBSR, JBER) defining the position of a digital buffer in the digital buffer memory (JBUM), an offset value, an address calculation arrangement, and an operating address register (JBAR). For each of the pointers in a set relating to a digital buffer, a separate pointer register (JBSR, JBER, JBIR, JBOR) is provided, and address data is input and stored substantially at the same time in each pointer register corresponding to a set of pointers. The subsequent address for reading/writing in the digital buffer memory (JBUM) is calculated and stored in at least the operating address register (JBAR).
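A software sketch of the pointer-register idea, under the assumption that JBSR and JBER hold the start and end of the buffer, JBIR and JBOR the input and output pointers, and JBAR the operating address for the next access; the wrap-around calculation and initial values are illustrative, not taken from the abstract.

    #include <stdio.h>

    /* one set of pointer registers for one digital buffer in JBUM */
    struct buffer_regs {
        int jbsr;   /* start of the buffer                   */
        int jber;   /* end of the buffer                     */
        int jbir;   /* input (write) pointer                 */
        int jbor;   /* output (read) pointer                 */
        int jbar;   /* operating address for the next access */
    };

    /* load all pointer registers of the set substantially at the same time */
    void load_set(struct buffer_regs *r, int start, int end) {
        r->jbsr = start; r->jber = end;
        r->jbir = start; r->jbor = start;
        r->jbar = start;
    }

    /* calculate the subsequent write address and store it in JBAR */
    void next_write_address(struct buffer_regs *r) {
        r->jbir = (r->jbir + 1 > r->jber) ? r->jbsr : r->jbir + 1;
        r->jbar = r->jbir;
    }

    int main(void) {
        struct buffer_regs r;
        load_set(&r, 100, 103);
        for (int i = 0; i < 6; i++) {
            printf("write at JBAR=%d\n", r.jbar);   /* 100,101,102,103,100,101 */
            next_write_address(&r);
        }
        return 0;
    }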
Abstract:
A buffer memory architecture, method, and chip floor plan allows for significant reduction in the physical area required for a buffer memory of any given size in a microelectronic device. Buffer applications wherein random access to the buffered data is not required use a CMOS dynamic serial memory with p-channel devices supplied with a voltage less positive than the voltage supplied to their respective n-wells. In a particular embodiment, three memory stages are used in a cascaded fashion. The first and third memory stages store data on a parallel basis, while the second memory stage stores data on a serial basis. The second memory stage can be fabricated using much less chip area per bit than the first and third memory stages. Significant area reduction is achieved because the second memory stage eliminates addressing overhead associated with conventional high-density memory schemes, and low voltage power supplies permit relaxation of latch-up prevention layout rules.
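A rough behavioural sketch of the three-stage cascade only; the chip-area and supply-voltage aspects are layout matters and are not modelled, and the word width, depth, and names are assumptions. The middle stage handles the data strictly serially, bit by bit, which is why it needs no addressing overhead.

    #include <stdio.h>
    #include <stdint.h>

    #define DEPTH 4

    int main(void) {
        uint8_t words[DEPTH] = {0xA1, 0xB2, 0xC3, 0xD4};

        for (int w = 0; w < DEPTH; w++) {
            uint8_t stage1 = words[w];       /* stage 1: word held in parallel         */

            uint8_t stage3 = 0;              /* stage 3: word reassembled in parallel  */
            for (int b = 7; b >= 0; b--) {
                int bit = (stage1 >> b) & 1; /* stage 2: bits shifted serially,        */
                stage3 = (uint8_t)((stage3 << 1) | bit);  /* one per clock, unaddressed */
            }
            printf("in 0x%02X -> out 0x%02X\n", stage1, stage3);
        }
        return 0;
    }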
Abstract:
A single port First-In-First-Out (FIFO) data storage device that includes an over-write protection feature and diagnostic capabilities. The FIFO contemplated by the invention is fabricated using a field programmable gate array, yet is as robust (feature rich) and can be used as safely as more elaborate, hardware-consuming FIFO devices, such as a traditional "dual port" FIFO. More particularly, a single port FIFO storage device (and corresponding methods for operating same) is set forth which includes (a) a write protection feature to ensure that the FIFO contents are not disturbed once the FIFO becomes full; (b) a first diagnostic feature to provide host software with an indication that the protection feature is in force; and (c) a second diagnostic feature which provides host software with an indication that it (the software) may have errantly attempted to disturb a full FIFO before securely and completely emptying the FIFO's contents.
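A compact sketch of the protection and diagnostic behaviour described above, modelled in C rather than in FPGA fabric; the depth and the flag names protect_active and bad_write_attempt are illustrative, not the patent's terms.

    #include <stdio.h>
    #include <stdbool.h>

    #define FIFO_DEPTH 4

    int  fifo[FIFO_DEPTH];
    int  count = 0;
    bool protect_active    = false;   /* (b) diagnostic: protection is in force            */
    bool bad_write_attempt = false;   /* (c) diagnostic: host tried to disturb a full FIFO */

    void fifo_write(int value) {
        if (count == FIFO_DEPTH) {
            /* (a) over-write protection: a full FIFO is never disturbed */
            protect_active    = true;
            bad_write_attempt = true;
            return;
        }
        fifo[count++] = value;
        if (count == FIFO_DEPTH) protect_active = true;
    }

    int main(void) {
        for (int i = 0; i < 6; i++) fifo_write(i);   /* two writes too many */
        printf("stored %d words, protect=%d, bad_write=%d\n",
               count, protect_active, bad_write_attempt);
        return 0;
    }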