Abstract:
A method for texture compressing images having a plurality of color components (R, G, B) includes the step of decomposing the images into sub-blocks, each including only one color component. At least one first predictor is defined for each sub-block, and a respective set of prediction differences is computed for each sub-block. The prediction differences for each sub-block are then sorted, and a look-up prediction-error palette is set up from them. A code is associated with each column of the error palette.
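The per-sub-block flow described above can be sketched as follows; this is a minimal illustration, and the choice of predictor (previous pixel), function names, and index-as-code convention are assumptions, not details taken from the patent:

```python
import numpy as np

def compress_sub_block(sub_block):
    """Illustrative sketch for one sub-block holding a single color
    component (names and predictor choice are assumptions)."""
    flat = sub_block.flatten().astype(int)
    # Simple first predictor: the previous pixel value (the first pixel
    # predicts itself, so its prediction difference is zero).
    predictor = np.concatenate(([flat[0]], flat[:-1]))
    diffs = flat - predictor                 # prediction differences
    # Sort the distinct differences to form the look-up error palette.
    palette = np.unique(diffs)               # sorted prediction errors
    # Associate a code (here, the palette index) with each difference.
    codes = np.searchsorted(palette, diffs)
    return palette, codes

# One 2x2 sub-block of, say, the R component
palette, codes = compress_sub_block(np.array([[10, 12], [12, 15]]))
```

Decoding would reverse the mapping: look each code up in the palette and accumulate the differences back onto the predictor.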
Abstract:
Binary words are converted between a non-encoded format (OP) and a compressed encoded format (V), in which the binary words are, at least in part, represented by encoded bit sequences that are shorter than the respective binary words in the non-encoded format. The encoded bit sequences are selected according to the statistical recurrence of the respective words in the non-encoded format: the binary words with higher recurrence are associated with encoded bit sequences comprising accordingly fewer bits. The correspondence between binary words in non-encoded format and the encoded bit sequences associated with them is established by means of indices of an encoding vocabulary. The conversion process comprises the operations of:
arranging the indices according to an ordered sequence; organizing the sequence of indices into groups of vectors (GV); splitting each group of vectors into a given number of vectors (V); and encoding the vectors (V) independently of one another.
Alternatively, for each group of vectors, at the end of the encoding process, a calculation is carried out of either the 32-bit starting address of the compressed block or the difference, expressed in bytes, with respect to the last complete address appearing in a table referred to as the address-translation table (ATT); the result is saved in that table (ATT).
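A toy version of the grouping and table-building steps might look like the sketch below. The unary-style prefix code, the group/vector sizes, and the use of bit offsets rather than 32-bit byte addresses are all simplifying assumptions for illustration:

```python
from collections import Counter

def encode_with_att(indices, vectors_per_group=4, vector_len=2):
    """Assign shorter bit strings to more recurrent indices, encode
    fixed-size vectors independently, and record each group's starting
    offset in an address-translation table (ATT)."""
    # Shorter codes for more recurrent indices (toy unary-style code:
    # rank-0 symbol gets '0', rank-1 gets '10', rank-2 gets '110', ...).
    by_freq = [sym for sym, _ in Counter(indices).most_common()]
    code = {sym: '1' * rank + '0' for rank, sym in enumerate(by_freq)}
    group_size = vectors_per_group * vector_len
    bitstream, att = '', []
    for start in range(0, len(indices), group_size):
        att.append(len(bitstream))      # starting address of this group
        group = indices[start:start + group_size]
        for v0 in range(0, len(group), vector_len):
            vector = group[v0:v0 + vector_len]
            # Each vector is encoded independently of the others.
            bitstream += ''.join(code[s] for s in vector)
    return bitstream, att

bits, att = encode_with_att([0, 0, 1, 0, 2, 0, 0, 1],
                            vectors_per_group=2, vector_len=2)
```

Because every group's start is recorded in the ATT, a decoder can seek directly to any group without decoding the preceding ones.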
Abstract:
A graphic system comprising a pipelined tridimensional graphic engine for generating image frames for a display includes a graphic engine (110; 210) comprising at least one geometric processing stage (111, 112) performing motion extraction. The engine also includes a rendering stage (113) generating full image frames (KF) at a first frame rate (F2), to be displayed at a second frame rate (F1) higher than the first frame rate (F2). The pipelined graphic engine further comprises a motion encoder (214) receiving motion vector information (MB) and suitable for coding the motion information, e.g. with a variable-length code, while generating a signal (R4) representative of interpolated frames (IF). The motion encoder (214) exploits the motion information (MB) as generated by the geometric elaboration stages (211, 212). A motion compensation stage (237) is provided, fed with the signal representative of interpolated frames (IF) and with full image frames, for generating the interpolated frames (IF). A preferred application is in graphic engines intended to operate in association with smart displays through a wireless connection, i.e. in mobile phones.
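The motion-compensation step that turns key frames plus motion vectors into interpolated frames can be sketched as below; the block size, the per-block motion-vector layout, and the array shapes are illustrative assumptions, not the patent's actual data formats:

```python
import numpy as np

def motion_compensate(key_frame, motion_vectors, block=2):
    """Toy sketch of the motion-compensation stage: an interpolated
    frame is assembled by copying, for each block, the key-frame block
    displaced by that block's motion vector."""
    h, w = key_frame.shape
    out = np.zeros_like(key_frame)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_vectors[by // block][bx // block]
            sy, sx = by + dy, bx + dx       # displaced source position
            out[by:by + block, bx:bx + block] = \
                key_frame[sy:sy + block, sx:sx + block]
    return out

frame = np.arange(16).reshape(4, 4)
vectors = [[(0, 0), (0, 0)],
           [(0, 0), (0, -2)]]   # bottom-right block comes from the left
interp = motion_compensate(frame, vectors)
```

The point of the architecture is that these vectors fall out of the geometric stages for free, so the rendering stage only has to produce full frames at the lower rate F2.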
Abstract:
A geometric processing stage (111b) for a pipelined engine for processing video signals and generating a processed video signal in space coordinates (S) adapted for display on a screen. The geometric processing stage (111b) includes:
a model view module (201) for generating projection coordinates of primitives of the video signals in a view space, said primitives including visible and non-visible primitives; a back-face culling module (309) arranged downstream of the model view module (201) for at least partially eliminating the non-visible primitives; a projection transform module (204) for transforming the coordinates of the video signals from view space coordinates into normalized projection coordinates (P); and a perspective divide module (208) for transforming the coordinates of the video signals from normalized projection coordinates (P) into screen space coordinates (S).
The back-face culling module (309) is arranged downstream of the projection transform module (204) and operates on the normalized projection coordinates (P) of said primitives. The perspective divide module (208) is arranged downstream of the back-face culling module (309) for transforming the coordinates of the video signals from normalized projection coordinates (P) into screen space coordinates (S). A circuit (10) in the back-face culling module can be shared with a standard three-dimensional back-face culling operation when necessary. A preferred application is in graphic engines using standard graphics languages such as OpenGL and NokiaGL.
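The core of a back-face test is a signed-area (winding) computation on the projected vertices. The sketch below reduces this to 2D coordinates and assumes a counter-clockwise front-face convention; the patent's shared circuit (10) and its operation on homogeneous normalized projection coordinates are not modeled here:

```python
def is_back_facing(p0, p1, p2):
    """Minimal back-face test on 2D projected vertex coordinates
    (winding convention is an assumption)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    # Signed area of the projected triangle: a negative value means
    # clockwise winding, i.e. a primitive facing away from the viewer.
    signed_area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return signed_area < 0
```

Performing this test before the perspective divide means culled primitives never reach the divide hardware, which is the motivation for reordering the stages.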
Abstract:
The program to be executed is compiled by translating it into native instructions of the instruction-set architecture (ISA) of the processor system (SILC 1, SILC 2), organizing the instructions deriving from the translation of the program into respective bundles in an order of successive bundles, each bundle grouping together instructions adapted to be executed in parallel by the processor system. The bundles of instructions are ordered into respective sub-bundles, said sub-bundles identifying a first set of instructions ("must" instructions), which must be executed before the instructions belonging to the next bundle of said order, and a second set of instructions ("can" instructions), which can be executed either before or in parallel with the instructions belonging to said subsequent bundle of said order. A sequence of execution of the instructions in successive operating cycles of the processor system (SILC 1, SILC 2) is defined by assigning each sub-bundle to an operating cycle, thus preventing simultaneous assignment to the same operating cycle of two sub-bundles belonging to the first set ("must" set) of two successive bundles. The instructions of the sequence may be executed by the various processors of the system (SILC 1, SILC 2) in conditions of binary compatibility.
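The assignment rule can be sketched as below; representing each bundle as a (must, can) pair of instruction lists is an assumption about data layout, not the patent's encoding:

```python
def schedule(bundles):
    """Assign sub-bundles to operating cycles. Each "must" sub-bundle
    opens a new cycle, so the "must" sets of two successive bundles
    never share a cycle, while the previous bundle's "can" instructions
    may execute in parallel with the next bundle's "must" instructions."""
    cycles, pending_can = [], []
    for must, can in bundles:
        cycles.append(pending_can + list(must))
        pending_can = list(can)      # may run now or in the next cycle
    if pending_can:
        cycles.append(pending_can)
    return cycles

cycles = schedule([(['a_must'], ['a_can']), (['b_must'], [])])
```

A wider processor could instead fold each "can" sub-bundle into its own bundle's cycle; the only invariant the rule enforces is that two "must" sub-bundles from successive bundles never coincide.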
Abstract:
Digital signals are converted between a first (IS) and a second (OS) format by a conversion process including the step of generating coefficients (X_n) representing such digital signals. Such coefficients may be, e.g., Discrete Cosine Transform (DCT) coefficients generated during encoding/transcoding of MPEG signals. The coefficients are subject to quantization (q) by generating a dither signal (W_n) that is added to the coefficients (X_n) before quantization (q) to generate a quantized signal. Preferably, each coefficient (X_n) is first subject to a first quantization step (q1) with no dither signal (W_n) added, to generate an undithered quantized coefficient. If the undithered quantized coefficient is equal to zero, it is taken as the output quantized signal. If the undithered quantized coefficient is different from zero, the dither signal (W_n) is added and the dithered coefficient thus obtained is subject to a quantization step (q2) to generate the output quantized signal.
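The preferred two-step scheme can be sketched as follows; using uniform rounding for q1/q2 and a uniform distribution for the dither signal are assumptions made for illustration:

```python
import random

def dithered_quantize(x_n, step, rng=random.Random(0)):
    """Two-step quantization: quantize with no dither first; a zero
    result is kept as-is, otherwise the coefficient is re-quantized
    with a dither signal W_n added."""
    q1 = round(x_n / step)                    # first quantization q1, no dither
    if q1 == 0:
        return 0                              # undithered coefficient kept
    w_n = rng.uniform(-step / 2, step / 2)    # dither signal W_n (assumed uniform)
    return round((x_n + w_n) / step)          # second quantization q2
```

Skipping the dither for coefficients that quantize to zero avoids the dither itself pushing near-zero DCT coefficients to non-zero values, which would inflate the coded bit rate.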