Abstract:
An RGB digital video signal intended for display on a device such as a liquid crystal display (LCD) is converted from the RGB colour space to the YUV colour space. The signal converted into the YUV colour space is subjected to at least one processing operation selected from a sub-sampling operation (24) and a data compression operation (26). The signal is then stored in a memory, and the signal read from said memory (12) is subjected to at least one return operation (28, 30) complementary to the aforesaid processing operation (24, 26). The signal subjected to the aforesaid return operation is finally reconverted from the YUV colour space to the RGB colour space, so that it can be displayed on the display.
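A minimal sketch of such a pipeline is shown below, assuming BT.601 conversion coefficients, 4:2:0 chroma sub-sampling for operation (24), and nearest-neighbour chroma up-sampling for the return operation (28); the abstract fixes none of these choices, so they are illustrative assumptions only, and the memory storage and compression steps are omitted.

```python
import numpy as np

def rgb_to_yuv(rgb):
    # rgb: H x W x 3 float array in [0, 1]; BT.601 coefficients (assumed).
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb @ m.T

def yuv_to_rgb(yuv):
    m_inv = np.array([[1.0,  0.0,    1.140],
                      [1.0, -0.395, -0.581],
                      [1.0,  2.032,  0.0  ]])
    return yuv @ m_inv.T

def subsample_420(yuv):
    # Keep full-resolution luma, decimate chroma 2x2 (sub-sampling operation).
    return yuv[..., 0], yuv[::2, ::2, 1], yuv[::2, ::2, 2]

def upsample_420(y, u, v):
    # Return operation: nearest-neighbour chroma up-sampling.
    u_full = np.repeat(np.repeat(u, 2, axis=0), 2, axis=1)[:y.shape[0], :y.shape[1]]
    v_full = np.repeat(np.repeat(v, 2, axis=0), 2, axis=1)[:y.shape[0], :y.shape[1]]
    return np.stack([y, u_full, v_full], axis=-1)

# Round trip: RGB -> YUV -> sub-sample -> (store / read back) -> up-sample -> RGB
frame = np.random.rand(480, 640, 3)
restored = yuv_to_rgb(upsample_420(*subsample_420(rgb_to_yuv(frame))))
```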
Abstract:
A video sequence (SIN) is encoded by subsampling the video sequence to produce a number N of multiple descriptions of the video sequence, each multiple description including 1/N of the samples of the video sequence, and a subsampled version of the sequence, the subsampled version having a resolution lower than or equal to the resolution of the N multiple descriptions. The N multiple descriptions and the subsampled version are subjected to scalable video coding (SVC) to produce an SVC encoded signal having a base layer (BL) and N enhancement layers predicted from said base layer (BL). The subsampled version of the sequence and the N multiple descriptions of the video sequence constitute the base layer and the enhancement layers, respectively, of the SVC encoded signal.
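The sub-sampling step could look like the sketch below, assuming N = 4 spatial polyphase descriptions per frame and a base layer obtained by averaging them; the actual sub-sampling pattern, the choice of base-layer resolution and the SVC encoder itself are not specified here and are not modelled.

```python
import numpy as np

def split_descriptions(frame):
    # Each description keeps 1/4 of the samples (one polyphase component).
    return [frame[0::2, 0::2],   # description 0
            frame[0::2, 1::2],   # description 1
            frame[1::2, 0::2],   # description 2
            frame[1::2, 1::2]]   # description 3

def base_layer(descriptions):
    # Sub-sampled version with the same resolution as each description.
    return np.mean(np.stack(descriptions, axis=0), axis=0)

frame = np.random.rand(288, 352)           # one luma frame of SIN
descs = split_descriptions(frame)          # enhancement-layer inputs
bl = base_layer(descs)                     # base-layer input
# bl would be coded as the base layer (BL) of the SVC signal, and each
# description as one enhancement layer predicted from BL.
```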
Abstract:
A method for texture compressing images having a plurality of color components (R, G, B) includes the step of decomposing the images into sub-blocks, each including only one color component. At least one first predictor is defined for each said sub-block, and a respective set of prediction differences is computed for each sub-block. The prediction differences for each sub-block are then sorted and used to set up a look-up prediction-error palette. A code is associated with each column of the error palette.
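The first steps might be sketched as follows, assuming 4x4 sub-blocks and the sub-block mean as the single predictor; both are assumptions, and the construction of the palette and of the associated codes is only hinted at in the final comment.

```python
import numpy as np

def single_component_subblocks(img, size=4):
    # Decompose the image into sub-blocks each holding only one colour component.
    h, w, _ = img.shape
    for c in range(3):                           # R, G, B planes
        for i in range(0, h - size + 1, size):
            for j in range(0, w - size + 1, size):
                yield img[i:i + size, j:j + size, c]

def prediction_differences(block):
    predictor = int(round(block.mean()))         # first predictor for the sub-block
    diffs = np.sort((block.astype(int) - predictor).ravel())   # sorted differences
    return predictor, diffs

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
table = [prediction_differences(b) for b in single_component_subblocks(img)]
# The sorted differences would next be grouped into a look-up
# prediction-error palette, with a code associated with each palette column.
```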
Abstract:
The architecture (10), which is adapted to be implemented in the form of a reusable IP cell, preferably comprises:
a motion estimation engine (16), configured to process a cost function (SAD, MAD, MSE) and identify the motion vector (MV) which minimizes it; an internal memory (17), configured to store the sets of initial candidate vectors for the blocks of a reference frame; first (18) and second (19) controllers, to manage the motion vectors and an external frame memory (13); a reference synchronizer (20), to align, at the input of the estimation engine (16), the data relevant to the reference blocks with the data relevant to the candidate blocks coming from the second controller (19); and a control unit (21), for timing the units (16 to 20) included in the architecture (10) and for the external interfacing of the architecture itself.
A preferred application is to codec units operating according to the MPEG/H.263 standards.
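The cost-function step carried out by the estimation engine (16) can be sketched as below, assuming a SAD cost over 16x16 blocks and a small list of candidate vectors; the memory controllers, the synchronizer and the control unit are not modelled.

```python
import numpy as np

def sad(block_a, block_b):
    # Sum of absolute differences between the reference and a candidate block.
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def best_vector(ref_frame, cur_frame, x, y, candidates, bs=16):
    cur = cur_frame[y:y + bs, x:x + bs]
    best_mv, best_cost = None, None
    for dx, dy in candidates:                    # initial candidate vectors
        yy, xx = y + dy, x + dx
        if 0 <= xx <= ref_frame.shape[1] - bs and 0 <= yy <= ref_frame.shape[0] - bs:
            cost = sad(cur, ref_frame[yy:yy + bs, xx:xx + bs])
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost                    # MV minimising the cost function

ref = np.random.randint(0, 256, (288, 352), dtype=np.uint8)
cur = np.random.randint(0, 256, (288, 352), dtype=np.uint8)
mv, cost = best_vector(ref, cur, 64, 48, [(0, 0), (1, 0), (-2, 3), (4, -1)])
```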
Abstract:
To carry out de-interlacing of digital images, a spatial-type de-interlacing process is applied to a digital image (FRM) to obtain a spatial reconstruction (Tsp). One or more temporal-type de-interlacing processes are also applied to said digital image (FRM) to obtain one or more temporal reconstructions (Tub, Tuf, Tbb and Tbn), and said spatial reconstruction (Tsp) and said one or more temporal reconstructions (Tub, Tuf, Tbb and Tbn) are sent to a decision module (D). Said decision module applies a cost function (var) to said spatial reconstruction (Tsp) and said temporal reconstructions (Tub, Tuf, Tbb and Tbn) and chooses, from among them, the one that minimizes said cost function (var). A preferential application is to display systems, in particular cathode-ray, liquid-crystal, and plasma displays that use a progressive-scan mechanism.
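A possible sketch of the decision step is given below, assuming only two candidate reconstructions (one spatial, one temporal) and a simple variance measure as the cost function applied by the decision module (D); the full candidate set Tub/Tuf/Tbb/Tbn and any block-wise decision are omitted.

```python
import numpy as np

def spatial_reconstruction(field):
    # Tsp: interpolate each missing line as the average of its vertical neighbours.
    out = np.repeat(field, 2, axis=0).astype(float)
    out[1:-1:2] = 0.5 * (field[:-1] + field[1:])
    return out

def temporal_reconstruction(field, other_field):
    # One temporal candidate: take the missing lines from an adjacent field.
    out = np.empty((field.shape[0] * 2, field.shape[1]), dtype=float)
    out[0::2] = field
    out[1::2] = other_field
    return out

def choose(reconstructions):
    # Decision module D: pick the reconstruction minimising the variance cost.
    costs = [np.var(np.diff(r, axis=0)) for r in reconstructions]
    return reconstructions[int(np.argmin(costs))]

top = np.random.rand(144, 352)       # current field of FRM
bottom = np.random.rand(144, 352)    # temporally adjacent field
frame = choose([spatial_reconstruction(top), temporal_reconstruction(top, bottom)])
```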
Abstract:
In order to generate, starting from an input MPEG bitstream (IS), an output MPEG bitstream (OS) having at least one of its syntax, resolution, and bitrate modified with respect to the input bitstream (IS), first portions and second portions are distinguished in the input bitstream (IS), which respectively do not substantially affect and do affect the variation in bitrate. When at least one of the syntax and the resolution is to be modified, the first portions of the input bitstream (IS) are subjected (104) to the required translation, and said first portions subjected to syntax and/or resolution translation are then transferred (134) to the output bitstream (OS). When the resolution is left unaltered, the second portions are transferred (138) from the input bitstream (IS) to the output bitstream (OS) substantially without processing operations. When the resolution is changed, the second portions of the input bitstream (IS) are subjected (108 to 130) to filtering in the domain of the discrete cosine transform (DCT).
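The DCT-domain filtering step (108 to 130) could be sketched as below for a 2:1 resolution change, assuming the common low-frequency-retention approach in which only the 4x4 low-frequency corner of each 8x8 DCT block is kept; this is a named substitute for whatever filter the transcoder actually uses, and the syntax-translation and pass-through paths are not modelled.

```python
import numpy as np
from scipy.fft import dctn, idctn

def downscale_dct_block(coeffs_8x8):
    # Keep the low-frequency quadrant and rescale so that the 4x4 block
    # reconstructs the same average intensity as the original 8x8 block.
    return coeffs_8x8[:4, :4] * 0.5

block = np.random.rand(8, 8)                       # decoded 8x8 pixel block
coeffs = dctn(block, norm='ortho')                 # coefficients as carried in the bitstream
half_res = idctn(downscale_dct_block(coeffs), norm='ortho')   # 4x4 block at half resolution
```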