Abstract:
A geometric processing stage (111b) for a pipelined engine for processing video signals and generating a processed video signal in screen space coordinates (S) adapted for display on a screen. The geometric processing stage (111b) includes:
a model view module (201) for generating projection coordinates, in a view space, of primitives of the video signals, said primitives including visible and non-visible primitives; a back face culling module (309) arranged downstream of the model view module (201) for at least partially eliminating the non-visible primitives; a projection transform module (204) for transforming the coordinates of the video signals from view space coordinates into normalized projection coordinates (P); and a perspective divide module (208) for transforming the coordinates of the video signals from normalized projection coordinates (P) into screen space coordinates (S).
The back face culling module (309) is arranged downstream of the projection transform module (204) and operates on the normalized projection coordinates (P) of said primitives. The perspective divide module (208) is arranged downstream of the back face culling module (309) for transforming the coordinates of the video signals from normalized projection coordinates (P) into screen space coordinates (S). A circuit (10) in the back face culling module can be shared with a standard three-dimensional back face culling operation when necessary. A preferred application is in graphic engines using standard graphics languages such as OpenGL and NokiaGL.
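Culling on normalized projection (homogeneous) coordinates can be done without the perspective divide, since the sign of a 3x3 determinant built from (x, y, w) matches the sign of the projected triangle's signed area when all w are positive. The sketch below illustrates this standard test; the function name and the counter-clockwise front-face convention are assumptions, not taken from the abstract.

```python
def is_back_facing(v0, v1, v2):
    """Back-face test in homogeneous projection coordinates (x, y, w).

    det equals (2 * signed area of the projected triangle) * w0*w1*w2,
    so for vertices with w > 0 no perspective divide is needed.
    Assumes a counter-clockwise front-face winding convention.
    """
    x0, y0, w0 = v0
    x1, y1, w1 = v1
    x2, y2, w2 = v2
    det = (x0 * (y1 * w2 - y2 * w1)
           - y0 * (x1 * w2 - x2 * w1)
           + w0 * (x1 * y2 - x2 * y1))
    return det <= 0.0  # clockwise on screen => back-facing
```

Note that a clockwise front-face convention only flips the comparison, which is why such a circuit can be shared with a conventional screen-space culling test.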
Abstract:
A system for rendering a primitive of an image to be displayed, for instance in a mobile 3D graphic pipeline, the primitive including a set of pixels. The system is configured for:
locating the pixels that fall within the area of the primitive; generating, for each pixel located in the area, a set of associated sub-pixels; borrowing a set of sub-pixels (A, B, C, D) from neighboring pixels; subjecting the set of associated sub-pixels and the borrowed set of sub-pixels (A, B, C, D) to adaptive filtering to create an adaptively filtered set of sub-pixels (AA, BB, CC, DD); and subjecting at least the adaptively filtered set of sub-pixels (AA, BB, CC, DD) to further filtering to compute a final pixel adapted for display. Preferably, the set of associated sub-pixels fulfils at least one of the following requirements:
the set includes two associated sub-pixels; the set includes associated sub-pixels placed on triangle edges.
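The two-stage filtering above can be sketched as follows. The abstract does not specify the adaptive rule, so the similarity threshold and the blending of dissimilar borrowed samples below are purely illustrative assumptions; only the overall flow (own samples plus borrowed A, B, C, D, an adaptive stage producing AA, BB, CC, DD, then a final filter) comes from the text.

```python
def render_pixel(own, borrowed, threshold=48):
    """Illustrative sketch of the two-stage sub-pixel filtering.

    own      -- the pixel's associated sub-pixel samples
                (e.g. two samples placed on triangle edges)
    borrowed -- samples A, B, C, D borrowed from neighboring pixels
    threshold, and the blend rule, are assumptions for illustration.
    """
    mean_own = sum(own) / len(own)
    # Adaptive stage: keep a borrowed sample as-is when it is close to
    # the pixel's own content, otherwise pull it toward the pixel mean.
    adapted = [s if abs(s - mean_own) < threshold else (s + mean_own) / 2
               for s in borrowed]              # plays the role of AA..DD
    # Final stage: plain box filter over own + adapted samples.
    samples = list(own) + adapted
    return sum(samples) / len(samples)
```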
Abstract:
A graphic system comprising a pipelined three-dimensional graphic engine for generating image frames for a display includes a graphic engine (110; 210) comprising at least one geometric processing stage (111, 112) performing motion extraction. The engine also includes a rendering stage (113) generating full image frames (KF) at a first frame rate (F2) to be displayed at a second frame rate (F1), higher than the first frame rate (F2). The pipelined graphic engine further comprises a motion encoder (214) receiving motion vector information (MB) and suitable for coding the motion information, e.g. with a variable-length code, while generating a signal (R4) representative of interpolated frames (IF). The motion encoder (214) exploits the motion information (MB) as generated by the geometric elaboration stages (211, 212). A motion compensation stage (237) is provided, fed with the signal representative of interpolated frames (IF) and full image frames, for generating the interpolated frames (IF). A preferred application is in graphic engines intended to operate in association with smart displays through a wireless connection, e.g. in mobile phones.
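The role of the motion compensation stage can be illustrated with a minimal sketch: an interpolated frame is built by displacing blocks of the last full frame according to the per-block motion vectors. The block size, the dictionary layout of the motion vectors, and the border-clamping policy below are assumptions made for the sake of a runnable example.

```python
def motion_compensate(key_frame, motion_vectors, block=2):
    """Sketch of a block-based motion compensation stage.

    key_frame      -- last full frame as a 2D list of samples
    motion_vectors -- {(block_x, block_y): (dx, dy)} per-block motion
    block size and clamping at frame borders are illustrative choices.
    """
    h, w = len(key_frame), len(key_frame[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dx, dy = motion_vectors[(bx // block, by // block)]
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    sx = min(max(x - dx, 0), w - 1)  # clamp at borders
                    sy = min(max(y - dy, 0), h - 1)
                    out[y][x] = key_frame[sy][sx]
    return out
```

Because only the key frames and the (compactly coded) motion vectors must cross the wireless link, the smart display can reconstruct the intermediate frames locally at the higher rate F1.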
Abstract:
A process for realizing an estimate of global motion based on a sequence of subsequent video images, such as those received via an optical mouse (M), for the purposes of detecting its movement. Subsequent video images are represented by digital signals arranged in frames, and for each estimate of a frame with respect to another, the procedure provides operations for:
choosing, from amongst a series of vectors originating from linear combinations of motion vectors resulting from estimates of previous frames and/or constant vectors, the vector considered the best match for estimating the motion occurring between the two frames, said selection operation in turn including the operations of: performing a virtual overlay of the two frames to be compared (T0, T0-1), mutually offset both horizontally and vertically by amounts identified by the motion vector under test; applying a selective grid of pixels to be subjected to testing; calculating, for all pixels selected via the grid, a cost function to determine the effectiveness of the predictor; and identifying the vector with the lowest cost function value as the best for the purposes of estimation.
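The selection loop above can be sketched directly: each candidate vector overlays the two frames with the given horizontal/vertical offset and is scored on a sparse grid of pixels. The abstract does not name the cost function or the grid spacing, so the sum of absolute differences (SAD) and the grid step below are assumptions.

```python
def best_motion_vector(prev, cur, candidates, grid_step=2):
    """Pick the candidate (dx, dy) with the lowest cost function value.

    prev, cur  -- two frames (2D lists of samples) to be compared
    candidates -- predictor vectors (e.g. previous-frame vectors,
                  linear combinations thereof, constant vectors)
    SAD cost and grid_step are illustrative assumptions.
    """
    h, w = len(cur), len(cur[0])

    def cost(dx, dy):
        total = n = 0
        for y in range(0, h, grid_step):      # selective grid of pixels
            for x in range(0, w, grid_step):
                px, py = x + dx, y + dy       # virtual overlay offset
                if 0 <= px < w and 0 <= py < h:
                    total += abs(cur[y][x] - prev[py][px])
                    n += 1
        return total / n if n else float("inf")

    return min(candidates, key=lambda v: cost(*v))
```

Testing only a handful of predictors on a sparse grid keeps the per-frame cost far below that of an exhaustive block search, which suits the optical-mouse use case.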
Abstract:
A method for texture compressing images having a plurality of color components (R, G, B) includes the step of defining color representatives for use in encoding, by defining groups of colors for each said color component (R, G, B) and selecting for each said group a representative color for the group, the median color being chosen as the representative color of the group. Each said group is preferably composed of 3 to 15 colors, and the method includes the step of computing, for each group, an error between each member of the group and said representative color of the group. Typically, the error is computed as the sum of the absolute differences (SAD) between each member of the group and said representative color of the group. In order to select each said group and then extract therefrom said representative color, a criterion is used selected from the group consisting of:
selecting the groups that minimize said error, assuming each candidate group is comprised of the lower colors sorted in ascending order, and likewise for the candidate groups comprised of the higher colors; accruing the error as computed separately for the two groups in all possible combinations; and finding the minimum of the composite error.
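The criterion above can be sketched for a single color component: sort the colors, try every split into a "lower" and a "higher" group, score each group by the SAD to its median, and keep the split with the minimum composite error. The function name and return shape are illustrative, and this sketch ignores the preferred 3-to-15 group-size bound for brevity.

```python
def split_groups(colors):
    """Find the lower/higher split of one color component that minimizes
    the composite SAD to the group medians.

    Returns (min composite error, lower representative, higher
    representative); names and return shape are assumptions.
    """
    s = sorted(colors)                     # lower colors in ascending order

    def median(group):
        return group[len(group) // 2]

    def sad(group):                        # sum of absolute differences
        m = median(group)
        return sum(abs(c - m) for c in group)

    best = None
    for k in range(1, len(s)):             # all possible combinations
        lower, higher = s[:k], s[k:]
        err = sad(lower) + sad(higher)     # composite error
        if best is None or err < best[0]:
            best = (err, median(lower), median(higher))
    return best
```

Using the median rather than the mean keeps each representative an actual member of its group and minimizes the SAD within the group.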