Abstract:
A digital image color correction device employing fuzzy logic, for correcting a facial tone image portion of a digital video image, characterized in that it comprises:
a pixel fuzzifier unit (1) receiving as input a stream of pixels belonging to a sequence of correlated frames of a digital video image and computing a multi-level value representing the membership of each pixel to a skin color class;
a global parameter estimator (2) receiving as input each of said pixels and the relative membership value, and computing a first and a second global parameter which define the characteristics of the portion of said image that belongs to said skin color class;
a processing unit (3) connected downstream of said global parameter estimator and said pixel fuzzifier unit and adapted to correct each of the pixels of said portion of the image that belongs to said skin color class, according to said first global parameter (300), to obtain corrected pixels; and
a processing switch (4) for outputting said pixels or said corrected pixels according to said second global parameter (400).
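A minimal sketch of how such a pipeline could be arranged is given below. The membership function, the meaning assigned to the two global parameters (a hue correction amount and a skin-area fraction), and the correction rule are illustrative assumptions; the abstract does not fix their exact form, only the roles of units (1)-(4).

```python
# Hypothetical sketch of the fuzzy skin-tone correction pipeline described above.
# Membership shape, global parameters and correction rule are assumptions.
import numpy as np

def fuzzify_skin(hue, lo=0.02, peak_lo=0.05, peak_hi=0.11, hi=0.15):
    """Pixel fuzzifier (1): trapezoidal membership of each pixel to the skin color class."""
    m = np.zeros_like(hue, dtype=float)
    rise = (hue >= lo) & (hue < peak_lo)
    flat = (hue >= peak_lo) & (hue <= peak_hi)
    fall = (hue > peak_hi) & (hue <= hi)
    m[rise] = (hue[rise] - lo) / (peak_lo - lo)
    m[flat] = 1.0
    m[fall] = (hi - hue[fall]) / (hi - peak_hi)
    return m

def estimate_global_params(hue, membership, target_hue=0.08):
    """Global parameter estimator (2): a correction amount and a skin-area weight (both assumed)."""
    w = membership.sum()
    if w == 0:
        return 0.0, 0.0
    mean_hue = (hue * membership).sum() / w        # average hue of the skin-class portion
    skin_fraction = w / membership.size            # how much of the frame belongs to the class
    return target_hue - mean_hue, skin_fraction

def correct_frame(hue, min_skin_fraction=0.01):
    """Processing unit (3) plus switch (4): shift skin hues, or pass the frame through."""
    membership = fuzzify_skin(hue)
    shift, skin_fraction = estimate_global_params(hue, membership)
    if skin_fraction < min_skin_fraction:          # switch: too little skin detected, output original pixels
        return hue
    return hue + membership * shift                # correct each pixel in proportion to its membership
```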
Abstract:
A method and a device for motion-estimated and motion-compensated Field Rate Up-conversion (FRU) for video applications, providing for:
a) dividing an image field to be interpolated into a plurality of image blocks (IB), each image block being made up of a respective set of image elements of the image field to be interpolated;
b) for each image block (K(x,y)) of at least a sub-plurality (Q1,Q2) of said plurality of image blocks, considering a group of neighboring image blocks (NB[1]-NB[4]);
c) determining an estimated motion vector for said image block (K(x,y)), describing the movement of said image block (K(x,y)) from a previous image field to a following image field between which the image field to be interpolated lies, on the basis of predictor motion vectors (P[1]-P[4]) associated with said group of neighboring image blocks;
d) determining each image element of said image block (K(x,y)) by interpolation of two corresponding image elements in said previous and following image fields related by said estimated motion vector.
Step c) provides for:
c1) applying to the image block (K(x,y)) each of said predictor motion vectors to determine a respective pair of corresponding image blocks in said previous and following image fields;
c2) for each of said pairs of corresponding image blocks, evaluating an error function (err[i]) which is the Sum of Absolute Differences (SAD) of luminance between corresponding image elements in said pair of corresponding image blocks;
c3) for each pair of said predictor motion vectors, evaluating a degree of homogeneity (H(i,j));
c4) for each pair of said predictor motion vectors, applying a fuzzy rule whose activation level (r[k]) is higher the higher the degree of homogeneity of the pair of predictor motion vectors and the smaller the error functions of the pair of predictor motion vectors;
c5) determining an optimum fuzzy rule having the highest activation level (r[opt]), and determining the best predictor motion vector (P[min]) as the one of the pair associated with said optimum fuzzy rule that has the smaller error function;
c6) determining the estimated motion vector for said image block (K(x,y)) on the basis of said best predictor motion vector (P[min]).
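A minimal sketch of steps c1)-c6) follows, assuming a Python/numpy setting. The SAD window size, the homogeneity measure, the fuzzy membership shapes, and the use of the full predictor displacement (rather than a half-vector toward each field) are assumptions introduced only for illustration.

```python
# Hypothetical sketch of step c): selecting the estimated motion vector for block K(x,y)
# from the predictors P[1]-P[4] of its neighbouring blocks. Assumes the displaced blocks
# stay inside the field and that at least two predictors are supplied.
import numpy as np
from itertools import combinations

def sad(prev_field, next_field, x, y, v, block=8):
    """err[i]: Sum of Absolute Differences of luminance between the pair of blocks
    that predictor v maps onto in the previous and following fields (steps c1, c2)."""
    dx, dy = v
    a = prev_field[y - dy:y - dy + block, x - dx:x - dx + block]
    b = next_field[y + dy:y + dy + block, x + dx:x + dx + block]
    return np.abs(a.astype(int) - b.astype(int)).sum()

def homogeneity(vi, vj, scale=4.0):
    """H(i,j): close to 1 when the two predictors describe nearly the same motion (step c3)."""
    d = abs(vi[0] - vj[0]) + abs(vi[1] - vj[1])
    return max(0.0, 1.0 - d / scale)

def small_error(err, scale=2048.0):
    """Assumed fuzzy membership of 'the SAD is small'."""
    return max(0.0, 1.0 - err / scale)

def estimate_vector(prev_field, next_field, x, y, predictors):
    errs = [sad(prev_field, next_field, x, y, v) for v in predictors]
    best_rule, best_pair = -1.0, None
    for i, j in combinations(range(len(predictors)), 2):
        # c4) rule activation: high homogeneity AND small errors (min as fuzzy AND)
        r = min(homogeneity(predictors[i], predictors[j]),
                small_error(errs[i]), small_error(errs[j]))
        if r > best_rule:                 # c5) keep the rule with the highest activation r[opt]
            best_rule, best_pair = r, (i, j)
    i, j = best_pair
    p_min = predictors[i] if errs[i] <= errs[j] else predictors[j]
    return p_min                          # c6) estimated vector based on the best predictor P[min]
```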
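For step d), one plausible reading of "interpolation of two corresponding image elements" is a temporal average of the elements related by the estimated vector; the simple mean and the block geometry below are assumptions consistent with the functions in the previous sketch.

```python
# Hypothetical sketch of step d): building the interpolated block from the two
# image elements that the estimated motion vector relates in the adjacent fields.
import numpy as np

def interpolate_block(prev_field, next_field, x, y, v, block=8):
    dx, dy = v
    a = prev_field[y - dy:y - dy + block, x - dx:x - dx + block].astype(np.uint16)
    b = next_field[y + dy:y + dy + block, x + dx:x + dx + block].astype(np.uint16)
    return ((a + b) // 2).astype(prev_field.dtype)   # average along the motion trajectory
```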