Abstract:
The filtering device (80) comprises a neuro-fuzzy filter (1; 80) and implements a moving-average filtering technique in which the weights for the final reconstruction of the signal (o_L3(i)) are calculated in a neuro-fuzzy network (3) according to specific fuzzy rules. The fuzzy rules operate on three signal features (X1(i), X2(i), X3(i)) for each input sample (e(i)). The signal features are correlated with the position of the sample in the considered sample window, with the difference between a sample and the sample at the center of the window, and with the difference between a sample and the average of the samples in the window. For the analysis of a voice signal, the filtering device comprises a bank of neuro-fuzzy filters (86, 87). The signal is split into a number of sub-bands, according to wavelet theory, by a bank of analysis filters including a pair of FIR QMFs (H0, H1) and a pair of downsamplers (85, 86); each sub-band signal is filtered by a neuro-fuzzy filter (86, 87), and the sub-bands are then recombined by a bank of synthesis filters including a pair of upsamplers (88, 89), a pair of FIR QMFs (G0, G1), and an adder node (92).
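A minimal sketch in Python of the per-window feature extraction and weighted moving average described above; the window length, the normalization of X1, and the heuristic that stands in for the neuro-fuzzy network (3) are assumptions, since the abstract does not specify the membership functions or the trained rule weights. The QMF analysis/synthesis banks for the voice-signal variant are omitted for brevity.

```python
import numpy as np

def window_features(window):
    """Compute the three features X1, X2, X3 for every sample in the window.

    X1(j): position of the sample inside the window (normalized to [0, 1]),
    X2(j): difference between the sample and the central sample of the window,
    X3(j): difference between the sample and the average of the window.
    """
    n = len(window)
    x1 = np.arange(n) / (n - 1)
    x2 = window - window[n // 2]
    x3 = window - window.mean()
    return x1, x2, x3

def fuzzy_weights(x1, x2, x3):
    """Stand-in for the neuro-fuzzy network (3): map features to weights.

    Samples close to the central sample and to the window average get larger
    weights; the real device derives the weights from its fuzzy rule base.
    """
    w = 1.0 / (1.0 + np.abs(x2) + np.abs(x3))
    return w / w.sum()

def neuro_fuzzy_moving_average(signal, n=9):
    """Weighted moving average: each output sample o_L3(i) is the weighted
    sum of the samples in the window centered on e(i)."""
    signal = np.asarray(signal, dtype=float)
    out = signal.copy()
    half = n // 2
    for i in range(half, len(signal) - half):
        win = signal[i - half:i + half + 1]
        w = fuzzy_weights(*window_features(win))
        out[i] = np.dot(w, win)
    return out
```

For example, neuro_fuzzy_moving_average(noisy, n=9) smooths a 1-D signal while giving less weight to samples that deviate strongly from the window center and average, which is the behavior the fuzzy rules are meant to encode.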
Abstract:
A digital image color correction device employing fuzzy logic, for correcting a facial tone image portion of a digital video image, characterized in that it comprises:
a pixel fuzzifier unit (1) receiving as input a stream of pixels belonging to a sequence of correlated frames of a digital video image and computing a multi-level value representing the membership of each pixel in a skin color class;
a global parameter estimator (2) receiving as input each of said pixels and the respective membership value, and computing a first and a second global parameter which define the characteristics of the portion of said image that belongs to said skin color class;
a processing unit (3) connected downstream of said global parameter estimator and of said pixel fuzzifier unit and adapted to correct each pixel of said portion of the image that belongs to said skin color class, according to said first global parameter (300), to obtain corrected pixels; and
a processing switch (4) for outputting said pixels or said corrected pixels according to said second global parameter (400).
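A minimal sketch, in Python with NumPy, of how units (1)-(4) could be chained, assuming frames in HSV with channel values in [0, 1]; the triangular skin membership function, the choice of hue shift and skin coverage as the two global parameters, and the switching threshold are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def skin_membership(frame_hsv):
    """Pixel fuzzifier (1): multi-level membership of each pixel in the skin
    color class, modeled as a triangular function of hue (assumed shape)."""
    hue = frame_hsv[..., 0]
    center, width = 0.05, 0.08            # assumed skin-hue region
    return np.clip(1.0 - np.abs(hue - center) / width, 0.0, 1.0)

def global_parameters(frame_hsv, membership, target_hue=0.05):
    """Global parameter estimator (2).

    First global parameter (300): membership-weighted hue shift toward a
    target facial tone. Second global parameter (400): fraction of the frame
    belonging to the skin class. Both definitions are assumptions."""
    total = membership.sum()
    if total == 0.0:
        return 0.0, 0.0
    mean_skin_hue = (membership * frame_hsv[..., 0]).sum() / total
    return target_hue - mean_skin_hue, total / membership.size

def correct_frame(frame_hsv, threshold=0.02):
    """Processing unit (3) and switch (4): correct the skin pixels according
    to the first parameter, or output the original pixels when the second
    parameter falls below an (assumed) threshold."""
    membership = skin_membership(frame_hsv)
    hue_shift, coverage = global_parameters(frame_hsv, membership)
    if coverage < threshold:               # switch (4): bypass correction
        return frame_hsv
    out = frame_hsv.copy()
    out[..., 0] = (out[..., 0] + membership * hue_shift) % 1.0
    return out
```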
Abstract:
A method and a device for motion-estimated and motion-compensated Field Rate Up-conversion (FRU) for video applications, providing for:
a) dividing an image field to be interpolated into a plurality of image blocks (IB), each image block made up of a respective set of image elements of the image field to be interpolated;
b) for each image block (K(x,y)) of at least a sub-plurality (Q1, Q2) of said plurality of image blocks, considering a group of neighboring image blocks (NB[1]-NB[4]);
c) determining an estimated motion vector for said image block (K(x,y)), describing the movement of said image block (K(x,y)) from a previous image field to a following image field between which the image field to be interpolated is comprised, on the basis of predictor motion vectors (P[1]-P[4]) associated with said group of neighboring image blocks;
d) determining each image element of said image block (K(x,y)) by interpolation of the two corresponding image elements in said previous and following image fields related by said estimated motion vector.
Step c) provides for:
c1) applying to the image block (K(x,y)) each of said predictor motion vectors to determine a respective pair of corresponding image blocks in said previous and following image fields, respectively;
c2) for each of said pairs of corresponding image blocks, evaluating an error function (err[i]) which is the Sum of Absolute Differences (SAD) of the luminance between corresponding image elements in said pair of corresponding image blocks;
c3) for each pair of said predictor motion vectors, evaluating a degree of homogeneity (H(i,j));
c4) for each pair of said predictor motion vectors, applying a fuzzy rule whose activation level (r[k]) is higher the higher the degree of homogeneity of the pair of predictor motion vectors and the smaller the error functions of the pair of predictor motion vectors;
c5) determining an optimum fuzzy rule having the highest activation level (r[opt]), and determining as the best predictor motion vector (P[min]) the vector of the pair associated with said optimum fuzzy rule that has the smaller error function;
c6) determining the estimated motion vector for said image block (K(x,y)) on the basis of said best predictor motion vector (P[min]).
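A minimal sketch, in Python with NumPy, of steps c1) to c5) for one image block, assuming luminance fields as 2-D arrays and predictor vectors in pixels; the homogeneity measure H(i,j), the fuzzy activation formula for r[k], and the convention of applying the full predictor backward to the previous field and forward to the following field are illustrative assumptions, not the patented rule base.

```python
import numpy as np
from itertools import combinations

def sad(prev_field, next_field, x, y, b, mv):
    """Step c2: err[i], the Sum of Absolute Differences of luminance between
    the pair of blocks obtained by applying predictor mv to block (x, y)
    (assumes the displaced blocks stay inside the fields)."""
    dx, dy = mv
    prev_blk = prev_field[y - dy:y - dy + b, x - dx:x - dx + b].astype(int)
    next_blk = next_field[y + dy:y + dy + b, x + dx:x + dx + b].astype(int)
    return np.abs(prev_blk - next_blk).sum()

def homogeneity(p_i, p_j):
    """Step c3: H(i, j), an assumed measure that grows as the two predictor
    vectors become more similar."""
    return 1.0 / (1.0 + abs(p_i[0] - p_j[0]) + abs(p_i[1] - p_j[1]))

def best_predictor(predictors, prev_field, next_field, x, y, b=8):
    """Steps c1)-c5): select the best predictor motion vector P[min]."""
    errs = [sad(prev_field, next_field, x, y, b, p) for p in predictors]
    r_opt, p_min = -1.0, predictors[0]
    for i, j in combinations(range(len(predictors)), 2):
        # Step c4: assumed activation, higher for a more homogeneous pair
        # of predictors and for smaller error functions.
        r = homogeneity(predictors[i], predictors[j]) / (1.0 + errs[i] + errs[j])
        if r > r_opt:                      # step c5: keep the optimum rule
            r_opt = r
            p_min = predictors[i] if errs[i] <= errs[j] else predictors[j]
    return p_min
```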