Abstract:
The present invention discloses a bleed-through detection method and a bleed-through detection device. The method includes: obtaining a recto image and a verso image, thereby obtaining pixel pairs each including a first point on the recto image and a corresponding second point on the verso image; determining some foreground pixels and some background pixels; performing modeling for four types of pixel pairs, so as to form four models; calculating, for a pixel pair that has not been modeled, similarities of the pixel pair with respect to the four models respectively, so as to determine a type of the pixel pair; and judging, as bleed-through on the verso image, a second point determined as a background pixel which corresponds to a first point determined as a foreground pixel, and judging, as bleed-through on the recto image, a first point determined as a background pixel which corresponds to a second point determined as a foreground pixel.
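The four-model classification step can be sketched as follows. This is a hedged illustration, not the patent's actual formulation: the four types are taken to be the (foreground, foreground), (foreground, background), (background, foreground) and (background, background) pixel-pair combinations, each modeled here as a diagonal Gaussian over (recto, verso) intensities; the function names and toy intensity values are invented for illustration.

```python
import numpy as np

def fit_model(pairs):
    """Fit a diagonal Gaussian to an array of (recto, verso) intensity pairs."""
    pairs = np.asarray(pairs, dtype=float)
    return pairs.mean(axis=0), pairs.std(axis=0) + 1e-6

def log_likelihood(pair, model):
    """Log-density of a pixel pair under one diagonal Gaussian model."""
    mean, std = model
    z = (np.asarray(pair, dtype=float) - mean) / std
    return float(-0.5 * np.sum(z ** 2) - np.sum(np.log(std)))

def classify_pair(pair, models):
    """Assign an unmodeled pair the label of its most similar model."""
    return max(models, key=lambda label: log_likelihood(pair, models[label]))

# Toy training pairs: dark = foreground (ink), bright = background (paper).
models = {
    "FF": fit_model([(20, 25), (30, 28)]),    # ink on both sides
    "FB": fit_model([(25, 230), (30, 240)]),  # ink on recto only
    "BF": fit_model([(235, 20), (240, 30)]),  # ink on verso only
    "BB": fit_model([(240, 245), (250, 248)]),  # paper on both sides
}

label = classify_pair((28, 235), models)  # recto dark, verso bright -> "FB"
```

A verso point classified as background whose recto partner is foreground (and symmetrically for the recto) would then be flagged as bleed-through.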
Abstract:
An apparatus and a method for data processing are provided. The apparatus for data processing includes a modeler configured to build an occlusion object model for an image containing an occlusion object; a renderer configured to render the occlusion object model according to a geometric relationship between the occlusion object and a face image containing no occlusion object, such that the rendered occlusion object image and the face image containing no occlusion object have the same scale and attitude; and a merger configured to merge the face image containing no occlusion object and the rendered occlusion object image into an occluded face image. With the data processing apparatus and the data processing method for face data enhancement, face data having an occlusion object is generated, so that the number of face training data sets can be effectively increased, thereby improving performance of a face-related module.
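The merger's step can be sketched as a mask-based composite. Only the merge is shown; the modeling and rendering to matching scale and attitude are assumed to have happened already, and the function name, mask, and pixel values are illustrative.

```python
import numpy as np

def merge(face, occlusion, mask):
    """Composite the rendered occlusion over the face where mask == 1."""
    face = np.asarray(face, dtype=float)
    occlusion = np.asarray(occlusion, dtype=float)
    mask = np.asarray(mask, dtype=float)[..., None]  # broadcast over channels
    return mask * occlusion + (1.0 - mask) * face

face = np.full((4, 4, 3), 200.0)   # occlusion-free "face" image
occ = np.full((4, 4, 3), 30.0)     # rendered occlusion (e.g. sunglasses)
mask = np.zeros((4, 4))
mask[1:3, :] = 1.0                 # occlusion covers two rows

occluded_face = merge(face, occ, mask)  # synthetic occluded training image
```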
Abstract:
An image processing apparatus and an image processing method are provided, where the apparatus includes: a self-encoder configured to perform self-encoding on an input image to generate multiple feature maps; a parameter generator configured to generate multiple convolution kernels for a convolution neural network based on the multiple feature maps; and an outputter configured to generate, by using the convolution neural network, an output result of the input image based on the input image and the multiple convolution kernels. With the image processing apparatus and the image processing method according to the present disclosure, the accuracy of processing an image by using the convolution neural network can be improved.
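The data flow (feature maps → generated kernels → convolution of the input) can be sketched as below. This is a minimal illustration with invented shapes: a real parameter generator would be a learned layer, whereas here the "generator" simply pools and reshapes the feature maps into kernels.

```python
import numpy as np

def generate_kernels(feature_maps, k=3, n_kernels=2):
    """Map feature maps to n_kernels k x k kernels via global pooling."""
    pooled = feature_maps.mean(axis=(1, 2))   # one scalar per feature map
    vals = np.resize(pooled, n_kernels * k * k)  # tile/truncate to fit
    kernels = vals.reshape(n_kernels, k, k)
    return kernels / (np.abs(kernels).sum(axis=(1, 2), keepdims=True) + 1e-6)

def conv2d(image, kernel):
    """Valid-mode 2-D correlation of a single-channel image."""
    k = kernel.shape[0]
    h, w = image.shape[0] - k + 1, image.shape[1] - k + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

rng = np.random.default_rng(0)
feature_maps = rng.random((4, 8, 8))          # from the self-encoder
kernels = generate_kernels(feature_maps)      # from the parameter generator
image = rng.random((8, 8))                    # the input image
outputs = [conv2d(image, kern) for kern in kernels]  # outputter's responses
```

The point of the design is that the kernels are conditioned on the input image itself, rather than being fixed after training.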
Abstract:
A multi-view vector processing method and a multi-view vector processing device are provided. A multi-view vector x represents an object containing information on at least two non-discrete views. A model of the multi-view vector is established, where the model includes at least the following components: a population mean μ of the multi-view vector, a view component for each view of the multi-view vector, and noise. The population mean μ, parameters of each view component and parameters of the noise are obtained by using training data of the multi-view vector x. The device includes a processor and a storage medium storing program codes, and the program codes implement the aforementioned method when executed by the processor.
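The generative form of the model (x = population mean + per-view components + noise) can be sketched as follows. Only sampling is shown; parameter estimation from training data (e.g. by an EM-style procedure) is omitted, and the dimensions and scales are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
mu = np.ones(dim)                        # population mean of the model

def sample_vector(view_scales, noise_scale=0.1):
    """Draw x = mu + one latent component per view + noise."""
    x = mu.copy()
    for scale in view_scales:            # one component per non-discrete view
        x += rng.normal(0.0, scale, dim)
    x += rng.normal(0.0, noise_scale, dim)
    return x

# Two views with different variances; with many samples, the empirical
# mean of x approaches the population mean mu.
samples = np.stack([sample_vector([0.2, 0.3]) for _ in range(5000)])
estimated_mu = samples.mean(axis=0)
```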
Abstract:
An apparatus for training a classification model includes: a feature extraction unit configured to set, with respect to each training set of a first predetermined number of training sets, feature extraction layers, and extract features of a sample image, where at least two of the training sets at least partially overlap; a feature fusion unit configured to set, with respect to each training set, feature fusion layers, and perform a fusion on the extracted features of the sample image; and a loss determination unit configured to set, with respect to each training set, a loss determination layer, calculate a loss function of the sample image based on the fused feature of the sample image, and train a classification model based on the loss function. The first predetermined number of training sets share at least one layer of the feature fusion layers and feature extraction layers set with respect to each training set.
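The sharing arrangement can be sketched structurally: each training set gets its own extraction and loss layers while at least one fusion layer's weights are the same object across all sets, so a gradient update through any branch moves all of them. Layer sizes and names below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
shared_fusion = rng.normal(size=(8, 8))      # fusion weights shared by all sets

def make_branch():
    """Per-training-set branch: own extraction/loss layers, shared fusion."""
    return {
        "extract": rng.normal(size=(16, 8)),  # per-set feature extraction
        "fusion": shared_fusion,              # shared fusion layer (same array)
        "loss_w": rng.normal(size=(8, 1)),    # per-set loss determination
    }

branches = [make_branch() for _ in range(3)]  # three (overlapping) sets

def forward(branch, sample):
    feat = sample @ branch["extract"]         # extract features
    fused = feat @ branch["fusion"]           # fuse with shared weights
    return fused @ branch["loss_w"]           # scalar fed to the loss

# An in-place update through one branch is seen by every branch.
branches[0]["fusion"] += 0.01
```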
Abstract:
An apparatus for training a classifying model comprises: a first obtaining unit configured to input a sample image to a first machine learning framework, to obtain a first classification probability and a first classification loss; a second obtaining unit configured to input a second image to a second machine learning framework, to obtain a second classification probability and a second classification loss, the two machine learning frameworks having identical structures and sharing identical parameters; a similarity loss calculating unit configured to calculate a similarity loss related to a similarity between the first classification probability and the second classification probability; a total loss calculating unit configured to calculate the sum of the similarity loss, the first classification loss and the second classification loss, as a total loss; and a training unit configured to adjust parameters of the two machine learning frameworks to obtain a trained classifying model.
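The total-loss computation can be sketched as below. The concrete loss definitions are hypothetical stand-ins (cross-entropy for each classification loss, squared difference of probability vectors for the similarity loss); the abstract only specifies that the total is their sum. Parameter sharing between the two frameworks is mirrored by reusing one weight matrix.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(prob, label):
    return float(-np.log(prob[label] + 1e-12))

shared_weights = np.array([[2.0, -1.0], [-1.0, 2.0]])  # both frameworks

def classify(x):
    """Both frameworks: identical structure, identical parameters."""
    return softmax(shared_weights @ x)

x1 = np.array([1.0, 0.0])   # features of the sample image
x2 = np.array([0.9, 0.1])   # features of the second image

p1, p2 = classify(x1), classify(x2)
loss1 = cross_entropy(p1, label=0)            # first classification loss
loss2 = cross_entropy(p2, label=0)            # second classification loss
similarity_loss = float(np.sum((p1 - p2) ** 2))
total_loss = loss1 + loss2 + similarity_loss  # quantity the training minimizes
```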
Abstract:
An identity verification method and an identity verification apparatus based on a voiceprint are provided. The identity verification method based on a voiceprint includes: receiving an unknown voice; extracting a voiceprint of the unknown voice using a neural network-based voiceprint extractor which is obtained through pre-training; concatenating the extracted voiceprint with a pre-stored voiceprint to obtain a concatenated voiceprint; and performing judgment on the concatenated voiceprint using a pre-trained classification model, to verify whether the extracted voiceprint and the pre-stored voiceprint are from a same person. With the identity verification method and the identity verification apparatus, a holographic voiceprint of the speaker can be extracted from a short voice segment, such that the verification result is more robust.
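The verification pipeline (extract → concatenate → classify) can be sketched as below. Every function here is a hypothetical stand-in: the real extractor is a pre-trained neural network and the real decision module is a pre-trained classification model, whereas this sketch hashes the waveform into chunk means and thresholds a cosine similarity.

```python
import numpy as np

def extract_voiceprint(voice, dim=4):
    """Stub extractor: reduce a waveform to a fixed-size embedding."""
    voice = np.asarray(voice, dtype=float)
    return np.array([chunk.mean() for chunk in np.array_split(voice, dim)])

def verify(extracted, stored, threshold=0.5):
    """Concatenate the two voiceprints, then classify the pair.

    Stand-in classifier: cosine similarity between the two halves of the
    concatenated vector, thresholded into a same-person decision.
    """
    concatenated = np.concatenate([extracted, stored])
    a, b = concatenated[:len(extracted)], concatenated[len(extracted):]
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return cos >= threshold

enrolled = extract_voiceprint(np.sin(np.linspace(0, 8, 800)))   # pre-stored
probe = extract_voiceprint(np.sin(np.linspace(0, 8, 800)))      # unknown voice
same_person = verify(probe, enrolled)
```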
Abstract:
An information processing method and an information processing apparatus are disclosed, where the information processing method includes: inputting a plurality of samples to a classifier respectively, to extract a feature vector representing a feature of each sample; and updating parameters of the classifier by minimizing a loss function for the plurality of samples, wherein the loss function is in positive correlation with an intra-class distance for representing a distance between feature vectors of samples belonging to a same class, and is in negative correlation with an inter-class distance for representing a distance between feature vectors of samples belonging to different classes, wherein the intra-class distance of each sample of the plurality of samples is less than a first threshold, the inter-class distance between two different classes is greater than a second threshold, and the second threshold is greater than twice the first threshold.
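The threshold relationship can be sketched with a hinge-style loss. This form is an illustrative choice, not the patent's exact formula: intra-class distances above t1 and inter-class distances below t2 are penalized, and t2 > 2·t1 guarantees that class balls of radius t1 around their centers cannot touch.

```python
import numpy as np

t1, t2 = 1.0, 2.5                        # second threshold > twice the first

def pair_loss(d_intra, d_inter):
    """Positive correlation with d_intra, negative with d_inter."""
    return max(0.0, d_intra - t1) + max(0.0, t2 - d_inter)

# Feature vectors for two classes, two samples each.
feats = {
    "a": [np.array([0.0, 0.0]), np.array([0.3, 0.4])],
    "b": [np.array([5.0, 0.0]), np.array([5.3, 0.4])],
}
d_intra = float(np.linalg.norm(feats["a"][0] - feats["a"][1]))  # 0.5 < t1
d_inter = float(np.linalg.norm(feats["a"][0] - feats["b"][0]))  # 5.0 > t2
loss = pair_loss(d_intra, d_inter)       # both constraints satisfied
```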
Abstract:
Embodiments describe an image retrieval apparatus. The image retrieval apparatus includes an unlabelled image selector for selecting one or more unlabelled image(s) from an image database; and a main learner for training in each feedback round of the image retrieval, estimating relevance of images in the image database and a user's intention, and determining retrieval results, wherein the main learner makes use of the unlabelled image(s) selected by the unlabelled image selector in the estimation. In addition, the image retrieval apparatus may also include an active selector for selecting, in each feedback round and according to estimation results of the main learner, one or more unlabelled image(s) from the image database for the user to label.
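One feedback round can be sketched as below, with invented scoring: the "main learner" is stood in for by cosine similarity of database images to a query vector (representing its relevance and user-intention estimate), and the "active selector" picks the unlabelled images whose scores sit closest to the decision boundary for the user to label.

```python
import numpy as np

rng = np.random.default_rng(2)
database = rng.random((20, 8))            # image feature database
query = rng.random(8)                     # proxy for the user's intention

def relevance(db, q):
    """Cosine similarity of every database image to the query."""
    den = np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12
    return (db @ q) / den

scores = relevance(database, query)
retrieved = np.argsort(scores)[::-1][:5]  # this round's retrieval results

def active_select(scores, labelled, n=3):
    """Pick the n most ambiguous unlabelled images for user labelling."""
    boundary = float(np.median(scores))
    ambiguity = np.abs(scores - boundary)
    order = [i for i in np.argsort(ambiguity) if i not in labelled]
    return order[:n]

to_label = active_select(scores, labelled=set(retrieved))
```

Labels collected on `to_label` would then refine the next round's relevance estimate.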