Abstract:
A processor-implemented facial image generating method includes: determining a first feature vector associated with a pose and a second feature vector associated with an identity by encoding an input image including a face; determining a flipped first feature vector by flipping the first feature vector with respect to an axis in a corresponding space; determining an assistant feature vector based on the flipped first feature vector and rotation information corresponding to the input image; determining a final feature vector based on the first feature vector and the assistant feature vector; and generating an output image including a rotated face by decoding the final feature vector and the second feature vector based on the rotation information.
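The pipeline above can be sketched with stand-in functions; this is a minimal toy illustration, not the patented implementation. The real encoder and decoder are neural networks, and `encode`, the flip axis, and the mixing weights below are all hypothetical choices:

```python
import numpy as np

def encode(image):
    # Hypothetical encoder: in the described method, a neural encoder splits
    # the input face image into a pose feature (first vector) and an
    # identity feature (second vector). Row/column means are stand-ins.
    pose = image.mean(axis=0)
    identity = image.mean(axis=1)
    return pose, identity

def generate_rotated_face(image, rotation):
    pose, identity = encode(image)
    flipped = pose[::-1]                         # flip first vector w.r.t. an axis
    assistant = rotation * flipped               # assistant vector from flip + rotation info
    final = (1.0 - rotation) * pose + assistant  # final vector from first + assistant
    # Hypothetical decoder: a real implementation would decode final + identity,
    # conditioned on the rotation information, into an output face image.
    return np.concatenate([final, identity])

out = generate_rotated_face(np.random.default_rng(0).random((8, 8)), rotation=0.3)
```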
Abstract:
A processor-implemented method with liveness detection includes: receiving a plurality of phase images of different phases; generating a plurality of preprocessed phase images by performing preprocessing, including edge enhancement processing, on the plurality of phase images of different phases; generating a plurality of differential images based on the preprocessed phase images; generating a plurality of low-resolution differential images having lower resolutions than the differential images, based on the differential images; generating a minimum map image based on the low-resolution differential images; and performing a liveness detection on an object in the phase images based on the minimum map image.
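Because this pipeline is mostly image arithmetic, it can be sketched directly. The Laplacian sharpening and 2x average pooling below are stand-ins for the abstract's unspecified edge enhancement and downscaling:

```python
import numpy as np

def edge_enhance(img):
    # Laplacian sharpening (with wrap-around borders via np.roll) as a
    # stand-in for the edge-enhancement preprocessing.
    lap = (4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)
           - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    return img + lap

def downsample(img, k=2):
    # Average-pool k x k blocks to produce a lower-resolution image.
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def minimum_map(phase_images):
    pre = [edge_enhance(p) for p in phase_images]          # preprocessed phase images
    diffs = [np.abs(a - b) for a, b in zip(pre, pre[1:])]  # differential images
    low = [downsample(d) for d in diffs]                   # low-resolution versions
    return np.minimum.reduce(low)                          # pixel-wise minimum map

rng = np.random.default_rng(0)
phases = [rng.random((8, 8)) for _ in range(3)]  # phase images of different phases
mmap = minimum_map(phases)
live_score = float(mmap.mean())  # one simple liveness statistic over the minimum map
```

A real detector would feed the minimum map (or features of it) into a classifier rather than thresholding a mean.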
Abstract:
An on-device training-based user recognition method includes performing on-device training on a feature extractor based on reference data corresponding to generalized users and user data, determining a registration feature vector based on an output from the feature extractor in response to an input of the user data, determining a test feature vector based on an output from the feature extractor in response to an input of test data, and performing user recognition on a test user based on a result of comparing the registration feature vector to the test feature vector.
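The registration/test/compare flow can be sketched as follows. The on-device fine-tuning step is elided here; a fixed random linear map stands in for the trained feature extractor, and cosine similarity is an assumed comparison metric (the abstract does not specify one):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in feature extractor: one linear layer. In the described method this
# would first be fine-tuned on-device using generalized-user reference data
# plus the user's own data.
W = rng.standard_normal((4, 8))

def extract(x):
    return W @ x

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

user_data = rng.standard_normal(8)
registration = extract(user_data)                        # registration feature vector
test_data = user_data + 0.01 * rng.standard_normal(8)    # test data from the same user
test_vec = extract(test_data)                            # test feature vector
is_same_user = cosine(registration, test_vec) > 0.7      # hypothetical threshold
```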
Abstract:
A convolutional neural network (CNN) processing method and apparatus are disclosed. The apparatus may select, based on at least one of a characteristic of a kernel of a convolution layer or a characteristic of an input of the convolution layer, one operation mode from a first operation mode reusing the kernel and a second operation mode reusing the input, and perform a convolution operation based on the selected operation mode.
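The two operation modes compute the same result with different data-reuse patterns, which the following sketch demonstrates. The zero-count selection criterion is a hypothetical stand-in for whatever kernel/input characteristics an implementation would actually use:

```python
import numpy as np

def select_mode(kernel, x):
    # Hypothetical criterion: reuse the operand with more zero elements,
    # since each loaded value is then shared (or skipped) more profitably.
    return ("reuse_kernel"
            if np.count_nonzero(kernel == 0) >= np.count_nonzero(x == 0)
            else "reuse_input")

def conv_reuse_kernel(x, k):
    # First mode: load each kernel element once, reuse it across the input.
    out = np.zeros((x.shape[0] - k.shape[0] + 1, x.shape[1] - k.shape[1] + 1))
    for (ki, kj), kv in np.ndenumerate(k):
        out += kv * x[ki:ki + out.shape[0], kj:kj + out.shape[1]]
    return out

def conv_reuse_input(x, k):
    # Second mode: load each input element once, reuse it across the kernel.
    out = np.zeros((x.shape[0] - k.shape[0] + 1, x.shape[1] - k.shape[1] + 1))
    for (xi, xj), xv in np.ndenumerate(x):
        for (ki, kj), kv in np.ndenumerate(k):
            oi, oj = xi - ki, xj - kj
            if 0 <= oi < out.shape[0] and 0 <= oj < out.shape[1]:
                out[oi, oj] += xv * kv
    return out

rng = np.random.default_rng(0)
x, k = rng.random((5, 5)), rng.random((3, 3))
mode = select_mode(k, x)
```

Both loop orders produce the same valid (cross-correlation-style) convolution output; only the memory-access pattern differs.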
Abstract:
A training method of training an illumination compensation model includes extracting, from a training image, an albedo image of a face area, a surface normal image of the face area, and an illumination feature, the extracting being based on an illumination compensation model; generating an illumination restoration image based on the albedo image, the surface normal image, and the illumination feature; and training the illumination compensation model based on the training image and the illumination restoration image.
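Assuming a Lambertian rendering model (a common choice for albedo/normal/illumination decompositions, though the abstract does not specify one), the restoration step and training loss can be sketched as:

```python
import numpy as np

def restore(albedo, normals, light):
    # Lambertian shading as a stand-in renderer: intensity = albedo * max(n . l, 0).
    shading = np.clip(normals @ light, 0.0, None)
    return albedo * shading

def reconstruction_loss(training_image, albedo, normals, light):
    # The illumination compensation model would be trained to minimize this
    # image-space difference between training image and restoration image.
    return float(np.mean((training_image - restore(albedo, normals, light)) ** 2))

rng = np.random.default_rng(0)
albedo = rng.random((4, 4))                                  # albedo image of a face area
normals = rng.standard_normal((4, 4, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)   # unit surface-normal image
light = np.array([0.0, 0.0, 1.0])                            # illumination feature (direction)
img = restore(albedo, normals, light)                        # illumination restoration image
loss = reconstruction_loss(img, albedo, normals, light)      # zero for a perfect restoration
```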
Abstract:
A method of generating a three-dimensional (3D) face model includes extracting feature points of a face from input images comprising a first face image and a second face image; deforming a generic 3D face model to a personalized 3D face model based on the feature points; projecting the personalized 3D face model to each of the first face image and the second face image; and refining the personalized 3D face model based on a difference in texture patterns between the first face image to which the personalized 3D face model is projected and the second face image to which the personalized 3D face model is projected.
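A heavily simplified sketch of the deform/project/refine steps follows. Snapping landmark vertices to detected feature points stands in for full non-rigid deformation, and orthographic projection stands in for camera projection; both are assumptions for illustration:

```python
import numpy as np

def deform(generic_vertices, landmark_idx, feature_points, alpha=1.0):
    # Move the generic model's landmark vertices toward the detected 2D
    # feature points (a stand-in for full non-rigid model deformation).
    personalized = generic_vertices.copy()
    personalized[landmark_idx, :2] += alpha * (
        feature_points - generic_vertices[landmark_idx, :2])
    return personalized

def project(vertices):
    # Orthographic projection of the 3D model onto the image plane (x, y).
    return vertices[:, :2]

def texture_cost(tex1, tex2):
    # Refinement would minimize the texture-pattern difference between the
    # two images at the projected model positions.
    return float(np.abs(tex1 - tex2).mean())

rng = np.random.default_rng(0)
generic = rng.standard_normal((10, 3))     # generic 3D face model vertices
landmark_idx = np.array([0, 3, 7])         # indices of landmark vertices
pts_img1 = rng.standard_normal((3, 2))     # feature points from the first face image
personalized = deform(generic, landmark_idx, pts_img1)
proj = project(personalized)
cost = texture_cost(rng.random((4, 4)), rng.random((4, 4)))
```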
Abstract:
A method and apparatus for object recognition are provided. A processor-implemented method includes extracting feature maps including local feature representations from an input image, generating a global feature representation corresponding to the input image by fusing the local feature representations, and performing a recognition task on the input image based on the local feature representations and the global feature representation.
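The fusion step can be sketched as follows, with a softmax-weighted average standing in for whatever learned fusion module an actual implementation would use:

```python
import numpy as np

def fuse(local_features):
    # Fuse local feature representations into one global representation.
    # A softmax-weighted average is a hypothetical stand-in for a learned
    # fusion module (e.g., attention over local features).
    scores = local_features.sum(axis=1)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return (w[:, None] * local_features).sum(axis=0)

rng = np.random.default_rng(0)
local = rng.standard_normal((6, 16))        # 6 local feature vectors from feature maps
global_feat = fuse(local)                   # global feature representation
# Both local and global representations feed the recognition task.
task_input = np.concatenate([local.ravel(), global_feat])
```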
Abstract:
A method with biometric information spoof detection includes: extracting an embedding vector from an intermediate layer of a neural network configured to detect, from an image including biometric information of a user, whether the biometric information is spoofed; detecting first information regarding whether the biometric information is spoofed, based on the embedding vector; and detecting second information regarding whether the biometric information is spoofed, based on whether the first information is detected, using an output vector output from an output layer of the neural network.
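This two-stage structure resembles an early-exit classifier, sketched below. The random weights, the tanh/sigmoid layers, and the confidence threshold are all placeholders; only the control flow (decide from the intermediate embedding when confident, otherwise fall through to the output layer) mirrors the description:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))   # stand-in intermediate layer
w_shallow = rng.standard_normal(8)  # stand-in head on the embedding
w_out = rng.standard_normal(8)      # stand-in output layer

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def detect_spoof(image, threshold=0.9):
    emb = np.tanh(W1 @ image)            # embedding vector from an intermediate layer
    p_early = sigmoid(w_shallow @ emb)   # first spoof information from the embedding
    if p_early >= threshold or p_early <= 1 - threshold:
        return bool(p_early >= 0.5)      # confident: early exit on first information
    p_final = sigmoid(w_out @ emb)       # second information from the output layer
    return bool(p_final >= 0.5)

decision = detect_spoof(rng.standard_normal(16))
```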
Abstract:
A neural network processing method and apparatus based on nested bit representation are provided. The processing method includes obtaining first weights for a first layer of a source model corresponding to a first layer of a neural network, determining a bit-width for the first layer of the neural network, obtaining second weights for the first layer of the neural network by extracting at least one bit corresponding to the determined bit-width from each of the first weights, and processing input data of the first layer of the neural network by executing the first layer of the neural network based on the obtained second weights.
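The nested bit extraction can be sketched concretely: taking the most significant bits of each source weight yields a lower-precision weight set "nested" inside the source model. The 8-bit unsigned source format below is an assumption for illustration:

```python
import numpy as np

def extract_bits(source_weights, bit_width, source_bits=8):
    # Keep the `bit_width` most significant bits of each source weight, so
    # every lower bit-width layer is nested inside the source model's weights.
    return source_weights >> (source_bits - bit_width)

w8 = np.array([200, 57, 130], dtype=np.uint8)  # first weights (8-bit source model)
w4 = extract_bits(w8, 4)                       # second weights at a 4-bit width
# Executing the layer with the extracted weights (a dot product stands in
# for the layer's computation on its input data):
y = int(np.dot(w4.astype(np.int32), np.array([1, 2, 3])))
```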