Abstract:
Apparatuses, methods and storage medium associated with capturing images are disclosed herein. In embodiments, the apparatus may include a face tracker to receive an image frame, analyze the image frame for a face, and on identification of a face in the image frame, evaluate the face to determine whether the image frame comprises an acceptable or unacceptable face pose. Further, the face tracker may be configured to provide instructions for taking another image frame, on determination of the image frame having an unacceptable face pose, with the instructions designed to improve the likelihood that the other image frame will comprise an acceptable face pose. Other embodiments may be described and/or claimed.
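The face tracker described above can be sketched as a pose check plus instruction generation. This is a minimal illustration, not the claimed implementation: the pose representation (yaw/pitch angles), the acceptability thresholds, and the wording of the retake instructions are all assumptions, since the abstract specifies none of them.

```python
from dataclasses import dataclass

# Hypothetical acceptability thresholds; the abstract does not
# specify what makes a face pose acceptable.
MAX_YAW_DEG = 20.0
MAX_PITCH_DEG = 15.0

@dataclass
class FacePose:
    yaw: float    # head turn left/right, in degrees
    pitch: float  # head tilt up/down, in degrees

def evaluate_pose(pose: FacePose) -> bool:
    """Return True when the face pose is within acceptable limits."""
    return abs(pose.yaw) <= MAX_YAW_DEG and abs(pose.pitch) <= MAX_PITCH_DEG

def retake_instructions(pose: FacePose) -> list:
    """Suggest corrections intended to make the next captured
    frame more likely to contain an acceptable face pose."""
    tips = []
    if pose.yaw > MAX_YAW_DEG:
        tips.append("turn your head slightly left")
    elif pose.yaw < -MAX_YAW_DEG:
        tips.append("turn your head slightly right")
    if pose.pitch > MAX_PITCH_DEG:
        tips.append("lower your chin slightly")
    elif pose.pitch < -MAX_PITCH_DEG:
        tips.append("raise your chin slightly")
    return tips
```

In use, a capture loop would call `evaluate_pose` on each frame and display `retake_instructions` to the user whenever the check fails.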
Abstract:
Examples of systems and methods for transmitting facial motion data and animating an avatar are generally described herein. A system may include an image capture device to capture a series of images of a face, a facial recognition module to compute facial motion data for each of the images in the series of images, and a communication module to transmit the facial motion data to an animation device, wherein the animation device is to use the facial motion data to animate an avatar on the animation device.
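The capture-compute-transmit pipeline above can be illustrated as follows. This is a hedged sketch: the per-frame motion-data keys (`frame`, `mouth_open`, `brow_raise`) and the JSON payload format are invented for illustration; the abstract does not name the facial motion parameters or the wire format.

```python
import json

def compute_facial_motion(image_id: int) -> dict:
    """Stand-in for the facial recognition module: returns motion
    data for one image in the series (keys are hypothetical)."""
    return {"frame": image_id, "mouth_open": 0.1 * image_id, "brow_raise": 0.0}

def transmit(frames: list, send) -> int:
    """Compute facial motion data for each captured image and pass
    each serialized payload to the communication callback `send`.
    Returns the number of payloads transmitted."""
    for f in frames:
        send(json.dumps(compute_facial_motion(f)))
    return len(frames)
```

On the receiving side, the animation device would deserialize each payload and apply the motion parameters to its avatar model.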
Abstract:
Apparatuses, methods and storage medium associated with creating an avatar video are disclosed herein. In embodiments, the apparatus may include one or more facial expression engines, an animation-rendering engine, and a video generator. The one or more facial expression engines may be configured to receive video, voice and/or text inputs, and, in response, generate a plurality of animation messages having facial expression parameters that depict facial expressions for a plurality of avatars based at least in part on the video, voice and/or text inputs received. The animation-rendering engine may be configured to receive the plurality of animation messages, and drive a plurality of avatar models, to animate and render the plurality of avatars with the facial expressions depicted. The video generator may be configured to capture the animation and rendering of the plurality of avatars, to generate a video. Other embodiments may be described and/or claimed.
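The three-stage pipeline (expression engines, animation-rendering engine, video generator) can be sketched end to end. Everything concrete here is an assumption for illustration: text inputs stand in for video/voice/text, a single `smile` parameter stands in for the facial expression parameters, and strings stand in for rendered frames and the video container.

```python
def expression_engine(inputs):
    """Turn a list of text inputs (one per avatar) into animation
    messages with facial expression parameters. The 'smile'
    parameter and the keyword trigger are illustrative only."""
    return [{"avatar": i, "smile": 1.0 if "happy" in text else 0.0}
            for i, text in enumerate(inputs)]

def render_engine(messages):
    """Drive one avatar model per animation message; returns
    rendered 'frames' (strings standing in for pixel buffers)."""
    return [f"avatar{m['avatar']}:smile={m['smile']}" for m in messages]

def generate_video(frames):
    """Capture the rendered frames into a single 'video' artifact."""
    return "|".join(frames)
```

A caller would chain the stages: `generate_video(render_engine(expression_engine(inputs)))`.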
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may include a gesture tracker and an animation engine. The gesture tracker may be configured to detect and track a user gesture that corresponds to a canned facial expression, the user gesture including a duration component corresponding to a duration the canned facial expression is to be animated. Further, the gesture tracker may be configured to respond to a detection and tracking of the user gesture, and output one or more animation messages that describe the detected/tracked user gesture or identify the canned facial expression, and the duration. The animation engine may be configured to receive the one or more animation messages, and drive an avatar model, in accordance with the one or more animation messages, to animate the avatar with the canned facial expression for the duration. Other embodiments may be described and/or claimed.
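The gesture-tracker/animation-engine split above can be sketched as a message producer and consumer. The gesture names, the gesture-to-expression mapping, and the frame rate are all hypothetical; the abstract does not enumerate which gestures map to which canned expressions.

```python
# Hypothetical gesture-to-canned-expression table (illustrative only).
CANNED = {"tongue_out_gesture": "tongue_out", "wink_gesture": "wink"}

def track_gesture(gesture, duration_s):
    """Gesture tracker: on recognizing a tracked user gesture, emit
    an animation message identifying the canned facial expression
    and the duration it is to be animated; None if unrecognized."""
    if gesture not in CANNED:
        return None
    return {"expression": CANNED[gesture], "duration": duration_s}

def animate(message):
    """Animation engine: drive the avatar model per the message,
    producing one 'frame' per tenth of a second of the requested
    duration, each showing the canned expression."""
    n = int(message["duration"] * 10)
    return [message["expression"]] * n
```

The duration component is what distinguishes this scheme from a one-shot trigger: the engine holds the canned expression for exactly the tracked interval.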
Abstract:
Systems and methods may provide for identifying one or more facial expressions of a subject in a video signal and generating avatar animation data based on the one or more facial expressions. Additionally, the avatar animation data may be incorporated into an audio file associated with the video signal. In one example, the audio file is sent to a remote client device via a messaging application. Systems and methods may also facilitate the generation of avatar icons and doll animations that mimic the actual facial features and/or expressions of specific individuals.
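Incorporating avatar animation data into an audio file, as described above, can be illustrated with a toy container format. The packing scheme (animation bytes appended to the audio payload with a 4-byte length footer) is entirely an assumption for the sketch; the abstract does not specify how the animation data is embedded.

```python
import struct

def embed_animation(audio_bytes: bytes, animation: bytes) -> bytes:
    """Append avatar animation data to an audio payload, followed
    by a little-endian 4-byte length footer so the receiver can
    split the two apart (hypothetical container format)."""
    return audio_bytes + animation + struct.pack("<I", len(animation))

def extract_animation(blob: bytes):
    """Recover (audio, animation) from the combined payload on the
    remote client after delivery via the messaging application."""
    (n,) = struct.unpack("<I", blob[-4:])
    return blob[:-4 - n], blob[-4 - n:-4]
```

A single file carrying both streams lets the remote client play the audio and drive the avatar animation in sync from one message attachment.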
Abstract:
Methods, apparatuses, and articles associated with gesture recognition using depth images are disclosed herein. In various embodiments, an apparatus may include a face detection engine configured to determine whether a face is present in one or more gray images of respective image frames generated by a depth camera, and a hand tracking engine configured to track a hand in one or more depth images generated by the depth camera. The apparatus may further include a feature extraction and gesture inference engine configured to extract features based on results of the tracking by the hand tracking engine, and infer a hand gesture based at least in part on the extracted features. Other embodiments may also be disclosed and claimed.
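The three-engine design above (face detection on gray images, hand tracking on depth images, feature extraction and gesture inference) can be sketched with toy detectors. Each function body is a deliberately simplified placeholder, not the claimed algorithm: the brightness criterion, nearest-pixel hand tracking, and horizontal-motion feature are illustrative assumptions.

```python
def detect_face(gray) -> bool:
    """Stand-in face detection engine over a gray image (a 2D list
    of intensities): here it merely checks for a bright region,
    a placeholder for a real detector."""
    return any(v > 200 for row in gray for v in row)

def track_hand(depth):
    """Stand-in hand tracking engine over a depth image: returns
    the (row, col) of the nearest (smallest-depth) pixel as the
    hand position, assuming the hand is closest to the camera."""
    best = min((depth[r][c], r, c)
               for r in range(len(depth)) for c in range(len(depth[0])))
    return best[1], best[2]

def infer_gesture(positions) -> str:
    """Feature extraction and gesture inference: extract one simple
    feature (net horizontal motion) from the tracked hand positions
    and infer a swipe gesture from its sign."""
    dx = positions[-1][1] - positions[0][1]
    return "swipe_right" if dx > 0 else "swipe_left" if dx < 0 else "hold"
```

In a pipeline, `detect_face` would gate the rest: only when a face is present in the gray images would the hand be tracked across depth frames and a gesture inferred from the resulting trajectory.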