Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may include a gesture tracker and an animation engine. The gesture tracker may be configured to detect and track a user gesture that corresponds to a canned facial expression, the user gesture including a duration component corresponding to a duration for which the canned facial expression is to be animated. Further, the gesture tracker may be configured, in response to detection and tracking of the user gesture, to output one or more animation messages that describe the detected and tracked user gesture or identify the canned facial expression, and the duration. The animation engine may be configured to receive the one or more animation messages and drive an avatar model, in accordance with the one or more animation messages, to animate the canned facial expression on the avatar for the duration. Other embodiments may be described and/or claimed.
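For illustration only, a minimal Python sketch of the tracker-to-engine flow described above. The class and field names (GestureTracker, AnimationMessage, AnimationEngine, AvatarModel) and the gesture-to-expression table are assumptions for the sketch and are not taken from the disclosure.

    # Minimal sketch: a gesture tracker emits animation messages that an
    # animation engine consumes to drive an avatar model. Names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class AnimationMessage:
        expression_id: str   # identifies the canned facial expression
        duration_s: float    # how long the expression is to be animated

    class GestureTracker:
        # Maps a detected and tracked user gesture to a canned expression.
        GESTURE_TO_EXPRESSION = {"wink_hold": "wink", "kiss_hold": "blow_kiss"}

        def on_gesture(self, gesture_name, hold_time_s):
            expression = self.GESTURE_TO_EXPRESSION.get(gesture_name)
            if expression is None:
                return []
            # The gesture's hold time supplies the duration component.
            return [AnimationMessage(expression, hold_time_s)]

    class AvatarModel:
        # Stand-in for a real avatar model / renderer.
        def play_canned_expression(self, expression_id, duration_s):
            print(f"animating '{expression_id}' for {duration_s:.1f}s")

    class AnimationEngine:
        # Drives the avatar model in accordance with received animation messages.
        def __init__(self, avatar_model):
            self.avatar_model = avatar_model

        def apply(self, messages):
            for msg in messages:
                self.avatar_model.play_canned_expression(msg.expression_id, msg.duration_s)

    # Usage
    engine = AnimationEngine(AvatarModel())
    engine.apply(GestureTracker().on_gesture("wink_hold", 2.5))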
Abstract:
Systems and methods may provide for identifying one or more facial expressions of a subject in a video signal and generating avatar animation data based on the one or more facial expressions. Additionally, the avatar animation data may be incorporated into an audio file associated with the video signal. In one example, the audio file is sent to a remote client device via a messaging application. Systems and methods may also facilitate the generation of avatar icons and doll animations that mimic the actual facial features and/or expressions of specific individuals.
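For illustration only, a minimal Python sketch of the pipeline described above, under the assumption that per-frame expression parameters are serialized as JSON and appended to the audio payload as a tagged blob. The function names and the encoding are illustrative; the disclosure does not prescribe a particular format.

    # Sketch: extract facial expressions per video frame, encode them as avatar
    # animation data, and attach that data to the accompanying audio payload.
    import json

    def detect_expressions(frame):
        # Placeholder for a real facial-expression detector; would return e.g.
        # {"smile": 0.8, "brow_raise": 0.1} for the given frame.
        return {"smile": 0.0, "brow_raise": 0.0}

    def build_animation_data(frames, fps):
        return {"fps": fps, "keyframes": [detect_expressions(f) for f in frames]}

    def attach_animation_track(audio_bytes, animation_data):
        # Illustrative container: a tagged JSON blob appended after the audio
        # payload. A production system would use the audio format's own
        # metadata facility instead.
        blob = json.dumps(animation_data).encode("utf-8")
        return audio_bytes + b"AVTR" + len(blob).to_bytes(4, "big") + blob

    # Usage: build animation data from decoded video frames, attach it to the
    # audio payload, and hand the combined bytes to a messaging application for
    # delivery; the remote client splits the tag back out and drives the avatar
    # while playing the audio.
    payload = attach_animation_track(b"<audio bytes>",
                                     build_animation_data([None, None], fps=25))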
Abstract:
Methods, apparatuses and storage medium associated with cooperative provision of personalized user functions using a shared device and a personal device are disclosed herein. In various embodiments, a personal device (PD) method may include receiving, by a personal device of a user, a request to perform a user function to be cooperatively provided by the personal device and a shared device (SD) configured for use by multiple users; and cooperating with the shared device, by the personal device, to provide the requested user function personalized to the user of the personal device. In various embodiments, an SD method may include similar receiving and cooperating operations, performed by the SD. Other embodiments may be disclosed or claimed.
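For illustration only, a minimal Python sketch of the cooperative flow between a personal device and a shared device. The method names and the preferences payload are assumptions for the sketch; the disclosure only requires that the two devices cooperate to provide the requested function personalized to the PD's user.

    # Sketch: the PD receives the user's request and cooperates with the SD,
    # which is shared by multiple users, to perform the function with that
    # user's personalization applied.
    class PersonalDevice:
        def __init__(self, user_id, preferences):
            self.user_id = user_id
            self.preferences = preferences   # e.g. {"language": "de", "subtitles": True}

        def request_function(self, shared_device, function_name):
            # Forward the request together with personalization data.
            return shared_device.perform(function_name, self.user_id, self.preferences)

    class SharedDevice:
        # Configured for use by multiple users; applies per-user settings on demand.
        def perform(self, function_name, user_id, preferences):
            return f"{function_name} for {user_id} with settings {preferences}"

    # Usage
    pd = PersonalDevice("alice", {"language": "de", "subtitles": True})
    sd = SharedDevice()
    print(pd.request_function(sd, "play_media"))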
Abstract:
Methods, apparatuses, and articles associated with gesture recognition using depth images are disclosed herein. In various embodiments, an apparatus may include a face detection engine configured to determine whether a face is present in one or more gray images of respective image frames generated by a depth camera, and a hand tracking engine configured to track a hand in one or more depth images generated by the depth camera. The apparatus may further include a feature extraction and gesture inference engine configured to extract features based on results of the tracking by the hand tracking engine, and infer a hand gesture based at least in part on the extracted features. Other embodiments may also be disclosed and claimed.
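For illustration only, a minimal Python sketch of the three-stage pipeline described above: face detection on the camera's gray images, hand tracking on its depth images, then feature extraction and gesture inference. The hand-segmentation placeholder, the displacement features, and the swipe classifier are assumptions for the sketch, not taken from the disclosure.

    # Sketch of the three engines; each placeholder stands in for a real detector,
    # tracker, or classifier.
    class FaceDetectionEngine:
        def face_present(self, gray_frames):
            # Placeholder: a real detector (e.g. a cascade classifier) would run here.
            return any(frame is not None for frame in gray_frames)

    class HandTrackingEngine:
        def track(self, depth_frames):
            # Placeholder: take the nearest sample per frame as the hand centroid,
            # standing in for a segmented and tracked hand blob.
            centroids = []
            for frame in depth_frames:          # frame: list of (x, y, depth) samples
                x, y, _ = min(frame, key=lambda p: p[2])
                centroids.append((x, y))
            return centroids

    class GestureInferenceEngine:
        def extract_features(self, centroids):
            # Illustrative features: net horizontal and vertical displacement.
            (x0, y0), (x1, y1) = centroids[0], centroids[-1]
            return (x1 - x0, y1 - y0)

        def infer(self, centroids):
            dx, dy = self.extract_features(centroids)
            if abs(dx) > abs(dy):
                return "swipe_right" if dx > 0 else "swipe_left"
            return "swipe_down" if dy > 0 else "swipe_up"

    # Usage with two toy depth frames (lists of (x, y, depth) samples); face
    # detection on the gray stream would typically gate this pipeline.
    frames = [[(10, 20, 500), (40, 22, 480)], [(60, 21, 470), (12, 19, 510)]]
    print(GestureInferenceEngine().infer(HandTrackingEngine().track(frames)))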