Generating graphical representation of facial expressions of a user wearing a head mounted display accounting for previously captured images of the user's facial expressions
Abstract:
A virtual reality (VR) or augmented reality (AR) head-mounted display (HMD) includes various image capture devices that capture images of portions of the user's face. Through image analysis, points of each portion of the user's face are identified from the images and their movement is tracked. The identified points are mapped to a three-dimensional model of a face. From the identified points, a blendshape vector is determined for each captured image, resulting in a set of vectors indicating the user's facial expressions. In various embodiments, the blendshape vector may be augmented by a direct expression model that maps images directly to blendshape coefficients for a set of facial expressions, based on information captured from a set of users. From the blendshape vectors and transforms mapping the captured images to three dimensions, the three-dimensional model of the face is altered to render the user's facial expressions.
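To illustrate the blendshape mechanism the abstract refers to, the sketch below shows how a coefficient vector deforms a neutral face mesh as a weighted sum of expression deltas. This is a generic, minimal illustration of blendshape animation, not the patented implementation; the mesh sizes, expression names, and `apply_blendshapes` helper are all hypothetical.

```python
import numpy as np

# Toy dimensions for illustration; a real face model has thousands of
# vertices and tens of expression blendshapes.
N_VERTS = 4
N_SHAPES = 3  # e.g. smile, jaw-open, brow-raise (hypothetical labels)

neutral = np.zeros((N_VERTS, 3))  # resting face mesh, one 3-D point per vertex
# Each blendshape is a per-vertex displacement from the neutral face.
blendshapes = np.random.default_rng(0).normal(size=(N_SHAPES, N_VERTS, 3))

def apply_blendshapes(coeffs: np.ndarray) -> np.ndarray:
    """Deform the neutral mesh by a blendshape coefficient vector."""
    # tensordot sums coeffs[k] * blendshapes[k] over all expressions
    return neutral + np.tensordot(coeffs, blendshapes, axes=1)

# One frame's estimated coefficients, e.g. a strong "smile".
frame_coeffs = np.array([0.8, 0.1, 0.0])
mesh = apply_blendshapes(frame_coeffs)
print(mesh.shape)  # (4, 3)
```

In the system described by the abstract, the per-frame coefficient vector would come from analyzing the HMD's captured face images; here it is simply hard-coded.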