Digital object animation using control points

    Publication number: US11861779B2

    Publication date: 2024-01-02

    Application number: US17550432

    Application date: 2021-12-14

    Applicant: Adobe Inc.

    CPC classification number: G06T13/80 G06F3/012 G06F3/017 G06T7/33

    Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions used to generate the animation. In a further example, an input pose is normalized using a global scale factor to address changes in a z-position of a subject across different digital images. Further still, a body tracking module is used to compute initial feature positions, which are then used to initialize a face tracker module that generates feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
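The global scale factor described in the abstract can be illustrated with a short sketch. The idea is that a subject moving toward or away from the camera changes apparent size, so feature positions are rescaled by a reference distance assumed fixed on the subject before animation. The choice of reference features (e.g., two shoulder points) and the centering step are illustrative assumptions, not details from the patent:

```python
import numpy as np

def normalize_pose(features, ref_a, ref_b, target_dist=1.0):
    """Normalize 2D feature positions with a global scale factor so poses
    captured at different z-distances from the camera become comparable.

    features: (N, 2) array of feature positions in image coordinates.
    ref_a, ref_b: indices of two features whose true separation is assumed
    fixed on the subject (e.g., the shoulders) -- an illustrative choice.
    """
    features = np.asarray(features, dtype=float)
    # apparent distance between the reference features shrinks as the
    # subject moves away from the camera; use it to derive a global scale
    ref_dist = np.linalg.norm(features[ref_a] - features[ref_b])
    scale = target_dist / ref_dist
    centroid = features.mean(axis=0)
    return (features - centroid) * scale
```

With this normalization, two captures of the same pose at different distances (and image offsets) map to identical normalized feature positions.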

    Predicting secondary motion of multidimensional objects based on local patch features

    Publication number: US11830138B2

    Publication date: 2023-11-28

    Application number: US17206813

    Application date: 2021-03-19

    Applicant: Adobe Inc.

    CPC classification number: G06T17/20 G06N3/08 G06T7/20 G06T15/08

    Abstract: Various disclosed embodiments are directed to estimating that a first vertex of a patch will change from a first position to a second position (the second position being at least partially indicative of secondary motion) based at least in part on one or more features of: primary motion data, one or more material properties, and constraint data associated with the particular patch. Such estimation can be made for some or all of the patches of an entire volumetric mesh in order to accurately predict the overall secondary motion of an object. This, among other functionality described herein, addresses the inaccuracies, excessive computer resource consumption, and poor user experience of existing technologies.
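The inputs and outputs of the per-patch estimate can be sketched as follows. The patent learns the mapping from (primary motion, material properties, constraints) to each vertex's second position; here a simple damped blend of an inertial guess and the primary-motion target stands in for that learned function, purely to show the shape of the data involved:

```python
import numpy as np

def estimate_patch_secondary(primary_pos, prev_pos, prev_prev_pos,
                             stiffness, constrained):
    """Toy per-patch estimate of secondary motion on a volumetric mesh.

    primary_pos: (V, 3) positions the patch vertices would take under the
    primary (e.g., skeletal) motion alone.
    prev_pos / prev_prev_pos: patch positions at the two previous frames,
    which supply an inertial term.
    stiffness: scalar material property in [0, 1]; 1 = rigidly follows the
    primary motion, 0 = pure inertia.
    constrained: (V,) boolean mask of vertices pinned to the primary motion.

    This damped blend is an illustrative stand-in, not the learned model.
    """
    inertial = 2.0 * prev_pos - prev_prev_pos      # constant-velocity guess
    second = stiffness * primary_pos + (1.0 - stiffness) * inertial
    second[constrained] = primary_pos[constrained]  # constraints are pinned
    return second
```

Stiff materials track the primary motion closely, while soft, unconstrained regions overshoot and lag behind it, which is the qualitative signature of secondary motion.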

    Generative shape creation and editing

    Publication number: US11037341B1

    Publication date: 2021-06-15

    Application number: US16744105

    Application date: 2020-01-15

    Applicant: Adobe Inc.

    Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
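The conversion of handles into signed distance field (SDF) representations can be illustrated with a minimal sketch. A circular primitive on a 2D grid is an assumption made here for brevity; the patent describes converting sets of handles into SDFs without committing to this primitive or resolution:

```python
import numpy as np

def circle_handle_sdf(center, radius, grid_size=32, extent=1.0):
    """Sample the signed distance field of one circular 'handle' on a
    regular grid: negative inside the handle, positive outside."""
    xs = np.linspace(-extent, extent, grid_size)
    X, Y = np.meshgrid(xs, xs)
    dist = np.sqrt((X - center[0]) ** 2 + (Y - center[1]) ** 2)
    return dist - radius

def shape_sdf(handles, grid_size=32, extent=1.0):
    """Combine per-handle SDFs with a pointwise min, i.e., the union of
    the handle primitives forms the shape."""
    fields = [circle_handle_sdf(c, r, grid_size, extent) for c, r in handles]
    return np.minimum.reduce(fields)
```

Representing each handle as a dense SDF grid gives the downstream model a fixed-size, differentiable input regardless of how many analytic parameters the handle itself has.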

    Generation of parameterized avatars

    Publication number: US10607065B2

    Publication date: 2020-03-31

    Application number: US15970831

    Application date: 2018-05-03

    Applicant: Adobe Inc.

    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
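The structure of a parameterized avatar (a vector of indices into a library of cartoon features) can be sketched briefly. A nearest-neighbour lookup in an embedding space stands in here for the trained machine-learning model described in the abstract; the attribute names and embeddings are hypothetical:

```python
import numpy as np

def parameterize_avatar(photo_feats, library):
    """For each facial attribute, pick the index of the library cartoon
    feature whose embedding is nearest the photo's embedding for that
    attribute. The resulting index mapping is the parameterized avatar.

    photo_feats: dict mapping attribute name -> (D,) embedding vector.
    library: dict mapping attribute name -> (K, D) array of embeddings,
    one row per cartoon feature in the style's library.
    """
    return {attr: int(np.argmin(np.linalg.norm(lib - photo_feats[attr], axis=1)))
            for attr, lib in library.items()}
```

Because the avatar is just indices plus any continuous parameters, it can be re-rendered in any pose, which is what makes the representation animatable.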

    Realistically illuminated virtual objects embedded within immersive environments

    Publication number: US10600239B2

    Publication date: 2020-03-24

    Application number: US15877142

    Application date: 2018-01-22

    Applicant: Adobe Inc.

    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector. The resulting illumination of the VO matches the second combination of the intensities and surface reflections.
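The weighting-vector step can be illustrated with a small sketch. Each basis image is captured with one direct source on (indirect effects such as surface reflections are baked into the basis), and the runtime image is modeled as a superposition of the bases. Solving a least-squares problem for the weights is one natural formulation assumed here; the abstract does not commit to a specific solver:

```python
import numpy as np

def solve_illumination_weights(basis_images, runtime_image):
    """Recover the weighting vector w such that the runtime image is
    approximately sum_i w[i] * basis_images[i]."""
    # stack each basis image as one column: (num_pixels, num_sources)
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    w, *_ = np.linalg.lstsq(A, runtime_image.ravel(), rcond=None)
    return w

def relight_virtual_object(vo_basis_renders, weights):
    """Illuminate the virtual object with the same superposition of its
    per-source renders, so its lighting matches the environment."""
    return sum(w * r for w, r in zip(weights, vo_basis_renders))
```

Applying the recovered weights to the virtual object's per-source renders is what makes the embedded object's illumination track the current combination of source intensities.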

    Synthesizing hair features in image content based on orientation data from user guidance

    Publication number: US20190295272A1

    Publication date: 2019-09-26

    Application number: US15928520

    Application date: 2018-03-22

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve synthesizing image content depicting facial hair or other hair features based on orientation data obtained using guidance inputs or other user-provided guidance data. For instance, a graphic manipulation application accesses guidance data identifying a desired hair feature and an appearance exemplar having image data with color information for the desired hair feature. The graphic manipulation application transforms the guidance data into an input orientation map. The graphic manipulation application matches the input orientation map to an exemplar orientation map having a higher resolution than the input orientation map. The graphic manipulation application generates the desired hair feature by applying the color information from the appearance exemplar to the exemplar orientation map. The graphic manipulation application outputs the desired hair feature at a presentation device.
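The matching step between the coarse input orientation map and a higher-resolution exemplar can be sketched as follows. Block-averaging each exemplar down to the input's resolution and comparing by mean squared error is an illustrative matching criterion assumed here; the abstract describes the match without fixing the metric:

```python
import numpy as np

def match_exemplar(input_orient, exemplar_orients):
    """Return the index of the exemplar orientation map closest to the
    lower-resolution input orientation map.

    input_orient: (h, w) array of per-cell orientations from user guidance.
    exemplar_orients: list of higher-resolution orientation maps whose
    dimensions are integer multiples of (h, w).
    """
    h, w = input_orient.shape
    best, best_err = None, np.inf
    for i, ex in enumerate(exemplar_orients):
        fh, fw = ex.shape[0] // h, ex.shape[1] // w
        # block-average the exemplar down to the input's resolution
        down = ex.reshape(h, fh, w, fw).mean(axis=(1, 3))
        err = np.mean((down - input_orient) ** 2)
        if err < best_err:
            best, best_err = i, err
    return best
```

The matched exemplar supplies fine-scale orientation detail the coarse guidance lacks, and the appearance exemplar's color information is then applied over it to synthesize the hair feature.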
