REDUCING COLLISION-BASED DEFECTS IN MOTION-STYLIZATION OF VIDEO CONTENT DEPICTING CLOSELY SPACED FEATURES

    Publication number: US20190259214A1

    Publication date: 2019-08-22

    Application number: US15899503

    Application date: 2018-02-20

    Applicant: Adobe Inc.

    Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied.
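The abstract's loop (apply a stylization, detect any collision it introduces between tracked points, then retry with a modified stylization until the new collision is gone) can be sketched as follows. This is a minimal illustration, not the patented method: the "stylization" is a toy horizontal shift, and the collision test is a simple distance threshold.

```python
def stylize(trajectory, strength):
    """Toy stylization: shift every point of a trajectory horizontally."""
    return [(x + strength, y) for x, y in trajectory]

def has_collision(traj_a, traj_b, threshold=0.5):
    """True if the two tracked points ever come within `threshold`."""
    return any(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < threshold
        for (ax, ay), (bx, by) in zip(traj_a, traj_b)
    )

def stylize_without_new_collisions(traj_a, traj_b, strength, threshold=0.5):
    """Weaken the stylization until it introduces no *new* collision."""
    baseline = has_collision(traj_a, traj_b, threshold)
    while strength > 0:
        sa = stylize(traj_a, strength)    # first change to the trajectories
        sb = stylize(traj_b, -strength)
        # keep only if no collision absent from the input has appeared
        if baseline or not has_collision(sa, sb, threshold):
            return sa, sb, strength
        strength /= 2                     # second, modified stylization
    return traj_a, traj_b, 0.0
```

With two points starting two units apart, a full-strength shift of 1.0 toward each other collides, so the routine falls back to strength 0.5.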

    Garment rendering techniques
    Invention grant

    Publication number: US12165260B2

    Publication date: 2024-12-10

    Application number: US17715646

    Application date: 2022-04-07

    Abstract: Systems and methods are described for rendering garments. The system includes a first machine learning model trained to generate coarse garment templates of a garment and a second machine learning model trained to render garment images. The first machine learning model generates a coarse garment template based on position data. The system produces a neural texture for the garment, the neural texture comprising a multi-dimensional feature map characterizing detail of the garment. The system provides the coarse garment template and the neural texture to the second machine learning model trained to render garment images. The second machine learning model generates a rendered garment image of the garment based on the coarse garment template of the garment and the neural texture.
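The two-stage pipeline in the abstract (a first model producing a coarse garment template from position data, a multi-channel "neural texture" carrying detail, and a second model rendering the image from both) can be sketched as below. The stub models are placeholders standing in for the trained networks, not the patented implementation.

```python
import numpy as np

class CoarseGarmentModel:
    """Stand-in for the first ML model: position data -> coarse template."""
    def __call__(self, positions: np.ndarray) -> np.ndarray:
        # e.g. average the positions into a low-frequency garment surface
        return positions.mean(axis=0, keepdims=True).repeat(len(positions), 0)

class NeuralRenderer:
    """Stand-in for the second ML model: (template, neural texture) -> image."""
    def __call__(self, template: np.ndarray, neural_texture: np.ndarray):
        h, w, c = neural_texture.shape
        # project the c-channel feature map down to a 3-channel RGB image
        projection = np.ones((c, 3)) / c
        return (neural_texture.reshape(-1, c) @ projection).reshape(h, w, 3)

positions = np.random.rand(100, 3)          # garment position data
neural_texture = np.random.rand(16, 16, 8)  # 8-dim feature map per texel

template = CoarseGarmentModel()(positions)            # coarse garment template
image = NeuralRenderer()(template, neural_texture)    # rendered garment image
```

The key structural point is that fine detail lives in the learned feature map rather than in the coarse geometry, so the renderer combines the two.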

    DIGITAL IMAGE DECALING
    Invention application

    Publication number: US20240378809A1

    Publication date: 2024-11-14

    Application number: US18316490

    Application date: 2023-05-12

    Applicant: Adobe Inc.

    Abstract: Decal application techniques as implemented by a computing device are described to perform decaling of a digital image. In one example, features of a digital image learned using machine learning are used by a computing device as a basis to predict the surface geometry of an object in the digital image. Once the surface geometry of the object is predicted, machine learning techniques are then used by the computing device to configure an overlay object to be applied onto the digital image according to the predicted surface geometry of the object.
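The two-step flow the abstract describes (predict the object's surface geometry, then conform the overlay to it) can be illustrated with a toy height field. The geometry "predictor" below is a stub standing in for the learned model, and all names are hypothetical.

```python
def predict_surface_height(x, y):
    """Stub for the learned geometry predictor: surface height at (x, y)."""
    return x / 2  # a plane tilted along x, purely illustrative

def apply_decal(decal, origin, scale=1.0):
    """Place each decal texel on the image, offsetting it by the predicted
    surface height so the decal appears to follow the object's geometry."""
    placed = {}
    for (dx, dy), value in decal.items():
        x, y = origin[0] + dx * scale, origin[1] + dy * scale
        # shift the texel vertically by the predicted surface height
        placed[(x, y + predict_surface_height(x, y))] = value
    return placed

decal = {(0, 0): "a", (1, 0): "b"}        # a 2-texel overlay object
placed = apply_decal(decal, origin=(10, 5))
```

A real system would warp the decal with a dense displacement or UV map predicted per pixel; the dictionary here only makes the geometry-dependent offset visible.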

    Systems and methods for mesh generation

    Publication number: US12067680B2

    Publication date: 2024-08-20

    Application number: US17816813

    Application date: 2022-08-02

    Applicant: ADOBE INC.

    CPC classification number: G06T17/20

    Abstract: Systems and methods for mesh generation are described. One aspect of the systems and methods includes receiving an image depicting a visible portion of a body; generating an intermediate mesh representing the body based on the image; generating visibility features indicating whether parts of the body are visible based on the image; generating parameters for a morphable model of the body based on the intermediate mesh and the visibility features; and generating an output mesh representing the body based on the parameters for the morphable model, wherein the output mesh includes a non-visible portion of the body that is not depicted by the image.
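The abstract's pipeline (fit parameters of a morphable body model from the intermediate mesh using only the visible parts, then decode a complete mesh including occluded vertices) can be sketched with a linear morphable model (mean shape plus parameter-weighted basis). That linear form is a common stand-in, not necessarily the model in the patent; the regressor here is an explicit least-squares fit rather than a learned network.

```python
import numpy as np

np.random.seed(0)
n_verts, n_params = 6, 2
mean_shape = np.zeros((n_verts, 3))
basis = np.random.rand(n_params, n_verts, 3)   # shape blendshapes

def regress_parameters(intermediate_mesh, visibility):
    """Fit model parameters using only the visible vertices."""
    A = basis[:, visibility].reshape(n_params, -1).T
    b = (intermediate_mesh[visibility] - mean_shape[visibility]).ravel()
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

def decode_full_mesh(params):
    """Morphable-model decoder: parameters -> complete mesh, including
    vertices that were not visible in the image."""
    return mean_shape + np.tensordot(params, basis, axes=1)

visibility = np.array([True, True, True, True, False, False])
intermediate = decode_full_mesh(np.array([0.5, -0.2]))  # toy "ground truth"
params = regress_parameters(intermediate, visibility)
full_mesh = decode_full_mesh(params)  # output mesh covers all 6 vertices
```

Because the morphable model constrains the whole body, fitting it on visible vertices alone still determines the non-visible portion of the output mesh.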

    GENERATING ANIMATED DIGITAL VIDEOS UTILIZING A CHARACTER ANIMATION NEURAL NETWORK INFORMED BY POSE AND MOTION EMBEDDINGS

    Publication number: US20230123820A1

    Publication date: 2023-04-20

    Application number: US17502714

    Application date: 2021-10-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a character animation neural network informed by motion and pose signatures to generate a digital video through person-specific appearance modeling and motion retargeting. In particular embodiments, the disclosed systems implement a character animation neural network that includes a pose embedding model to encode a pose signature into spatial pose features. The character animation neural network further includes a motion embedding model to encode a motion signature into motion features. In some embodiments, the disclosed systems utilize the motion features to refine per-frame pose features and improve temporal coherency. In certain implementations, the disclosed systems also utilize the motion features to demodulate neural network weights used to generate an image frame of a character in motion based on the refined pose features.
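The weight-demodulation step the abstract mentions can be sketched in the style popularized by StyleGAN2: a feature vector (here, the motion features) modulates the convolution weights per input channel, and each output channel's weight tensor is then rescaled to unit norm. Shapes and names are illustrative assumptions, not the patented network.

```python
import numpy as np

def demodulate_weights(weights, motion_features, eps=1e-8):
    """Modulate conv weights per input channel by the motion features, then
    demodulate: normalize each output channel's weights to unit L2 norm."""
    # weights: (out_ch, in_ch, k, k); motion_features: (in_ch,)
    modulated = weights * motion_features[None, :, None, None]
    norm = np.sqrt((modulated ** 2).sum(axis=(1, 2, 3), keepdims=True) + eps)
    return modulated / norm

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 8, 3, 3))  # generator conv weights
motion = rng.standard_normal(8)              # motion features for one frame
demod = demodulate_weights(weights, motion)
```

Demodulation lets the motion signal steer the generator while keeping each output channel's activation scale stable across frames.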
