VIDEO EDITING USING IMAGE DIFFUSION

    Publication No.: US20250111866A1

    Publication Date: 2025-04-03

    Application No.: US18479626

    Application Date: 2023-10-02

    Applicant: Adobe Inc.

    Abstract: Embodiments are disclosed for editing video using image diffusion. The method may include receiving an input video depicting a target and a prompt including an edit to be made to the target. A keyframe associated with the input video is then identified. The keyframe is edited, using a generative neural network, based on the prompt to generate an edited keyframe. A subsequent frame of the input video is edited using the generative neural network, based on the prompt, features of the edited keyframe, and features of an intervening frame to generate an edited output video.
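
    Below is a minimal Python sketch of the keyframe-propagation flow this abstract describes; the `DiffusionEditor` class and its `edit`/`features` methods are hypothetical stand-ins for the generative neural network, not the patented implementation.

```python
# Hypothetical sketch only: `DiffusionEditor` stands in for a prompt-conditioned
# image diffusion model; it is not the patented network.
from typing import List, Optional

import numpy as np


class DiffusionEditor:
    """Placeholder for a prompt-conditioned image diffusion editor."""

    def edit(self, frame: np.ndarray, prompt: str,
             guidance: Optional[List[np.ndarray]] = None) -> np.ndarray:
        # A real model would denoise `frame` conditioned on `prompt` and,
        # optionally, on features of previously edited frames.
        return frame.copy()

    def features(self, frame: np.ndarray) -> np.ndarray:
        # A real model would expose intermediate attention/feature maps.
        return frame.mean(axis=-1)


def edit_video(frames: List[np.ndarray], prompt: str,
               keyframe_idx: int = 0) -> List[np.ndarray]:
    editor = DiffusionEditor()
    # 1. Edit the keyframe from the prompt alone.
    edited_key = editor.edit(frames[keyframe_idx], prompt)
    key_feats = editor.features(edited_key)

    edited = [edited_key]
    prev_feats = key_feats
    # 2. Propagate: each subsequent frame is conditioned on the prompt, the
    #    edited keyframe's features, and the preceding (intervening) frame's
    #    features, which keeps the edit temporally consistent.
    for frame in frames[keyframe_idx + 1:]:
        out = editor.edit(frame, prompt, guidance=[key_feats, prev_feats])
        prev_feats = editor.features(out)
        edited.append(out)
    return edited


video = [np.random.rand(64, 64, 3).astype(np.float32) for _ in range(8)]
edited_video = edit_video(video, "make the car red")
```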

    CREATING CINEMAGRAPHS FROM A SINGLE IMAGE

    Publication No.: US20240404155A1

    Publication Date: 2024-12-05

    Application No.: US18325645

    Application Date: 2023-05-30

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize neural networks to generate cinemagraphs from single RGB images. For example, the cyclic animation system includes a cyclic animation neural network trained with synthetic data, in which different wind effects are replicated using physically based simulations to create cyclic videos more efficiently. More specifically, the cyclic animation system generalizes the solution by operating in the gradient domain and using surface normal maps. Because normal maps are invariant to appearance (color, texture, illumination, etc.), the gap between the synthetic and real data distributions is smaller in normal-map space than in RGB space. The cyclic animation system applies a reshading approach that synthesizes RGB pixels from the original image and the animated normal maps, producing plausible changes to the real image that form the cinemagraph.
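
    As a rough illustration of the normal-map animation and reshading steps (not the patented cyclic animation network), the sketch below perturbs a surface-normal map with a looping phase and re-lights the original RGB pixels under a simple Lambertian model; the shading model and function names are assumptions.

```python
# Illustrative sketch: animate a normal map with a cyclic phase and reshade
# the original RGB image; the Lambertian ratio-based reshading is an assumption.
import numpy as np


def cyclic_normal_offset(normals: np.ndarray, t: float, period: int) -> np.ndarray:
    """Perturb normals with a looping (cyclic) displacement."""
    phase = 2.0 * np.pi * t / period
    sway = 0.05 * np.sin(phase)            # small periodic tilt
    out = normals.copy()
    out[..., 0] += sway                    # tilt the x-component of the normals
    return out / np.linalg.norm(out, axis=-1, keepdims=True)


def reshade(rgb: np.ndarray, base_normals: np.ndarray,
            new_normals: np.ndarray, light_dir=(0.0, 0.0, 1.0)) -> np.ndarray:
    """Re-light RGB pixels using the ratio of new to original shading."""
    light = np.asarray(light_dir, dtype=np.float32)
    light /= np.linalg.norm(light)
    old_shade = np.clip(base_normals @ light, 1e-3, None)
    new_shade = np.clip(new_normals @ light, 1e-3, None)
    return np.clip(rgb * (new_shade / old_shade)[..., None], 0.0, 1.0)


# Build a looping sequence of reshaded frames from one image + one normal map.
H, W, period = 64, 64, 30
rgb = np.random.rand(H, W, 3).astype(np.float32)
normals = np.tile(np.array([0.0, 0.0, 1.0], np.float32), (H, W, 1))
frames = [reshade(rgb, normals, cyclic_normal_offset(normals, t, period))
          for t in range(period)]
```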

    Resolving garment collisions using neural networks

    Publication No.: US11978144B2

    Publication Date: 2024-05-07

    Application No.: US17875081

    Application Date: 2022-07-27

    Applicant: Adobe Inc.

    CPC classification number: G06T13/40 G06T2210/16 G06T2210/21

    Abstract: Embodiments are disclosed for using machine learning models to perform three-dimensional garment deformation due to character body motion with collision handling. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input, the input including character body shape parameters and character body pose parameters defining a character body, and garment parameters. The disclosed systems and methods further comprise generating, by a first neural network, a first set of garment vertices defining deformations of a garment with the character body based on the input. The disclosed systems and methods further comprise determining, by a second neural network, that the first set of garment vertices includes a second set of garment vertices penetrating the character body. The disclosed systems and methods further comprise modifying, by a third neural network, each garment vertex in the second set of garment vertices to positions outside the character body.
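
    The three-network pipeline reads as a deform / detect / repair chain. The sketch below mirrors that structure with tiny placeholder MLPs in PyTorch; the architectures, dimensions, and thresholds are illustrative assumptions, not the patented models.

```python
# Schematic three-stage pipeline mirroring the abstract; the tiny MLPs here
# are placeholders, not the patented architectures.
import torch
import torch.nn as nn

N_VERTS = 512  # assumed garment vertex count for the sketch


class DeformNet(nn.Module):       # stage 1: predict deformed garment vertices
    def __init__(self, in_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, N_VERTS * 3))

    def forward(self, body_shape, body_pose, garment_params):
        x = torch.cat([body_shape, body_pose, garment_params], dim=-1)
        return self.mlp(x).view(-1, N_VERTS, 3)


class CollisionNet(nn.Module):    # stage 2: flag vertices penetrating the body
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, verts):
        return torch.sigmoid(self.mlp(verts)).squeeze(-1) > 0.5  # bool mask


class RepairNet(nn.Module):       # stage 3: push penetrating vertices outside
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, verts, mask):
        offset = self.mlp(verts)
        return torch.where(mask.unsqueeze(-1), verts + offset, verts)


shape, pose, garment = torch.randn(1, 10), torch.randn(1, 72), torch.randn(1, 4)
deform, collide, repair = DeformNet(10 + 72 + 4), CollisionNet(), RepairNet()
verts = deform(shape, pose, garment)          # first set of garment vertices
mask = collide(verts)                         # second set: penetrating vertices
resolved = repair(verts, mask)                # moved to positions outside the body
```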

    SYSTEMS AND METHODS FOR MESH GENERATION

    Publication No.: US20240046566A1

    Publication Date: 2024-02-08

    Application No.: US17816813

    Application Date: 2022-08-02

    Applicant: ADOBE INC.

    CPC classification number: G06T17/20

    Abstract: Systems and methods for mesh generation are described. One aspect of the systems and methods includes receiving an image depicting a visible portion of a body; generating an intermediate mesh representing the body based on the image; generating visibility features indicating whether parts of the body are visible based on the image; generating parameters for a morphable model of the body based on the intermediate mesh and the visibility features; and generating an output mesh representing the body based on the parameters for the morphable model, wherein the output mesh includes a non-visible portion of the body that is not depicted by the image.
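
    A conceptual sketch of the described pipeline with placeholder components is shown below; the regressors and the morphable body model (dimensioned like an SMPL-style model) are stand-ins for illustration only.

```python
# Conceptual pipeline sketch: image -> intermediate mesh + visibility features
# -> morphable-model parameters -> complete output mesh. All components are
# placeholders, not the patented networks.
import numpy as np


def intermediate_mesh_from_image(image: np.ndarray) -> np.ndarray:
    # Placeholder: a real network would regress per-vertex positions.
    return np.zeros((6890, 3), dtype=np.float32)


def visibility_features(image: np.ndarray, n_parts: int = 24) -> np.ndarray:
    # Placeholder: one flag per body part, 1.0 if the part is visible.
    return np.ones(n_parts, dtype=np.float32)


def regress_morphable_params(mesh: np.ndarray, vis: np.ndarray) -> dict:
    # Placeholder regressor: shape coefficients + pose parameters.
    return {"shape": np.zeros(10, np.float32), "pose": np.zeros(72, np.float32)}


def morphable_model(params: dict) -> np.ndarray:
    # Placeholder decoder: parameters -> full-body mesh, including the
    # non-visible portion of the body.
    return np.zeros((6890, 3), dtype=np.float32)


image = np.zeros((256, 256, 3), dtype=np.float32)
inter = intermediate_mesh_from_image(image)
vis = visibility_features(image)
params = regress_morphable_params(inter, vis)
output_mesh = morphable_model(params)   # complete mesh, occluded parts included
```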

    INSERTING THREE-DIMENSIONAL OBJECTS INTO DIGITAL IMAGES WITH CONSISTENT LIGHTING VIA GLOBAL AND LOCAL LIGHTING INFORMATION

    Publication No.: US20230360320A1

    Publication Date: 2023-11-09

    Application No.: US18354619

    Application Date: 2023-07-18

    Applicant: Adobe Inc.

    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of the lighting in a digital image. Additionally, the disclosed system determines the points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map. Additionally, the disclosed system generates a modified digital image with the three-dimensional object inserted into the digital image with lighting that is consistent between the three-dimensional object and the digital image.
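
    The self-occlusion map is the most concrete step to sketch: for each visible surface point, cast a fixed, uniformly sampled set of rays and record which ones re-intersect the object. In the toy version below the object is approximated by a bounding sphere, which is an assumption; a real system would intersect rays against the object's mesh.

```python
# Toy self-occlusion map: uniformly sampled ray directions per point, with a
# bounding-sphere intersection test standing in for a mesh intersection test.
import numpy as np


def fibonacci_directions(n: int) -> np.ndarray:
    """Roughly uniform ray directions on the unit sphere."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * i / n)
    theta = np.pi * (1.0 + 5.0 ** 0.5) * i
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=-1)


def ray_hits_sphere(origin, direction, center, radius) -> bool:
    """Illustrative stand-in for a mesh intersection test (bounding sphere)."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b + np.sqrt(disc)          # farthest intersection along the ray
    return t > 1e-4                 # hit only counts if it lies ahead of the origin


def self_occlusion_map(points: np.ndarray, n_rays: int = 64) -> np.ndarray:
    dirs = fibonacci_directions(n_rays)
    occ = np.zeros((len(points), n_rays), dtype=np.float32)
    center, radius = points.mean(axis=0), 1.0        # crude proxy geometry
    for p_idx, p in enumerate(points):
        origin = p + 1e-3 * (p - center)             # nudge the origin outward slightly
        for r_idx, d in enumerate(dirs):
            occ[p_idx, r_idx] = float(ray_hits_sphere(origin, d, center, radius))
    return occ   # fed to the generator network together with the light embedding


points = np.random.randn(100, 3).astype(np.float32)
occlusion = self_occlusion_map(points)
```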

    Motion Retargeting with Kinematic Constraints

    Publication No.: US20210343059A1

    Publication Date: 2021-11-04

    Application No.: US16864724

    Application Date: 2020-05-01

    Applicant: Adobe Inc.

    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of the retargeted motion of the target object to the identified kinematic constraints.
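
    A toy version of the iterative optimization can be written as gradient descent on a data term (stay close to the source motion) plus a kinematic-constraint penalty (here, a foot-contact constraint); the energy terms, weights, and joint indexing below are illustrative assumptions.

```python
# Toy constraint-aware retargeting: nudge joint positions toward the source
# motion while penalizing violations of a foot-on-floor kinematic constraint.
import numpy as np

FLOOR_Y = 0.0
FOOT = 3                                  # index of the constrained joint (assumed)


def retarget(source: np.ndarray, n_iters: int = 300, lr: float = 0.01,
             w_constraint: float = 10.0) -> np.ndarray:
    """source: (frames, joints, 3) joint positions; returns a retargeted copy."""
    target = source.copy()
    for _ in range(n_iters):
        # Gradient of the data term: stay close to the source motion.
        grad = 2.0 * (target - source)
        # Gradient of the constraint term: keep the foot joint on the floor.
        foot_err = target[:, FOOT, 1] - FLOOR_Y
        grad[:, FOOT, 1] += 2.0 * w_constraint * foot_err
        target -= lr * grad
    return target


motion = np.random.randn(120, 24, 3).astype(np.float32)   # 120 frames, 24 joints
retargeted = retarget(motion)
```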

    Generative Shape Creation and Editing

    Publication No.: US20210264649A1

    Publication Date: 2021-08-26

    Application No.: US17317246

    Application Date: 2021-05-11

    Applicant: Adobe Inc.

    Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
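
    To make the handle-to-SDF conversion concrete, the sketch below treats each handle as a sphere (an assumption; the patent's handles are more general visual elements) and samples the union signed distance field on a regular grid, which is the kind of representation the handle processor model would be trained on.

```python
# Handles -> signed distance field sketch: each handle is approximated as a
# sphere (center, radius), and the union SDF is sampled on a regular grid.
import numpy as np


def sphere_sdf(points: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    return np.linalg.norm(points - center, axis=-1) - radius


def handles_to_sdf(handles, resolution: int = 32) -> np.ndarray:
    """handles: list of (center, radius). Returns a (R, R, R) SDF grid."""
    axis = np.linspace(-1.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    pts = grid.reshape(-1, 3)
    sdf = np.full(len(pts), np.inf, dtype=np.float32)
    for center, radius in handles:
        # Union of handle SDFs = pointwise minimum.
        sdf = np.minimum(sdf, sphere_sdf(pts, np.asarray(center), radius))
    return sdf.reshape(resolution, resolution, resolution)


# Two-handle toy shape; a grid like this would be the training representation
# consumed by the handle processor model described in the abstract.
sdf_grid = handles_to_sdf([((0.0, 0.0, 0.0), 0.4), ((0.5, 0.0, 0.0), 0.3)])
```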

    INTUITIVE EDITING OF THREE-DIMENSIONAL MODELS

    Publication No.: US20210256775A1

    Publication Date: 2021-08-19

    Application No.: US17208627

    Application Date: 2021-03-22

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
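
    The sketch below illustrates the grouping idea with a toy feature representation: salient features with similar attributes (here, similar radii) are collected into one feature set, and a single editing handle scales all of them together. The data structures and grouping rule are assumptions for illustration.

```python
# Toy illustration of feature-set grouping and a shared editing handle; the
# feature attributes and grouping tolerance are assumptions.
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class Feature:
    center: np.ndarray
    radius: float


@dataclass
class EditHandle:
    features: List[Feature] = field(default_factory=list)

    def scale(self, factor: float) -> None:
        # One manipulation edits every related feature in the set.
        for f in self.features:
            f.radius *= factor


def group_features(features: List[Feature], tol: float = 0.05) -> List[EditHandle]:
    handles: List[EditHandle] = []
    for f in features:
        for h in handles:
            if abs(h.features[0].radius - f.radius) < tol:
                h.features.append(f)
                break
        else:
            handles.append(EditHandle(features=[f]))
    return handles


feats = [Feature(np.array([x, 0.0, 0.0]), 0.1) for x in range(4)]   # e.g. four similar holes
handles = group_features(feats)
handles[0].scale(1.5)        # enlarges all four related features together
```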

    Generating augmented reality objects on real-world surfaces using a digital writing device

    Publication No.: US10825253B2

    Publication Date: 2020-11-03

    Application No.: US16375549

    Application Date: 2019-04-04

    Applicant: Adobe Inc.

    Abstract: The present disclosure includes systems, methods, computer readable media, and devices that can generate accurate augmented reality objects based on tracking a writing device in relation to a real-world surface. In particular, the systems and methods described herein can detect an initial location of a writing device and further track movement of the writing device on a real-world surface based on one or more sensory inputs. For example, the disclosed systems and methods can generate an augmented reality object based on pressure detected at the tip of the writing device, based on the orientation of the writing device, based on motion detector elements of the writing device (e.g., reflective materials, emitters, or object tracking shapes), and/or based on optical sensors. The systems and methods further render augmented reality objects that appear on the real-world surface within an augmented reality environment based on tracking the movement of the writing device.
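
    A simplified tracking loop might fuse pressure, orientation, and surface-position samples into stroke points that are then rendered as an augmented reality object on the surface; the sensor record format and width formula below are assumptions.

```python
# Simplified sketch: turn writing-device sensor samples into stroke points
# anchored on the real-world surface. The record layout is an assumption.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SensorSample:
    position: Tuple[float, float]   # (x, y) on the detected surface plane
    pressure: float                 # tip pressure, 0..1
    tilt_deg: float                 # orientation of the device


@dataclass
class StrokePoint:
    x: float
    y: float
    width: float                    # stroke width derived from pressure and tilt


def build_stroke(samples: List[SensorSample],
                 min_pressure: float = 0.05) -> List[StrokePoint]:
    stroke = []
    for s in samples:
        if s.pressure < min_pressure:      # pen lifted: no mark
            continue
        width = 1.0 + 4.0 * s.pressure * (1.0 - s.tilt_deg / 90.0)
        stroke.append(StrokePoint(s.position[0], s.position[1], width))
    return stroke                           # rendered as an AR object on the surface


samples = [SensorSample((0.0, i * 0.01), 0.6, 20.0) for i in range(50)]
stroke = build_stroke(samples)
```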

    Editing digital images utilizing a neural network with an in-network rendering layer

    Publication No.: US10430978B2

    Publication Date: 2019-10-01

    Application No.: US15448206

    Application Date: 2017-03-02

    Applicant: Adobe Inc.

    Abstract: The present disclosure includes methods and systems for generating modified digital images utilizing a neural network that includes a rendering layer. In particular, the disclosed systems and methods can train a neural network to decompose an input digital image into intrinsic physical properties (e.g., material, illumination, and shape). Moreover, the systems and methods can substitute one of the intrinsic physical properties for a target property (e.g., a modified material, illumination, or shape). The systems and methods can utilize a rendering layer trained to synthesize a digital image to generate a modified digital image based on the target property and the remaining (unsubstituted) intrinsic physical properties. The systems and methods can increase the accuracy of modified digital images by generating modified digital images that realistically reflect a confluence of the intrinsic physical properties of the input digital image and the target (i.e., modified) properties.
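
    The edit flow can be sketched as decompose, substitute, re-render. Below, a simple differentiable Lambertian shading function plays the role of the in-network rendering layer and the decomposition is a placeholder; neither is the patented architecture.

```python
# Toy decompose -> substitute -> re-render sketch; the shading model and the
# placeholder decomposition are assumptions, not the patented network.
import torch


def render_layer(albedo: torch.Tensor, normals: torch.Tensor,
                 light_dir: torch.Tensor) -> torch.Tensor:
    """Simple in-network rendering: albedo * max(n . l, 0)."""
    light = light_dir / light_dir.norm()
    shading = torch.clamp((normals * light).sum(dim=-1, keepdim=True), min=0.0)
    return albedo * shading


def decompose(image: torch.Tensor):
    # Placeholder for the encoder that predicts intrinsic properties.
    h, w, _ = image.shape
    albedo = image.clone()
    normals = torch.zeros(h, w, 3)
    normals[..., 2] = 1.0
    light = torch.tensor([0.0, 0.0, 1.0])
    return albedo, normals, light


image = torch.rand(64, 64, 3)
albedo, normals, light = decompose(image)
target_light = torch.tensor([0.5, 0.5, 1.0])            # substituted target property
edited = render_layer(albedo, normals, target_light)    # modified digital image
```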
