-
Publication No.: US20240144623A1
Publication Date: 2024-05-02
Application No.: US18304147
Filing Date: 2023-04-20
Applicant: Adobe Inc.
Inventor: Giorgio Gori , Yi Zhou , Yangtuanfeng Wang , Yang Zhou , Krishna Kumar Singh , Jae Shin Yoon , Duygu Ceylan Aksit
CPC classification number: G06T19/20 , G06T7/70 , G06T15/00 , G06T17/00 , G06T2200/24 , G06T2207/20084 , G06T2207/30196 , G06T2207/30244 , G06T2219/2004
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.
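As context for the shadow-map portion of the abstract, the sketch below shows a conventional shadow-map depth test applied per pixel, using a 3D point recovered for each pixel of the 2D image. It is a generic illustration, not the disclosed system; the function, argument names, and the square-shadow-map assumption are all hypothetical.

```python
import numpy as np

def apply_shadow_map(image, world_pts, light_view_proj, shadow_map, bias=1e-3, darkening=0.5):
    """Darken pixels of a 2D image that a shadow map marks as occluded.

    image:           (H, W, 3) float array in [0, 1]
    world_pts:       (H, W, 3) per-pixel 3D points from the image's 3D representation
    light_view_proj: (4, 4) matrix mapping world space to the light's clip space
    shadow_map:      (S, S) depth buffer rendered from the light's point of view
    """
    h, w, _ = image.shape
    pts = np.concatenate([world_pts, np.ones((h, w, 1))], axis=-1)       # homogeneous coordinates
    clip = pts @ light_view_proj.T                                        # project into light space
    ndc = clip[..., :3] / np.clip(clip[..., 3:4], 1e-8, None)             # perspective divide
    uv = ((ndc[..., :2] * 0.5 + 0.5) * (shadow_map.shape[0] - 1)).astype(int)
    uv = np.clip(uv, 0, shadow_map.shape[0] - 1)
    occluder_depth = shadow_map[uv[..., 1], uv[..., 0]]
    in_shadow = ndc[..., 2] > occluder_depth + bias                       # farther than the stored depth
    shaded = image.copy()
    shaded[in_shadow] *= darkening                                        # darken occluded pixels
    return shaded
```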
-
Publication No.: US20240144574A1
Publication Date: 2024-05-02
Application No.: US18397413
Filing Date: 2023-12-27
Applicant: Adobe Inc.
Inventor: Jun Saito , Jimei Yang , Duygu Ceylan Aksit
Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions that are used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Yet further, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
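The global scale factor mentioned above can be pictured as rescaling each frame's tracked feature positions so that a reference segment has a fixed canonical length, compensating for the subject moving toward or away from the camera. The sketch below is an assumption-laden illustration of that idea, not the claimed normalization; all names are hypothetical.

```python
import numpy as np

def normalize_pose_scale(features, ref_pair, canonical_length):
    """Rescale tracked 2D feature positions so a reference segment has a fixed length.

    features:         (N, 2) tracked feature positions for one frame
    ref_pair:         indices (i, j) of two features whose distance drives the scale
    canonical_length: desired distance between the two reference features
    """
    i, j = ref_pair
    observed = np.linalg.norm(features[i] - features[j])
    scale = canonical_length / max(observed, 1e-8)   # global scale factor for this frame
    center = features.mean(axis=0)
    return (features - center) * scale + center, scale
```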
-
Publication No.: US20240135572A1
Publication Date: 2024-04-25
Application No.: US18190636
Filing Date: 2023-03-27
Applicant: Adobe Inc.
Inventor: Krishna Kumar Singh , Yijun Li , Jingwan Lu , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jimei Yang , Tobias Hinz
CPC classification number: G06T7/70 , G06T7/40 , G06V10/44 , G06V10/771 , G06V10/806 , G06V10/82 , G06T2207/20081 , G06T2207/30196
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments, the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portray a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, in some embodiments, the disclosed systems perform facial expression transfer and facial expression animations to generate modified digital images or animations.
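The infill and human-inpainting step described above reduces, at its simplest, to synthesizing content for a masked region and compositing it back over the original pixels. The sketch below shows only that generic masked-compositing wrapper around an abstract `generator` callable; it is not the disclosed generative model, and every name in it is illustrative.

```python
import numpy as np

def infill_region(image, mask, generator):
    """Complete the masked region of an image with a generative model's output.

    image:     (H, W, 3) float array in [0, 1]
    mask:      (H, W) bool array, True where content should be synthesized
    generator: callable(image, mask) -> (H, W, 3) synthesized image
    """
    synthesized = generator(image, mask)
    return np.where(mask[..., None], synthesized, image)   # keep original pixels outside the mask

# Toy usage with a stand-in "generator" that fills the hole with the mean of the context.
if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)
    hole = np.zeros((64, 64), dtype=bool)
    hole[20:40, 20:40] = True
    mean_fill = lambda im, m: np.zeros_like(im) + im[~m].mean(axis=0)
    out = infill_region(img, hole, mean_fill)
```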
-
Publication No.: US11682166B2
Publication Date: 2023-06-20
Application No.: US17201783
Filing Date: 2021-03-15
Applicant: ADOBE INC.
Inventor: Eric-Tuan Le , Duygu Ceylan Aksit , Tamy Boubekeur , Radomir Mech , Niloy Mitra , Minhyuk Sung
Abstract: Embodiments provide systems, methods, and computer storage media for fitting 3D primitives to a 3D point cloud. In an example embodiment, 3D primitives are fit to a 3D point cloud using a global primitive fitting network that evaluates the entire 3D point cloud and a local primitive fitting network that evaluates local patches of the 3D point cloud. The global primitive fitting network regresses a representation of larger (global) primitives that fit the global structure. To identify smaller 3D primitives for regions with fine detail, local patches are constructed by sampling from a pool of points likely to contain fine detail, and the local primitive fitting network regresses a representation of smaller (local) primitives that fit the local structure of each of the local patches. The global and local primitives are merged into a combined, multi-scale set of fitted primitives, and representative primitive parameters are computed for each fitted primitive.
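As a loose illustration of combining globally and locally fitted primitives into one multi-scale set, the sketch below merges two lists of primitive parameter vectors and drops local fits that nearly duplicate a global one. The representation, tolerance, and names are assumptions, not the patent's actual merging procedure.

```python
import numpy as np

def merge_primitive_sets(global_prims, local_prims, dedup_tol=0.05):
    """Combine globally and locally fitted primitives into one multi-scale set.

    global_prims, local_prims: (G, D) and (L, D) arrays of primitive parameter vectors
    dedup_tol: parameter-space distance below which a local primitive is treated as a
               duplicate of an existing primitive and dropped
    """
    merged = list(global_prims)
    for p in local_prims:
        dists = [np.linalg.norm(p - q) for q in merged]
        if not dists or min(dists) > dedup_tol:      # keep only genuinely new (fine-detail) primitives
            merged.append(p)
    return np.stack(merged) if merged else np.empty((0, local_prims.shape[1]))
```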
-
Publication No.: US11625881B2
Publication Date: 2023-04-11
Application No.: US17486269
Filing Date: 2021-09-27
Applicant: Adobe Inc.
Inventor: Ruben Eduardo Villegas , Jun Saito , Jimei Yang , Duygu Ceylan Aksit
Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of the retargeted motion of a target object to the identified kinematic constraints.
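The iterative optimization described above can be pictured as a loop that trades off fidelity to the retargeted motion against violation of the kinematic constraints. A minimal gradient-descent sketch follows; the energy terms, step size, and constraint format are illustrative assumptions rather than the claimed method.

```python
import numpy as np

def refine_to_constraints(motion, constraints, steps=200, lr=0.1, fidelity=1.0):
    """Iteratively adjust retargeted joint positions toward kinematic constraints.

    motion:      (T, J, 3) retargeted joint positions over T frames
    constraints: list of (frame, joint, target_xyz) tuples, e.g. foot-ground contacts
    """
    refined = motion.copy()
    for _ in range(steps):
        grad = fidelity * (refined - motion)                    # stay close to the retargeted motion
        for t, j, target in constraints:
            grad[t, j] += refined[t, j] - np.asarray(target)    # pull constrained joints to their targets
        refined -= lr * grad                                    # simple gradient step
    return refined
```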
-
Publication No.: US20230037339A1
Publication Date: 2023-02-09
Application No.: US17385559
Filing Date: 2021-07-26
Applicant: Adobe Inc.
Inventor: Ruben Villegas , Jun Saito , Jimei Yang , Duygu Ceylan Aksit , Aaron Hertzmann
Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
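A rough way to picture the two motion constraints named above, preserving the self-contact and preventing self-penetration, is as two penalty terms evaluated on the posed target mesh. The sketch below uses a brute-force vertex-distance proxy for penetration; a real system would use proper mesh collision tests, and all names here are hypothetical.

```python
import numpy as np

def contact_and_penetration_losses(verts, contact_pairs, min_sep=0.005):
    """Two constraint terms used when optimizing retargeted poses.

    verts:         (V, 3) posed target-mesh vertices (a small sampled subset in practice)
    contact_pairs: list of (i, j) vertex index pairs that should remain in contact
    min_sep:       minimum allowed distance between non-contacting vertices
    """
    # Keep each self-contact pair together (preserves the relative contact location).
    contact = sum(np.sum((verts[i] - verts[j]) ** 2) for i, j in contact_pairs)

    # Crude self-penetration proxy: penalize any vertex pair closer than min_sep.
    d = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    for i, j in contact_pairs:
        d[i, j] = d[j, i] = np.inf                     # contact pairs are allowed to touch
    penetration = np.sum(np.clip(min_sep - d, 0.0, None) ** 2)
    return contact, penetration
```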
-
Publication No.: US10755459B2
Publication Date: 2020-08-25
Application No.: US15297938
Filing Date: 2016-10-19
Applicant: Adobe Inc.
Inventor: Zhili Chen , Srinivasa Madhava Phaneendra Angara , Duygu Ceylan Aksit , Byungmoon Kim , Gahye Park
IPC: G06T11/60 , G06F3/0481 , G06T11/00
Abstract: Techniques and systems are described herein that support improved object painting in digital images through use of perspectives and transfers in a digital medium environment. In one example, a user interacts with a two-dimensional digital image in a user interface output by a computing device to apply digital paint. The computing device fits a three-dimensional model to an object within the image, e.g., the face. The object, as fit to the three-dimensional model, is used to support output of a plurality of perspectives of a view of the object with which a user may interact to digitally paint the object. As part of this, digital paint as specified through the user inputs is applied directly by the computing device to a two-dimensional texture map of the object. This may support transfer of digital paint by a computing device between objects by transferring the digital paint using respective two-dimensional texture maps.
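The transfer step described at the end of the abstract can be illustrated with a simple texel copy between UV texture maps, under the simplifying assumption (not stated in the abstract) that both objects share the same UV layout, as a common face topology would. Names and shapes below are illustrative.

```python
import numpy as np

def transfer_paint(src_texture, src_paint_mask, dst_texture):
    """Transfer painted texels from one object's UV texture map to another's.

    src_texture:    (T, T, 3) texture map of the painted object
    src_paint_mask: (T, T) bool array, True where digital paint was applied
    dst_texture:    (T, T, 3) texture map of the receiving object (same UV layout)
    """
    assert src_texture.shape == dst_texture.shape
    out = dst_texture.copy()
    out[src_paint_mask] = src_texture[src_paint_mask]   # copy only the painted texels
    return out
```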
-
Publication No.: US10515456B2
Publication Date: 2019-12-24
Application No.: US15928520
Filing Date: 2018-03-22
Applicant: Adobe Inc.
Inventor: Duygu Ceylan Aksit , Zhili Chen , Jose Ignacio Echevarria Vallespi , Kyle Olszewski
Abstract: Certain embodiments involve synthesizing image content depicting facial hair or other hair features based on orientation data obtained using guidance inputs or other user-provided guidance data. For instance, a graphic manipulation application accesses guidance data identifying a desired hair feature and an appearance exemplar having image data with color information for the desired hair feature. The graphic manipulation application transforms the guidance data into an input orientation map. The graphic manipulation application matches the input orientation map to an exemplar orientation map having a higher resolution than the input orientation map. The graphic manipulation application generates the desired hair feature by applying the color information from the appearance exemplar to the exemplar orientation map. The graphic manipulation application outputs the desired hair feature at a presentation device.
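One way to picture matching a coarse input orientation map against higher-resolution exemplar orientation maps is a block-averaged angular comparison, as sketched below. This is a guess at the general idea, not the disclosed matching procedure; every function and variable name is hypothetical.

```python
import numpy as np

def match_orientation_exemplar(input_map, exemplars):
    """Pick the exemplar orientation map that best matches a coarse input map.

    input_map: (h, w) orientation angles (radians) derived from user guidance strokes
    exemplars: list of (H, W) higher-resolution orientation maps, H >= h and W >= w
    """
    best, best_err = None, np.inf
    for ex in exemplars:
        # Downsample the exemplar to the input resolution by block averaging.
        fh, fw = ex.shape[0] // input_map.shape[0], ex.shape[1] // input_map.shape[1]
        coarse = ex[:fh * input_map.shape[0], :fw * input_map.shape[1]]
        coarse = coarse.reshape(input_map.shape[0], fh, input_map.shape[1], fw).mean(axis=(1, 3))
        # Angular difference, wrapped so near-opposite orientations compare sensibly.
        err = np.mean(np.sin(coarse - input_map) ** 2)
        if err < best_err:
            best, best_err = ex, err
    return best
```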
-
Publication No.: US10467822B2
Publication Date: 2019-11-05
Application No.: US15899503
Filing Date: 2018-02-20
Applicant: Adobe Inc.
Inventor: Rinat Abdrashitov , Jose Ignacio Echevarria Vallespi , Jingwan Lu , Elya Shechtman , Duygu Ceylan Aksit , David Simons
Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied.
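The collision check described above amounts to finding frames where two landmark trajectories come closer than some radius after stylization but not before. A small sketch of that test follows, with the threshold and array layout as assumptions rather than the disclosed implementation.

```python
import numpy as np

def new_collisions(original, stylized, i, j, radius=1.0):
    """Find frames where a stylization introduces a collision between two landmarks.

    original, stylized: (T, N, 2) landmark trajectories before and after stylization
    i, j:               indices of the two tracked points to compare
    radius:             distance below which the two points are considered colliding
    """
    d_orig = np.linalg.norm(original[:, i] - original[:, j], axis=-1)
    d_styl = np.linalg.norm(stylized[:, i] - stylized[:, j], axis=-1)
    return np.where((d_styl < radius) & (d_orig >= radius))[0]   # collisions created by the edit
```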
-
Publication No.: US10368047B2
Publication Date: 2019-07-30
Application No.: US15433333
Filing Date: 2017-02-15
Applicant: ADOBE INC.
Inventor: Zhili Chen , Duygu Ceylan Aksit , Jingwei Huang , Hailin Jin
IPC: H04N13/00 , H04N13/117 , H04N5/232 , G06F3/01 , H04N13/144 , H04N13/207 , H04N13/373 , H04N13/376 , H04N13/378 , H04N13/38 , H04N13/366 , G06T15/20 , H04N13/344
Abstract: A stereoscopic six-degree-of-freedom viewing experience with a monoscopic 360-degree video is provided. A monoscopic 360-degree video of a subject scene can be processed by analyzing each frame to recover a three-dimensional geometric representation and a camera motion path. Utilizing the recovered three-dimensional geometric representation and camera motion path, a dense three-dimensional geometric representation of the subject scene is generated. The processed video can be provided for stereoscopic display via a device. As motion of the device is detected, novel viewpoints can be stereoscopically synthesized for presentation in real time, so as to provide an immersive virtual reality experience based on the original monoscopic 360-degree video and the detected motion of the device.
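As a generic illustration (not the patent's pipeline) of synthesizing a stereo pair from recovered geometry and device motion, the sketch below projects a reconstructed point cloud into two virtual eye cameras offset from the tracked head pose. All parameters, including the eye offset, are placeholder assumptions.

```python
import numpy as np

def synthesize_stereo_views(points, colors, intrinsics, head_pose, eye_offset=0.032):
    """Project a dense 3D reconstruction into left/right virtual eye cameras.

    points:     (N, 3) world-space points recovered from the monoscopic 360 video
    colors:     (N, 3) per-point colors
    intrinsics: (3, 3) pinhole camera matrix for the virtual eyes
    head_pose:  (4, 4) world-from-head transform reported by the viewing device
    eye_offset: half the interpupillary distance, in metres
    """
    views = {}
    for name, sign in (("left", -1.0), ("right", 1.0)):
        eye = head_pose.copy()
        eye[:3, 3] += sign * eye_offset * eye[:3, 0]            # shift along the head's x-axis
        cam_from_world = np.linalg.inv(eye)
        cam = points @ cam_from_world[:3, :3].T + cam_from_world[:3, 3]
        pix = cam @ intrinsics.T
        views[name] = (pix[:, :2] / pix[:, 2:3], colors)        # pixel coordinates plus per-point colors
    return views
```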