-
Publication Number: US20200334894A1
Publication Date: 2020-10-22
Application Number: US16388187
Application Date: 2019-04-18
Applicant: Adobe Inc.
Inventor: Mai Long , Simon Niklaus , Jimei Yang
Abstract: Systems and methods are described for generating a three-dimensional (3D) effect from a two-dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
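A minimal sketch of the general idea behind this kind of pipeline (not the patented method): back-project pixels through a depth map into a point cloud with a pinhole camera model, then re-render the cloud along a small camera path; the occlusion gaps left in each rendered view are where inpainting would apply. All function and parameter names below are illustrative assumptions.

```python
import numpy as np

def backproject(image, depth, focal):
    """Lift each pixel to a 3D point using a pinhole camera model."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    X = (xs - w / 2) * depth / focal
    Y = (ys - h / 2) * depth / focal
    points = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
    colors = image.reshape(-1, image.shape[-1])
    return points, colors

def render(points, colors, translation, focal, h, w):
    """Project the (shifted) point cloud back onto an image plane."""
    shifted = points + translation                 # simulated camera motion
    z = np.clip(shifted[:, 2], 1e-3, None)
    u = (shifted[:, 0] * focal / z + w / 2).astype(int)
    v = (shifted[:, 1] * focal / z + h / 2).astype(int)
    out = np.zeros((h, w, colors.shape[-1]), dtype=colors.dtype)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    order = np.argsort(-z[valid])                  # draw far points first
    out[v[valid][order], u[valid][order]] = colors[valid][order]
    return out                                     # holes here would be inpainted

# Usage: sweep a small horizontal camera path to obtain intermediate views.
img = np.random.rand(120, 160, 3)
dep = np.full((120, 160), 2.0)
pts, cols = backproject(img, dep, focal=100.0)
frames = [render(pts, cols, np.array([t, 0.0, 0.0]), 100.0, 120, 160)
          for t in np.linspace(-0.1, 0.1, 5)]
```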
-
Publication Number: US20200226724A1
Publication Date: 2020-07-16
Application Number: US16246051
Application Date: 2019-01-11
Applicant: Adobe Inc.
Inventor: Chen Fang , Zhe Lin , Zhaowen Wang , Yulun Zhang , Yilin Wang , Jimei Yang
Abstract: In implementations of transferring image style to content of a digital image, an image editing system includes an encoder that extracts features from a content image and features from a style image. A whitening and color transform generates coarse features from the content and style features extracted by the encoder for one pass of encoding and decoding. Hence, the processing delay and memory requirements are low. A feature transfer module iteratively transfers style features to the coarse feature map and generates a fine feature map. The image editing system fuses the fine features with the coarse features, and a decoder generates an output image with content of the content image in a style of the style image from the fused features. Accordingly, the image editing system efficiently transfers an image style to image content in real-time, without undesirable artifacts in the output image.
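As an illustrative aside, the "whitening and color transform" the abstract refers to is commonly implemented as a whitening-and-coloring transform (WCT) on encoder feature maps: remove the content features' covariance, then impose the style features' covariance. The sketch below shows that step on generic feature arrays; it is not the patented network, and the shapes are placeholders.

```python
import numpy as np

def whiten_color_transform(content_feat, style_feat, eps=1e-5):
    """content_feat, style_feat: arrays of shape (channels, height*width)."""
    c_mean = content_feat.mean(axis=1, keepdims=True)
    s_mean = style_feat.mean(axis=1, keepdims=True)
    fc = content_feat - c_mean
    fs = style_feat - s_mean

    # Whitening: remove the content feature covariance.
    c_cov = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0])
    c_vals, c_vecs = np.linalg.eigh(c_cov)
    whiten = c_vecs @ np.diag(c_vals ** -0.5) @ c_vecs.T

    # Coloring: impose the style feature covariance.
    s_cov = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0])
    s_vals, s_vecs = np.linalg.eigh(s_cov)
    color = s_vecs @ np.diag(s_vals ** 0.5) @ s_vecs.T

    return color @ (whiten @ fc) + s_mean

# Example with stand-in "encoder" features: 64 channels over a 32x32 grid.
content = np.random.randn(64, 32 * 32)
style = np.random.randn(64, 32 * 32)
coarse = whiten_color_transform(content, style)   # would then be decoded to an image
```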
-
Publication Number: US10546408B2
Publication Date: 2020-01-28
Application Number: US15926787
Application Date: 2018-03-20
Applicant: Adobe Inc.
Inventor: Jimei Yang , Duygu Ceylan , Ruben Villegas
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a motion synthesis neural network with a forward kinematics layer to generate a motion sequence for a target skeleton based on an initial motion sequence for an initial skeleton. In certain embodiments, the methods, non-transitory computer readable media, and systems use a motion synthesis neural network comprising an encoder recurrent neural network, a decoder recurrent neural network, and a forward kinematics layer to retarget motion sequences. To train the motion synthesis neural network to retarget such motion sequences, in some implementations, the disclosed methods, non-transitory computer readable media, and systems modify parameters of the motion synthesis neural network based on one or both of an adversarial loss and a cycle consistency loss.
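For intuition only, a forward kinematics (FK) layer of the kind mentioned here maps per-joint local rotations plus a skeleton's bone offsets to world-space joint positions, so a network's rotation predictions can be supervised in position space. The toy skeleton and names below are assumptions for illustration, not the patent's definitions.

```python
import numpy as np

def forward_kinematics(rotations, offsets, parents, root_position):
    """
    rotations:     (num_joints, 3, 3) local rotation matrix per joint
    offsets:       (num_joints, 3) bone offset from each joint's parent
    parents:       list of parent indices, -1 for the root (parents precede children)
    root_position: (3,) world-space root translation
    Returns world-space joint positions, shape (num_joints, 3).
    """
    num_joints = len(parents)
    world_rot = [None] * num_joints
    positions = np.zeros((num_joints, 3))
    for j in range(num_joints):
        if parents[j] == -1:
            world_rot[j] = rotations[j]
            positions[j] = root_position
        else:
            p = parents[j]
            world_rot[j] = world_rot[p] @ rotations[j]
            positions[j] = positions[p] + world_rot[p] @ offsets[j]
    return positions

# Toy 3-joint chain: root -> spine -> head, each bone 0.5 units long.
parents = [-1, 0, 1]
offsets = np.array([[0, 0, 0], [0, 0.5, 0], [0, 0.5, 0]], dtype=float)
rotations = np.stack([np.eye(3)] * 3)
print(forward_kinematics(rotations, offsets, parents, np.zeros(3)))
```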
-
Publication Number: US10460214B2
Publication Date: 2019-10-29
Application Number: US15799395
Application Date: 2017-10-31
Applicant: Adobe Inc.
Inventor: Xin Lu , Zhe Lin , Xiaohui Shen , Jimei Yang , Jianming Zhang , Jen-Chan Jeff Chien , Chenxi Liu
Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for segmenting objects in digital visual media utilizing one or more salient content neural networks. In particular, in one or more embodiments, the disclosed systems and methods train one or more salient content neural networks to efficiently identify foreground pixels in digital visual media. Moreover, in one or more embodiments, the disclosed systems and methods provide a trained salient content neural network to a mobile device, allowing the mobile device to directly select salient objects in digital visual media utilizing a trained neural network. Furthermore, in one or more embodiments, the disclosed systems and methods train and provide multiple salient content neural networks, such that mobile devices can identify objects in real-time digital visual media feeds (utilizing a first salient content neural network) and identify objects in static digital images (utilizing a second salient content neural network).
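A hypothetical sketch of the two-model idea described here: a lightweight salient-content model served for real-time video frames and a larger one for static images, with the device choosing between them. The tiny fully-convolutional nets are placeholders, not the patented networks.

```python
import torch
import torch.nn as nn

def make_saliency_net(width):
    """Fully-convolutional net mapping an RGB image to a per-pixel foreground probability."""
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, 1, 1), nn.Sigmoid(),
    )

realtime_net = make_saliency_net(width=8)     # cheap: for live camera frames
static_net = make_saliency_net(width=32)      # heavier: for still photos

def segment(image, live_feed=False):
    net = realtime_net if live_feed else static_net
    with torch.no_grad():
        return net(image) > 0.5                # boolean foreground mask

frame = torch.rand(1, 3, 128, 128)
mask = segment(frame, live_feed=True)
```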
-
Publication Number: US20190295305A1
Publication Date: 2019-09-26
Application Number: US15926787
Application Date: 2018-03-20
Applicant: Adobe Inc.
Inventor: Jimei Yang , Duygu Ceylan , Ruben Villegas
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a motion synthesis neural network with a forward kinematics layer to generate a motion sequence for a target skeleton based on an initial motion sequence for an initial skeleton. In certain embodiments, the methods, non-transitory computer readable media, and systems use a motion synthesis neural network comprising an encoder recurrent neural network, a decoder recurrent neural network, and a forward kinematics layer to retarget motion sequences. To train the motion synthesis neural network to retarget such motion sequences, in some implementations, the disclosed methods, non-transitory computer readable media, and systems modify parameters of the motion synthesis neural network based on one or both of an adversarial loss and a cycle consistency loss.
-
Publication Number: US20190287283A1
Publication Date: 2019-09-19
Application Number: US15921998
Application Date: 2018-03-15
Applicant: Adobe Inc.
Inventor: Zhe Lin , Xin Lu , Xiaohui Shen , Jimei Yang , Jiahui Yu
Abstract: Certain embodiments involve using an image completion neural network to perform user-guided image completion. For example, an image editing application accesses an input image having a completion region to be replaced with new image content. The image editing application also receives a guidance input that is applied to a portion of a completion region. The image editing application provides the input image and the guidance input to an image completion neural network that is trained to perform image-completion operations using guidance input. The image editing application produces a modified image by replacing the completion region of the input image with the new image content generated with the image completion network. The image editing application outputs the modified image having the new image content.
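As a hedged illustration of the data flow described here (not the patented model), an input image, its completion-region mask, and a user guidance channel can be stacked and passed to a completion network, whose output is composited back into the hole. The tiny network and names are stand-ins.

```python
import torch
import torch.nn as nn

completion_net = nn.Sequential(               # placeholder for a trained completion network
    nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 1),
)

def complete(image, hole_mask, guidance):
    """image: (1,3,H,W); hole_mask, guidance: (1,1,H,W) with values in [0,1]."""
    x = torch.cat([image * (1 - hole_mask), hole_mask, guidance], dim=1)
    generated = completion_net(x)
    # Keep original pixels outside the hole, use generated content inside it.
    return image * (1 - hole_mask) + generated * hole_mask

img = torch.rand(1, 3, 64, 64)
hole = torch.zeros(1, 1, 64, 64); hole[..., 20:40, 20:40] = 1.0
stroke = torch.zeros_like(hole); stroke[..., 30, 20:40] = 1.0   # user guidance stroke
result = complete(img, hole, stroke)
```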
-
Publication Number: US20240281978A1
Publication Date: 2024-08-22
Application Number: US18170336
Application Date: 2023-02-16
Applicant: Adobe Inc.
Inventor: Jingyuan Liu , Qing Liu , Jimei Yang , Yuhong Wu , Su Chen
CPC classification number: G06T7/11 , G06V10/267 , G06V10/7715 , G06V10/82 , G06V20/70 , G06T2207/20021 , G06T2207/20084
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating segmentation masks for a digital visual media item. In particular, in one or more embodiments, the disclosed systems generate, utilizing a neural network encoder, high-level features of a digital visual media item. Further, the disclosed systems generate, utilizing the neural network encoder, low-level features of the digital visual media item. In some implementations, the disclosed systems generate, utilizing a neural network decoder, an initial segmentation mask of the digital visual media item from the low-level features. Moreover, the disclosed systems generate, utilizing the neural network decoder, a refined segmentation mask of the digital visual media item from the initial segmentation mask and the high-level features.
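A schematic sketch of the two-stage flow the abstract outlines: an encoder yields low- and high-level features, a decoder produces an initial mask from the low-level features, and a refined mask is produced from the initial mask together with the high-level features. Layer sizes are arbitrary placeholders, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class TwoStageSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.low_encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.high_encoder = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.initial_decoder = nn.Conv2d(16, 1, 1)        # initial mask from low-level features
        self.refine_decoder = nn.Conv2d(32 + 1, 1, 1)     # refined mask from mask + high-level features

    def forward(self, image):
        low = self.low_encoder(image)
        high = self.high_encoder(low)
        initial_mask = torch.sigmoid(self.initial_decoder(low))
        refined_mask = torch.sigmoid(
            self.refine_decoder(torch.cat([high, initial_mask], dim=1)))
        return initial_mask, refined_mask

model = TwoStageSegmenter()
initial, refined = model(torch.rand(1, 3, 64, 64))
```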
-
Publication Number: US20240144574A1
Publication Date: 2024-05-02
Application Number: US18397413
Application Date: 2023-12-27
Applicant: Adobe Inc.
Inventor: Jun Saito , Jimei Yang , Duygu Ceylan Aksit
Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions that are used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Yet further, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
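A small illustrative sketch of one piece of this, the global scale factor: normalize a tracked pose by a reference bone length so a subject moving toward or away from the camera keeps a comparable size. The reference bone and names are assumptions for illustration, not the patent's definitions.

```python
import numpy as np

def normalize_pose(keypoints, shoulder_idx, hip_idx, target_torso=1.0):
    """keypoints: (num_points, 2) image-space positions."""
    torso = np.linalg.norm(keypoints[shoulder_idx] - keypoints[hip_idx])
    scale = target_torso / max(torso, 1e-6)        # global scale factor
    center = keypoints.mean(axis=0)
    return (keypoints - center) * scale

# The same pose captured near and far from the camera normalizes to the same size.
pose_near = np.array([[0, 0], [0, 200], [50, 100]], dtype=float)
pose_far = pose_near * 0.5
print(np.allclose(normalize_pose(pose_near, 0, 1), normalize_pose(pose_far, 0, 1)))
```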
-
Publication Number: US20240135572A1
Publication Date: 2024-04-25
Application Number: US18190636
Application Date: 2023-03-27
Applicant: Adobe Inc.
Inventor: Krishna Kumar Singh , Yijun Li , Jingwan Lu , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jimei Yang , Tobias Hinz
CPC classification number: G06T7/70 , G06T7/40 , G06V10/44 , G06V10/771 , G06V10/806 , G06V10/82 , G06T2207/20081 , G06T2207/30196
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments, the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portray a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
-
Publication Number: US11625881B2
Publication Date: 2023-04-11
Application Number: US17486269
Application Date: 2021-09-27
Applicant: Adobe Inc.
Inventor: Ruben Eduardo Villegas , Jun Saito , Jimei Yang , Duygu Ceylan Aksit
Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
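A hedged sketch of the kind of iterative refinement the abstract describes: nudge a retargeted joint trajectory toward a kinematic constraint (here, a foot that should rest on the ground plane y = 0 during contact frames) while staying close to the original retargeted motion. The weights and the constraint are illustrative, not the patented optimization.

```python
import numpy as np

def refine_foot_contact(foot_heights, contact_frames, w_constraint=1.0,
                        w_fidelity=0.1, steps=200, lr=0.05):
    """foot_heights: (num_frames,) retargeted foot y-coordinates."""
    y = foot_heights.copy()
    for _ in range(steps):
        # Gradient of: w_constraint * sum_contact(y^2) + w_fidelity * sum((y - original)^2)
        grad = 2 * w_fidelity * (y - foot_heights)
        grad[contact_frames] += 2 * w_constraint * y[contact_frames]
        y -= lr * grad
    return y

heights = np.array([0.02, 0.05, 0.30, 0.40, 0.06, 0.01])
contacts = np.array([0, 1, 4, 5])        # frames where the foot should touch the ground
print(refine_foot_contact(heights, contacts).round(3))
```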