-
Publication number: US20190279414A1
Publication date: 2019-09-12
Application number: US15915872
Application date: 2018-03-08
Applicant: Adobe Inc.
Inventor: Duygu Ceylan Aksit , Yangtuanfeng Wang , Niloy Jyoti Mitra , Mehmet Ersin Yumer , Jovan Popovic
Abstract: Systems and techniques provide a user interface within an application to enable users to designate a folded object image of a folded object, as well as a superimposed image of a superimposed object to be added to the folded object image. Within the user interface, the user may simply place the superimposed image over the folded object image to obtain the desired modified image. If the user places the superimposed image over one or more folds of the folded object image, portions of the superimposed image will be removed to create the illusion in the modified image that the removed portions are obscured by one or more folds.
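A rough sketch of the compositing step this abstract describes follows: an RGBA overlay is pasted onto the folded-object image, and overlay pixels are dropped wherever a fold hides them. The helper name composite_over_folds, the binary occlusion_mask input, and the top-left placement convention are illustrative assumptions, not the patented interface.

import numpy as np

def composite_over_folds(folded_img, overlay_rgba, occlusion_mask, top_left):
    # Paste an RGBA overlay onto the folded-object image, hiding overlay
    # pixels wherever the fold occlusion mask is set.
    out = folded_img.copy()
    y, x = top_left
    h, w = overlay_rgba.shape[:2]
    region = out[y:y + h, x:x + w]                            # destination pixels (a view into out)
    alpha = overlay_rgba[..., 3:4] / 255.0                    # overlay opacity in [0, 1]
    visible = (~occlusion_mask[y:y + h, x:x + w])[..., None]  # 1 where no fold hides the overlay
    alpha = alpha * visible                                   # drop occluded overlay pixels
    region[:] = (alpha * overlay_rgba[..., :3] + (1 - alpha) * region).astype(out.dtype)
    return out

# Usage: a 20x20 overlay placed at (5, 5); everything right of column 15 lies under a fold.
base = np.zeros((64, 64, 3), dtype=np.uint8)
overlay = np.full((20, 20, 4), 255, dtype=np.uint8)
fold = np.zeros((64, 64), dtype=bool)
fold[:, 15:] = True
result = composite_over_folds(base, overlay, fold, (5, 5))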
-
Publication number: US20190259214A1
Publication date: 2019-08-22
Application number: US15899503
Application date: 2018-02-20
Applicant: Adobe Inc.
Inventor: Rinat Abdrashitov , Jose Ignacio Echevarria Vallespi , Jingwan Lu , Elya Shectman , Duygu Ceylan Aksit , David Simons
Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied.
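The stylize / check-for-new-collisions / re-stylize loop in this abstract might look roughly like the sketch below, which exaggerates two landmark trajectories and backs the stylization strength off when that introduces a collision the input did not have. The exaggeration formula, the distance-threshold collision test, and the back-off schedule are all simplifying assumptions.

import numpy as np

def collides(p, q, radius=2.0):
    # True if the two landmark trajectories ever come closer than `radius` pixels.
    return bool((np.linalg.norm(p - q, axis=1) < radius).any())

def stylize(traj, strength):
    # Toy exaggeration: amplify each point's deviation from the trajectory mean.
    mean = traj.mean(axis=0)
    return mean + strength * (traj - mean)

def stylize_without_new_collisions(p, q, strength=1.5, step=0.1):
    # Weaken the stylization until it no longer introduces a collision
    # that was absent from the input trajectories.
    had_collision = collides(p, q)
    while strength > 1.0:
        sp, sq = stylize(p, strength), stylize(q, strength)
        if had_collision or not collides(sp, sq):
            return sp, sq
        strength -= step                 # reduce the stylization and retry
    return p, q                          # fall back to the unstylized trajectories

# Usage: two facial landmarks tracked over 100 video frames.
rng = np.random.default_rng(0)
upper = np.cumsum(rng.normal(size=(100, 2)), axis=0)
lower = upper + np.array([0.0, 5.0]) + rng.normal(scale=0.5, size=(100, 2))
styl_upper, styl_lower = stylize_without_new_collisions(upper, lower)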
-
Publication number: US12165260B2
Publication date: 2024-12-10
Application number: US17715646
Application date: 2022-04-07
Applicant: Adobe Inc. , University College London
Inventor: Duygu Ceylan Aksit , Yangtuanfeng Wang , Niloy J. Mitra , Meng Zhang
Abstract: Systems and methods are described for rendering garments. The system includes a first machine learning model trained to generate coarse garment templates of a garment and a second machine learning model trained to render garment images. The first machine learning model generates a coarse garment template based on position data. The system produces a neural texture for the garment, the neural texture comprising a multi-dimensional feature map characterizing detail of the garment. The system provides the coarse garment template and the neural texture to the second machine learning model trained to render garment images. The second machine learning model generates a rendered garment image of the garment based on the coarse garment template of the garment and the neural texture.
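A compressed sketch of the two-model pipeline follows: one network maps position data to a coarse garment template, and a learnable multi-channel feature map standing in for the neural texture is decoded by a second network into an RGB garment image. The 72-dimensional pose input, the 16-channel 128x128 texture, and the layer sizes are assumptions, and the rasterization of the neural texture through the coarse template is skipped for brevity.

import torch
from torch import nn

class GarmentRenderer(nn.Module):
    # Two-stage sketch: a coarse-template predictor plus a neural-texture renderer.

    def __init__(self, n_verts=1000, tex_channels=16, tex_res=128):
        super().__init__()
        # First model: body position data -> coarse garment template (vertex positions).
        self.template_net = nn.Sequential(
            nn.Linear(72, 256), nn.ReLU(), nn.Linear(256, n_verts * 3))
        # Learnable multi-dimensional feature map ("neural texture").
        self.neural_texture = nn.Parameter(torch.randn(1, tex_channels, tex_res, tex_res))
        # Second model: texture features -> RGB garment image.
        self.render_net = nn.Sequential(
            nn.Conv2d(tex_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, pose):
        coarse = self.template_net(pose).view(-1, 3)   # coarse garment template
        # A full pipeline would rasterize `coarse` with the neural texture;
        # here the texture is decoded directly for brevity.
        image = self.render_net(self.neural_texture)
        return coarse, image

# Usage: render a garment for a zero pose vector.
coarse_template, garment_image = GarmentRenderer()(torch.zeros(72))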
-
Publication number: US20240378809A1
Publication date: 2024-11-14
Application number: US18316490
Application date: 2023-05-12
Applicant: Adobe Inc.
Inventor: Yangtuanfeng Wang , Yi Zhou , Yasamin Jafarian , Nathan Aaron Carr , Jimei Yang , Duygu Ceylan Aksit
IPC: G06T17/20
Abstract: Decal application techniques, as implemented by a computing device, are described for applying decals to a digital image. In one example, features of a digital image learned using machine learning are used by a computing device as a basis to predict the surface geometry of an object in the digital image. Once the surface geometry of the object is predicted, machine learning techniques are then used by the computing device to configure an overlay object that is applied onto the digital image according to the predicted surface geometry of the object in the digital image.
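The two learned steps in this abstract can be sketched as a geometry network that predicts a per-pixel warp field for the pictured surface, followed by a grid-sample that drapes the decal over that geometry. The tiny convolutional stack and the warp-field formulation are assumptions, not the patented architecture.

import torch
import torch.nn.functional as F
from torch import nn

class DecalWarp(nn.Module):
    # Predict a sampling grid from the photo, then bend the decal to that surface.

    def __init__(self):
        super().__init__()
        # Predicts a 2-channel sampling grid (a proxy for surface geometry) per pixel.
        self.geometry_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh())

    def forward(self, photo, decal):
        grid = self.geometry_net(photo).permute(0, 2, 3, 1)        # (N, H, W, 2) in [-1, 1]
        warped = F.grid_sample(decal, grid, align_corners=False)   # decal warped to the surface
        return warped

# Usage: warp a 64x64 decal according to a 1x3x64x64 photo.
model = DecalWarp()
warped_decal = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))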
-
Publication number: US12067680B2
Publication date: 2024-08-20
Application number: US17816813
Application date: 2022-08-02
Applicant: Adobe Inc.
Inventor: Jimei Yang , Chun-han Yao , Duygu Ceylan Aksit , Yi Zhou
IPC: G06T17/20
CPC classification number: G06T17/20
Abstract: Systems and methods for mesh generation are described. One aspect of the systems and methods includes receiving an image depicting a visible portion of a body; generating an intermediate mesh representing the body based on the image; generating visibility features indicating whether parts of the body are visible based on the image; generating parameters for a morphable model of the body based on the intermediate mesh and the visibility features; and generating an output mesh representing the body based on the parameters for the morphable model, wherein the output mesh includes a non-visible portion of the body that is not depicted by the image.
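A minimal sketch of the described pipeline, assuming a flattened 64x64 input, 24 body parts, and a 10-coefficient linear blend-shape model: the image is encoded, an intermediate mesh and per-part visibility features are predicted, and both feed a regressor whose morphable-model parameters produce the full output mesh, including unseen parts. All sizes and the linear morphable model are illustrative assumptions.

import torch
from torch import nn

class VisibilityAwareMeshRegressor(nn.Module):
    # Image -> intermediate mesh + visibility features -> morphable-model parameters -> mesh.

    def __init__(self, n_verts=1000, n_parts=24, n_coeffs=10):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
        self.mesh_head = nn.Linear(256, n_verts * 3)        # intermediate mesh
        self.visibility_head = nn.Linear(256, n_parts)      # per-part visibility logits
        self.param_head = nn.Linear(n_verts * 3 + n_parts, n_coeffs)
        # Template and blend shapes of the morphable model (random stand-ins).
        self.template = nn.Parameter(torch.randn(n_verts, 3))
        self.blend_shapes = nn.Parameter(torch.randn(n_coeffs, n_verts, 3))

    def forward(self, image):
        feat = self.image_encoder(image)
        inter_mesh = self.mesh_head(feat)
        visibility = torch.sigmoid(self.visibility_head(feat))
        coeffs = self.param_head(torch.cat([inter_mesh, visibility], dim=-1))
        # The output mesh covers the full body, including parts the image never shows.
        return self.template + torch.einsum('bc,cvd->bvd', coeffs, self.blend_shapes)

# Usage: regress a full-body mesh from a single RGB image.
output_mesh = VisibilityAwareMeshRegressor()(torch.rand(1, 3, 64, 64))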
-
Publication number: US20240169553A1
Publication date: 2024-05-23
Application number: US18057436
Application date: 2022-11-21
Applicant: Adobe Inc.
Inventor: Jae shin Yoon , Zhixin Shu , Yangtuanfeng Wang , Jingwan Lu , Jimei Yang , Duygu Ceylan Aksit
CPC classification number: G06T7/20 , G06T13/40 , G06T15/04 , G06T17/00 , G06T2207/10016 , G06T2207/20081 , G06T2207/20084 , G06T2207/30244
Abstract: Techniques for modeling secondary motion based on three-dimensional models are described as implemented by a secondary motion modeling system, which is configured to receive a plurality of three-dimensional object models representing an object. Based on the three-dimensional object models, the secondary motion modeling system determines three-dimensional motion descriptors of a particular three-dimensional object model using one or more machine learning models. Based on the three-dimensional motion descriptors, the secondary motion modeling system models at least one feature subjected to secondary motion using the one or more machine learning models. The particular three-dimensional object model having the at least one feature is rendered by the secondary motion modeling system.
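One plausible reading of the abstract, sketched below, takes a sequence of 3D object models, derives per-vertex motion descriptors (finite-difference velocity and acceleration, an assumption), and lets a small network predict secondary-motion offsets that are added back onto the current model. The descriptor choice and network size are not taken from the patent.

import torch
from torch import nn

class SecondaryMotionModel(nn.Module):
    # Sequence of 3D models -> per-vertex motion descriptors -> secondary-motion offsets.

    def __init__(self, hidden=64):
        super().__init__()
        # Maps [position, velocity, acceleration] per vertex to a displacement.
        self.offset_net = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, meshes):
        # meshes: (T, V, 3) vertex positions over T frames with shared topology.
        velocity = meshes[1:] - meshes[:-1]                 # finite-difference motion descriptor
        acceleration = velocity[1:] - velocity[:-1]
        descriptors = torch.cat(
            [meshes[2:], velocity[1:], acceleration], dim=-1)   # (T-2, V, 9)
        offsets = self.offset_net(descriptors)              # learned secondary-motion displacement
        return meshes[2:] + offsets                         # e.g. jiggle on loose features

# Usage: add secondary motion to a 10-frame, 500-vertex sequence.
animated = SecondaryMotionModel()(torch.randn(10, 500, 3))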
-
Publication number: US20240135513A1
Publication date: 2024-04-25
Application number: US18190654
Application date: 2023-03-27
Applicant: Adobe Inc.
Inventor: Krishna Kumar Singh , Yijun Li , Jingwan Lu , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jimei Yang , Tobias Hinz
CPC classification number: G06T5/005 , G06T3/0093 , G06T7/40 , G06T7/70 , G06V10/44 , G06V10/771 , G06V10/806 , G06V10/82 , G06T2207/30196
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
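The infill and human-inpainting modifications mentioned here reduce, at their core, to a masked blend between the original photo and generated content; the sketch below shows that generic step with a placeholder generator. The mask-conditioned generator and the compositing rule are standard assumptions, not the disclosed models.

import torch
from torch import nn

def scene_based_infill(image, mask, generator):
    # A generative model synthesizes content for the masked (e.g. human) region
    # while the rest of the photo is kept untouched.
    synthesized = generator(torch.cat([image, mask], dim=1))  # condition on image + mask
    return mask * synthesized + (1.0 - mask) * image          # edit only inside the mask

# Usage with a toy generator (4 input channels: RGB + mask).
toy_generator = nn.Sequential(nn.Conv2d(4, 3, 3, padding=1), nn.Sigmoid())
result = scene_based_infill(torch.rand(1, 3, 64, 64), torch.ones(1, 1, 64, 64), toy_generator)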
-
Publication number: US11769279B2
Publication date: 2023-09-26
Application number: US17317246
Application date: 2021-05-11
Applicant: Adobe Inc.
Inventor: Giorgio Gori , Tamy Boubekeur , Radomir Mech , Nathan Aaron Carr , Matheus Abrantes Gadelha , Duygu Ceylan Aksit
CPC classification number: G06T11/203 , G06N7/01 , G06N20/00 , G06T9/00 , G06T2200/24
Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
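The handle-to-SDF conversion this abstract mentions can be illustrated with spherical handles: each (center, radius) pair contributes a sphere SDF and their pointwise minimum is the union. Spherical handles and the 32-cube sampling grid are simplifying assumptions; the actual handle primitives may differ.

import numpy as np

def handles_to_sdf(handles, resolution=32):
    # Convert a set of spherical handles into a signed distance field on a regular grid,
    # the representation the handle processor model is trained on.
    axis = np.linspace(-1.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1)  # (R, R, R, 3)
    sdf = np.full(grid.shape[:3], np.inf)
    for center, radius in handles:
        dist = np.linalg.norm(grid - np.asarray(center), axis=-1) - radius  # sphere SDF
        sdf = np.minimum(sdf, dist)          # union of handles = pointwise minimum
    return sdf

# Usage: two overlapping handles make a dumbbell-like shape.
field = handles_to_sdf([((0.3, 0.0, 0.0), 0.4), ((-0.3, 0.0, 0.0), 0.4)])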
-
Publication number: US11704865B2
Publication date: 2023-07-18
Application number: US17383294
Application date: 2021-07-22
Applicant: Adobe Inc.
Inventor: Ruben Villegas , Yunseok Jang , Duygu Ceylan Aksit , Jimei Yang , Xin Sun
Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map. Additionally, the disclosed system generates a modified digital image with the three-dimensional object inserted into the digital image and lighting that is consistent between the three-dimensional object and the digital image.
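The self-occlusion map described here can be sketched as follows: for each visible surface point, cast a fixed, uniformly sampled set of rays and record which ones hit the object itself. The ray_hits_object predicate is assumed to come from the host renderer; the unit-sphere test below is only a toy stand-in.

import numpy as np

def self_occlusion_map(points, ray_directions, ray_hits_object):
    # For each visible point, record which of the fixed rays are blocked by the object.
    occlusion = np.zeros((len(points), len(ray_directions)), dtype=np.float32)
    for i, p in enumerate(points):
        for j, d in enumerate(ray_directions):
            occlusion[i, j] = float(ray_hits_object(p, d))   # 1 if the ray is blocked
    return occlusion                                          # fed to the shading generator

# Usage with a toy object: a unit sphere centered at the origin.
def hits_unit_sphere(p, d, eps=1e-3):
    p, d = np.asarray(p), np.asarray(d) / np.linalg.norm(d)
    # Solve |p + t*d| = 1 for t > eps (offset avoids self-hits at the surface).
    b, c = 2 * p.dot(d), p.dot(p) - 1.0
    disc = b * b - 4 * c
    if disc < 0:
        return False
    t_near = (-b - np.sqrt(disc)) / 2
    t_far = (-b + np.sqrt(disc)) / 2
    return t_near > eps or t_far > eps

rays = np.random.default_rng(0).normal(size=(16, 3))
occ = self_occlusion_map(np.array([[0.0, 0.0, 1.0]]), rays, hits_unit_sphere)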
-
Publication number: US20230123820A1
Publication date: 2023-04-20
Application number: US17502714
Application date: 2021-10-15
Applicant: Adobe Inc.
Inventor: Yangtuanfeng Wang , Duygu Ceylan Aksit , Krishna Kumar Singh , Niloy J Mitra
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a character animation neural network informed by motion and pose signatures to generate a digital video through person-specific appearance modeling and motion retargeting. In particular embodiments, the disclosed systems implement a character animation neural network that includes a pose embedding model to encode a pose signature into spatial pose features. The character animation neural network further includes a motion embedding model to encode a motion signature into motion features. In some embodiments, the disclosed systems utilize the motion features to refine per-frame pose features and improve temporal coherency. In certain implementations, the disclosed systems also utilize the motion features to demodulate neural network weights used to generate an image frame of a character in motion based on the refined pose features.
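The weight-demodulation step mentioned at the end of this abstract is sketched below in a single convolution: motion features scale the filter's input channels and each output filter is then renormalized, in the spirit of StyleGAN2-style demodulation. The channel counts, the single layer, and the scaling head are assumptions rather than the disclosed network.

import torch
import torch.nn.functional as F
from torch import nn

class MotionDemodulatedConv(nn.Module):
    # A convolution whose weights are modulated by motion features, then demodulated.

    def __init__(self, in_ch=32, out_ch=32, motion_dim=16):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.1)
        self.to_scale = nn.Linear(motion_dim, in_ch)   # motion features -> per-channel scales

    def forward(self, pose_features, motion_features):
        scale = self.to_scale(motion_features).view(1, -1, 1, 1) + 1.0
        w = self.weight * scale                        # modulate input channels by motion
        demod = torch.rsqrt((w ** 2).sum(dim=(1, 2, 3), keepdim=True) + 1e-8)
        w = w * demod                                  # demodulate: unit-norm output filters
        return F.conv2d(pose_features, w, padding=1)

# Usage: refine per-frame pose features with a motion-conditioned convolution.
layer = MotionDemodulatedConv()
frame_features = layer(torch.randn(1, 32, 64, 64), torch.randn(16))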
-