-
Publication No.: US10916054B2
Publication Date: 2021-02-09
Application No.: US16184149
Filing Date: 2018-11-08
Applicant: Adobe Inc.
Inventor: Duygu Ceylan Aksit , Weiyue Wang , Radomir Mech
Abstract: Techniques are disclosed for deforming a 3D source mesh to resemble a target object representation which may be a 2D image or another 3D mesh. A methodology implementing the techniques according to an embodiment includes extracting a set of one or more source features from a source 3D mesh. The source 3D mesh includes a plurality of source points representing a source object, and the extracting of the set of source features is independent of an ordering of the source points. The method also includes extracting a set of one or more target features from the target object representation, and decoding a concatenation of the set of source features and the set of target features to predict vertex offsets for application to the source 3D mesh to generate a deformed 3D mesh based on the target object. The feature extractions and the vertex offset predictions may employ Deep Neural Networks.
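The pipeline this abstract describes (order-independent feature extraction, feature concatenation, and per-vertex offset decoding) can be illustrated with a minimal numpy sketch. The tiny random weight matrices, dimensions, and the max-pooling feature extractor below are illustrative stand-ins for the trained deep neural networks, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def point_features(points, W):
    # Per-point transform followed by max-pooling: the pooled feature
    # vector does not depend on the ordering of the input points.
    hidden = np.maximum(points @ W, 0.0)    # (N, F) per-point features
    return hidden.max(axis=0)               # (F,) order-independent

# Hypothetical toy dimensions and random weights standing in for DNNs.
F = 8
W_src = rng.normal(size=(3, F))
W_tgt = rng.normal(size=(3, F))
W_dec = rng.normal(size=(3 + 2 * F, 3)) * 0.01

source = rng.normal(size=(100, 3))          # source 3D mesh vertices
target = rng.normal(size=(120, 3))          # target represented as points

f_src = point_features(source, W_src)       # source features
f_tgt = point_features(target, W_tgt)       # target features
code = np.concatenate([f_src, f_tgt])       # concatenated feature code

# Decode a per-vertex offset from each vertex position plus the code.
per_vertex = np.concatenate(
    [source, np.tile(code, (source.shape[0], 1))], axis=1)
offsets = per_vertex @ W_dec
deformed = source + offsets                 # deformed 3D mesh

# Shuffling the source points leaves the extracted features unchanged.
perm = rng.permutation(source.shape[0])
assert np.allclose(point_features(source[perm], W_src), f_src)
```

The max-pooling step is what makes the extraction independent of point ordering, matching the claim in the abstract.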
-
Publication No.: US20200151952A1
Publication Date: 2020-05-14
Application No.: US16184149
Filing Date: 2018-11-08
Applicant: Adobe Inc.
Inventor: Duygu Ceylan Aksit , Weiyue Wang , Radomir Mech
Abstract: Techniques are disclosed for deforming a 3D source mesh to resemble a target object representation which may be a 2D image or another 3D mesh. A methodology implementing the techniques according to an embodiment includes extracting a set of one or more source features from a source 3D mesh. The source 3D mesh includes a plurality of source points representing a source object, and the extracting of the set of source features is independent of an ordering of the source points. The method also includes extracting a set of one or more target features from the target object representation, and decoding a concatenation of the set of source features and the set of target features to predict vertex offsets for application to the source 3D mesh to generate a deformed 3D mesh based on the target object. The feature extractions and the vertex offset predictions may employ Deep Neural Networks.
-
Publication No.: US20190340419A1
Publication Date: 2019-11-07
Application No.: US15970831
Filing Date: 2018-05-03
Applicant: Adobe Inc.
Inventor: Rebecca Ilene Milman , Jose Ignacio Echevarria Vallespi , Jingwan Lu , Elya Shechtman , Duygu Ceylan Aksit , David P. Simons
Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
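The core idea of a parameterized avatar (data indicating which library cartoon feature matches each attribute of the photographed person) can be sketched as a nearest-descriptor lookup. The library contents, attribute names, and descriptor sizes below are hypothetical; the patent's trained machine-learning model performs this matching.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical library of styled cartoon features: each attribute has a
# handful of variants, each described by a small feature vector.
library = {attr: rng.normal(size=(5, 4)) for attr in ["hair", "eyes", "nose"]}

def parameterize(photo_features):
    # For each attribute, pick the library variant whose descriptor is
    # closest to the features extracted from the photograph; the dict of
    # indices stands in for the model's predicted parameter vector.
    avatar = {}
    for attr, feats in photo_features.items():
        dists = np.linalg.norm(library[attr] - feats, axis=1)
        avatar[attr] = int(np.argmin(dists))
    return avatar

# Toy "photo features" that sit near one library variant per attribute.
photo = {attr: library[attr][2] + 0.01 for attr in library}
avatar = parameterize(photo)
```

Because the avatar is a small set of indices rather than pixels, it can be re-rendered or animated in any style the library supports, which is the point of the parameterization.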
-
Publication No.: US20190228567A1
Publication Date: 2019-07-25
Application No.: US15877142
Filing Date: 2018-01-22
Applicant: Adobe Inc.
Inventor: Jeong Joon Park , Zhili Chen , Xin Sun , Vladimir Kim , Kalyan Krishna Sunkavalli , Duygu Ceylan Aksit
Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector. The resulting illumination of the VO matches the second combination of the intensities and surface reflections.
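The weighting-vector step the abstract describes amounts to expressing the runtime image as a superposition of the per-light basis images and solving for the weights. A minimal least-squares sketch, with illustrative image sizes and random stand-in data in place of captured photographs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Preprocessing: one basis image per direct illumination source, each
# captured with that source at unit intensity (flattened to vectors).
n_pixels, n_lights = 500, 3
basis = rng.random(size=(n_pixels, n_lights))   # columns = basis functions

# Runtime: the same scene under an unknown combination of intensities.
true_w = np.array([0.7, 0.2, 1.4])
runtime_image = basis @ true_w

# Recover the illumination-weighting vector by linear least squares.
# Indirect illumination (e.g., surface reflections) baked into the
# captured basis images is carried along automatically.
w, *_ = np.linalg.lstsq(basis, runtime_image, rcond=None)

# Illuminate the virtual object by superposing its own per-light
# renderings (hypothetical stand-ins) with the recovered weights.
vo_basis = rng.random(size=(n_pixels, n_lights))
vo_lit = vo_basis @ w
```

With the virtual object rendered once per direct light during preprocessing, relighting at runtime reduces to this single weighted sum.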
-
Publication No.: US10297088B2
Publication Date: 2019-05-21
Application No.: US15716450
Filing Date: 2017-09-26
Applicant: Adobe Inc.
Inventor: Tenell Rhodes , Gavin S. P. Miller , Duygu Ceylan Aksit , Daichi Ito
IPC: G06T19/00 , G06F3/0354 , G06F3/038 , G06F3/03 , G06F3/0346
Abstract: The present disclosure includes systems, methods, computer readable media, and devices that can generate accurate augmented reality objects based on tracking a writing device in relation to a real-world surface. In particular, the systems and methods described herein can detect an initial location of a writing device, and further track movement of the writing device on a real-world surface based on one or more sensory inputs. For example, disclosed systems and methods can generate an augmented reality object based on pressure detected at a tip of a writing device, based on orientation of the writing device, based on motion detector elements of the writing device (e.g., reflective materials, emitters, or object tracking shapes), and/or optical sensors. The systems and methods further render augmented reality objects within an augmented reality environment that appear on the real-world surface based on tracking the movement of the writing device.
-
Publication No.: US20190096129A1
Publication Date: 2019-03-28
Application No.: US15716450
Filing Date: 2017-09-26
Applicant: Adobe Inc.
Inventor: Tenell Rhodes , Gavin S.P. Miller , Duygu Ceylan Aksit , Daichi Ito
IPC: G06T19/00 , G06F3/0354 , G06F3/038
Abstract: The present disclosure includes systems, methods, computer readable media, and devices that can generate accurate augmented reality objects based on tracking a writing device in relation to a real-world surface. In particular, the systems and methods described herein can detect an initial location of a writing device, and further track movement of the writing device on a real-world surface based on one or more sensory inputs. For example, disclosed systems and methods can generate an augmented reality object based on pressure detected at a tip of a writing device, based on orientation of the writing device, based on motion detector elements of the writing device (e.g., reflective materials, emitters, or object tracking shapes), and/or optical sensors. The systems and methods further render augmented reality objects within an augmented reality environment that appear on the real-world surface based on tracking the movement of the writing device.
-
Publication No.: US12260530B2
Publication Date: 2025-03-25
Application No.: US18190544
Filing Date: 2023-03-27
Applicant: Adobe Inc.
Inventor: Krishna Kumar Singh , Yijun Li , Jingwan Lu , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jimei Yang , Tobias Hinz , Qing Liu , Jianming Zhang , Zhe Lin
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
-
Publication No.: US20250095172A1
Publication Date: 2025-03-20
Application No.: US18369958
Filing Date: 2023-09-19
Applicant: Adobe Inc.
Inventor: Sanjeev Muralikrishnan , Chun-Hao Huang , Duygu Ceylan Aksit , Niloy J. Mitra
Abstract: In some examples, a computing system accesses a set of registered three-dimensional (3D) digital shapes, which are registered to a shape template. The computing system determines a linear model for an estimate of the shape space using a first subset of the set of registered 3D digital shapes. The computing system then determines a nonlinear deformation model for the shape space using a second subset of the set of registered 3D digital shapes. An unregistered shape can be registered to the shape space using the linear model and the nonlinear deformation model. The registration can be added to the set of registered 3D digital shapes to update the estimate of the shape space if a shape distance between the registration and the unregistered shape is below a threshold value.
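The registration loop in this abstract (linear shape-space estimate, nonlinear refinement, and threshold-gated acceptance) can be sketched with a PCA-style linear model and a toy iterative refinement. The dimensions, the refinement rule, and the threshold below are illustrative assumptions, not the patented models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Registered shapes: each row is one shape flattened to a vector of
# template-vertex coordinates (toy sizes).
n_shapes, dim = 40, 30
shapes = rng.normal(size=(n_shapes, dim))

# Linear model of the shape space: mean plus top principal components,
# fit on a first subset of the registered shapes.
subset_a = shapes[:30]
mean = subset_a.mean(axis=0)
_, _, vt = np.linalg.svd(subset_a - mean, full_matrices=False)
components = vt[:5]                      # 5-dimensional linear shape space

def linear_register(shape):
    # Project the unregistered shape onto the linear shape space.
    coeffs = components @ (shape - mean)
    return mean + components.T @ coeffs

def nonlinear_refine(estimate, shape, step=0.5, iters=10):
    # Toy stand-in for the learned nonlinear deformation model:
    # iteratively deform the linear estimate toward the observed shape.
    for _ in range(iters):
        estimate = estimate + step * (shape - estimate)
    return estimate

new_shape = rng.normal(size=dim)
registration = nonlinear_refine(linear_register(new_shape), new_shape)

# Accept the registration into the training set only when the shape
# distance to the observation is below a threshold.
threshold = 0.05
distance = np.linalg.norm(registration - new_shape)
if distance < threshold:
    shapes = np.vstack([shapes, registration])
```

The gate keeps poor registrations from polluting the shape-space estimate, which is the role the threshold plays in the abstract.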
-
Publication No.: US12033261B2
Publication Date: 2024-07-09
Application No.: US17385559
Filing Date: 2021-07-26
Applicant: Adobe Inc.
Inventor: Ruben Villegas , Jun Saito , Jimei Yang , Duygu Ceylan Aksit , Aaron Hertzmann
Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
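The two motion constraints named in the abstract can be made concrete as a per-frame check on a retargeted pose: the self-contact vertex pair must stay in contact, and no other vertex pair may penetrate. The function name, tolerances, and the minimum-separation proxy for self-penetration below are illustrative assumptions.

```python
import numpy as np

def check_retarget_constraints(vertices, contact_pair,
                               contact_tol=1e-2, min_sep=1e-3):
    # (i) The self-contact vertex pair stays in contact (within a
    #     tolerance), preserving its relative location.
    i, j = contact_pair
    contact_ok = np.linalg.norm(vertices[i] - vertices[j]) <= contact_tol
    # (ii) No self-penetration, approximated here as a minimum
    #      separation between every other pair of vertices.
    diffs = vertices[:, None, :] - vertices[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    n = len(vertices)
    mask = ~np.eye(n, dtype=bool)        # ignore self-distances
    mask[i, j] = mask[j, i] = False      # ignore the contact pair itself
    no_penetration = d[mask].min() >= min_sep
    return bool(contact_ok and no_penetration)

# A hand-on-hip style frame: vertices 0 and 1 are the contact pair.
frame = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.005],     # in contact with vertex 0
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
ok = check_retarget_constraints(frame, contact_pair=(0, 1))
```

In the patent these constraints shape the training of the retargeting network; this sketch only validates a single output frame against them.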
-
Publication No.: US20240135512A1
Publication Date: 2024-04-25
Application No.: US18190556
Filing Date: 2023-03-27
Applicant: Adobe Inc.
Inventor: Krishna Kumar Singh , Yijun Li , Jingwan Lu , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jimei Yang , Tobias Hinz , Qing Liu , Jianming Zhang , Zhe Lin
CPC classification number: G06T5/005 , G06T7/11 , G06V10/82 , G06V40/10 , G06T2207/20021 , G06T2207/20084 , G06T2207/20212 , G06T2207/30196
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
-