-
Publication Number: US20240135511A1
Publication Date: 2024-04-25
Application Number: US18190544
Filing Date: 2023-03-27
Applicant: Adobe Inc.
Inventor: Krishna Kumar Singh , Yijun Li , Jingwan Lu , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jimei Yang , Tobias Hinz , Qing Liu , Jianming Zhang , Zhe Lin
CPC classification number: G06T5/005 , G06V10/25 , G06V10/44 , G06V10/82 , G06T2207/30196
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments, the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portray a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, in some embodiments, the disclosed systems perform facial expression transfer and facial expression animation to generate modified digital images or animations.
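A minimal sketch of the infill/inpainting workflow this abstract describes. The `generative_fill` stand-in, the mask convention, and the mask dilation are illustrative assumptions, not the patented system; a real pipeline would call a trained generative model instead of the mean-color fill used here.

```python
import numpy as np

def generative_fill(image, mask):
    """Placeholder generative inpainting: fill masked pixels with the mean
    color of the unmasked region (a real system would run a trained model)."""
    filled = image.copy()
    filled[mask] = image[~mask].mean(axis=0)
    return filled

def inpaint_human_region(image, human_mask, dilation=3):
    """Complete the region of a digital image that portrays a human.

    image:      H x W x 3 float array in [0, 1]
    human_mask: H x W boolean array, True where the human subject is
    """
    # Grow the mask slightly so the fill blends past the silhouette edge.
    mask = human_mask.copy()
    for _ in range(dilation):
        mask = (mask |
                np.roll(mask, 1, 0) | np.roll(mask, -1, 0) |
                np.roll(mask, 1, 1) | np.roll(mask, -1, 1))
    return generative_fill(image, mask)

# Example: remove-and-fill a rectangular "person" region in a synthetic image.
img = np.random.rand(64, 64, 3)
person = np.zeros((64, 64), dtype=bool)
person[20:50, 25:40] = True
result = inpaint_human_region(img, person)
```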
-
Publication Number: US11861779B2
Publication Date: 2024-01-02
Application Number: US17550432
Filing Date: 2021-12-14
Applicant: Adobe Inc.
Inventor: Jun Saito , Jimei Yang , Duygu Ceylan Aksit
Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions used to generate the animation. In a further example, an input pose is normalized using a global scale factor to address changes in the z-position of a subject across different digital images. Yet further, a body tracking module is used to compute initial feature positions, which are then used to initialize a face tracker module that generates feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
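A small numpy sketch of two ideas from this abstract: normalizing an input pose by a global scale factor, and a friction-style limit on feature positions in contact with a ground plane. The keypoint layout, the torso-based scale, and the friction coefficient are assumptions for illustration.

```python
import numpy as np

def global_scale_factor(keypoints, neck_idx=1, hip_idx=8):
    """Scale estimate from a fixed body segment (here neck-to-hip length),
    so poses captured at different z-distances become comparable."""
    return np.linalg.norm(keypoints[neck_idx] - keypoints[hip_idx]) + 1e-8

def normalize_pose(keypoints, root_idx=8):
    """Center the pose on a root joint and divide by the global scale."""
    centered = keypoints - keypoints[root_idx]
    return centered / global_scale_factor(keypoints)

def apply_friction(prev_pts, new_pts, ground_y, mu=0.9):
    """Limit horizontal movement of features touching the ground plane.

    Points at or below ground_y keep only (1 - mu) of their horizontal
    displacement; a simple stand-in for a friction term."""
    out = new_pts.copy()
    contact = new_pts[:, 1] >= ground_y           # image y grows downward
    out[contact, 0] = prev_pts[contact, 0] + (1.0 - mu) * (
        new_pts[contact, 0] - prev_pts[contact, 0])
    return out

pose_a = np.random.rand(17, 2) * 100              # 17 2D keypoints
pose_b = pose_a + np.random.randn(17, 2)          # next-frame estimate
norm_a = normalize_pose(pose_a)
constrained = apply_friction(pose_a, pose_b, ground_y=95.0)
```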
-
Publication Number: US11830138B2
Publication Date: 2023-11-28
Application Number: US17206813
Filing Date: 2021-03-19
Applicant: ADOBE INC.
Inventor: Duygu Ceylan Aksit , Mianlun Zheng , Yi Zhou
Abstract: Various disclosed embodiments are directed to estimating that a first vertex of a patch will change from a first position to a second position (the second position being at least partially indicative of secondary motion) based at least in part on one or more features of primary motion data, one or more material properties, and constraint data associated with the patch. Such estimation can be made for some or all of the patches of an entire volumetric mesh in order to accurately predict the overall secondary motion of an object. This, among other functionality described herein, addresses the inaccuracies, excessive computer resource consumption, and poor user experience of existing technologies.
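A minimal sketch of per-patch secondary-motion estimation in the spirit of this abstract, using a plain linear regressor over concatenated patch features. The feature layout and the randomly initialized weights are illustrative assumptions; the disclosed embodiments would use a trained network.

```python
import numpy as np

def patch_features(primary_motion, material, constraint):
    """Concatenate the per-patch inputs the abstract lists: primary motion
    data, material properties, and constraint data."""
    return np.concatenate([primary_motion.ravel(), material, constraint])

def predict_secondary_offset(features, weights, bias):
    """Stand-in learned model: a linear map from patch features to
    per-vertex 3D offsets encoding secondary motion."""
    return weights @ features + bias

# One patch with 8 vertices.
rng = np.random.default_rng(0)
first_positions = rng.normal(size=(8, 3))        # current vertex positions
primary_motion = rng.normal(size=(8, 3))         # skinned (primary) displacement
material = np.array([1e4, 0.45, 1000.0])         # e.g. stiffness, Poisson, density
constraint = np.array([1.0, 0.0])                # e.g. pinned / free flags

feats = patch_features(primary_motion, material, constraint)
W = rng.normal(scale=0.01, size=(8 * 3, feats.size))
b = np.zeros(8 * 3)

second_positions = first_positions + predict_secondary_offset(feats, W, b).reshape(8, 3)
```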
-
Publication Number: US20230326137A1
Publication Date: 2023-10-12
Application Number: US17715646
Filing Date: 2022-04-07
Applicant: Adobe Inc. , University College London
Inventor: Duygu Ceylan Aksit , Yangtuanfeng Wang , Niloy J. Mitra , Meng Zhang
CPC classification number: G06T17/20 , G06T15/04 , G06T7/70 , G06V10/7515 , G06T2207/20084 , G06T2207/20081 , G06T2210/16
Abstract: Systems and methods are described for rendering garments. The system includes a first machine learning model trained to generate coarse garment templates of a garment and a second machine learning model trained to render garment images. The first machine learning model generates a coarse garment template based on position data. The system produces a neural texture for the garment, the neural texture comprising a multi-dimensional feature map characterizing detail of the garment. The system provides the coarse garment template and the neural texture to the second machine learning model trained to render garment images. The second machine learning model generates a rendered garment image of the garment based on the coarse garment template of the garment and the neural texture.
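A compact sketch of the neural-texture idea from this abstract: sample a multi-channel feature map at the coarse garment template's UV coordinates and decode the features with a stand-in renderer. The channel count, bilinear sampling, and the linear decoder are assumptions for illustration, not the described second model.

```python
import numpy as np

def sample_neural_texture(texture, uv):
    """Bilinearly sample an H x W x C feature map at UV coords in [0, 1]."""
    h, w, _ = texture.shape
    x, y = uv[:, 0] * (w - 1), uv[:, 1] * (h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx
    bot = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def decode_to_rgb(features, decoder):
    """Stand-in for the rendering model: a linear map from features to RGB."""
    return features @ decoder

rng = np.random.default_rng(1)
neural_texture = rng.normal(size=(256, 256, 16))   # 16-channel learned texture
template_uv = rng.random(size=(5000, 2))           # UVs of coarse template vertices
decoder = rng.normal(scale=0.1, size=(16, 3))

per_vertex_features = sample_neural_texture(neural_texture, template_uv)
per_vertex_rgb = decode_to_rgb(per_vertex_features, decoder)
```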
-
Publication Number: US11531697B2
Publication Date: 2022-12-20
Application Number: US17087982
Filing Date: 2020-11-03
Applicant: Adobe Inc.
Inventor: Jinrong Xie , Shabnam Ghadar , Jun Saito , Jimei Yang , Elnaz Morad , Duygu Ceylan Aksit , Baldo Faieta , Alex Filipkowski
IPC: G06F16/55 , G06F16/538 , G06F16/583 , G06F16/56 , G06F16/535 , G06N3/08 , G06T7/73
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
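A minimal sketch of pose retrieval in a pose feature space: embed each pose as a root-centered, scale-normalized keypoint vector, rank gallery images by similarity to a query pose (for example one defined by a virtual mannequin), and group images with a tiny k-means. The embedding and clustering choices are illustrative assumptions, not the disclosed systems.

```python
import numpy as np

def pose_embedding(keypoints):
    """Flatten centered, scale-normalized 2D keypoints into a unit vector."""
    centered = keypoints - keypoints.mean(axis=0)
    return (centered / (np.linalg.norm(centered) + 1e-8)).ravel()

def rank_by_pose(query_kps, gallery_kps):
    """Return gallery indices sorted by similarity to the query pose."""
    q = pose_embedding(query_kps)
    sims = np.array([np.dot(q, pose_embedding(g)) for g in gallery_kps])
    return np.argsort(-sims)

def cluster_poses(gallery_kps, k=3, iters=20, seed=0):
    """Tiny k-means over pose embeddings to form digital pose image groups."""
    rng = np.random.default_rng(seed)
    X = np.stack([pose_embedding(g) for g in gallery_kps])
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(2)
gallery = rng.random(size=(50, 17, 2))     # 50 images, 17 keypoints each
query = rng.random(size=(17, 2))           # pose from a query image or mannequin
ranking = rank_by_pose(query, gallery)
groups = cluster_poses(gallery)
```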
-
Publication Number: US20220138249A1
Publication Date: 2022-05-05
Application Number: US17087982
Filing Date: 2020-11-03
Applicant: Adobe Inc.
Inventor: Jinrong Xie , Shabnam Ghadar , Jun Saito , Jimei Yang , Elnaz Morad , Duygu Ceylan Aksit , Baldo Faieta , Alex Filipkowski
IPC: G06F16/55 , G06F16/538 , G06N3/08 , G06F16/535 , G06T7/73 , G06F16/56 , G06F16/583
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
-
Publication Number: US11037341B1
Publication Date: 2021-06-15
Application Number: US16744105
Filing Date: 2020-01-15
Applicant: Adobe Inc.
Inventor: Giorgio Gori , Tamy Boubekeur , Radomir Mech , Nathan Aaron Carr , Matheus Abrantes Gadelha , Duygu Ceylan Aksit
Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
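A small sketch of converting a set of simple "handles" (here capsules between pairs of endpoints) into a signed distance field sampled on a grid, the kind of SDF representation the abstract mentions. The capsule primitive, union-by-minimum combination, and grid resolution are illustrative assumptions.

```python
import numpy as np

def capsule_sdf(points, a, b, radius):
    """Signed distance from query points to a capsule with axis a-b.
    Negative inside the capsule, positive outside."""
    ab = b - a
    t = np.clip(((points - a) @ ab) / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(points - closest, axis=1) - radius

def handle_set_sdf(points, handles):
    """SDF of a shape described by a set of handles: the union (minimum)
    of the per-handle distances."""
    dists = np.stack([capsule_sdf(points, a, b, r) for (a, b, r) in handles])
    return dists.min(axis=0)

# Sample the SDF of a two-handle shape on a 32^3 grid.
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 32)] * 3, indexing="ij"), -1)
pts = grid.reshape(-1, 3)
handles = [
    (np.array([-0.5, 0.0, 0.0]), np.array([0.5, 0.0, 0.0]), 0.20),
    (np.array([0.0, -0.5, 0.0]), np.array([0.0, 0.5, 0.0]), 0.15),
]
sdf_volume = handle_set_sdf(pts, handles).reshape(32, 32, 32)
```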
-
Publication Number: US10607065B2
Publication Date: 2020-03-31
Application Number: US15970831
Filing Date: 2018-05-03
Applicant: Adobe Inc.
Inventor: Rebecca Ilene Milman , Jose Ignacio Echevarria Vallespi , Jingwan Lu , Elya Shechtman , Duygu Ceylan Aksit , David P. Simons
Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
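A brief sketch of what a parameterized avatar can look like as data: for each facial feature, an index selecting the closest cartoon asset from a style library. The feature list, the embeddings, and the nearest-neighbor selection are illustrative assumptions, not the trained machine-learning model the abstract describes.

```python
import numpy as np

FEATURES = ["eyes", "nose", "mouth", "hair"]

def parameterize_avatar(photo_embeddings, library_embeddings):
    """Pick, per facial feature, the library asset whose embedding is
    nearest to the embedding extracted from the photograph."""
    params = {}
    for name in FEATURES:
        lib = library_embeddings[name]                       # (n_assets, d)
        dists = np.linalg.norm(lib - photo_embeddings[name], axis=1)
        params[name] = int(np.argmin(dists))                 # library index
    return params

rng = np.random.default_rng(3)
library = {name: rng.normal(size=(20, 8)) for name in FEATURES}   # 20 assets each
photo = {name: rng.normal(size=8) for name in FEATURES}           # features of a photo
avatar_params = parameterize_avatar(photo, library)
# e.g. {'eyes': 4, 'nose': 17, ...}; this compact vector of indices is what
# makes the avatar easy to animate and restyle.
```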
-
Publication Number: US10600239B2
Publication Date: 2020-03-24
Application Number: US15877142
Filing Date: 2018-01-22
Applicant: ADOBE INC.
Inventor: Jeong Joon Park , Zhili Chen , Xin Sun , Vladimir Kim , Kalyan Krishna Sunkavalli , Duygu Ceylan Aksit
Abstract: Matching the illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined by preprocessing image data captured while a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data; this determination accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes the superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector, and the resulting illumination of the VO matches the second combination of intensities and the surface reflections.
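A compact numpy sketch of the basis/weight idea: solve for a weighting vector that best explains the runtime image as a superposition of per-light basis images, then relight the virtual object with the same weights. Treating each basis function as a full per-light image and solving by plain least squares are assumptions for illustration.

```python
import numpy as np

def solve_illumination_weights(basis_images, runtime_image):
    """Least-squares weights w minimizing || sum_i w_i * B_i - runtime ||.

    basis_images:  (n_lights, H, W, 3), one image per direct illumination source
    runtime_image: (H, W, 3) captured under an unknown mix of those sources
    """
    n = basis_images.shape[0]
    A = basis_images.reshape(n, -1).T            # pixels x lights
    b = runtime_image.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, None)                 # light intensities are non-negative

def relight_virtual_object(vo_basis_renders, weights):
    """Illuminate the VO as the same superposition of its per-light renders."""
    return np.tensordot(weights, vo_basis_renders, axes=1)

rng = np.random.default_rng(4)
scene_basis = rng.random(size=(4, 32, 32, 3))    # preprocessed per-light captures
true_w = np.array([0.2, 1.0, 0.0, 0.5])
runtime = np.tensordot(true_w, scene_basis, axes=1)

w = solve_illumination_weights(scene_basis, runtime)
vo_basis = rng.random(size=(4, 16, 16, 3))       # VO rendered under each light
vo_lit = relight_virtual_object(vo_basis, w)     # matches the runtime lighting mix
```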
-
Publication Number: US20190295272A1
Publication Date: 2019-09-26
Application Number: US15928520
Filing Date: 2018-03-22
Applicant: Adobe Inc.
Inventor: Duygu Ceylan Aksit , Zhili Chen , Jose Ignacio Echevarria Vallespi , Kyle Olszewski
Abstract: Certain embodiments involve synthesizing image content depicting facial hair or other hair features based on orientation data obtained using guidance inputs or other user-provided guidance data. For instance, a graphic manipulation application accesses guidance data identifying a desired hair feature and an appearance exemplar having image data with color information for the desired hair feature. The graphic manipulation application transforms the guidance data into an input orientation map. The graphic manipulation application matches the input orientation map to an exemplar orientation map having a higher resolution than the input orientation map. The graphic manipulation application generates the desired hair feature by applying the color information from the appearance exemplar to the exemplar orientation map. The graphic manipulation application outputs the desired hair feature at a presentation device.
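A simplified sketch of the orientation-map pipeline this abstract outlines: rasterize guidance strokes into a low-resolution orientation map, pick the best-matching higher-resolution exemplar orientation map, and colorize it with the appearance exemplar's color. The matching metric and the cosine-based colorization are crude stand-ins for the described models.

```python
import numpy as np

def strokes_to_orientation(strokes, size=32):
    """Rasterize guidance strokes (arrays of 2D points in [0, 1]) into a
    per-pixel orientation-angle map; untouched pixels stay at 0."""
    omap = np.zeros((size, size))
    for pts in strokes:
        for p0, p1 in zip(pts[:-1], pts[1:]):
            angle = np.arctan2(p1[1] - p0[1], p1[0] - p0[0])
            r, c = int(p0[1] * (size - 1)), int(p0[0] * (size - 1))
            omap[r, c] = angle
    return omap

def match_exemplar(input_map, exemplar_maps):
    """Pick the high-resolution exemplar whose downsampled orientation map
    is closest (L2) to the low-resolution input map."""
    size = input_map.shape[0]
    best, best_err = None, np.inf
    for ex in exemplar_maps:
        step = ex.shape[0] // size
        down = ex[::step, ::step][:size, :size]
        err = np.sum((down - input_map) ** 2)
        if err < best_err:
            best, best_err = ex, err
    return best

def colorize(exemplar_map, appearance_rgb):
    """Crude colorization: modulate the appearance color by orientation."""
    shade = 0.75 + 0.25 * np.cos(exemplar_map)[..., None]
    return shade * np.asarray(appearance_rgb)

strokes = [np.array([[0.2, 0.1], [0.25, 0.5], [0.3, 0.9]])]   # one guidance stroke
input_orient = strokes_to_orientation(strokes)
exemplars = [np.random.rand(128, 128) * np.pi for _ in range(3)]
hair_rgb = colorize(match_exemplar(input_orient, exemplars), [0.35, 0.22, 0.12])
```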