PREDICTING SECONDARY MOTION OF MULTIDIMENSIONAL OBJECTS BASED ON LOCAL PATCH FEATURES

    Publication No.: US20220301262A1

    Publication Date: 2022-09-22

    Application No.: US17206813

    Filing Date: 2021-03-19

    Applicant: ADOBE INC.

    Abstract: Various disclosed embodiments are directed to estimating that a first vertex of a patch will change from a first position to a second position (the second position being at least partially indicative of secondary motion) based at least in part on one or more features of primary motion data, one or more material properties, and constraint data associated with the particular patch. Such estimation can be made for some or all of the patches of an entire volumetric mesh in order to accurately predict the overall secondary motion of an object. This, among other functionality described herein, resolves the inaccuracies, reduces the computer resource consumption, and improves upon the user experience of existing technologies.
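
    A minimal sketch of the per-patch estimate described above (the feature shapes and the linear regressor standing in for the trained model are illustrative assumptions, not the patent's actual formulation):

```python
import numpy as np

def predict_patch_vertex(primary_motion, material_props, constraints, W, b):
    """Estimate the secondary-motion offset of a patch's first vertex.

    Concatenates the per-patch features and applies a regressor; a
    linear map stands in for the patent's trained model.
    """
    features = np.concatenate([primary_motion, material_props, constraints])
    return W @ features + b  # offset from the first to the second position

rng = np.random.default_rng(0)
primary_motion = rng.standard_normal(9)       # e.g. 3 neighbor vertices x 3D motion
material_props = np.array([1e4, 0.45, 1.0])   # e.g. stiffness, Poisson ratio, mass
constraints = np.array([0.0])                 # e.g. 1.0 if the vertex is pinned

W, b = rng.standard_normal((3, 13)), np.zeros(3)  # stand-in "learned" parameters
print(predict_patch_vertex(primary_motion, material_props, constraints, W, b))
```

    Applied to some or all patches of the volumetric mesh, the per-vertex offsets compose into the object's overall secondary motion.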

    FITTING 3D PRIMITIVES TO A HIGH-RESOLUTION POINT CLOUD

    Publication No.: US20220292765A1

    Publication Date: 2022-09-15

    Application No.: US17201783

    Filing Date: 2021-03-15

    Applicant: ADOBE INC.

    Abstract: Embodiments provide systems, methods, and computer storage media for fitting 3D primitives to a 3D point cloud. In an example embodiment, 3D primitives are fit to a 3D point cloud using a global primitive fitting network that evaluates the entire 3D point cloud and a local primitive fitting network that evaluates local patches of the 3D point cloud. The global primitive fitting network regresses a representation of larger (global) primitives that fit the global structure. To identify smaller 3D primitives for regions with fine detail, local patches are constructed by sampling from a pool of points likely to contain fine detail, and the local primitive fitting network regresses a representation of smaller (local) primitives that fit the local structure of each of the local patches. The global and local primitives are merged into a combined, multi-scale set of fitted primitives, and representative primitive parameters are computed for each fitted primitive.
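
    A condensed sketch of the two-pass pipeline (the plane and sphere fitters stand in for the trained global and local networks; the seed pool and patch size `k` are assumptions):

```python
import numpy as np

def fit_plane(pts):
    """Stand-in for the global primitive-fitting network: one plane."""
    centroid = pts.mean(axis=0)
    normal = np.linalg.svd(pts - centroid)[2][-1]
    return [("plane", normal, float(normal @ centroid))]

def fit_sphere(pts):
    """Stand-in for the local primitive-fitting network: one sphere."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    sol, *_ = np.linalg.lstsq(A, (pts ** 2).sum(axis=1), rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(max(sol[3] + center @ center, 0.0)))
    return [("sphere", center, radius)]

def fit_primitives_multiscale(points, detail_seeds, k=64):
    """Global pass over the whole cloud, local pass over detail patches."""
    primitives = fit_plane(points)                 # global (large) primitives
    for seed in detail_seeds:                      # points likely on fine detail
        dists = np.linalg.norm(points - points[seed], axis=1)
        patch = points[np.argsort(dists)[:k]]      # local patch around the seed
        primitives += fit_sphere(patch)            # local (small) primitives
    return primitives                              # merged multi-scale set

points = np.random.default_rng(1).standard_normal((500, 3))
print(fit_primitives_multiscale(points, detail_seeds=[0, 100]))
```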

    Motion Retargeting with Kinematic Constraints

    Publication No.: US20220020199A1

    Publication Date: 2022-01-20

    Application No.: US17486269

    Filing Date: 2021-09-27

    Applicant: Adobe Inc.

    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
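
    One way positioning the object in the environment could yield kinematic constraints is contact detection against a ground plane, sketched below (the y-up convention, `floor_height`, and tolerance `eps` are assumptions):

```python
import numpy as np

def identify_contact_constraints(joint_positions, floor_height=0.0, eps=0.02):
    """Flag frames where an end-effector touches the environment (e.g. floor).

    `joint_positions` is an (n_frames, 3) trajectory of one joint of the
    target object placed in the visual environment; frames within `eps`
    of the floor plane become kinematic (contact) constraints.
    """
    contact = joint_positions[:, 1] <= floor_height + eps  # y-up convention
    return [(frame, joint_positions[frame].copy())
            for frame in np.flatnonzero(contact)]

# Hypothetical foot trajectory that dips to the floor mid-sequence.
traj = np.array([[0.0, 0.30, 0.0], [0.0, 0.05, 0.1], [0.0, 0.01, 0.2],
                 [0.0, 0.00, 0.3], [0.0, 0.20, 0.4]])
print(identify_contact_constraints(traj))  # frames 2 and 3 are constrained
```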

    Motion retargeting with kinematic constraints

    Publication No.: US11170551B1

    Publication Date: 2021-11-09

    Application No.: US16864724

    Filing Date: 2020-05-01

    Applicant: Adobe Inc.

    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
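
    A sketch of the other half of the pipeline, the iterative optimization that fine-tunes conformance to the identified constraints (the quadratic-penalty objective and plain gradient descent are assumptions, not the patent's actual solver):

```python
import numpy as np

def conform_motion(retargeted, constraints, n_iters=500, lr=0.02, w=10.0):
    """Iteratively adjust retargeted motion toward kinematic constraints.

    Minimizes ||x - retargeted||^2 + w * sum of squared constraint
    violations by gradient descent, trading fidelity to the retargeted
    motion against satisfaction of each pinned frame.
    """
    x = retargeted.copy()
    for _ in range(n_iters):
        grad = 2.0 * (x - retargeted)            # stay close to retargeted motion
        for frame, target in constraints:        # pull constrained frames to targets
            grad[frame] += 2.0 * w * (x[frame] - target)
        x -= lr * grad
    return x

motion = np.array([[0.0, 0.30, 0.0], [0.0, 0.05, 0.1],
                   [0.0, 0.03, 0.2], [0.0, 0.20, 0.3]])
pins = [(1, np.array([0.0, 0.0, 0.1])), (2, np.array([0.0, 0.0, 0.2]))]
print(conform_motion(motion, pins))  # frames 1 and 2 pulled onto the floor
```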

    Generating novel views of a three-dimensional object based on a single two-dimensional image

    Publication No.: US11115645B2

    Publication Date: 2021-09-07

    Application No.: US16230872

    Filing Date: 2018-12-21

    Applicant: ADOBE INC.

    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
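
    A sketch of the intermediate-image step (the warp representation and the mean-color stand-in for the completion model are assumptions; the real system uses a trained image-completion model):

```python
import numpy as np

def synthesize_target_view(source, warp, completion_model):
    """Build the intermediate image, then fill the disoccluded region.

    `warp` maps each target pixel to source coordinates, or (-1, -1)
    where the surface is not visible in the source view.
    """
    h, w = warp.shape[:2]
    intermediate = np.zeros((h, w, 3))
    disoccluded = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            sy, sx = warp[y, x]
            if sy < 0:                        # disoccluded region
                disoccluded[y, x] = True
            else:                             # common region: reuse source pixel
                intermediate[y, x] = source[sy, sx]
    intermediate[disoccluded] = completion_model(intermediate, disoccluded)
    return intermediate

# Stand-in completion model: fill holes with the mean visible color.
mean_fill = lambda img, mask: img[~mask].mean(axis=0)

source = np.random.rand(4, 4, 3)
warp = np.full((4, 4, 2), -1)                 # right half: disoccluded
warp[:, :2] = np.stack(np.meshgrid(range(4), range(2), indexing="ij"), axis=-1)
print(synthesize_target_view(source, warp, mean_fill).shape)
```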

    Intuitive editing of three-dimensional models

    Publication No.: US10957117B2

    Publication Date: 2021-03-23

    Application No.: US16204980

    Filing Date: 2018-11-29

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
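
    A sketch of grouping salient features into a feature set and editing the whole set through one handle (the attribute schema and grouping tolerance are assumptions):

```python
from dataclasses import dataclass

@dataclass
class SalientFeature:
    """A salient geometric feature of the model (e.g. a hole or fillet)."""
    kind: str       # feature attribute: type of feature
    radius: float   # feature attribute: characteristic size
    center: list    # position on the model

def group_features(features, size_tol=0.05):
    """Group features whose attributes match into editable feature sets."""
    sets = []
    for f in features:
        for s in sets:
            if s[0].kind == f.kind and abs(s[0].radius - f.radius) <= size_tol:
                s.append(f)
                break
        else:
            sets.append([f])
    return sets

def apply_handle_edit(feature_set, scale):
    """One handle manipulation edits every feature in the set together."""
    for f in feature_set:
        f.radius *= scale

holes = [SalientFeature("hole", 0.50, [0, 0, 0]),
         SalientFeature("hole", 0.52, [1, 0, 0]),
         SalientFeature("fillet", 0.10, [0, 1, 0])]
sets = group_features(holes)
apply_handle_edit(sets[0], 1.2)      # dragging the handle scales both holes
print([f.radius for f in sets[0]])
```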

    Realistically illuminated virtual objects embedded within immersive environments

    Publication No.: US10950038B2

    Publication Date: 2021-03-16

    Application No.: US16800783

    Filing Date: 2020-02-25

    Applicant: ADOBE INC.

    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector. The resulting illumination of the VO matches the second combination of the intensities and surface reflections.
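
    The weighting vector can be read as the least-squares solution of a linear superposition, sketched here (the 8x8 toy images and the random basis are assumptions):

```python
import numpy as np

def solve_illumination_weights(basis_images, runtime_image):
    """Recover the illumination-weighting vector from runtime image data.

    Each basis image corresponds to one direct illumination source
    (captured during preprocessing, with indirect reflections baked in);
    the runtime image is modeled as their weighted superposition, and
    the weights are recovered by least squares.
    """
    B = np.stack([b.ravel() for b in basis_images], axis=1)  # pixels x sources
    w, *_ = np.linalg.lstsq(B, runtime_image.ravel(), rcond=None)
    return w

rng = np.random.default_rng(2)
basis = [rng.random((8, 8)) for _ in range(3)]      # one basis per direct source
true_w = np.array([0.2, 1.0, 0.5])                  # runtime source intensities
runtime = sum(wi * b for wi, b in zip(true_w, basis))

w = solve_illumination_weights(basis, runtime)
print(w)  # ~[0.2, 1.0, 0.5]; the VO is then lit as sum(w[i] * vo_basis[i])
```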
