GENERATING SYNTHESIZED DIGITAL IMAGES UTILIZING A MULTI-RESOLUTION GENERATOR NEURAL NETWORK

    Publication Number: US20230053588A1

    Publication Date: 2023-02-23

    Application Number: US17400426

    Application Date: 2021-08-12

    Applicant: Adobe Inc.

    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images via multi-resolution generator neural networks. The disclosed system extracts multi-resolution features from a scene representation to condition a spatial feature tensor and a latent code to modulate an output of a generator neural network. For example, the disclosed system utilizes a base encoder of the generator neural network to generate a feature set from a semantic label map of a scene. The disclosed system then utilizes a bottom-up encoder to extract multi-resolution features and generate a latent code from the feature set. Furthermore, the disclosed system determines a spatial feature tensor by utilizing a top-down encoder to up-sample and aggregate the multi-resolution features. The disclosed system then utilizes a decoder to generate a synthesized digital image based on the spatial feature tensor and the latent code.
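
    The pipeline described above (a base encoder, a bottom-up encoder producing a latent code, a top-down encoder producing a spatial feature tensor, and a decoder) can be illustrated with a minimal PyTorch-style sketch. The module names, layer counts, and tensor shapes below are assumptions for illustration only, not the patented architecture.

        # Minimal PyTorch-style sketch of a multi-resolution conditioning pipeline.
        # All module shapes, layer counts, and names are illustrative assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BaseEncoder(nn.Module):
            """Maps a semantic label map to an initial feature set."""
            def __init__(self, num_classes, width=64):
                super().__init__()
                self.conv = nn.Conv2d(num_classes, width, 3, padding=1)

            def forward(self, label_map):
                return F.relu(self.conv(label_map))

        class BottomUpEncoder(nn.Module):
            """Extracts multi-resolution features and a global latent code."""
            def __init__(self, width=64, levels=3, latent_dim=256):
                super().__init__()
                self.downs = nn.ModuleList(
                    nn.Conv2d(width, width, 3, stride=2, padding=1) for _ in range(levels)
                )
                self.to_latent = nn.Linear(width, latent_dim)

            def forward(self, feats):
                pyramid = [feats]
                for down in self.downs:
                    feats = F.relu(down(feats))
                    pyramid.append(feats)
                # Global latent code pooled from the coarsest level.
                z = self.to_latent(feats.mean(dim=(2, 3)))
                return pyramid, z

        class TopDownEncoder(nn.Module):
            """Up-samples and aggregates the pyramid into one spatial feature tensor."""
            def forward(self, pyramid):
                target = pyramid[0].shape[-2:]
                upsampled = [F.interpolate(f, size=target, mode="bilinear",
                                           align_corners=False) for f in pyramid]
                return torch.stack(upsampled, dim=0).sum(dim=0)

        class Decoder(nn.Module):
            """Decodes the spatial tensor into RGB, modulated by the latent code."""
            def __init__(self, width=64, latent_dim=256):
                super().__init__()
                self.mod = nn.Linear(latent_dim, width)
                self.to_rgb = nn.Conv2d(width, 3, 3, padding=1)

            def forward(self, spatial, z):
                scale = self.mod(z).unsqueeze(-1).unsqueeze(-1)  # per-channel modulation
                return torch.tanh(self.to_rgb(spatial * scale))

        # Usage on a dummy label-map tensor with 19 semantic channels.
        label_map = torch.randn(1, 19, 128, 128)
        base, bottom_up, top_down, dec = BaseEncoder(19), BottomUpEncoder(), TopDownEncoder(), Decoder()
        pyramid, z = bottom_up(base(label_map))
        image = dec(top_down(pyramid), z)   # (1, 3, 128, 128)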

    SYNTHESIZING DIGITAL IMAGES UTILIZING IMAGE-GUIDED MODEL INVERSION OF AN IMAGE CLASSIFIER

    Publication Number: US20220261972A1

    Publication Date: 2022-08-18

    Application Number: US17178681

    Application Date: 2021-02-18

    Applicant: Adobe Inc.

    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that utilize image-guided model inversion of an image classifier with a discriminator. The disclosed systems utilize a neural network image classifier to encode features of an initial image and a target image. The disclosed system also reduces a feature distance between the features of the initial image and the features of the target image at a plurality of layers of the neural network image classifier by utilizing a feature distance regularizer. Additionally, the disclosed system reduces a patch difference between image patches of the initial image and image patches of the target image by utilizing a patch-based discriminator with a patch consistency regularizer. The disclosed system then generates a synthesized digital image based on the constrained feature set and constrained image patches of the initial image.
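
    The two regularizers described above can be illustrated with a short PyTorch sketch that uses torchvision's VGG16 as a stand-in image classifier. The chosen layers, the toy patch discriminator, and the loss weights are illustrative assumptions, not the disclosed system.

        # Hedged sketch of a feature-distance regularizer across classifier layers
        # and a patch-consistency term, using VGG16 as a stand-in classifier.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F
        from torchvision.models import vgg16

        class FeatureDistanceRegularizer(nn.Module):
            """Sums feature distances between two images at several classifier layers."""
            def __init__(self, layer_ids=(3, 8, 15, 22)):
                super().__init__()
                self.features = vgg16(weights=None).features.eval()
                self.layer_ids = set(layer_ids)
                for p in self.features.parameters():
                    p.requires_grad_(False)

            def forward(self, image, target):
                loss, x, y = 0.0, image, target
                for i, layer in enumerate(self.features):
                    x, y = layer(x), layer(y)
                    if i in self.layer_ids:
                        loss = loss + F.mse_loss(x, y)
                return loss

        def patch_consistency_loss(discriminator, image, target, patch=32):
            """Encourages patches of the optimized image to score like target patches."""
            def patches(img):
                return F.unfold(img, patch, stride=patch)  # (B, C*patch*patch, N)
            real = discriminator(patches(target).transpose(1, 2))
            fake = discriminator(patches(image).transpose(1, 2))
            return F.mse_loss(fake, real.detach())

        # One optimization step over the initial image (hypothetical setup).
        initial = torch.randn(1, 3, 224, 224, requires_grad=True)
        target = torch.randn(1, 3, 224, 224)
        feat_reg = FeatureDistanceRegularizer()
        disc = nn.Sequential(nn.Linear(3 * 32 * 32, 1))   # toy patch discriminator
        opt = torch.optim.Adam([initial], lr=0.01)

        loss = feat_reg(initial, target) + patch_consistency_loss(disc, initial, target)
        loss.backward()
        opt.step()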

    DETAIL-PRESERVING IMAGE EDITING TECHNIQUES

    Publication Number: US20220122307A1

    Publication Date: 2022-04-21

    Application Number: US17468511

    Application Date: 2021-09-07

    Applicant: Adobe Inc.

    Abstract: Systems and methods combine an input image with an edited image generated using a generator neural network to preserve detail from the original image. A computing system provides an input image to a machine learning model to generate a latent space representation of the input image. The system provides the latent space representation to a generator neural network to generate a generated image. The system generates multiple scale representations of the input image, as well as multiple scale representations of the generated image. The system generates a first combined image based on first scale representations of the images and a first value. The system generates a second combined image based on second scale representations of the images and a second value. The system blends the first combined image with the second combined image to generate an output image.
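
    The scale-then-combine-then-blend flow in this abstract can be sketched with simple down-/up-sampled scale representations and linear blending. The scale factors and blend values below are illustrative assumptions, not the disclosed parameters.

        # Minimal sketch: build coarse and fine scale representations of the input
        # and generated images, combine each pair with a value, then blend.
        import torch
        import torch.nn.functional as F

        def scale_representation(img, factor):
            """Down-sample then up-sample to keep only structure coarser than `factor`."""
            small = F.interpolate(img, scale_factor=1 / factor, mode="bilinear",
                                  align_corners=False)
            return F.interpolate(small, size=img.shape[-2:], mode="bilinear",
                                 align_corners=False)

        def combine(input_scale, generated_scale, value):
            """Weighted combination of matching scale representations."""
            return value * input_scale + (1.0 - value) * generated_scale

        input_image = torch.rand(1, 3, 256, 256)      # original photo
        generated_image = torch.rand(1, 3, 256, 256)  # output of the generator network

        # First combined image from coarse-scale representations.
        combined_1 = combine(scale_representation(input_image, 8),
                             scale_representation(generated_image, 8), value=0.3)
        # Second combined image from fine-scale representations.
        combined_2 = combine(scale_representation(input_image, 2),
                             scale_representation(generated_image, 2), value=0.7)
        # Blend the two combined images into the output image.
        output = 0.5 * combined_1 + 0.5 * combined_2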

    Few-shot Image Generation Via Self-Adaptation

    Publication Number: US20220076374A1

    Publication Date: 2022-03-10

    Application Number: US17013332

    Application Date: 2020-09-04

    Applicant: Adobe Inc.

    Abstract: One example method involves operations for receiving a request to transform an input image into a target image. Operations further include providing the input image to a machine learning model trained to adapt images. Training the machine learning model includes accessing training data having a source domain of images and a target domain of images with a target style. Training further includes using a pre-trained generative model to generate an adapted source domain of adapted images having the target style. The adapted source domain is generated by determining a rate of change for parameters of the target style, generating weighted parameters by applying a weight to each of the parameters based on their respective rate of change, and applying the weighted parameters to the source domain. Additionally, operations include using the machine learning model to generate the target image by modifying parameters of the input image using the target style.
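
    The parameter-weighting idea, estimating how quickly each generator parameter changes when fitting the target style and scaling further adaptation by that rate, can be sketched as follows. The toy generator, probe loss, and weighting scheme are assumptions, not the disclosed training procedure.

        # Hedged sketch: probe a few fine-tuning steps to estimate per-parameter
        # rates of change, turn them into weights, and scale adaptation updates.
        import copy
        import torch
        import torch.nn as nn

        generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 3))

        def rate_of_change(model, target_batch, steps=5, lr=1e-3):
            """Approximates each parameter's rate of change from a few probe steps."""
            probe = copy.deepcopy(model)
            opt = torch.optim.SGD(probe.parameters(), lr=lr)
            for _ in range(steps):
                loss = (probe(torch.randn(8, 64)) - target_batch).pow(2).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()
            return [(p_new - p_old).abs() / steps
                    for p_new, p_old in zip(probe.parameters(), model.parameters())]

        target_batch = torch.randn(8, 3)      # stand-in for a few target-style samples
        rates = rate_of_change(generator, target_batch)
        weights = [r / (r.max() + 1e-8) for r in rates]

        # Adapt the generator: fast-changing parameters move freely toward the
        # target style, slow-changing ones stay near their source-domain values.
        opt = torch.optim.SGD(generator.parameters(), lr=1e-3)
        for _ in range(20):
            loss = (generator(torch.randn(8, 64)) - target_batch).pow(2).mean()
            opt.zero_grad()
            loss.backward()
            with torch.no_grad():
                for p, w in zip(generator.parameters(), weights):
                    p.grad.mul_(w)   # weight each parameter's update by its rate of change
            opt.step()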

    Interactive color palette interface for digital painting

    Publication Number: US11087503B2

    Publication Date: 2021-08-10

    Application Number: US16448127

    Application Date: 2019-06-21

    Applicant: Adobe Inc.

    Abstract: An interactive palette interface includes a color picker for digital paint applications. A user can create, modify and select colors for creating digital artwork using the interactive palette interface. The interactive palette interface includes a mixing dish in which colors can be added, removed and rearranged to blend together to create gradients and gamuts. The mixing dish is a digital simulation of a physical palette on which an artist adds and mixes various colors of paint before applying the paint to the artwork. Color blobs, which are logical groups of pixels in the mixing dish, can be spatially rearranged and scaled by a user to create and explore different combinations of colors. The color, position and size of each blob influences the color of other pixels in the mixing dish. Edits to the mixing dish are non-destructive, and an infinite history of color combinations is preserved.
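
    One way to picture how color blobs could influence every pixel of the mixing dish is a distance-weighted blend of blob colors, so moving or rescaling a blob smoothly reshapes the surrounding gradient. The Gaussian falloff and RGB blending below are assumptions for illustration, not the product's paint model.

        # Illustrative sketch: each dish pixel takes a distance-weighted mix of the
        # blob colors, so blob position and size shape the resulting gradients.
        import numpy as np

        def render_mixing_dish(blobs, size=256):
            """blobs: list of (x, y, radius, (r, g, b)) in pixel / 0-1 color units."""
            ys, xs = np.mgrid[0:size, 0:size]
            weights = np.zeros((len(blobs), size, size))
            colors = np.zeros((len(blobs), 3))
            for i, (bx, by, radius, rgb) in enumerate(blobs):
                dist2 = (xs - bx) ** 2 + (ys - by) ** 2
                weights[i] = np.exp(-dist2 / (2.0 * radius ** 2))   # Gaussian falloff
                colors[i] = rgb
            weights /= weights.sum(axis=0, keepdims=True) + 1e-8
            # Each pixel is the normalized weighted mix of all blob colors.
            return np.einsum("bhw,bc->hwc", weights, colors)

        dish = render_mixing_dish([
            (64, 64, 40, (0.9, 0.1, 0.1)),    # red blob
            (192, 128, 60, (0.1, 0.2, 0.8)),  # blue blob
            (128, 200, 30, (0.9, 0.8, 0.2)),  # yellow blob
        ])
        print(dish.shape)  # (256, 256, 3)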

    Generating realistic animations for digital animation characters utilizing a generative adversarial network and a hip motion prediction network

    Publication Number: US10964084B2

    Publication Date: 2021-03-30

    Application Number: US16451813

    Application Date: 2019-06-25

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a digital animation of a digital animation character by utilizing a generative adversarial network and a hip motion prediction network. For example, the disclosed systems can utilize an unconditional generative adversarial network to generate a sequence of local poses of a digital animation character based on an input of a random code vector. The disclosed systems can also utilize a conditional generative adversarial network to generate a sequence of local poses based on an input of a set of keyframes. Based on the sequence of local poses, the disclosed systems can utilize a hip motion prediction network to generate a sequence of global poses based on hip velocities. In addition, the disclosed systems can generate an animation of a digital animation character based on the sequence of global poses.
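
    The two-stage flow, a generator mapping a random code vector to a sequence of local poses and a hip motion network predicting hip velocities that are integrated into global poses, can be sketched as follows. The network sizes and pose layout (frames x joints x 3) are illustrative assumptions, not the disclosed networks.

        # Minimal sketch: random code -> local pose sequence -> per-frame hip
        # velocities -> integrated hip positions -> global pose sequence.
        import torch
        import torch.nn as nn

        FRAMES, JOINTS, CODE = 60, 24, 128

        pose_generator = nn.Sequential(          # random code vector -> local poses
            nn.Linear(CODE, 512), nn.ReLU(),
            nn.Linear(512, FRAMES * JOINTS * 3),
        )
        hip_predictor = nn.GRU(input_size=JOINTS * 3, hidden_size=64, batch_first=True)
        hip_head = nn.Linear(64, 3)              # per-frame hip velocity (x, y, z)

        z = torch.randn(1, CODE)                                    # random code vector
        local_poses = pose_generator(z).view(1, FRAMES, JOINTS, 3)  # hip-relative poses

        hidden, _ = hip_predictor(local_poses.view(1, FRAMES, JOINTS * 3))
        hip_velocities = hip_head(hidden)                           # (1, FRAMES, 3)
        hip_positions = torch.cumsum(hip_velocities, dim=1)         # integrate velocity

        # Global poses: shift each frame's local pose by its predicted hip position.
        global_poses = local_poses + hip_positions.unsqueeze(2)
        print(global_poses.shape)  # torch.Size([1, 60, 24, 3])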
