Texture interpolation using neural networks

    Publication Number: US10818043B1

    Publication Date: 2020-10-27

    Application Number: US16392968

    Application Date: 2019-04-24

    Applicant: Adobe Inc.

    Abstract: An example method for neural network based interpolation of image textures includes training a global encoder network to generate global latent vectors based on training texture images, and training a local encoder network to generate local latent tensors based on the training texture images. The example method further includes interpolating between the global latent vectors associated with each set of training images, and interpolating between the local latent tensors associated with each set of training images. The example method further includes training a decoder network to generate reconstructions of the training texture images and to generate an interpolated texture based on the interpolated global latent vectors and the interpolated local latent tensors. The training of the encoder and decoder networks is based on a minimization of a loss function of the reconstructions and a minimization of a loss function of the interpolated texture.
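
    A minimal PyTorch sketch of the interpolation step this abstract describes: a global encoder produces a latent vector, a local encoder produces a spatial latent tensor, the two latents of a pair of textures are linearly interpolated, and a decoder renders the interpolated texture. The module names, layer sizes, and latent dimensions below are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class GlobalEncoder(nn.Module):
    """Maps a texture image to a single global latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, latent_dim))
    def forward(self, x):
        return self.net(x)

class LocalEncoder(nn.Module):
    """Maps a texture image to a spatial (local) latent tensor."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a texture from a global vector and a local tensor."""
    def __init__(self, latent_dim=128, channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(channels + latent_dim, 64, 3, padding=1)
        self.up = nn.Sequential(
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))
    def forward(self, z_global, z_local):
        b, _, h, w = z_local.shape
        # Broadcast the global vector over the spatial grid before fusing.
        z = z_global.view(b, -1, 1, 1).expand(b, z_global.shape[1], h, w)
        return self.up(self.fuse(torch.cat([z_local, z], dim=1)))

# Linear interpolation between the latents of two texture images.
img_a, img_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
g_enc, l_enc, dec = GlobalEncoder(), LocalEncoder(), Decoder()
alpha = 0.5
z_g = (1 - alpha) * g_enc(img_a) + alpha * g_enc(img_b)
z_l = (1 - alpha) * l_enc(img_a) + alpha * l_enc(img_b)
interpolated = dec(z_g, z_l)   # candidate interpolated texture
```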

    GENERATING MODIFIED DIGITAL IMAGES USING DEEP VISUAL GUIDED PATCH MATCH MODELS FOR IMAGE INPAINTING

    Publication Number: US20250139748A1

    Publication Date: 2025-05-01

    Application Number: US19011235

    Application Date: 2025-01-06

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
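
    A minimal NumPy sketch of the guided matching idea in this abstract: patch similarity mixes color distance with distance in a guidance map (for example a depth or segmentation map), so replacement pixels are drawn from regions whose structure agrees with the hole. The weighting scheme and brute-force candidate search are illustrative assumptions, not the disclosed patch match implementation.

```python
import numpy as np

def guided_patch_distance(image, guide, p, q, size=7, weight=0.5):
    """Distance between the patch at p (in the hole) and q (a candidate source)."""
    r = size // 2
    img_a = image[p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1]
    img_b = image[q[0]-r:q[0]+r+1, q[1]-r:q[1]+r+1]
    gde_a = guide[p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1]
    gde_b = guide[q[0]-r:q[0]+r+1, q[1]-r:q[1]+r+1]
    color_term = np.mean((img_a - img_b) ** 2)   # appearance similarity
    guide_term = np.mean((gde_a - gde_b) ** 2)   # guidance-map similarity
    return (1 - weight) * color_term + weight * guide_term

def best_source_pixel(image, guide, hole_pixel, candidates, size=7):
    """Pick the candidate source location with the smallest guided distance."""
    dists = [guided_patch_distance(image, guide, hole_pixel, c, size) for c in candidates]
    return candidates[int(np.argmin(dists))]

# Toy usage: a random image with a random single-channel guidance map.
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
guide = rng.random((64, 64, 1))
candidates = [(10, 10), (30, 40), (50, 20)]
print(best_source_pixel(image, guide, hole_pixel=(32, 32), candidates=candidates))
```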

    Image inpainting with geometric and photometric transformations

    Publication Number: US12249051B2

    Publication Date: 2025-03-11

    Application Number: US17651435

    Application Date: 2022-02-17

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for filling or otherwise replacing a target region of a primary image with a corresponding region of an auxiliary image. The filling or replacing can be done with an overlay (no subtractive process need be run on the primary image). Because the primary and auxiliary images may not be aligned, both geometric and photometric transformations are applied to the primary and/or auxiliary images. For instance, a geometric transformation of the auxiliary image is performed, to better align features of the auxiliary image with corresponding features of the primary image. Also, a photometric transformation of the auxiliary image is performed, to better match color of one or more pixels of the auxiliary image with color of corresponding one or more pixels of the primary image. The corresponding region of the transformed auxiliary image is then copied and overlaid on the target region of the primary image.
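
    A minimal OpenCV sketch of the geometric transformation step described here: estimate a homography from matched keypoints and warp the auxiliary image into the primary image's frame so their features align. The feature choice (ORB) and RANSAC settings are illustrative assumptions rather than the patented pipeline.

```python
import cv2
import numpy as np

def align_auxiliary(primary, auxiliary, min_matches=10):
    """Warp `auxiliary` so its features line up with `primary`."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(primary, None)
    kp2, des2 = orb.detectAndCompute(auxiliary, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise ValueError("not enough matches to estimate a homography")
    # Source points come from the auxiliary image, destinations from the primary.
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = primary.shape[:2]
    return cv2.warpPerspective(auxiliary, H, (w, h))
```

    A photometric-matching and overlay sketch for the same family appears after the corresponding published application (US20220172331A1) below.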

    DIGITAL IMAGE INPAINTING UTILIZING GLOBAL AND LOCAL MODULATION LAYERS OF AN INPAINTING NEURAL NETWORK

    Publication Number: US20250054116A1

    Publication Date: 2025-02-13

    Application Number: US18929330

    Application Date: 2024-10-28

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. In one or more decoder layers, the disclosed systems start with a global code modulation that captures global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and allow the network encoder to better capture global structure.
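
    A minimal PyTorch sketch of the cascaded modulation idea described here: a decoder layer first modulates its features with a global code (capturing global structure), then applies a second, spatially varying modulation that refines local detail. The layer sizes and the exact form of the modulation are illustrative assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class CascadedModulationLayer(nn.Module):
    def __init__(self, channels=64, global_dim=128):
        super().__init__()
        # Global modulation: per-channel scale and shift from the global code.
        self.to_scale = nn.Linear(global_dim, channels)
        self.to_shift = nn.Linear(global_dim, channels)
        # Local modulation: spatial scale/shift predicted from the features
        # themselves, refining the globally modulated prediction.
        self.local_mod = nn.Conv2d(channels, 2 * channels, 3, padding=1)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat, global_code):
        b, c, _, _ = feat.shape
        scale = self.to_scale(global_code).view(b, c, 1, 1)
        shift = self.to_shift(global_code).view(b, c, 1, 1)
        global_out = torch.relu(self.conv(feat * (1 + scale) + shift))
        local_scale, local_shift = self.local_mod(global_out).chunk(2, dim=1)
        return global_out * (1 + local_scale) + local_shift

layer = CascadedModulationLayer()
out = layer(torch.rand(1, 64, 32, 32), torch.rand(1, 128))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```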

    GENERATING ITERATIVE INPAINTING DIGITAL IMAGES VIA NEURAL NETWORK BASED PERCEPTUAL ARTIFACT SEGMENTATIONS

    Publication Number: US20240046429A1

    Publication Date: 2024-02-08

    Application Number: US17815418

    Application Date: 2022-07-27

    Applicant: Adobe Inc.

    CPC classification number: G06T5/005 G06T7/11 G06T2207/20084

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint portions of a digital image. The disclosed system utilizes the artifact segmentation machine-learning model to detect perceptual artifacts in the inpainted portions for additional inpainting iterations.
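
    A minimal sketch of the iterative loop this abstract describes: inpaint the masked region, run an artifact segmentation model on the result, and reuse the detected artifact regions as the mask for the next inpainting pass. The `inpaint_model` and `artifact_segmenter` callables and the stopping threshold are hypothetical stand-ins for the models the abstract refers to.

```python
import numpy as np

def iterative_inpaint(image, mask, inpaint_model, artifact_segmenter,
                      max_iters=3, min_artifact_pixels=50):
    """Repeat inpainting until few perceptual-artifact pixels remain."""
    current, current_mask = image, mask
    for _ in range(max_iters):
        current = inpaint_model(current, current_mask)
        artifact_mask = artifact_segmenter(current, current_mask)
        if artifact_mask.sum() < min_artifact_pixels:
            break
        # Only the regions flagged as artifacts are re-inpainted next pass.
        current_mask = artifact_mask
    return current

# Toy stand-ins so the sketch executes; real models would replace these.
blur_fill = lambda img, m: np.where(m[..., None] > 0, img.mean(axis=(0, 1)), img)
no_artifacts = lambda img, m: np.zeros(m.shape, dtype=np.uint8)
result = iterative_inpaint(np.random.rand(64, 64, 3),
                           np.ones((64, 64), dtype=np.uint8),
                           blur_fill, no_artifacts)
```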

    GENERATING NEURAL NETWORK BASED PERCEPTUAL ARTIFACT SEGMENTATIONS IN MODIFIED PORTIONS OF A DIGITAL IMAGE

    Publication Number: US20240037717A1

    Publication Date: 2024-02-01

    Application Number: US17815409

    Application Date: 2022-07-27

    Applicant: Adobe Inc.

    CPC classification number: G06T5/005 G06T7/194 G06T2207/20081 G06T2207/20084

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint portions of a digital image. The disclosed system utilizes the artifact segmentation machine-learning model to detect perceptual artifacts in the inpainted portions for additional inpainting iterations.
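
    Complementing the iterative-inpainting loop sketched for the related publication above, here is a minimal PyTorch sketch of the training signal this abstract mentions: a segmentation network predicts a per-pixel artifact map for a synthetically modified image and is supervised with a labeled artifact region. The tiny network, input layout, and binary cross-entropy loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

segmenter = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),   # image + modified-region mask
    nn.Conv2d(16, 1, 3, padding=1))              # per-pixel artifact logit
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(segmenter.parameters(), lr=1e-4)

# One training step on a synthetic example with a labeled artifact region.
image = torch.rand(1, 3, 64, 64)                   # synthetically modified image
modified_mask = torch.zeros(1, 1, 64, 64)
modified_mask[..., 16:48, 16:48] = 1               # where the image was modified
artifact_label = torch.zeros(1, 1, 64, 64)
artifact_label[..., 20:30, 20:30] = 1              # labeled perceptual artifact

optimizer.zero_grad()
logits = segmenter(torch.cat([image, modified_mask], dim=1))
loss = criterion(logits, artifact_label)
loss.backward()
optimizer.step()
```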

    GENERATING MODIFIED DIGITAL IMAGES USING DEEP VISUAL GUIDED PATCH MATCH MODELS FOR IMAGE INPAINTING

    Publication Number: US20220292650A1

    Publication Date: 2022-09-15

    Application Number: US17202019

    Application Date: 2021-03-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
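
    Complementing the guided-distance sketch shown for the related publication above, here is a minimal sketch of one kind of guidance map named in this abstract: a structure (edge) map computed from the digital image, which a patch match step can consult when choosing replacement pixels. Using Canny edges as the structure signal is an illustrative assumption, not the disclosed visual guide algorithm.

```python
import cv2
import numpy as np

def structure_guidance_map(image_bgr, low=50, high=150, blur_ksize=5):
    """Return a [0, 1] float map highlighting image structure (edges)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    edges = cv2.Canny(gray, low, high)
    return edges.astype(np.float32) / 255.0

# The resulting map can serve as the `guide` input of a guided patch match
# (see the guided_patch_distance sketch earlier in this listing).
image = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
guide = structure_guidance_map(image)[..., None]   # add a channel axis
```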

    IMAGE INPAINTING WITH GEOMETRIC AND PHOTOMETRIC TRANSFORMATIONS

    Publication Number: US20220172331A1

    Publication Date: 2022-06-02

    Application Number: US17651435

    Application Date: 2022-02-17

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for filling or otherwise replacing a target region of a primary image with a corresponding region of an auxiliary image. The filling or replacing can be done with an overlay (no subtractive process need be run on the primary image). Because the primary and auxiliary images may not be aligned, both geometric and photometric transformations are applied to the primary and/or auxiliary images. For instance, a geometric transformation of the auxiliary image is performed, to better align features of the auxiliary image with corresponding features of the primary image. Also, a photometric transformation of the auxiliary image is performed, to better match color of one or more pixels of the auxiliary image with color of corresponding one or more pixels of the primary image. The corresponding region of the transformed auxiliary image is then copied and overlaid on the target region of the primary image.
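
    Complementing the homography-alignment sketch shown for the granted patent above, here is a minimal NumPy sketch of the photometric step and the overlay described in this abstract: match the auxiliary image's per-channel mean and standard deviation to the primary image, then copy the corresponding region over the target region. The simple gain/offset color transfer is an illustrative assumption, not the patented photometric transformation.

```python
import numpy as np

def photometric_match(aux, primary):
    """Shift/scale each channel of `aux` toward `primary`'s color statistics."""
    aux = aux.astype(np.float32)
    primary = primary.astype(np.float32)
    out = np.empty_like(aux)
    for c in range(3):
        gain = primary[..., c].std() / (aux[..., c].std() + 1e-6)
        out[..., c] = (aux[..., c] - aux[..., c].mean()) * gain + primary[..., c].mean()
    return np.clip(out, 0, 255).astype(np.uint8)

def overlay_region(primary, aux_aligned, target_mask):
    """Overlay auxiliary pixels onto the primary image where the mask is set."""
    result = primary.copy()
    result[target_mask > 0] = aux_aligned[target_mask > 0]
    return result

primary = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
aux_aligned = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 20:40] = 1                              # target region to replace
composite = overlay_region(primary, photometric_match(aux_aligned, primary), mask)
```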

    Labeling Techniques for a Modified Panoptic Labeling Neural Network

    Publication Number: US20210357684A1

    Publication Date: 2021-11-18

    Application Number: US15930539

    Application Date: 2020-05-13

    Applicant: Adobe Inc.

    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
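
    A minimal NumPy sketch of the training-data step this abstract describes: a modified training image is built by combining a training image with a mask instance taken from a separately annotated image. The blending rule and array shapes are illustrative assumptions; the label-probability estimation is left to the modified PLNN itself.

```python
import numpy as np

def combine_with_mask_instance(training_image, annotated_image, instance_mask):
    """Paste the pixels under `instance_mask` from the annotated image onto the
    training image, producing a modified training image and its mask."""
    modified = training_image.copy()
    modified[instance_mask > 0] = annotated_image[instance_mask > 0]
    return modified, instance_mask

rng = np.random.default_rng(1)
training_image = (rng.random((128, 128, 3)) * 255).astype(np.uint8)
annotated_image = (rng.random((128, 128, 3)) * 255).astype(np.uint8)
instance_mask = np.zeros((128, 128), dtype=np.uint8)
instance_mask[40:90, 30:80] = 1     # one object instance from the annotation
modified_image, mask = combine_with_mask_instance(
    training_image, annotated_image, instance_mask)
```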
