END-TO-END RELIGHTING OF A FOREGROUND OBJECT OF AN IMAGE

    Publication No.: US20210295571A1

    Publication Date: 2021-09-23

    Application No.: US16823092

    Filing Date: 2020-03-18

    Applicant: Adobe Inc.

    Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in the image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport decoder to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with the outputs of the residual parts of each decoder and fed to a light coefficients block, which predicts the illumination coefficients for the image. The light transport matrix and the predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The resulting image is scaled by the albedo map to produce the relit image, which can then be refined to remove noise.
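
    Below is a minimal numerical sketch of the final compositing step described in the abstract: the predicted light transport matrix is multiplied by the predicted illumination coefficients to form a shading map, which then scales the albedo map to produce the relit image. The array shapes and the use of nine spherical-harmonic lighting coefficients are assumptions for illustration, not details taken from the patent.

        import numpy as np

        H, W = 4, 4                    # toy image size
        n_coeffs = 9                   # assumed: second-order spherical harmonics

        albedo = np.random.rand(H, W, 3)             # predicted albedo map
        transport = np.random.rand(H * W, n_coeffs)  # predicted light transport matrix
        light = np.random.rand(n_coeffs)             # predicted illumination coefficients

        shading = (transport @ light).reshape(H, W, 1)  # per-pixel shading map
        relit = albedo * shading                        # relit image before refinement
        print(relit.shape)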

    Joint Visual-Semantic Embedding and Grounding via Multi-Task Training for Image Searching

    Publication No.: US20210271707A1

    Publication Date: 2021-09-02

    Application No.: US16803480

    Filing Date: 2020-02-27

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve a method for generating a search result. The method includes one or more processing devices performing operations that include receiving, by a joint embedding model trained to generate an image result, a query having a text input. Training the joint embedding model includes accessing a set of images and textual information. Training further includes encoding the images into image feature vectors based on spatial features. Further, training includes encoding the textual information into textual feature vectors based on semantic information. Training further includes generating a set of image-text pairs based on matches between the image feature vectors and the textual feature vectors. Further, training includes generating a visual grounding dataset based on spatial information. Training further includes generating a set of visual-semantic joint embeddings by grounding the image-text pairs with the visual grounding dataset. Additionally, the operations include generating, by the joint embedding model, an image result for display based on the text input.
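
    The pairing step can be illustrated with a small sketch: image feature vectors and textual feature vectors in a shared embedding space are matched by cosine similarity, and the best-matching image is returned for a text query. The similarity measure and vector dimensions are assumptions for illustration, not the patented training procedure.

        import numpy as np

        def cosine_sim(a, b):
            a = a / np.linalg.norm(a, axis=1, keepdims=True)
            b = b / np.linalg.norm(b, axis=1, keepdims=True)
            return a @ b.T

        image_vecs = np.random.rand(5, 128)  # encoded image feature vectors
        text_vecs = np.random.rand(3, 128)   # encoded textual feature vectors

        scores = cosine_sim(text_vecs, image_vecs)   # text-to-image similarity
        best_image_per_text = scores.argmax(axis=1)  # image result per text query
        print(best_image_per_text)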

    AUTOMATIC IMAGE CROPPING BASED ON ENSEMBLES OF REGIONS OF INTEREST

    Publication No.: US20210256656A1

    Publication Date: 2021-08-19

    Application No.: US17306249

    Filing Date: 2021-05-03

    Applicant: ADOBE INC.

    Inventor: Jianming Zhang

    Abstract: A crop generation system determines multiple types of saliency data and multiple crop candidates for an image. Multiple region of interest (“ROI”) ensembles are generated, indicating locations of the salient content of the image. For each crop candidate, the crop generation system calculates an evaluation score. A set of crop candidates is selected based on the evaluation scores.
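
    As a hypothetical illustration of the scoring step, each candidate crop can be evaluated by how much of the image's salient content it covers, and the top-scoring candidates kept. The coverage-based score below is an assumed stand-in for the patented evaluation function.

        import numpy as np

        saliency = np.random.rand(100, 100)  # combined saliency map
        candidates = [(10, 10, 80, 80), (0, 0, 50, 50), (25, 25, 60, 60)]  # x, y, w, h

        def crop_score(s, crop):
            x, y, w, h = crop
            return s[y:y + h, x:x + w].sum() / s.sum()

        scores = [crop_score(saliency, c) for c in candidates]
        selected = sorted(zip(scores, candidates), reverse=True)[:2]  # best-scoring crops
        print(selected)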

    Depth-of-field blur effects generating techniques

    Publication No.: US10810707B2

    Publication Date: 2020-10-20

    Application No.: US16204675

    Filing Date: 2018-11-29

    Applicant: Adobe Inc.

    Abstract: Techniques for generating depth-of-field blur effects on digital images by a digital effect generation system of a computing device are described. The digital effect generation system is configured to generate depth-of-field blur effects on objects based on a focal depth value that defines a depth plane in the digital image and an aperture value that defines an intensity of the blur effect applied to the digital image. The digital effect generation system is also configured to improve the accuracy with which depth-of-field blur effects are generated by performing up-sampling operations and implementing a unique focal loss algorithm that effectively minimizes the focal loss within digital images.
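
    A minimal sketch of the core idea: pixels farther from the depth plane defined by the focal depth value receive stronger blur, scaled by the aperture value. The linear relationship below is an assumption used only to illustrate how the two parameters interact, not the patented algorithm.

        import numpy as np

        def blur_radius(depth_map, focal_depth, aperture):
            # blur grows with distance from the focal plane, scaled by aperture
            return aperture * np.abs(depth_map - focal_depth)

        depth = np.random.rand(64, 64)  # normalized depth map
        radii = blur_radius(depth, focal_depth=0.5, aperture=4.0)
        print(radii.min(), radii.max())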

    Depth-of-Field Blur Effects Generating Techniques

    Publication No.: US20200175651A1

    Publication Date: 2020-06-04

    Application No.: US16204675

    Filing Date: 2018-11-29

    Applicant: Adobe Inc.

    Abstract: Techniques for generating depth-of-field blur effects on digital images by a digital effect generation system of a computing device are described. The digital effect generation system is configured to generate depth-of-field blur effects on objects based on a focal depth value that defines a depth plane in the digital image and an aperture value that defines an intensity of the blur effect applied to the digital image. The digital effect generation system is also configured to improve the accuracy with which depth-of-field blur effects are generated by performing up-sampling operations and implementing a unique focal loss algorithm that effectively minimizes the focal loss within digital images.

    Accurate tag relevance prediction for image search

    Publication No.: US10664719B2

    Publication Date: 2020-05-26

    Application No.: US15043174

    Filing Date: 2016-02-12

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention provide an automated image tagging system that can predict a set of tags, along with relevance scores, that can be used for keyword-based image retrieval, image tag proposal, and image tag auto-completion based on user input. Initially, during training, a clustering technique is utilized to reduce cluster imbalance in the data that is input into a convolutional neural network (CNN) for training feature data. In embodiments, the clustering technique can also be utilized to compute data point similarity that can be utilized for tag propagation (to tag untagged images). During testing, a diversity based voting framework is utilized to overcome user tagging biases. In some embodiments, bigram re-weighting can down-weight a keyword that is likely to be part of a bigram based on a predicted tag set.
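
    The tag propagation idea can be sketched as nearest-neighbor voting: an untagged image collects tags from its most similar tagged images, weighted by similarity. This simplified vote omits the diversity-based weighting and bigram re-weighting described in the abstract; the data and similarity measure are assumptions.

        import numpy as np

        features = np.random.rand(6, 32)  # image feature vectors
        tags = [{"dog"}, {"dog", "park"}, {"cat"}, {"cat", "sofa"}, {"park"}, set()]
        query = 5                         # index of the untagged image

        sims = features @ features[query]
        neighbors = [i for i in np.argsort(-sims) if i != query][:3]

        votes = {}
        for n in neighbors:
            for t in tags[n]:
                votes[t] = votes.get(t, 0.0) + float(sims[n])  # similarity-weighted vote
        print(sorted(votes.items(), key=lambda kv: -kv[1]))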

    Guided image composition on mobile devices

    Publication No.: US10516830B2

    Publication Date: 2019-12-24

    Application No.: US15730614

    Filing Date: 2017-10-11

    Applicant: Adobe Inc.

    Abstract: Various embodiments describe facilitating real-time crop suggestions for an image. In an example, an image processing application executed on a device receives image data corresponding to a field of view of a camera of the device. The image processing application renders a major view on a display of the device in a preview mode. The major view presents a previewed image based on the image data. The image processing application receives a composition score of a cropped image from a deep-learning system. The image processing application renders a sub-view presenting the cropped image based on the composition score in the preview mode. Based on a user interaction, the image processing application renders the cropped image in the major view along with the sub-view in the preview mode.
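
    A small sketch of the preview flow: candidate crops of the current frame are scored and the highest-scoring crop is suggested in a sub-view. The scoring function below is a stub standing in for the deep-learning system, and the candidate crop windows are assumptions for illustration.

        import numpy as np

        def composition_score(crop):
            return float(crop.mean())  # stub for the learned composition model

        frame = np.random.rand(480, 640, 3)  # previewed camera frame
        candidates = {
            "center": frame[60:420, 80:560],
            "left": frame[:, :480],
            "right": frame[:, 160:],
        }
        best = max(candidates, key=lambda k: composition_score(candidates[k]))
        print("suggested crop:", best)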

    Image cropping suggestion using multiple saliency maps

    Publication No.: US10346951B2

    Publication Date: 2019-07-09

    Application No.: US15448138

    Filing Date: 2017-03-02

    Applicant: Adobe Inc.

    Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
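
    The grouping step can be illustrated with a greedy sketch: a cropping joins an existing group if it overlaps that group's first member by at least a threshold, otherwise it starts a new group. The IoU-based distance and greedy assignment are assumptions, not the patented clustering method.

        def iou(a, b):
            ax, ay, aw, ah = a
            bx, by, bw, bh = b
            x1, y1 = max(ax, bx), max(ay, by)
            x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            return inter / float(aw * ah + bw * bh - inter)

        crops = [(0, 0, 50, 50), (2, 2, 50, 50), (40, 40, 50, 50)]  # x, y, w, h
        threshold = 0.5
        groups = []
        for c in crops:
            for g in groups:
                if iou(c, g[0]) >= threshold:  # similar enough to join this group
                    g.append(c)
                    break
            else:
                groups.append([c])
        print(groups)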
