Eye texture inpainting
    Granted Patent

    Publication Number: US11468544B2

    Publication Date: 2022-10-11

    Application Number: US17355687

    Filing Date: 2021-06-23

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
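The abstract's pipeline (find an eye region, extract the iris, segment the sclera, compose a texture) can be sketched as follows. This is a minimal illustrative sketch, not the patented method: the iris mask is assumed to be given, and a crude brightness/saturation heuristic stands in for a learned sclera segmenter.

```python
import numpy as np

def build_eye_texture(eye_region, iris_mask):
    """Sketch: extract the iris area, segment the sclera, and generate
    an eye texture from the two. `eye_region` is an HxWx3 float image in
    [0, 1]; `iris_mask` is a boolean HxW mask (assumed given here)."""
    # Extract the iris area: keep iris pixels, zero elsewhere.
    iris_area = np.where(iris_mask[..., None], eye_region, 0.0)
    # Segment the sclera: bright, low-saturation pixels outside the iris
    # (a color heuristic standing in for a learned segmenter).
    brightness = eye_region.mean(axis=-1)
    saturation = eye_region.max(axis=-1) - eye_region.min(axis=-1)
    sclera_mask = (~iris_mask) & (brightness > 0.6) & (saturation < 0.2)
    sclera_area = np.where(sclera_mask[..., None], eye_region, 0.0)
    # Generate the texture; pixels covered by neither area fall back to
    # the mean sclera color as naive inpainting.
    if sclera_mask.any():
        fill = eye_region[sclera_mask].mean(axis=0)
    else:
        fill = np.full(3, 0.9)
    covered = (iris_mask | sclera_mask)[..., None]
    texture = np.where(covered, iris_area + sclera_area, fill)
    return texture, sclera_mask
```

A real system would run this per frame as the images are captured, with the iris and sclera masks coming from detection and segmentation models rather than fixed thresholds.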

    MOTION REPRESENTATIONS FOR ARTICULATED ANIMATION

    Publication Number: US20210407163A1

    Publication Date: 2021-12-30

    Application Number: US17364218

    Filing Date: 2021-06-30

    Applicant: Snap Inc.

    Abstract: Systems and methods herein describe novel motion representations for animating articulated objects consisting of distinct parts. The described systems and methods access source image data, identify driving image data to modify image feature data in the source image data, and generate, using an image transformation neural network, modified source image data comprising a plurality of modified source images depicting modified versions of the image feature data. The image transformation neural network is trained to identify, for each image in the source image data, a driving image from the driving image data; the identified driving image is implemented by the image transformation neural network to modify a corresponding source image using motion estimation differences between the identified driving image and the corresponding source image. The systems and methods store the modified source image data.
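The core idea (estimate a motion difference between a driving image and a source image, then apply it to modify the source) can be illustrated with a toy sketch. Here a single argmax "keypoint" and an `np.roll` warp are hypothetical stand-ins for the trained image transformation neural network.

```python
import numpy as np

def modify_source(source, driving):
    """Sketch: estimate a motion difference between a driving image and a
    source image, then warp the source by it. A learned transformation
    network is replaced by a brightest-pixel shift (illustrative only)."""
    # One "keypoint" per image: the location of the brightest pixel.
    src_kp = np.unravel_index(np.argmax(source), source.shape)
    drv_kp = np.unravel_index(np.argmax(driving), driving.shape)
    # Motion estimation difference between driving and source.
    dy, dx = drv_kp[0] - src_kp[0], drv_kp[1] - src_kp[1]
    # Apply the displacement to modify the source image.
    return np.roll(source, shift=(dy, dx), axis=(0, 1))

def animate(source_frames, driving_frames):
    """For each source image, use its identified driving image (index-matched
    here) and store the modified source image data."""
    return [modify_source(s, d) for s, d in zip(source_frames, driving_frames)]
```

In the described system, the per-part motion representations and the pairing of driving images to source images are learned, rather than index-matched as in this sketch.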

    EYE TEXTURE INPAINTING
    Patent Application

    Publication Number: US20210319540A1

    Publication Date: 2021-10-14

    Application Number: US17355687

    Filing Date: 2021-06-23

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.

    Generating virtual hairstyle using latent space projectors

    Publication Number: US12277639B2

    Publication Date: 2025-04-15

    Application Number: US18149007

    Filing Date: 2022-12-30

    Applicant: Snap Inc.

    Abstract: Embodiments enable virtual hair generation. The virtual hair generation can be performed by generating a first image of a face using a GAN model, applying 3D virtual hair on the first image to generate a second image with 3D virtual hair, projecting the second image with 3D virtual hair into a GAN latent space to generate a third image with virtual hair, performing a blend of the virtual hair with the first image of the face to generate a new image with new virtual hair that corresponds to the 3D virtual hair, training a neural network that receives the second image with the 3D virtual hair and provides an output image with virtual hair, and generating, using the trained neural network, a particular output image with hair based on a particular input image with 3D virtual hair.
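The projection and blending steps can be sketched in miniature. This assumes a toy linear generator G(z) = W @ z so that latent-space projection reduces to gradient descent on a reconstruction loss; a real system would project into a StyleGAN-like latent space. All names are illustrative, not from the patent.

```python
import numpy as np

def project_to_latent(generator_w, target, steps=200, lr=0.1):
    """Sketch of projecting an image into a GAN latent space, assuming a
    toy linear generator G(z) = W @ z: minimize 0.5 * ||G(z) - target||^2
    by gradient descent on z."""
    z = np.zeros(generator_w.shape[1])
    for _ in range(steps):
        residual = generator_w @ z - target
        z -= lr * generator_w.T @ residual  # gradient of the squared error
    return z

def blend_hair(face_img, hair_img, hair_mask, alpha=0.8):
    """Blend the projected virtual hair onto the face image inside a hair
    mask, leaving the rest of the face untouched."""
    return np.where(hair_mask, alpha * hair_img + (1 - alpha) * face_img, face_img)
```

The abstract's final step, training a network to map an image with 3D virtual hair directly to the blended output, amortizes this per-image projection into a single feed-forward pass.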

    Motion representations for articulated animation

    Publication Number: US11836835B2

    Publication Date: 2023-12-05

    Application Number: US17364218

    Filing Date: 2021-06-30

    Applicant: Snap Inc.

    Abstract: Systems and methods herein describe novel motion representations for animating articulated objects consisting of distinct parts. The described systems and methods access source image data, identify driving image data to modify image feature data in the source image data, and generate, using an image transformation neural network, modified source image data comprising a plurality of modified source images depicting modified versions of the image feature data. The image transformation neural network is trained to identify, for each image in the source image data, a driving image from the driving image data; the identified driving image is implemented by the image transformation neural network to modify a corresponding source image using motion estimation differences between the identified driving image and the corresponding source image. The systems and methods store the modified source image data.

    VIDEO SYNTHESIS WITHIN A MESSAGING SYSTEM

    Publication Number: US20220101104A1

    Publication Date: 2022-03-31

    Application Number: US17491226

    Filing Date: 2021-09-30

    Applicant: Snap Inc.

    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for video synthesis. The program and method provide for accessing a primary generative adversarial network (GAN) comprising a pre-trained image generator, a motion generator comprising a plurality of neural networks, and a video discriminator; generating an updated GAN based on the primary GAN, by performing operations comprising identifying input data of the updated GAN, the input data comprising an initial latent code and a motion domain dataset, training the motion generator based on the input data, and adjusting weights of the plurality of neural networks of the primary GAN based on an output of the video discriminator; and generating a synthesized video based on the primary GAN and the input data.
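The division of labor in the abstract (a motion generator that produces a latent trajectory from an initial latent code, and a pre-trained image generator that maps each latent to a frame) can be sketched as below. A random walk stands in for the trained motion generator's neural networks, and the discriminator-driven weight updates are omitted; everything here is a hypothetical illustration.

```python
import numpy as np

def motion_generator(z0, num_frames, step_scale=0.1, seed=0):
    """Sketch of the motion generator: starting from an initial latent
    code z0, emit a latent trajectory (a random walk stands in for the
    trained neural networks)."""
    rng = np.random.default_rng(seed)
    latents = [z0]
    for _ in range(num_frames - 1):
        latents.append(latents[-1] + step_scale * rng.standard_normal(z0.shape))
    return np.stack(latents)

def synthesize_video(image_generator, z0, num_frames):
    """Map each latent in the trajectory through the (pre-trained) image
    generator to produce the synthesized video's frames."""
    return np.stack([image_generator(z) for z in motion_generator(z0, num_frames)])
```

In the described system, the motion generator is trained on a motion domain dataset and its weights are adjusted against the video discriminator's output, while the image generator stays pre-trained and fixed.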
