1.
Publication No.: US20230123037A1
Publication Date: 2023-04-20
Application No.: US18080331
Filing Date: 2022-12-13
Applicant: L'OREAL
Inventor: Ruowei JIANG , Junwei MA , He MA , Eric ELMOZNINO , Irina KEZELE , Alex LEVINSHTEIN , Julien DESPOIS , Matthieu PERROT , Frederic Antoinin Raymond Serge FLAMENT , Parham AARABI
Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, as well as testing metrics showing that such a deep learning-based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning-based system and method for skin diagnostics.
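As a rough illustration of the kind of model such a skin-diagnostics system could be built around, the following minimal PyTorch sketch scores several apparent skin signs from a face image using a shared backbone and one regression head per sign; the backbone choice, the sign names, and the head structure are illustrative assumptions, not details taken from the patent.

import torch
import torch.nn as nn
import torchvision.models as models

class SkinSignScorer(nn.Module):
    """Shared CNN backbone with one regression head per apparent skin sign.
    Sign names are placeholders, not the patent's taxonomy."""
    def __init__(self, signs=("wrinkles", "pigmentation", "sagging")):
        super().__init__()
        backbone = models.resnet18(weights=None)   # no pretrained weights needed for the sketch
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep pooled features only
        self.backbone = backbone
        self.heads = nn.ModuleDict({s: nn.Linear(feat_dim, 1) for s in signs})

    def forward(self, x):
        feats = self.backbone(x)
        # One continuous severity score per sign, per image.
        return {s: head(feats).squeeze(-1) for s, head in self.heads.items()}

model = SkinSignScorer()
scores = model(torch.randn(2, 3, 224, 224))  # e.g. {"wrinkles": tensor([...]), ...}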
2.
Publication No.: US20200160153A1
Publication Date: 2020-05-21
Application No.: US16683398
Filing Date: 2019-11-14
Applicant: L'Oreal
Inventor: Eric ELMOZNINO , He MA , Irina KEZELE , Edmund PHUNG , Alex LEVINSHTEIN , Parham AARABI
Abstract: Systems and methods relate to a network model that applies an effect to an image, such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space, where the effect is not applied, to a second continuous domain space, where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as those available as product swatches, together with a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
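The conditioning mechanism described above can be sketched roughly as a conditional batch normalization layer whose per-channel scale and shift are predicted from the swatch-encoding vector; the patent's exact CCBN formulation may differ, and all names below are illustrative.

import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Batch norm whose scale/shift are predicted from a conditioning vector
    (here: an encoding of the product-swatch image, or a null-effect code)."""
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(cond_dim, num_features)
        self.beta = nn.Linear(cond_dim, num_features)

    def forward(self, x, cond):
        h = self.bn(x)
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return (1 + g) * h + b

# cond would come from an encoder applied to the swatch image.
layer = ConditionalBatchNorm2d(num_features=64, cond_dim=128)
out = layer(torch.randn(4, 64, 32, 32), torch.randn(4, 128))

At render time, an unseen product can then be handled by encoding its swatch image (or the null-effect code) and feeding that vector to every conditional normalization layer of the generator.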
3.
Publication No.: US20220122299A1
Publication Date: 2022-04-21
Application No.: US17565581
Filing Date: 2021-12-30
Applicant: L'OREAL
Inventor: Alex LEVINSHTEIN , Cheng Chang , Edmund Phung , Irina Kezele , Wenzhangzhi Guo , Eric Elmoznino , Ruowei Jiang , Parham Aarabi
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values, such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower encoder layers, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
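The mask-image gradient consistency loss mentioned in the abstract can be read as an edge-alignment term: wherever the mask has strong edges, its gradients should be parallel (or anti-parallel) to the image gradients. The sketch below is one plausible formulation of such a loss, not necessarily the exact one claimed; the Sobel-based gradient estimate and the edge-strength weighting are assumptions.

import torch
import torch.nn.functional as F

def gradient_consistency_loss(image_gray, mask, eps=1e-6):
    """Penalize mask gradients whose direction disagrees with the image
    gradient, weighted by mask edge strength. Inputs: (B, 1, H, W)."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)

    def grads(t):
        gx = F.conv2d(t, sobel_x, padding=1)
        gy = F.conv2d(t, sobel_y, padding=1)
        mag = torch.sqrt(gx ** 2 + gy ** 2 + eps)
        return gx / mag, gy / mag, mag

    ix, iy, _ = grads(image_gray)
    mx, my, mmag = grads(mask)
    # 1 - (cosine between gradient directions)^2, weighted by mask edge strength
    align = 1.0 - (ix * mx + iy * my) ** 2
    return (mmag * align).sum() / (mmag.sum() + eps)

loss = gradient_consistency_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))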
4.
Publication No.: US20200320748A1
Publication Date: 2020-10-08
Application No.: US16753214
Filing Date: 2018-10-24
Applicant: L'OREAL
Inventor: Alex LEVINSHTEIN , Cheng CHANG , Edmund PHUNG , Irina KEZELE , Wenzhangzhi GUO , Eric ELMOZNINO , Ruowei JIANG , Parham AARABI
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values, such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower encoder layers, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
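The skip-connection idea in this abstract, combining high-resolution but weak encoder features with low-resolution but powerful decoder features, can be illustrated with a toy encoder-decoder; the patent adapts a pre-trained classification network rather than the small hand-rolled layers assumed here.

import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinySegNet(nn.Module):
    """Two-level encoder-decoder: the high-resolution (but weak) encoder feature
    is concatenated with the upsampled (low-resolution but strong) decoder
    feature before predicting the 1-channel hair mask."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 16)       # full resolution
        self.enc2 = conv_block(16, 32)      # half resolution
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(32 + 16, 16)  # skip: concat enc1 with upsampled enc2
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d))  # per-pixel hair probability

mask = TinySegNet()(torch.randn(1, 3, 128, 128))  # (1, 1, 128, 128)

The design point is that the upsampled deep features carry semantics (is this pixel hair?), while the concatenated shallow features restore the spatial detail needed for a crisp matte.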