-
Publication number: US11553872B2
Publication date: 2023-01-17
Application number: US16702895
Application date: 2019-12-04
Applicant: L'OREAL
Inventor: Ruowei Jiang , Junwei Ma , He Ma , Eric Elmoznino , Irina Kezele , Alex Levinshtein , Julien Despois , Matthieu Perrot , Frederic Antoinin Raymond Serge Flament , Parham Aarabi
Abstract: There is shown and described a deep learning based system and method for skin diagnostics as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
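As a rough illustration of what such a diagnostics model can look like, the sketch below assumes a shared CNN backbone with one regression head per apparent skin sign. The backbone, the sign names and the [0, 1] score range are illustrative assumptions, not details taken from the patent.

# Hedged sketch: a shared CNN backbone with one score head per apparent skin sign.
# The tiny backbone, the sign list and the sigmoid score range are assumptions.
import torch
import torch.nn as nn

SKIN_SIGNS = ["wrinkles", "pigmentation", "sagging"]   # hypothetical sign set

class SkinDiagnosisNet(nn.Module):
    def __init__(self, signs=SKIN_SIGNS):
        super().__init__()
        # Small stand-in backbone; any pretrained image encoder could be used instead.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One score head per apparent skin sign, squashed to [0, 1].
        self.heads = nn.ModuleDict({s: nn.Linear(32, 1) for s in signs})

    def forward(self, face):                      # face: (B, 3, H, W)
        z = self.features(face)                   # (B, 32)
        return {s: torch.sigmoid(h(z)) for s, h in self.heads.items()}

scores = SkinDiagnosisNet()(torch.randn(1, 3, 224, 224))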
-
Publication number: US11645497B2
Publication date: 2023-05-09
Application number: US16683398
Application date: 2019-11-14
Applicant: L'Oreal
Inventor: Eric Elmoznino , He Ma , Irina Kezele , Edmund Phung , Alex Levinshtein , Parham Aarabi
CPC classification number: G06N3/045 , G06F3/011 , G06N3/047 , G06N3/08 , G06N20/00 , G06T19/006 , G06T2207/20081
Abstract: Systems and methods relate to a network model to apply an effect to an image such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space where the effect is not applied to a second continuous domain space where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
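The conditioning idea can be illustrated with a short sketch: a batch normalization layer whose scale and shift are predicted from a vector encoding a reference swatch image (a zero vector can stand in for the null effect). The swatch encoder, the vector size, and the use of linear rather than convolutional projections for the affine parameters are simplifying assumptions, not the patented design.

# Hedged sketch of conditional batch normalization driven by a swatch encoding.
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)   # statistics only
        self.gamma = nn.Linear(cond_dim, num_features)          # scale from condition
        self.beta = nn.Linear(cond_dim, num_features)           # shift from condition

    def forward(self, x, cond):            # x: (B, C, H, W), cond: (B, cond_dim)
        h = self.bn(x)
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)         # (B, C, 1, 1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return (1 + g) * h + b

# A tiny swatch encoder producing the conditioning vector (shapes are illustrative).
swatch_encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 8),
)
cond = swatch_encoder(torch.randn(2, 3, 64, 64))                 # (2, 8)
out = ConditionalBatchNorm2d(32, 8)(torch.randn(2, 32, 32, 32), cond)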
-
Publication number: US11832958B2
Publication date: 2023-12-05
Application number: US18080331
Application date: 2022-12-13
Applicant: L'OREAL
Inventor: Ruowei Jiang , Junwei Ma , He Ma , Eric Elmoznino , Irina Kezele , Alex Levinshtein , Julien Despois , Matthieu Perrot , Frederic Antoinin Raymond Serge Flament , Parham Aarabi
CPC classification number: A61B5/441 , G06N3/045 , G06N3/08 , G06T7/0012 , G06V10/454 , G06V10/82 , G06V40/171 , G06T2207/30088 , G06V40/174 , G06V40/18
Abstract: There is shown and described a deep learning based system and method for skin diagnostics as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
-
Publication number: US11461931B2
Publication date: 2022-10-04
Application number: US16854975
Application date: 2020-04-22
Applicant: L'Oreal
Inventor: Eric Elmoznino , Parham Aarabi , Yuze Zhang
Abstract: Provided are systems and methods to perform colour extraction from swatch images and to define new images using extracted colours. Source images may be classified using a deep learning net (e.g. a CNN) to indicate colour representation strength and drive colour extraction. A clustering classifier is trained to use feature vectors extracted by the net. Separately, pixel clustering is useful when extracting the colour. The cluster count can vary according to the classification. Alternatively, heuristics (with or without classification) are useful when extracting. Resultant clusters are evaluated against a set of (ordered) expected colours to determine a match. Instances of standardized swatch images may be defined from a template swatch image and respective extracted colours using image processing. The extracted colour may be presented in an augmented reality GUI such as a virtual try-on application and applied to a user image such as a selfie using image processing.
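A minimal sketch of the pixel-clustering step is given below, assuming a fixed cluster count and a dominant-cluster heuristic; the patent varies the count with a learned classification, which is not reproduced here, and the expected-colour list shown is hypothetical.

# Hedged sketch: extract a swatch colour by clustering pixels, then match it
# against an ordered list of expected colours.
import numpy as np
from sklearn.cluster import KMeans

def extract_swatch_colour(image_rgb, n_clusters=3):
    """image_rgb: (H, W, 3) uint8 array; returns the dominant cluster colour."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    return km.cluster_centers_[counts.argmax()]          # largest cluster centre

def match_expected(colour, expected_colours):
    """Return the index of the closest colour in an ordered list of expected colours."""
    expected = np.asarray(expected_colours, dtype=np.float32)
    return int(np.linalg.norm(expected - colour, axis=1).argmin())

swatch = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
colour = extract_swatch_colour(swatch)
idx = match_expected(colour, [(200, 30, 60), (120, 60, 200)])    # hypothetical palette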
-
Publication number: US11995703B2
Publication date: 2024-05-28
Application number: US18102139
Application date: 2023-01-27
Applicant: L'OREAL
Inventor: Eric Elmoznino , Irina Kezele , Parham Aarabi
IPC: G06T5/50 , G06F18/214 , G06N20/00 , G06Q30/0601 , G06V10/764 , G06V10/774 , G06V10/778
CPC classification number: G06Q30/0631 , G06F18/214 , G06N20/00 , G06T5/50 , G06V10/764 , G06V10/774 , G06V10/7788
Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast enough to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
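One way to picture the two-stage pipeline is sketched below: an unsupervised "teacher" translator synthesizes paired examples from the unpaired data, and a small run-time "student" is then trained on those pairs with an ordinary supervised loss. The model classes, the L1 loss and the training loop are illustrative assumptions, not the patented configuration.

# Hedged sketch: build synthetic pairs with a teacher, then train a small student.
import torch
import torch.nn as nn

def build_synthetic_pairs(teacher, unpaired_a):
    """Translate domain-A images with the teacher; outputs act as ground truths."""
    teacher.eval()
    with torch.no_grad():
        return [(a, teacher(a)) for a in unpaired_a]

def train_runtime_model(student, pairs, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                          # pixel-wise supervised loss
    for _ in range(epochs):
        for a, b_synthetic in pairs:
            opt.zero_grad()
            loss = loss_fn(student(a), b_synthetic)
            loss.backward()
            opt.step()
    return student

# Stand-ins: any conditioned generator as teacher and any small network as student.
teacher = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))            # placeholder generator
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 3, 3, padding=1))
images_a = [torch.randn(1, 3, 64, 64) for _ in range(4)]
student = train_runtime_model(student, build_synthetic_pairs(teacher, images_a))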
-
Publication number: US11861497B2
Publication date: 2024-01-02
Application number: US17565581
Application date: 2021-12-30
Applicant: L'OREAL
Inventor: Alex Levinshtein , Cheng Chang , Edmund Phung , Irina Kezele , Wenzhangzhi Guo , Eric Elmoznino , Ruowei Jiang , Parham Aarabi
IPC: G06N3/08 , G06T7/11 , G06T7/90 , G06T1/20 , G06T11/00 , G06V10/44 , G06V40/16 , G06F18/21 , G06F18/24 , G06V10/82
CPC classification number: G06N3/08 , G06F18/21 , G06F18/24 , G06T1/20 , G06T7/11 , G06T7/90 , G06T11/001 , G06V10/454 , G06V10/82 , G06V40/165 , G06V40/171 , G06T2200/24 , G06T2207/10016 , G06T2207/10024 , G06T2207/20081 , G06T2207/20084
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
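The mask-image gradient consistency idea can be sketched as a loss that penalizes mask edges whose orientation disagrees with the underlying image edges. The exact weighting and normalization used in the patent may differ; this is one plausible form based on Sobel gradients.

# Hedged sketch of a mask-image gradient consistency loss.
import torch
import torch.nn.functional as F

def gradient_consistency_loss(image_gray, mask, eps=1e-6):
    """image_gray, mask: (B, 1, H, W) tensors; returns a scalar loss."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    ix = F.conv2d(image_gray, sobel_x, padding=1)
    iy = F.conv2d(image_gray, sobel_y, padding=1)
    mx = F.conv2d(mask, sobel_x, padding=1)
    my = F.conv2d(mask, sobel_y, padding=1)
    i_mag = torch.sqrt(ix ** 2 + iy ** 2 + eps)
    m_mag = torch.sqrt(mx ** 2 + my ** 2 + eps)
    # Cosine between normalized image and mask gradient directions.
    cos = (ix * mx + iy * my) / (i_mag * m_mag)
    # Penalize strong mask edges that are not aligned with image edges.
    return (m_mag * (1.0 - cos ** 2)).sum() / (m_mag.sum() + eps)

loss = gradient_consistency_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))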
-
Publication number: US11615516B2
Publication date: 2023-03-28
Application number: US17096774
Application date: 2020-11-12
Applicant: L'OREAL
Inventor: Eric Elmoznino , Irina Kezele , Parham Aarabi
IPC: G06T5/50 , G06N20/00 , G06Q30/0601 , G06K9/62
Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast enough to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
-
Publication number: US20220122299A1
Publication date: 2022-04-21
Application number: US17565581
Application date: 2021-12-30
Applicant: L'OREAL
Inventor: Alex Levinshtein , Cheng Chang , Edmund Phung , Irina Kezele , Wenzhangzhi Guo , Eric Elmoznino , Ruowei Jiang , Parham Aarabi
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
-
Publication number: US11216988B2
Publication date: 2022-01-04
Application number: US16753214
Application date: 2018-10-24
Applicant: L'OREAL
Inventor: Alex Levinshtein , Cheng Chang , Edmund Phung , Irina Kezele , Wenzhangzhi Guo , Eric Elmoznino , Ruowei Jiang , Parham Aarabi
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.