-
1.
Publication No.: US20210350185A1
Publication Date: 2021-11-11
Application No.: US17383465
Filing Date: 2021-07-23
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Eran ASA , Ehud SPIEGEL
IPC: G06K9/62 , G06T7/55 , G06T7/579 , G05D1/00 , G05D1/02 , G06K9/00 , G06N3/04 , G06N3/08 , G06T3/00
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, where each of the plurality of real signals is captured by one of a plurality of sensors and each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
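The abstract describes computing one depth map per real signal with a machine learning model. The sketch below illustrates that step only; the placeholder network, helper names, and image sizes are assumptions for illustration, not details from the patent.

```python
# Minimal sketch: compute a depth map per real signal (camera frame) with an
# ML model, as the abstract describes. DepthModel is a stand-in for whatever
# pretrained network the described system would actually use.
import numpy as np
import torch
import torch.nn as nn


class DepthModel(nn.Module):
    """Placeholder monocular depth network (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Softplus(),  # depth values must stay positive
        )

    def forward(self, x):
        return self.net(x)


def compute_depth_maps(frames, model):
    """Return one depth map per captured frame, so that each computed depth
    map qualifies exactly one of the real signals."""
    model.eval()
    depth_maps = []
    with torch.no_grad():
        for frame in frames:  # frame: HxWx3 uint8 image from one sensor
            x = torch.from_numpy(frame).float().permute(2, 0, 1).unsqueeze(0) / 255.0
            depth = model(x).squeeze().numpy()  # HxW depth in arbitrary units
            depth_maps.append(depth)
    return depth_maps


if __name__ == "__main__":
    frames = [np.random.randint(0, 255, (96, 128, 3), dtype=np.uint8) for _ in range(2)]
    maps = compute_depth_maps(frames, DepthModel())
    print([m.shape for m in maps])  # one HxW map per simultaneously captured frame
```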
-
2.
Publication No.: US20230306680A1
Publication Date: 2023-09-28
Application No.: US18202970
Filing Date: 2023-05-29
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Eran ASA , Ehud SPIEGEL
IPC: G06T15/10 , G06T7/55 , G06T7/579 , G05D1/00 , G05D1/02 , G06N3/04 , G06N3/08 , G06T3/00 , G06V20/56 , G06F18/21 , G06F18/24 , G06F18/28 , G06F18/214
CPC classification number: G06T15/10 , G06T7/55 , G06T7/579 , G05D1/0088 , G05D1/0246 , G06N3/04 , G06N3/08 , G06T3/0018 , G06V20/56 , G06F18/217 , G06F18/24 , G06F18/28 , G06F18/2148 , G05D2201/0213 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30252 , G06V2201/07
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, where each of the plurality of real signals is captured by one of a plurality of sensors and each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
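The point-of-view transformation named in this abstract can be read as a depth-based reprojection: unproject each pixel with its depth, move the 3D points into the target sensor's pose, and reproject. The sketch below assumes a pinhole camera model; the intrinsics K and the relative pose are illustrative values, not taken from the patent.

```python
# Minimal sketch of a point-of-view transformation via depth-based reprojection.
import numpy as np


def reproject(image, depth, K, R, t):
    """Warp `image` (HxWx3) with per-pixel `depth` (HxW) from the source
    camera to a target camera at rotation R, translation t (source -> target)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T  # 3xN

    # Back-project pixels to 3D in the source camera frame, then transform.
    pts_src = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_tgt = R @ pts_src + t.reshape(3, 1)

    # Project into the target camera and splat colours into the synthetic view.
    proj = K @ pts_tgt
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    out = np.zeros_like(image)
    out[v[valid], u[valid]] = image.reshape(-1, 3)[valid]
    return out


if __name__ == "__main__":
    K = np.array([[100.0, 0, 64], [0, 100.0, 48], [0, 0, 1]])
    image = np.random.randint(0, 255, (96, 128, 3), dtype=np.uint8)
    depth = np.full((96, 128), 5.0)
    # Target sensor shifted 0.2 m to the right of the source sensor.
    synthetic = reproject(image, depth, K, np.eye(3), np.array([0.2, 0.0, 0.0]))
    print(synthetic.shape)
```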
-
3.
Publication No.: US20210312244A1
Publication Date: 2021-10-07
Application No.: US17286526
Filing Date: 2019-10-15
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Eran ASA , Ehud SPIEGEL
Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
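The training objective here is the difference between a simulation statistical fingerprint and a real one. The abstract does not define the fingerprint, so the sketch below assumes a normalized histogram of agent speeds and an L1 distance as the difference being minimized; both choices are illustrative.

```python
# Minimal sketch of the statistical-fingerprint comparison under the
# assumptions stated above (speed histogram fingerprint, L1 difference).
import numpy as np


def statistical_fingerprint(speeds, bins=20, max_speed=40.0):
    """Normalized histogram of agent speeds (m/s) observed in driving data."""
    hist, _ = np.histogram(speeds, bins=bins, range=(0.0, max_speed))
    return hist / max(hist.sum(), 1)


def fingerprint_difference(simulated_speeds, real_speeds):
    """Scalar the simulation generation model would be trained to minimize:
    smaller means the simulation statistics match the real data more closely."""
    sim_fp = statistical_fingerprint(simulated_speeds)
    real_fp = statistical_fingerprint(real_speeds)
    return float(np.abs(sim_fp - real_fp).sum())


if __name__ == "__main__":
    real = np.random.normal(15.0, 5.0, size=10_000).clip(0, 40)
    simulated = np.random.normal(18.0, 6.0, size=10_000).clip(0, 40)
    print(f"fingerprint difference: {fingerprint_difference(simulated, real):.4f}")
```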
-
4.
Publication No.: US20200210779A1
Publication Date: 2020-07-02
Application No.: US16594200
Filing Date: 2019-10-07
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Eran ASA , Ehud SPIEGEL
IPC: G06K9/62 , G06T7/579 , G06T3/00 , G06T7/55 , G06K9/00 , G05D1/02 , G05D1/00 , G06N3/08 , G06N3/04
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, where each of the plurality of real signals is captured by one of a plurality of sensors and each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
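This application shares its abstract with entries 1 and 2; the remaining step is handing the synthetic data to a testing engine. The sketch below ties the earlier steps together under a stub engine interface; the TestingEngine class and both helpers are stand-ins, not an API from the patent (the depth and reprojection steps are sketched under entries 1 and 2 above).

```python
# Minimal sketch of the overall flow: per-frame depth maps plus a point-of-view
# transformation yield synthetic target-sensor frames, which are then passed to
# a testing engine. All interfaces here are illustrative.
import numpy as np


class TestingEngine:
    """Illustrative stub: a real engine would replay the synthetic signal
    against the autonomous system that contains the target sensor."""

    def run(self, synthetic_frames):
        return {"frames_tested": len(synthetic_frames), "passed": True}


def build_synthetic_dataset(frames, depth_maps, K, R, t, reproject_fn):
    # One synthetic target-sensor frame per simultaneously captured real frame.
    return [reproject_fn(f, d, K, R, t) for f, d in zip(frames, depth_maps)]


if __name__ == "__main__":
    # Trivial stand-in for the reprojection step, to keep the sketch self-contained.
    identity_reproject = lambda frame, depth, K, R, t: frame
    frames = [np.zeros((96, 128, 3), dtype=np.uint8) for _ in range(3)]
    depths = [np.ones((96, 128)) for _ in range(3)]
    synthetic = build_synthetic_dataset(frames, depths, np.eye(3), np.eye(3),
                                        np.zeros(3), identity_reproject)
    print(TestingEngine().run(synthetic))
```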
-
5.
Publication No.: US20230202511A1
Publication Date: 2023-06-29
Application No.: US17926598
Filing Date: 2021-05-27
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Ehud SPIEGEL
IPC: B60W60/00
CPC classification number: B60W60/001 , B60W2554/4049 , B60W2420/42
Abstract: A system for generating simulated driving scenarios, comprising at least one hardware processor adapted for generating a plurality of simulated driving scenarios, each generated by providing a plurality of input driving objects to a machine learning model, where the machine learning model is trained using another machine learning model, trained to compute a classification indicative of a likelihood that a simulated driving scenario produced by the machine learning model comprises an interesting driving scenario.
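One way to read this abstract is as a generator trained against a second, already trained classifier that scores how likely a produced scenario is to be "interesting". The sketch below follows that reading; both networks, the fixed-length scenario encoding, and the loss are assumptions for illustration.

```python
# Minimal sketch: a scenario generator updated to raise the score of a fixed,
# pre-trained "interesting scenario" classifier.
import torch
import torch.nn as nn

SCENARIO_DIM = 32  # assumed fixed-length encoding of a driving scenario


class ScenarioGenerator(nn.Module):
    def __init__(self, n_objects=8, obj_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_objects * obj_dim, 64), nn.ReLU(),
            nn.Linear(64, SCENARIO_DIM),
        )

    def forward(self, driving_objects):          # (batch, n_objects, obj_dim)
        return self.net(driving_objects.flatten(1))


class InterestClassifier(nn.Module):
    """Stands in for the second, pre-trained model that outputs the likelihood
    a simulated scenario is interesting (probability in [0, 1])."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(SCENARIO_DIM, 32), nn.ReLU(),
                                 nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, scenario):
        return self.net(scenario)


if __name__ == "__main__":
    gen, critic = ScenarioGenerator(), InterestClassifier()
    critic.requires_grad_(False)                 # the classifier stays fixed here
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for _ in range(100):
        objects = torch.randn(16, 8, 4)          # batch of input driving objects
        p_interesting = critic(gen(objects))
        loss = -torch.log(p_interesting + 1e-8).mean()  # push toward "interesting"
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"mean interest score: {p_interesting.mean().item():.3f}")
```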
-
6.
Publication No.: US20230316789A1
Publication Date: 2023-10-05
Application No.: US18022556
Filing Date: 2021-09-14
Applicant: Cognata Ltd.
Inventor: Ilan TSAFRIR , Guy TSAFRIR , Ehud SPIEGEL , Dan ATSMON
CPC classification number: G06V20/70 , G06V20/58 , G06V10/7715
Abstract: There is provided a method for annotating digital images for training a machine learning model, comprising: generating, from digital images and a plurality of dense depth maps, each associated with one of the digital images, an aligned three-dimensional stacked scene representation of a scene, where the digital images are captured by sensor(s) at the scene, and where each point in the three-dimensional stacked scene is associated with a stability score indicative of a likelihood the point is associated with a static object of the scene, removing from the three-dimensional stacked scene unstable points to produce a static three-dimensional stacked scene, detecting in at least one of the digital images static object(s) according to the static three-dimensional stacked scene, and classifying and annotating the static object(s). The machine learning model may be trained on the images annotated with a ground truth of the static object(s).
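The central filtering step is the removal of unstable points from the stacked scene. The sketch below assumes the per-point stability scores are already available and only shows the thresholding that yields the static three-dimensional stacked scene; the threshold value is an assumption.

```python
# Minimal sketch of the unstable-point removal step.
import numpy as np


def static_scene(points, stability_scores, threshold=0.8):
    """points: (N, 3) stacked scene; stability_scores: (N,) in [0, 1], where a
    high score means the point likely belongs to a static object.
    Returns the static three-dimensional stacked scene (points only)."""
    keep = stability_scores >= threshold
    return points[keep]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    points = rng.uniform(-50, 50, size=(100_000, 3))   # stacked scene points
    scores = rng.uniform(0, 1, size=100_000)           # per-point stability
    static = static_scene(points, scores)
    print(f"kept {len(static)} of {len(points)} points as static")
```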
-
7.
Publication No.: US20220188579A1
Publication Date: 2022-06-16
Application No.: US17687720
Filing Date: 2022-03-07
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Eran ASA , Ehud SPIEGEL
Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
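This abstract matches entry 3, where the fingerprint comparison is sketched; the sketch below covers a different step, generating training agents each paired with one moving-agent class and one movement-pattern class. The class lists and the uniform sampling are illustrative assumptions.

```python
# Minimal sketch of generating training agents for the simulated driving
# environment from identified agent and movement-pattern classes.
import random
from dataclasses import dataclass

AGENT_CLASSES = ["car", "truck", "pedestrian", "cyclist"]
MOVEMENT_PATTERNS = ["constant_speed", "stop_and_go", "aggressive_lane_change"]


@dataclass
class TrainingAgent:
    agent_class: str
    movement_pattern: str


def generate_training_agents(n_agents, rng=random):
    # Each generated agent is associated with one agent class and one
    # movement-pattern class, as the abstract describes.
    return [TrainingAgent(rng.choice(AGENT_CLASSES), rng.choice(MOVEMENT_PATTERNS))
            for _ in range(n_agents)]


if __name__ == "__main__":
    for agent in generate_training_agents(5):
        print(agent)
```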