-
1.
Publication Number: US20240199071A1
Publication Date: 2024-06-20
Application Number: US18542857
Filing Date: 2023-12-18
Applicant: Cognata Ltd.
Inventor: Dan ATSMON
CPC classification number: B60W60/001 , B60W50/06 , G06F30/27 , B60W2050/0019 , B60W2556/00
Abstract: A method for generating a driving assistant model, comprising: computing at least one semantic driving scenario by computing at least one permutation of at least one initial semantic driving scenario; providing the at least one semantic driving scenario to a simulation generator to produce simulated driving data describing at least one simulated driving environment; training a driving assistant model using the simulated driving data to produce a trained driving assistant model; and providing by the trained driving assistant model at least one driving instruction to at least one autonomous driving model while the at least one autonomous driving model is operating.
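The permutation step this abstract describes can be illustrated with a small sketch. The scenario axes, field names, and values below are hypothetical stand-ins, not taken from the patent; the sketch only shows one plausible way to enumerate permutations of an initial semantic scenario.

```python
import itertools

# Hypothetical parameter space for a semantic driving scenario;
# the axes and values are illustrative only.
SCENARIO_AXES = {
    "weather": ["clear", "rain", "fog"],
    "time_of_day": ["day", "dusk", "night"],
    "lead_vehicle_speed_kph": [30, 60, 90],
}

def permute_scenario(initial: dict) -> list[dict]:
    """Compute permutations of an initial semantic scenario by
    varying each axis over its allowed values, keeping the
    remaining fields (e.g. the map) fixed."""
    keys = list(SCENARIO_AXES)
    permutations = []
    for combo in itertools.product(*(SCENARIO_AXES[k] for k in keys)):
        scenario = dict(initial)
        scenario.update(zip(keys, combo))
        permutations.append(scenario)
    return permutations

initial = {"map": "urban_4way", "weather": "clear",
           "time_of_day": "day", "lead_vehicle_speed_kph": 60}
variants = permute_scenario(initial)
print(len(variants))  # 3 * 3 * 3 = 27 variants
```

Each variant would then be handed to the simulation generator to produce simulated driving data.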
-
2.
Publication Number: US20220383591A1
Publication Date: 2022-12-01
Application Number: US17885633
Filing Date: 2022-08-11
Applicant: Cognata Ltd.
Inventor: Dan ATSMON
Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
-
3.
Publication Number: US20210350185A1
Publication Date: 2021-11-11
Application Number: US17383465
Filing Date: 2021-07-23
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Eran ASA , Ehud SPIEGEL
IPC: G06K9/62 , G06T7/55 , G06T7/579 , G05D1/00 , G05D1/02 , G06K9/00 , G06N3/04 , G06N3/08 , G06T3/00
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals is captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
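The point-of-view transformation in this abstract is essentially depth-based reprojection. The sketch below shows one standard way to do it (back-project pixels with a depth map, apply the target sensor's rigid transform); the intrinsics, transform, and flat test scene are assumptions for illustration, not the patent's published math.

```python
import numpy as np

def reproject(depth: np.ndarray, K: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Back-project every pixel using its depth, then apply the rigid
    transform T (target sensor pose relative to the source sensor).
    Returns the Nx3 points expressed in the target sensor's frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3xN
    rays = np.linalg.inv(K) @ pix              # rays on the unit depth plane
    pts = rays * depth.reshape(1, -1)          # scale each ray by its depth
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    return (T @ pts_h)[:3].T

# Illustrative setup: a toy camera, a flat scene 10 m away, and a
# target sensor displaced 0.5 m to the side of the source sensor.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 24.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
T[0, 3] = 0.5
depth = np.full((48, 64), 10.0)
pts = reproject(depth, K, T)
```

The transformed points would then be rendered from the target sensor's viewpoint to synthesize its signal.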
-
4.
Publication Number: US20240394518A1
Publication Date: 2024-11-28
Application Number: US18675239
Filing Date: 2024-05-28
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Alon Avraham ATSMON
IPC: G06N3/0475 , B60W60/00
Abstract: A method for generating training data for a machine learning model comprising: accessing a plurality of output values of a machine learning model computed in response to a plurality of input data samples; analyzing the plurality of output values and the plurality of input data samples to compute a plurality of required data sample characteristics associated with at least one unsatisfactory output value of the plurality of output values; generating at least one new input data sample by providing a data generator with a plurality of generation constraints comprising the plurality of required data sample characteristics; and adding the at least one new input data sample to a data repository for producing training data for the machine learning model; wherein the at least one new input data sample comprises at least part of a simulated driving environment for training the machine learning model to operate in an autonomous automotive system.
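The feedback loop in this abstract (harvest the characteristics of inputs that produced unsatisfactory outputs, then feed them to a generator as constraints) can be sketched minimally. The characteristic fields, the score threshold, and the stand-in generator below are hypothetical.

```python
import random

def find_failure_constraints(samples, outputs, threshold=0.5):
    """Collect characteristics of input samples whose model output
    score fell below the threshold; these become the required data
    sample characteristics passed as generation constraints."""
    constraints = []
    for sample, score in zip(samples, outputs):
        if score < threshold:  # unsatisfactory output value
            constraints.append({"weather": sample["weather"],
                                "occlusion": sample["occlusion"]})
    return constraints

def generate_samples(constraints, n_per_constraint=2, seed=0):
    """Stand-in for the data generator: emit new samples satisfying
    each constraint while randomizing the unconstrained parameters."""
    rng = random.Random(seed)
    new = []
    for c in constraints:
        for _ in range(n_per_constraint):
            new.append({**c, "ego_speed_kph": rng.choice([30, 50, 80])})
    return new

samples = [{"weather": "fog", "occlusion": 0.8},
           {"weather": "clear", "occlusion": 0.1}]
outputs = [0.3, 0.9]  # the first sample scored poorly
repo = generate_samples(find_failure_constraints(samples, outputs))
```

The generated samples would be added to the training-data repository so the model sees more of the conditions it currently fails on.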
-
5.
Publication Number: US20230202511A1
Publication Date: 2023-06-29
Application Number: US17926598
Filing Date: 2021-05-27
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Ehud SPIEGEL
IPC: B60W60/00
CPC classification number: B60W60/001 , B60W2554/4049 , B60W2420/42
Abstract: A system for generating simulated driving scenarios, comprising at least one hardware processor adapted for generating a plurality of simulated driving scenarios, each generated by providing a plurality of input driving objects to a machine learning model, where the machine learning model is trained using another machine learning model, trained to compute a classification indicative of a likelihood that a simulated driving scenario produced by the machine learning model comprises an interesting driving scenario.
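The two-model arrangement here resembles an adversarial setup: a second, pre-trained model scores whether a generated scenario is "interesting", and the generator is trained against that signal. The toy below replaces both learned models with trivial stand-ins (a threshold critic and rejection sampling) purely to show the control flow; none of it is the patent's actual method.

```python
import random

def critic(scenario):
    """Stand-in for the second, pre-trained model: classifies a
    scenario as interesting (here: a near-collision headway gap)."""
    return scenario["gap_m"] < 5.0

def train_generator(steps=200, seed=1):
    """Toy stand-in for training the scenario generator against the
    critic: propose random scenarios and keep the ones the critic
    labels interesting."""
    rng = random.Random(seed)
    kept = []
    for _ in range(steps):
        proposal = {"gap_m": rng.uniform(0.0, 50.0)}
        if critic(proposal):
            kept.append(proposal)
    return kept

scenarios = train_generator()
```

In the patented system the generator itself would be a trained model whose parameters are updated from the classification, rather than a rejection sampler.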
-
6.
Publication Number: US20200098172A1
Publication Date: 2020-03-26
Application Number: US16693534
Filing Date: 2019-11-25
Applicant: Cognata Ltd.
Inventor: Dan ATSMON
Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
-
7.
Publication Number: US20240403717A1
Publication Date: 2024-12-05
Application Number: US18699632
Filing Date: 2022-10-25
Applicant: Cognata Ltd.
Inventor: Dan ATSMON
IPC: G06N20/00
Abstract: A method for training a computer-vision based perception model, comprising: increasing diversity of backgrounds behind objects in synthetic training data by: inserting into a scene in simulation data at least one simulation object distributed around a sensor position in the scene, such that the at least one simulation object is oriented towards the sensor position, to produce new simulation data; and computing at least one simulated sensor signal using the new simulation data, simulating at least one signal captured by a simulated sensor located in the sensor position; and providing the new simulation data and the at least one simulated sensor signal as synthetic training data to at least one computer-vision based perception model for training the model to detect and additionally or alternatively classify one or more objects in one or more sensor signals.
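The placement step this abstract describes (objects distributed around a sensor position, each oriented towards it, so the sensor sees each object against a different background) is a small geometry exercise. The circular distribution and yaw convention below are assumptions chosen to illustrate the idea.

```python
import math

def place_objects_around_sensor(sensor_pos, n, radius):
    """Distribute n objects evenly on a circle around the sensor
    position, giving each a yaw that faces it back toward the sensor,
    so every bearing shows the object against a different background."""
    objects = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        x = sensor_pos[0] + radius * math.cos(theta)
        y = sensor_pos[1] + radius * math.sin(theta)
        yaw = math.atan2(sensor_pos[1] - y, sensor_pos[0] - x)  # face the sensor
        objects.append({"x": x, "y": y, "yaw": yaw})
    return objects

objs = place_objects_around_sensor((0.0, 0.0), n=8, radius=20.0)
```

Simulated sensor signals rendered from the sensor position would then capture each inserted object against the scene behind its bearing.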
-
8.
Publication Number: US20240257410A1
Publication Date: 2024-08-01
Application Number: US18429533
Filing Date: 2024-02-01
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Guy GOLDNER , Ilan TSAFRIR
CPC classification number: G06T11/001 , B60W50/00 , G01S17/89
Abstract: A system for generating synthetic data, comprising at least one processing circuitry adapted for: computing a sequence of partial simulation images, where each of the sequence of partial simulation images is associated with an estimated simulation time and with part of a simulated environment at the respective estimated simulation time thereof; computing at least one simulated point-cloud, each simulating a point-cloud captured in a capture interval by a sensor operated in a scanning pattern from an environment equivalent to a simulated environment, by applying to each partial simulation image of the sequence of partial simulation images a capture mask computed according to the scanning pattern and a relation between the capture interval and an estimated simulation time associated with the partial simulation image; and providing the at least one simulated point-cloud to a training engine to train a perception system comprising the sensor.
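The capture-mask idea in this abstract can be illustrated with a simple model: a sensor that sweeps left-to-right over a capture interval, where each partial simulation image is valid only for its slice of that interval, and a mask selects the columns scanned during that slice. The linear sweep and depth-image representation below are simplifying assumptions.

```python
import numpy as np

def capture_mask(width, t_start, t_end, interval):
    """Mask selecting the image columns a left-to-right scanning
    sensor covers during [t_start, t_end) of its capture interval."""
    cols = np.arange(width)
    col_time = cols / width * interval  # time at which each column is scanned
    return (col_time >= t_start) & (col_time < t_end)

def simulate_point_cloud(frames, interval):
    """Combine a sequence of partial simulation depth images, each
    associated with one slice of the capture interval, into a single
    simulated scan by masking each image to its time slice."""
    scan = np.zeros_like(frames[0][1])
    width = scan.shape[1]
    for (t_start, t_end), depth in frames:
        m = capture_mask(width, t_start, t_end, interval)
        scan[:, m] = depth[:, m]
    return scan

# Two partial images, each covering half of a 0.1 s capture interval;
# the scene moved between them (depth 1 m vs 2 m).
frames = [((0.0, 0.05), np.full((4, 8), 1.0)),
          ((0.05, 0.1), np.full((4, 8), 2.0))]
scan = simulate_point_cloud(frames, interval=0.1)
```

The left half of the resulting scan comes from the earlier partial image and the right half from the later one, mimicking how a scanning sensor smears scene motion across one capture.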
-
9.
Publication Number: US20230306680A1
Publication Date: 2023-09-28
Application Number: US18202970
Filing Date: 2023-05-29
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Eran ASA , Ehud SPIEGEL
IPC: G06T15/10 , G06T7/55 , G06T7/579 , G05D1/00 , G05D1/02 , G06N3/04 , G06N3/08 , G06T3/00 , G06V20/56 , G06F18/21 , G06F18/24 , G06F18/28 , G06F18/214
CPC classification number: G06T15/10 , G06T7/55 , G06T7/579 , G05D1/0088 , G05D1/0246 , G06N3/04 , G06N3/08 , G06T3/0018 , G06V20/56 , G06F18/217 , G06F18/24 , G06F18/28 , G06F18/2148 , G05D2201/0213 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30252 , G06V2201/07
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals is captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
-
10.
Publication Number: US20210312244A1
Publication Date: 2021-10-07
Application Number: US17286526
Filing Date: 2019-10-15
Applicant: Cognata Ltd.
Inventor: Dan ATSMON , Eran ASA , Ehud SPIEGEL
Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
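The training objective in this abstract compares a statistical fingerprint of simulated driving data against one computed from real data. The patent does not publish the fingerprint itself; the sketch below uses a normalized speed histogram as a hypothetical stand-in to show the shape of the loss being minimized.

```python
import numpy as np

def statistical_fingerprint(speeds, bins=np.arange(0, 41, 10)):
    """Reduce driving data to a fixed-length statistic; a normalized
    speed histogram is used here as an illustrative stand-in for the
    patent's (unpublished) fingerprint."""
    hist, _ = np.histogram(speeds, bins=bins)
    return hist / hist.sum()

def fingerprint_loss(real_speeds, sim_speeds):
    """Difference the training loop would minimize when modifying the
    simulation generation model's parameters."""
    return float(np.abs(statistical_fingerprint(real_speeds)
                        - statistical_fingerprint(sim_speeds)).sum())

real = [5, 15, 15, 25, 35]       # speeds observed in real driving data
perfect = [5, 15, 15, 25, 35]    # simulation matching the real distribution
skewed = [35, 35, 35, 35, 35]    # simulation with a mismatched distribution
```

A simulation whose speed distribution matches the real data yields zero loss, while a skewed one yields a large loss, driving the generator toward realistic statistics.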