Abstract:
A method for processing sensor data in a system that includes multiple sensors for detecting at least a subarea of surroundings around the system. The method includes at least the following steps: a) reading in sensor data detected at least partially in parallel, b) checking, on the basis of the read-in sensor data, whether an at least partial impairment of the detection by the respective sensor can be established for one or more of the sensors, c) adapting the use of the sensor data, taking the check from step b) into account.
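The steps a) to c) above can be sketched as follows; all names (process_sensor_data, is_impaired) and the variance-based impairment heuristic are illustrative assumptions, not part of the abstract:

```python
def process_sensor_data(sensors):
    # a) read in the sensor data (here sequentially; in the method,
    #    detection happens at least partially in parallel)
    readings = {s["id"]: s["data"] for s in sensors}

    # b) check each sensor for an at least partial impairment of the
    #    detection, e.g. via a simple signal-variance heuristic:
    #    a "frozen" signal suggests a blocked or degraded sensor
    def is_impaired(data):
        mean = sum(data) / len(data)
        var = sum((x - mean) ** 2 for x in data) / len(data)
        return var < 1e-6

    impaired = {sid: is_impaired(d) for sid, d in readings.items()}

    # c) adapt the use of the sensor data: exclude impaired sensors
    usable = {sid: d for sid, d in readings.items() if not impaired[sid]}
    return usable, impaired
```

In practice, step b) would use sensor-specific plausibility checks rather than a single variance threshold; the sketch only shows the control flow the abstract describes.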
Abstract:
A method for determining a sensor degradation status of a first sensor system includes: providing data of the first sensor system to represent the environment; providing data of a second sensor system to represent the environment; determining an individual blindness indicator for the first sensor system on the basis of sensor data exclusively of the first sensor system; determining at least one first environment-related determination variable based on the provided data of the first sensor system; determining at least one second environment-related determination variable based on the provided data of the second sensor system; determining a fusion blindness indicator based on a comparison of the at least one first environment-related determination variable with the at least one second environment-related determination variable; and determining the sensor degradation status of the first sensor system based on the individual blindness indicator and the fusion blindness indicator.
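A minimal sketch of the indicator combination described above; the disagreement tolerance, the equal weighting, and the threshold are assumptions, since the abstract leaves the comparison and combination rules open:

```python
def sensor_degradation_status(individual_blind, det1, det2,
                              tol=0.2, thresh=0.5):
    # Fusion blindness indicator: fraction of environment-related
    # determination variables on which the two sensor systems disagree
    # by more than the tolerance
    disagreements = [abs(a - b) > tol for a, b in zip(det1, det2)]
    fusion_blind = sum(disagreements) / len(disagreements)

    # Combine the individual and fusion blindness indicators
    # (equal weighting chosen for illustration only)
    score = 0.5 * individual_blind + 0.5 * fusion_blind
    return "degraded" if score > thresh else "ok"
```

A high individual indicator (e.g. a uniformly dark camera image) and strong disagreement with the second sensor system both push the status toward "degraded".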
Abstract:
A method for detecting imaging degradation of an imaging sensor includes (i) providing an image of a surrounding area, said image being generated by the imaging sensor; (ii) detecting imaging degradation for each sub-image of a plurality of sub-images of the image using a neural network trained for this purpose; and (iii) detecting the imaging degradation of the sensor, said imaging degradation being given by the ratio of the number of sub-images of the image with detected degradation to the total number of sub-images.
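The tiling and ratio computation of steps (ii) and (iii) can be sketched as below; the trained neural network is stubbed out as a classify_subimage callable, and the tile size is an assumption:

```python
def image_degradation_ratio(image, classify_subimage, tile=32):
    # Split the image (a list of pixel rows) into tile x tile
    # sub-images, run the per-sub-image degradation classifier on
    # each, and return the ratio of degraded sub-images to all
    # sub-images, as in step (iii).
    h, w = len(image), len(image[0])
    total = degraded = 0
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            sub = [row[x:x + tile] for row in image[y:y + tile]]
            total += 1
            if classify_subimage(sub):
                degraded += 1
    return degraded / total
```

In the method itself, classify_subimage would be the neural network trained for this purpose; the sketch only fixes the bookkeeping around it.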
Abstract:
A system includes a K1 preprocessing module designed to generate at least one intermediate image from an input image using a parameterized internal processing chain and an analysis module to detect a feature or object in the intermediate image. A method to train the system includes feeding a plurality of learning input images to the system, comparing a result provided by the analysis module for each of the learning input images to a learning value, and feeding back a deviation obtained by the comparison to an input of the preprocessing module and/or adapting parameters of the internal processing chain to reduce the deviation.
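The training loop above can be sketched with a single scalar processing-chain parameter and a finite-difference feedback step; the adaptation rule, learning rate, and all function names are illustrative assumptions, since the abstract does not specify them:

```python
def train(p, preprocess, analyze, learning_images, learning_values,
          lr=0.1, epochs=50):
    # p: scalar parameter of the internal processing chain
    for _ in range(epochs):
        for img, target in zip(learning_images, learning_values):
            # Forward pass: preprocessing module, then analysis module
            result = analyze(preprocess(img, p))
            deviation = result - target  # comparison to learning value

            # Feed the deviation back: estimate the gradient of the
            # squared deviation w.r.t. p by finite differences and
            # adapt p to reduce the deviation
            eps = 1e-4
            dev2 = analyze(preprocess(img, p + eps)) - target
            grad = (dev2 ** 2 - deviation ** 2) / eps
            p -= lr * grad
    return p
```

A real system would adapt a whole parameter vector, typically by backpropagation through both modules; the scalar version just makes the feedback loop concrete.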
Abstract:
A method of transmissivity-aware chroma keying. The method includes: a) obtaining a first shot of at least one object in front of a first background or a first scene; b) obtaining a second shot of the at least one object in front of a second background or a second scene, which differs at least partially from the first background or the first scene; c) extracting the at least one object, using the first shot and the second shot.
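The extraction step c) can be made concrete for a grayscale simplification. With the standard compositing model I = a*F + (1 - a)*B (a the per-pixel transmissivity/alpha, F the object color, B the background), subtracting the two shots gives I1 - I2 = (1 - a)*(B1 - B2), hence a = 1 - (I1 - I2)/(B1 - B2). The function name and the grayscale restriction are assumptions:

```python
def extract_alpha(shot1, bg1, shot2, bg2, eps=1e-6):
    # Per-pixel alpha of the object from two shots in front of two
    # differing (known) backgrounds, via a = 1 - (I1-I2)/(B1-B2)
    alpha = []
    for i1, b1, i2, b2 in zip(shot1, bg1, shot2, bg2):
        denom = b1 - b2
        if abs(denom) > eps:
            a = 1.0 - (i1 - i2) / denom
        else:
            a = 1.0  # backgrounds agree here: alpha is unconstrained
        alpha.append(min(1.0, max(0.0, a)))
    return alpha
```

Where the two backgrounds coincide, the equation gives no information, so the pixel must be resolved by other means; the sketch simply defaults it to opaque.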
Abstract:
A method for processing an image representing at least one halation. The image is read in via an interface to an image recording device. In addition, an intensity distribution representing the halation is ascertained using the image. The intensity distribution is then analyzed in order to determine a surface-shaped distribution of particles in the region of acquisition of the image recording device as the cause of the halation, and to distinguish it from a volume-shaped distribution of particles.
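One illustrative way to analyze the intensity distribution: a surface-shaped particle distribution (e.g. droplets on the lens) tends to produce a narrower, more peaked halation than a volume-shaped one (e.g. fog). The half-maximum-width heuristic and the threshold below are assumptions; the abstract does not specify the analysis:

```python
def classify_halation(intensity_profile, width_thresh=0.5):
    # Relative width of the profile at half of its peak intensity:
    # narrow -> surface-shaped cause, broad -> volume-shaped cause
    peak = max(intensity_profile)
    half = peak / 2.0
    above = sum(1 for v in intensity_profile if v >= half)
    rel_width = above / len(intensity_profile)
    return "surface" if rel_width < width_thresh else "volume"
```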
Abstract:
A method is provided for determining a visual range in daytime fog, the method including a step of reading in and a step of ascertaining. In the step of reading in, coordinates of at least one characteristic point of a brightness curve of a camera image of the fog are read in. The brightness curve represents brightness values of image points of the camera image along a reference axis of the camera image. In the step of ascertaining, a meteorological visual range in the camera image is ascertained using the coordinates, a meteorological contrast threshold, and a processing specification, in order to estimate the visual range in fog. The processing specification models location-dependent and/or direction-dependent scattered light through the fog in the camera image.
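The final conversion from an atmospheric extinction coefficient to a meteorological visual range follows the Koschmieder relation V = -ln(eps)/beta, which for the common contrast threshold eps = 0.05 gives V ≈ 3/beta. Deriving beta from the characteristic point of the brightness curve is the patent's step and is not reproduced here; the sketch covers only the known relation:

```python
import math

def meteorological_visual_range(extinction_coeff, contrast_threshold=0.05):
    # Koschmieder relation: distance at which the contrast of a black
    # object against the horizon falls to the meteorological contrast
    # threshold (commonly 5%)
    return -math.log(contrast_threshold) / extinction_coeff
```

For example, an extinction coefficient of 0.03 per meter corresponds to a meteorological visual range of roughly 100 m.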
Abstract:
A method for detecting a soiling of an optical component of a driving environment sensor for capturing the surroundings of a vehicle. An image signal, which represents at least one image region of at least one image captured by the driving environment sensor, is input here. The image signal is subsequently processed using at least one automatically trained classifier to detect the soiling in the image region.
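The processing step can be sketched as below; the trained classifier is stubbed out as a callable, and the mean/contrast feature extraction is an illustrative assumption (soiling such as dirt on a lens often shows up as a low-contrast region):

```python
def detect_soiling(image_region, classifier):
    # Reduce the image region (a list of pixel rows) to simple
    # features and hand them to the automatically trained classifier
    flat = [p for row in image_region for p in row]
    mean = sum(flat) / len(flat)
    contrast = max(flat) - min(flat)
    return classifier([mean, contrast])
```

In the method itself, the classifier would typically operate on the raw image signal of the region; the feature stub only fixes the interface around it.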