Abstract:
An imaging system for a vehicle includes an imaging sensor and a control. The imaging sensor is operable to capture an image of a scene occurring exteriorly of the vehicle. The control receives the captured image, which comprises an image data set representative of the exterior scene. The control may apply an edge detection algorithm to a reduced image data set of the image data set. The reduced image data set is representative of a target zone of the captured image. The control may be operable to process the reduced image data set more than other image data, which are representative of areas of the captured image outside of the target zone, to detect objects present within the target zone. The imaging system may be associated with a side object detection system, a lane change assist system, a lane departure warning system and/or the like.
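As a rough illustration of the idea of confining edge detection to the reduced image data set, the sketch below applies a plain Sobel operator to a rectangular target zone only; the zone bounds, the operator and the threshold are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def detect_edges_in_target_zone(frame, zone):
    """Apply a simple Sobel edge detector to a rectangular target zone only.

    frame : 2-D numpy array of grayscale pixel values (the captured image).
    zone  : (row0, row1, col0, col1) bounds of the target zone (hypothetical).
    """
    r0, r1, c0, c1 = zone
    reduced = frame[r0:r1, c0:c1].astype(float)   # reduced image data set

    # Sobel kernels for horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    h, w = reduced.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = reduced[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)
            gy = np.sum(patch * ky)
            mag[i, j] = np.hypot(gx, gy)

    # Pixels outside the zone are never touched, which is the point:
    # only the reduced data set is processed for object detection.
    return mag > mag.mean() + 2 * mag.std()       # crude edge mask

# Example: a synthetic 240x320 frame with a bright blob inside the target zone.
frame = np.zeros((240, 320))
frame[100:140, 200:260] = 255.0
edges = detect_edges_in_target_zone(frame, (80, 160, 180, 300))
print(edges.sum(), "edge pixels found in the target zone")
```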
Abstract:
An image sequence is input (200) from the camera and vertical motion is estimated (202). A windowed horizontal edge projection (204) is extracted from the input image sequence (200) and the horizontal edges are projected (206). The horizontal edge projection (206) and the vertical motion estimation (202) are combined in a horizontal segmentation and tracking element (208) and forwarded to an object parameter estimation element (210), where the object's distance and height are estimated. This data is combined in a fusion with radar detection element (212). By correctly matching the overhead objects sensed by the radar and the video camera, their proximity and relative speed can be ascertained. Once overhead objects have been identified, they can be isolated and excluded from collision avoidance purposes.
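A minimal sketch of what a windowed horizontal edge projection (204, 206) might look like in practice; the column window, the row-difference kernel and the peak threshold are assumptions for illustration rather than details of the patented method.

```python
import numpy as np

def horizontal_edge_projection(gray, col_window):
    """Project horizontal-edge strength onto image rows inside a column window.

    gray       : 2-D grayscale frame.
    col_window : (c0, c1) columns over which the projection is accumulated.
    Returns one value per row transition; peaks suggest horizontal structures
    such as overhead bridges or signs.
    """
    c0, c1 = col_window
    region = gray[:, c0:c1].astype(float)
    # Horizontal edges respond to vertical intensity changes: the difference
    # between successive rows approximates a [-1, +1] vertical kernel.
    dy = np.abs(region[1:, :] - region[:-1, :])
    return dy.sum(axis=1)                 # windowed row-wise projection

def find_overhead_candidates(projection, threshold):
    """Rows whose projected edge energy exceeds a (hypothetical) threshold."""
    return np.where(projection > threshold)[0]

# Example with a synthetic frame containing a bright horizontal band.
frame = np.zeros((120, 160))
frame[30:33, :] = 200.0                   # stand-in for an overhead structure
proj = horizontal_edge_projection(frame, (40, 120))
print(find_overhead_candidates(proj, proj.mean() + 3 * proj.std()))
```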
Abstract:
A vehicular imaging system for determining roadway width includes an image sensor for capturing images and an image processor for receiving the captured images. The image processor determines roadway width by identifying roadway marker signs and oncoming traffic in the processed images captured by the image sensor, and by determining the number of lanes and the vehicle's location on the roadway based on the roadway size and/or width and the location of oncoming traffic.
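To make the lane-count-to-width reasoning concrete, the following sketch assumes a nominal lane width and a hypothetical lateral-offset input; neither figure comes from the abstract.

```python
def estimate_roadway_width(num_lanes, lane_width_m=3.5):
    """Rough roadway width estimate once the lane count has been determined.

    num_lanes    : lanes counted from marker signs and oncoming traffic.
    lane_width_m : assumed nominal lane width (illustrative value).
    """
    return num_lanes * lane_width_m

def host_lane_index(lateral_offset_m, roadway_width_m, num_lanes):
    """Map the host vehicle's lateral offset from the right road edge
    (hypothetical input, e.g. from lane-marking detection) to a lane index."""
    lane_width = roadway_width_m / num_lanes
    return int(lateral_offset_m // lane_width)

width = estimate_roadway_width(num_lanes=3)
print(width, host_lane_index(5.0, width, 3))   # -> 10.5 (m), lane index 1
```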
Abstract:
The invention relates to a method for assisting a user of a vehicle (2), in which driving state variables (v, a, q, ω, n) are acquired or determined via sensors (16) of the vehicle (2), and a camera (12) of the vehicle (2) captures a detection region (14) of a road scene (1) at least in front of the vehicle (2) and outputs image signals (S12). From the image signals (S12), it is determined whether a further vehicle (7, 8, 9) that is outputting turn-signal indications is located in the detection region (14). Depending on the determined driving state variables (v, a, q, ω, n) of the vehicle (2) and on the determination of whether other vehicles (7, 8, 9) are indicating a change of direction, information signals, in particular warning signals, can be output to the user and/or an automatic driver assistance control can be carried out, in which control signals for interventions in a vehicle controller for longitudinal and/or lateral control, in particular a distance control, are output. In each case, various possible control interventions can be determined and carried out. A corresponding control device and the vehicle enabled thereby are also provided.
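One plausible way to decide from image signals whether another vehicle is flashing its indicators is to test a tracked lamp region for a periodic brightness variation; the frequency band and FFT test below are assumptions for illustration, since the abstract does not specify the detection method.

```python
import numpy as np

def blinker_active(brightness_series, fps, min_hz=1.0, max_hz=2.5):
    """Decide whether a tracked lamp region is flashing like a turn signal.

    brightness_series : mean brightness of the candidate lamp region per frame.
    fps               : camera frame rate.
    min_hz, max_hz    : assumed indicator flash-rate band (illustrative).
    Uses a simple FFT peak test on the brightness time series.
    """
    x = np.asarray(brightness_series, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= min_hz) & (freqs <= max_hz)
    if not band.any():
        return False
    # Flashing is assumed if the strongest spectral peak lies in the band.
    return spectrum[band].max() > 0.5 * spectrum[1:].max()

# Example: a 1.5 Hz on/off brightness pattern sampled at 30 fps for 2 seconds.
t = np.arange(60) / 30.0
series = 100 + 50 * (np.sin(2 * np.pi * 1.5 * t) > 0)
print(blinker_active(series, fps=30))   # -> True
```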
Abstract:
The present invention provides a collision avoidance apparatus and method employing stereo vision applications for adaptive vehicular control. The stereo vision applications comprise a road detection function and a vehicle detection and tracking function. The road detection function makes use of three-dimensional point data, computed from stereo image data, to locate the road surface ahead of a host vehicle. Information gathered by the road detection function is used to guide the vehicle detection and tracking function, which provides lead motion data to a vehicular control system of the collision avoidance apparatus. Similar to the road detection function, stereo image data is used by the vehicle detection and tracking function to determine the depth of image scene features, thereby providing a robust means for identifying potential lead vehicles in a headway direction of the host vehicle.
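The depth of image scene features mentioned above follows from the standard pinhole-stereo relation; the sketch below shows that relation with illustrative focal-length and baseline values that are not taken from the source.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic pinhole-stereo relation used to obtain 3-D point depth.

    depth = f * B / d, where f is the focal length in pixels, B the stereo
    baseline in metres and d the disparity in pixels.  Parameter values in
    the example are illustrative.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# A lead-vehicle feature seen with 8 px disparity by a 700 px / 0.3 m rig:
print(depth_from_disparity(8.0, focal_length_px=700.0, baseline_m=0.3))  # 26.25 m
```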
Abstract:
A lane detection apparatus for a host vehicle (10), the apparatus comprising: a first sensing means (14), which provides a first set of data dependent upon features of a part of the road ahead of the host vehicle; a second sensing means (13), which provides a second set of data dependent upon features of a part of the road ahead of the host vehicle; and a processing means (17) arranged to estimate the location of lane boundaries (11, 12) by interpreting the data captured by both sensing means. The second sensing means (13) may have different performance characteristics to the first sensing means (14). One or more of the sensing means may include a pre-processing means (15, 16), which is arranged to process the "raw" data provided by the sensing means to produce estimated lane boundary position data indicative of an estimate of the location of lane boundaries (11, 12). The fusion of the data points can be performed in many ways, but in each case the principle is that more reliable raw data points or de-constructed data points are given preference over, or are more dominant than, less reliable data points. How reliable the points are at a given range is determined by allocating a weighting to the data values according to which sensing means produced the data and to what range the data values correspond.
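A minimal sketch of the weighted fusion principle described above, in which the more reliable sensing means dominates at each range; the data layout and the weighting functions are illustrative assumptions.

```python
def fuse_lane_boundary(points_a, points_b, weight_a, weight_b):
    """Fuse lane-boundary lateral offsets from two sensing means.

    points_a, points_b : {range_m: lateral_offset_m} estimates from each sensor.
    weight_a, weight_b : functions mapping range to a reliability weight, so
                         the more reliable sensor dominates at each range.
    All names and the weighting scheme are illustrative assumptions.
    """
    fused = {}
    for r in sorted(set(points_a) | set(points_b)):
        wa = weight_a(r) if r in points_a else 0.0
        wb = weight_b(r) if r in points_b else 0.0
        if wa + wb == 0.0:
            continue
        fused[r] = (wa * points_a.get(r, 0.0) + wb * points_b.get(r, 0.0)) / (wa + wb)
    return fused

# Example: a camera-like sensor reliable near the vehicle, a radar-like one far away.
camera = {10: 1.80, 30: 1.75, 60: 1.60}
radar  = {30: 1.70, 60: 1.68, 90: 1.66}
fused = fuse_lane_boundary(camera, radar,
                           weight_a=lambda r: max(0.0, 1.0 - r / 100.0),
                           weight_b=lambda r: min(1.0, r / 100.0))
print(fused)
```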
Abstract:
The invention relates to a safety system (1) in a motor vehicle having at least one device for detecting a critical vehicle state and at least one occupant protection device. When a critical vehicle state is detected, at least one signal for preconditioning and/or activating and/or triggering the at least one occupant protection device can be generated as a function of at least one signal of at least one sensor of the device for detecting the critical vehicle state. The at least one device for detecting a critical vehicle state is a lane-keeping assistance system (2) whose at least one sensor can detect an unintentional departure from the lane, and the degree of lane crossing can be derived from the at least one signal of the at least one sensor.
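To illustrate how a derived degree of lane crossing could drive preconditioning, activation or triggering of an occupant protection device, the sketch below uses hypothetical thresholds; the abstract itself only states that the degree of crossing is derivable.

```python
def protection_signal(lane_crossing_degree):
    """Map the derived degree of lane crossing to an occupant-protection action.

    lane_crossing_degree : e.g. lateral distance in metres by which the vehicle
    has crossed the lane marking.  The thresholds below are illustrative
    assumptions, not values from the source.
    """
    if lane_crossing_degree < 0.2:
        return None                    # no critical state detected
    if lane_crossing_degree < 0.5:
        return "precondition"          # e.g. reversibly pre-tension seat belts
    if lane_crossing_degree < 1.0:
        return "activate"
    return "trigger"

for degree in (0.1, 0.3, 0.7, 1.4):
    print(degree, protection_signal(degree))
```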
Abstract:
The invention relates to a method and a device for detecting the position of a vehicle (F1-F4) in a given area (100), especially a storage facility. The inventive method comprises the following steps: the size and angle of incremental movement vectors relating to the movement of the vehicle (F1-F4) are detected; a respective reference position of the vehicle (F1-F4) is automatically determined at predetermined locations (O1-O4) inside the given area (100) whenever the vehicle (F1-F4) passes a corresponding location (O1-O4); and the current position of the vehicle (F1-F4) inside the given area (100) is detected by means of vectorial summation of the detected incremental movement vectors with respect to the location vector of the temporary reference position. The automatic determination is carried out by a sensor (L1, L2, MS) which is arranged on the vehicle (F1-F4) and interacts in a contactless manner with a respective reference marking (MS) in the corresponding location (O1-O4) exhibiting reflecting and non-reflecting areas (R1, R2; D), which are scanned simultaneously by the vehicle (F1-F4) by means of at least two signals (ST1, ST2). The coordinates (x, y) of the reference position and, optionally, the angle of passage (α) are determined by evaluating the variation in time of the reflected intensity of said signals (ST1, ST2).
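The vectorial summation of incremental movement vectors with respect to the last reference position can be illustrated as follows; the coordinate convention and the example values are assumptions.

```python
import math

def current_position(reference_xy, increments):
    """Dead-reckon the current position from the latest reference position.

    reference_xy : (x, y) coordinates determined at the last reference marking.
    increments   : list of (length_m, angle_rad) incremental movement vectors
                   recorded since that marking was passed.
    The vectorial summation below is the straightforward reading of the
    abstract; the coordinate convention is an assumption.
    """
    x, y = reference_xy
    for length, angle in increments:
        x += length * math.cos(angle)
        y += length * math.sin(angle)
    return x, y

# Example: reference position (12.0, 3.0) at a location, then three increments.
print(current_position((12.0, 3.0),
                       [(1.0, 0.0), (1.0, math.pi / 4), (0.5, math.pi / 2)]))
```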