Abstract:
Driver assistance systems for detecting a structural barrier extending along a road. The driver assistance system may be mountable in a host vehicle. A camera may capture multiple image frames in its forward field of view. A processor may process motion of images of the barrier in the image frames. The camera may be a single camera. The motion of the images may be responsive to forward motion of the host vehicle and/or to lateral motion of the host vehicle.
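The lateral-motion cue can be illustrated with a small sketch. This is a simplified model, not the patented method: under lateral ego-motion, the horizontal image flow of a point scales with 1/Z, so two points on the same vertical structure (one range Z) move together, while a road-surface point below the barrier lies at a different range and moves differently. The function name and tolerance are illustrative assumptions.

```python
def looks_vertical(u_top, u_bottom, tol=0.15):
    """Heuristic verticality test from lateral ego-motion (illustrative).

    Horizontal flow of a point under lateral translation is f*Tx/Z, i.e.
    proportional to 1/Z.  Two tracked points stacked on a vertical barrier
    share one range Z, so their flows agree; a road point below the barrier
    is closer to the camera and flows faster.  `tol` is a relative
    agreement threshold (an assumed tuning parameter).
    """
    return abs(u_top - u_bottom) <= tol * max(abs(u_top), abs(u_bottom))
```

For example, flows of 2.0 and 2.1 px/frame agree within 15% and suggest a vertical structure, while 2.0 versus 3.5 px/frame suggests points at different ranges, i.e. the road surface.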
Abstract:
A conjugate comprising L-DOPA covalently linked to at least one γ-aminobutyric acid (GABA) moiety, an ester and/or an addition salt thereof, are disclosed, as well as uses thereof for treating a neurodegenerative disease or disorder.
Abstract:
A system mounted on a vehicle for detecting an obstruction on a surface of a window of the vehicle. A primary camera is mounted inside the vehicle behind the window and is configured to acquire images of the environment through the window. A secondary camera is focused on an external surface of the window and operates to image the obstruction. A portion of the window, i.e. a window region, is subtended respectively by the field of view of the primary camera and the field of view of the secondary camera. A processor processes respective sequences of image data from both the primary camera and the secondary camera.
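One way to picture how the window-focused secondary camera could expose an obstruction is a sharpness comparison: dirt on the glass is in focus for the secondary camera, so its image gains high-frequency detail relative to a clean-window view. A minimal sketch, assuming a gradient-energy focus measure and an illustrative clean-window baseline frame (this is not the patented processing, just one plausible cue):

```python
import numpy as np

def gradient_energy(img):
    """Mean squared finite-difference gradient -- a simple focus measure."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float(np.mean(gx ** 2)) + float(np.mean(gy ** 2))

def detect_obstruction(secondary_frame, clean_baseline, ratio=3.0):
    """Flag an obstruction when the window-focused camera sees much more
    in-focus detail than a clean-window baseline (`ratio` is an assumed
    tuning parameter)."""
    return gradient_energy(secondary_frame) > ratio * gradient_energy(clean_baseline)
```

With a smooth (defocused-scene) baseline, adding a sharp spot to the frame raises its gradient energy by orders of magnitude and trips the detector.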
Abstract:
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a driver-assist object detection system is provided for a vehicle. One or more processing devices associated with the system receive at least two images from a plurality of captured images via a data interface. The device(s) analyze a first image and at least a second image to determine a reference plane corresponding to the roadway on which the vehicle is traveling. The processing device(s) locate a target object in the two images and determine a difference in a size of at least one dimension of the target object between the two images. The system may use the difference in size to determine a height of the object. Further, the system may cause a change in at least a directional course of the vehicle if the determined height exceeds a predetermined threshold.
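The size-change geometry can be sketched under a simple pinhole model (an illustrative derivation, not the claimed method; all names and numbers are assumptions): image width w = f*W/Z, so the ratio of apparent widths across two frames taken a known distance apart yields the range, and the range converts apparent height to physical height.

```python
def object_range_and_height(w1, w2, h_img2, baseline_d, focal_px):
    """Range and physical height of a target from its apparent-size change
    between two frames taken `baseline_d` metres apart (pinhole sketch).

    Since w = f*W/Z, we have w2/w1 = Z1/Z2 with Z2 = Z1 - baseline_d,
    hence Z1 = d * w2 / (w2 - w1) (requires w2 > w1, i.e. a closing target).
    The physical height then follows from the second frame: H = h_img2 * Z2 / f.
    """
    Z1 = baseline_d * w2 / (w2 - w1)
    Z2 = Z1 - baseline_d
    H = h_img2 * Z2 / focal_px
    return Z2, H
```

For instance, a 0.5 m wide, 0.3 m tall object seen at 20 m and again after the vehicle advances 2 m (f = 1000 px) is recovered at 18 m range and 0.3 m height; a system could then compare H against its threshold to decide on a course change.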
Abstract:
A method of estimating a time to collision (TTC) of a vehicle with an object comprising: acquiring a plurality of images of the object; and determining a TTC from the images that is responsive to a relative velocity and relative acceleration between the vehicle and the object.
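A TTC responsive to both relative velocity and relative acceleration can be recovered from apparent scale alone. A minimal sketch, assuming a pinhole camera and a quadratic range model (this is one plausible formulation, not necessarily the claimed one): apparent size is proportional to 1/Z, so the inverse of the scale s(t) = w(t)/w(0) traces Z(t)/Z0; with constant relative acceleration, Z(t) is quadratic in t, and the TTC is the first positive root of that quadratic.

```python
import numpy as np

def ttc_from_scales(times, scales):
    """TTC from a sequence of apparent scales, allowing relative acceleration.

    Fits 1/s(t) = Z(t)/Z0 (quadratic in t under constant relative
    acceleration) and returns the smallest positive root, i.e. the time at
    which the range reaches zero; None if no collision is predicted.
    """
    c2, c1, c0 = np.polyfit(times, 1.0 / np.asarray(scales, float), 2)
    roots = np.roots([c2, c1, c0])
    real_pos = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return min(real_pos) if real_pos else None
```

With synthetic data for Z0 = 30 m, v = -10 m/s, a = -2 m/s² the fit recovers the analytic collision time (-10 + √220)/2 ≈ 2.416 s.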
Abstract:
A method for providing a forward collision warning using a camera mountable in a motor vehicle. The method acquires multiple image frames at known time intervals. A patch may be selected in at least one of the image frames. Optical flow may be tracked between the image frames for multiple image points of the patch. Based on the fit of the image points to a model, a time-to-collision (TTC) may be determined if a collision is expected. The image points may be fit to a road surface model, in which a portion of the image points is modeled as being imaged from the road surface. A collision is not expected when the image points fit the road surface model.
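The model fit can be illustrated with a small sketch, assuming pure forward motion and simple flow models (illustrative assumptions, not the patented formulation): flow of road-plane points grows quadratically with distance y below the horizon (v = k·y²), while points on an upright obstacle at a single range Z satisfy v = y·V/Z, which is linear in y and gives TTC = Z/V = y/v directly.

```python
import numpy as np

def classify_patch(y, v):
    """Fit tracked points to a road-surface model vs. an upright-obstacle
    model and return ("road", None) or ("obstacle", ttc).

    y : image distances below the horizon; v : per-frame flow magnitudes.
    Road model v = k*y^2 and obstacle model v = m*y are each fit by least
    squares; the lower-residual model wins.  For an obstacle, y/v = Z/V is
    the TTC in the same time unit as the flow (e.g. frames).
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = np.sum(v * y ** 2) / np.sum(y ** 4)   # road-surface fit
    m = np.sum(v * y) / np.sum(y ** 2)        # upright-obstacle fit
    road_res = np.sum((v - k * y ** 2) ** 2)
    obst_res = np.sum((v - m * y) ** 2)
    if road_res <= obst_res:
        return "road", None                   # no collision expected
    return "obstacle", float(np.median(y / v))
```

Points whose flow follows y² are classified as road surface (no warning); flow linear in y yields an obstacle and its TTC.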
Abstract:
A system mounted in a vehicle for classifying light sources. The system includes a lens and a spatial image sensor. The lens is adapted to provide an image of a light source on the spatial image sensor. A diffraction grating is disposed between the lens and the light source. The diffraction grating is adapted for providing a spectrum. A processor is configured for classifying the light source as belonging to a class selected from a plurality of classes of light sources expected to be found in the vicinity of the vehicle, wherein the spectrum is used for the classifying of the light source. Both the image and the spectrum may be used for classifying the light source; alternatively, the spectrum may be used for the classification while the image is used for another driver assistance application.
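Spectrum-based classification can be sketched as nearest-reference matching: the measured spectrum (sampled from the grating's diffraction pattern) is correlated against stored spectra of the light-source classes expected near a vehicle. A minimal sketch; the reference spectra below are made-up illustrative data, not measurements, and the class names are assumptions:

```python
import numpy as np

# Illustrative reference spectra on a common grid (400..700 nm, 50 nm steps).
# Values are assumed relative intensities, not measured data.
REFERENCE = {
    "incandescent_headlamp": np.array([0.2, 0.35, 0.5, 0.7, 0.85, 1.0, 1.0]),
    "red_tail_lamp":         np.array([0.0, 0.0, 0.0, 0.1, 0.3, 0.9, 1.0]),
    "sodium_streetlight":    np.array([0.0, 0.1, 0.2, 1.0, 0.6, 0.2, 0.1]),
}

def classify_spectrum(spectrum):
    """Return the class whose normalized reference spectrum has the highest
    cosine similarity with the measured spectrum."""
    s = np.asarray(spectrum, float)
    s = s / np.linalg.norm(s)
    best, best_score = None, -np.inf
    for name, ref in REFERENCE.items():
        score = float(np.dot(s, ref / np.linalg.norm(ref)))
        if score > best_score:
            best, best_score = name, score
    return best
```

A noisy low-pressure-sodium-like measurement still correlates best with the sodium reference, which is the point of using the spectrum rather than the (often saturated) image alone.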
Abstract:
An imaging system for a vehicle may include a first image capture device having a first field of view and configured to acquire a first image relative to a scene associated with the vehicle, the first image being acquired as a first series of image scan lines captured using a rolling shutter. The imaging system may also include a second image capture device having a second field of view different from the first field of view and that at least partially overlaps the first field of view, the second image capture device being configured to acquire a second image relative to the scene associated with the vehicle, the second image being acquired as a second series of image scan lines captured using a rolling shutter. As a result of overlap between the first field of view and the second field of view, a first overlap portion of the first image corresponds with a second overlap portion of the second image. The first image capture device has a first scan rate associated with acquisition of the first series of image scan lines that is different from a second scan rate associated with acquisition of the second series of image scan lines, such that the first image capture device acquires the first overlap portion of the first image over a period of time during which the second overlap portion of the second image is acquired.
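The scan-rate relationship can be reduced to simple timing arithmetic (a sketch of the constraint, with assumed example numbers): if camera 1 spends overlap_lines_1 / rate_1 seconds sweeping the shared region and camera 2 spends overlap_lines_2 / rate_2, acquiring both overlap portions over the same period requires rate_1 = rate_2 · overlap_lines_1 / overlap_lines_2.

```python
def matched_scan_rate(overlap_lines_1, overlap_lines_2, scan_rate_2_hz):
    """Scan rate (lines/s) the first camera needs so that its rolling-shutter
    sweep of the overlap region spans the same time window as the second
    camera's sweep: equate overlap_lines_1/rate_1 = overlap_lines_2/rate_2."""
    return scan_rate_2_hz * overlap_lines_1 / overlap_lines_2

# Assumed example: a wide-FOV camera covers the shared region with 480 scan
# lines while a narrow-FOV camera uses all 960 of its lines at 57,600 lines/s.
rate_1 = matched_scan_rate(480, 960, 57_600.0)  # 28,800 lines/s
```

So the camera that images the overlap with fewer scan lines must scan proportionally slower (or the other faster) for the two overlap portions to be acquired concurrently.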