Abstract:
A method and system for filtering an image frame of a video sequence from spurious motion, comprising the steps of dividing the image frame and a preceding image frame of the video sequence into blocks of pixels; determining motion vectors for the blocks of the image frame; determining inter-frame transformation parameters for the image frame based on the determined motion vectors; and generating a filtered image frame based on the determined inter-frame transformation parameters; wherein the image frame is divided into overlapping blocks.
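As a rough illustration of the pipeline described above, the following is a minimal Python sketch (assuming OpenCV and NumPy): per-block motion vectors are estimated by phase correlation on overlapping blocks, a robust inter-frame transform is fitted to those vectors, and the current frame is warped to compensate the spurious motion. The block size, stride, phase-correlation matcher and 4-parameter affine model are illustrative assumptions, not details from the abstract.

    import cv2
    import numpy as np

    def filter_spurious_motion(prev_gray, curr_gray, block=64, stride=32):
        h, w = curr_gray.shape
        src_pts, dst_pts = [], []
        # stride < block, so neighbouring blocks overlap
        for y in range(0, h - block + 1, stride):
            for x in range(0, w - block + 1, stride):
                p = prev_gray[y:y + block, x:x + block].astype(np.float32)
                c = curr_gray[y:y + block, x:x + block].astype(np.float32)
                (dx, dy), _ = cv2.phaseCorrelate(p, c)      # one motion vector per block
                cx, cy = x + block / 2.0, y + block / 2.0
                src_pts.append((cx, cy))
                dst_pts.append((cx + dx, cy + dy))
        # Robust fit of the inter-frame transformation parameters (RANSAC drops outlier blocks)
        M, _ = cv2.estimateAffinePartial2D(np.float32(src_pts), np.float32(dst_pts),
                                           method=cv2.RANSAC)
        # Generate the filtered frame by undoing the estimated (spurious) motion
        return cv2.warpAffine(curr_gray, cv2.invertAffineTransform(M), (w, h))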
Abstract:
In accordance with an embodiment, a method of detecting moving objects via a moving camera includes receiving a sequence of images from the moving camera; determining optical flow data from the sequence of images; decomposing the optical flow data into global motion related motion vectors and local object related motion vectors; calculating global motion parameters from the global motion related motion vectors; calculating motion-compensated vectors from the local object related motion vectors and the calculated global motion parameters; compensating the local object related motion vectors using the calculated global motion parameters; and clustering the compensated local object related motion vectors to generate a list of detected moving objects.
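A minimal Python sketch of this pipeline (assuming OpenCV, NumPy and scikit-learn) is given below: sparse optical flow is computed with Lucas-Kanade tracking, RANSAC inliers stand in for the global motion related vectors and outliers for the local object related vectors, the local vectors are compensated with the fitted global parameters, and the compensated vectors are clustered with DBSCAN into a list of candidate objects. The feature tracker, the affine model and the DBSCAN parameters are assumptions, not details from the abstract.

    import cv2
    import numpy as np
    from sklearn.cluster import DBSCAN

    def detect_moving_objects(prev_gray, curr_gray):
        # Optical flow data: sparse Lucas-Kanade feature tracks
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
        p0, p1 = p0[st == 1], p1[st == 1]

        # Decompose: RANSAC fit of the global motion parameters;
        # inliers ~ global motion related vectors, outliers ~ local object related vectors
        M, inliers = cv2.estimateAffinePartial2D(p0, p1, method=cv2.RANSAC)
        mask = inliers.ravel().astype(bool)
        obj_p0, obj_p1 = p0[~mask], p1[~mask]
        if len(obj_p1) == 0:
            return []

        # Compensate the local vectors using the calculated global motion parameters
        predicted = obj_p0 @ M[:, :2].T + M[:, 2]     # where global motion alone would move them
        compensated = obj_p1 - predicted

        # Cluster the compensated local vectors (by endpoint) into a list of detected objects
        labels = DBSCAN(eps=30, min_samples=3).fit_predict(obj_p1)
        return [obj_p1[labels == k] for k in set(labels) if k != -1]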
Abstract:
A sequence of images obtained by a camera mounted on a vehicle is processed to generate Optical Flow data including a list of Motion Vectors associated with respective features in the sequence of images. The Optical Flow data is analyzed to calculate a Vanishing Point as the mean point of all intersections of the straight lines passing through motion vectors lying on the road. A Horizontal Filter subset is determined, taking into account the Vanishing Point and the Bound Box list from a previous frame, in order to filter the horizontal motion vectors from the Optical Flow. The subset of the Optical Flow is clustered to generate the Bound Box list, retrieving the moving objects in the scene. The Bound Box list is sent to an Alert Generation device, and an output video shows the input scene with the detected moving objects surrounded by Bounding Boxes.
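The vanishing-point step lends itself to a short numeric sketch. The fragment below (plain NumPy, illustrative only) computes the mean of all pairwise intersections of the straight lines that the motion vectors lie on; the road-region selection, the Horizontal Filter and the Bound Box clustering are omitted, and the function and parameter names are hypothetical.

    import numpy as np
    from itertools import combinations

    def vanishing_point(points, flows):
        """points: (N, 2) motion-vector origins; flows: (N, 2) motion vectors (dx, dy)."""
        intersections = []
        for (p1, d1), (p2, d2) in combinations(zip(points, flows), 2):
            A = np.column_stack((d1, -d2))            # solve p1 + t*d1 = p2 + s*d2
            if abs(np.linalg.det(A)) < 1e-6:          # (nearly) parallel lines: no intersection
                continue
            t, _ = np.linalg.solve(A, p2 - p1)
            intersections.append(p1 + t * d1)
        # Vanishing Point = mean point of all line intersections
        return np.mean(intersections, axis=0) if intersections else None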
Abstract:
A panning device for processing relative motion vectors and absolute motion vectors obtained from a video sequence, includes: a panning filter module, such as a high-pass IIR filter, for subjecting relative motion vectors to panning processing, an adder module for adding the relative motion vectors subjected to panning in the panning filter module to absolute motion vectors to obtain respective summed values of motion vectors, a clipping module for subjecting the summed values of motion vectors obtained in the adder module to clipping according to a selected cropping window for obtaining final output absolute motion vectors, a first leak integrator arranged after the panning filter module, and a second leak integrator arranged after the clipping module.
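The chain of modules can be pictured with a small one-axis numeric sketch (Python, illustrative only): a first-order high-pass IIR filter on the relative motion vector, a leak integrator after it, addition to the absolute motion vector, clipping to the margin allowed by the cropping window, and a second leak integrator after the clipper. The filter coefficient, leak factor and margin below are arbitrary assumptions, not values from the abstract.

    class PanningAxis:
        """One axis (horizontal or vertical) of the hypothetical panning chain."""

        def __init__(self, alpha=0.9, leak=0.95, margin=40):
            self.alpha, self.leak, self.margin = alpha, leak, margin
            self.prev_rel = 0.0    # previous relative motion vector (high-pass input)
            self.hp_prev = 0.0     # previous high-pass output
            self.int1 = 0.0        # first leak integrator (after the panning filter module)
            self.int2 = 0.0        # second leak integrator (after the clipping module)

        def step(self, rel_mv, abs_mv):
            # Panning filter: first-order high-pass IIR, y[n] = a * (y[n-1] + x[n] - x[n-1])
            hp = self.alpha * (self.hp_prev + rel_mv - self.prev_rel)
            self.hp_prev, self.prev_rel = hp, rel_mv
            self.int1 = self.leak * self.int1 + hp
            # Adder module: filtered relative vector + absolute vector
            summed = self.int1 + abs_mv
            # Clipping module: keep the summed value inside the selected cropping window
            clipped = max(-self.margin, min(self.margin, summed))
            self.int2 = self.leak * self.int2 + clipped
            return self.int2       # final output absolute motion vector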
Abstract:
A sequence of images is processed to generate optical flow data including a list of motion vectors. The motion vectors are grouped based on orientation into a first set of moving away motion vectors and a second set of moving towards motion vectors. A vanishing point is determined as a function of the first set of motion vectors and a center position of the images is determined. Pan and tilt information is computed from the distance difference between the vanishing point and the center position. Approaching objects are identified from the second set as a function of position, length and orientation, thereby identifying overtaking vehicles. Distances to the approaching objects are determined from object position, camera focal length, and pan and tilt information. A warning signal is issued as a function of the distances.
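Two of the steps above reduce to short formulas and are sketched below in Python (pinhole model, focal length in pixels): pan and tilt from the offset between the vanishing point and the image center, and distance to an approaching object from its image position under a flat-road assumption. The camera height and the ground-plane model are assumptions added for the example, not details from the abstract.

    import numpy as np

    def pan_tilt(vanishing_point, image_center, focal_length):
        # Pan/tilt from the distance difference between vanishing point and image center
        dx, dy = np.subtract(vanishing_point, image_center)
        pan = np.arctan2(dx, focal_length)     # horizontal offset -> pan angle (rad)
        tilt = np.arctan2(dy, focal_length)    # vertical offset -> tilt angle (rad)
        return pan, tilt

    def distance_to_object(y_bottom, center_y, focal_length, tilt, camera_height=1.2):
        # Object position: bottom edge of the detected object, assumed to touch the road plane
        angle_below_horizon = np.arctan2(y_bottom - center_y, focal_length) + tilt
        if angle_below_horizon <= 0:
            return float("inf")                # at or above the horizon: no ground intersection
        return camera_height / np.tan(angle_below_horizon)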
Abstract:
According to an embodiment, a sequence of video frames as produced in a video-capture apparatus such as a video camera is stabilized against hand shake or vibration by: subjecting a pair of frames in the sequence to feature extraction and matching to produce a set of matched features; subjecting the set of matched features to an outlier removal step; and generating stabilized frames via motion-model estimation based on the features resulting from outlier removal. Motion-model estimation is performed on matched features that have passed a zone-of-interest test confirming that they are distributed over a plurality of zones across the frames.
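A plausible rendering of this pipeline in Python (OpenCV) follows: ORB features are extracted and matched across a pair of frames, RANSAC removes outlier matches while fitting a similarity motion model, and a simple zone-of-interest test checks that the surviving matches are spread over several cells of a grid laid over the frame before the stabilizing warp is applied. ORB, RANSAC, the 3x3 grid and the minimum zone count are assumptions, not details from the abstract.

    import cv2
    import numpy as np

    def stabilize(prev_gray, curr_gray, grid=3, min_zones=4):
        # Feature extraction and matching across the pair of frames
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(prev_gray, None)
        k2, d2 = orb.detectAndCompute(curr_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])

        # Outlier removal via RANSAC while fitting the motion model
        M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        good = src[inliers.ravel().astype(bool)]

        # Zone-of-interest test: inlier matches must cover enough cells of a grid over the frame
        h, w = curr_gray.shape
        zones = {(int(x * grid / w), int(y * grid / h)) for x, y in good}
        if len(zones) < min_zones:
            return curr_gray                   # not enough spatial support: leave frame as-is

        # Generate the stabilized frame by undoing the estimated motion
        return cv2.warpAffine(curr_gray, cv2.invertAffineTransform(M), (w, h))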