Abstract:
The displacements of the drone are defined by piloting commands so as to take moving images of a target carrying the ground station. The system comprises means for adjusting the sight angle of the camera during the displacements of the drone and of the target, so that the images remain centered on the target, and means for generating flying instructions so that the distance between the drone and the target satisfies predetermined rules. These means are based on a determination of the GPS geographical position of the target with respect to the GPS geographical position of the drone, and of the angular position of the target with respect to a main axis of the drone. They are also based on the analysis of a non-geographical signal produced by the target and received by the drone. The system thereby overcomes the uncertainty of the GPS units fitted to this type of device.
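A minimal sketch of the sight-angle geometry described above: given the two GPS fixes, the target is expressed in a local frame around the drone, and pan/tilt commands that keep it centered are derived, together with the drone-to-target distance that the flying instructions would regulate. The function names, the flat-earth ENU approximation, and the angle conventions are illustrative assumptions, not the patented implementation.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, adequate for short baselines

def gps_to_local_enu(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Approximate east/north/up offsets (m) of a GPS fix from a reference fix."""
    d_east = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    d_north = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    d_up = alt - ref_alt
    return d_east, d_north, d_up

def sight_angles(drone_fix, target_fix, drone_heading_rad):
    """Pan/tilt (rad) that center the camera on the target, plus the range (m)."""
    e, n, u = gps_to_local_enu(*target_fix, *drone_fix)
    bearing = math.atan2(e, n)              # azimuth of target seen from drone
    pan = bearing - drone_heading_rad       # relative to the drone main axis
    ground_dist = math.hypot(e, n)
    tilt = math.atan2(u, ground_dist)       # negative when looking down
    return pan, tilt, math.hypot(ground_dist, u)
```

The returned range is the quantity the abstract's flying instructions would compare against the predetermined distance rules; the non-geographical signal would serve to refine pan and tilt beyond GPS accuracy.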
Abstract:
The drone comprises: a vertical-view camera (132) pointing downward to pick up images of the scene of the ground overflown by the drone; gyrometer, magnetometer and accelerometer sensors (176); and an altimeter (174). Navigation means determine the position coordinates (X, Y, Z) of the drone in an absolute coordinate system linked to the ground. These means are autonomous and operate without reception of external signals. They include image-analysis means adapted to derive a position signal from an analysis of known predetermined patterns (210) present in the scene picked up by the camera, and they implement a predictive-filter estimator (172) incorporating a representation of a dynamic model of the drone, receiving as inputs the position signal, a horizontal speed signal, linear and rotational acceleration signals, and an altitude signal.
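To illustrate the predictive-filter idea, here is a minimal per-axis linear Kalman filter: the dynamic model is propagated with the accelerometer input, and the vision-derived position and optical-flow speed are fused as measurements. The abstract only states that a predictive filter with a drone dynamic model is used; the constant-velocity model, noise values, and class layout below are assumptions.

```python
import numpy as np

class AxisEstimator:
    """One horizontal axis of a position/velocity estimator (illustrative sketch)."""

    def __init__(self, dt):
        self.x = np.zeros(2)                         # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
        self.B = np.array([0.5 * dt * dt, dt])       # accelerometer input mapping
        self.Q = 0.05 * np.eye(2)                    # process noise (tuning guess)

    def predict(self, accel):
        """Propagate the dynamic model, driven by the linear acceleration signal."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, H, r):
        """Fuse one scalar measurement: vision position (H=[1,0]) or speed (H=[0,1])."""
        H = np.asarray(H, dtype=float)
        S = H @ self.P @ H + r                       # innovation variance
        K = (self.P @ H) / S                         # Kalman gain
        self.x = self.x + K * (z - H @ self.x)
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P
```

A typical cycle would call `predict(ax)` at the sensor rate, then `update(x_vision, [1, 0], r_pos)` whenever a known pattern (210) is recognized and `update(vx_flow, [0, 1], r_vel)` from the horizontal speed signal.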
Abstract:
The method consists in capturing, while the vehicle is moving, images of the road markings delimiting the traffic lanes of the road, and in estimating, through an iterative process, the full orientation of the camera with respect to the vehicle, based on the position of two lanes located side by side in the image. The calibration essentially comprises: correcting the position of the lane edges in the image (10, 12); estimating the residual pitch and yaw (16); updating the rotation matrix (18); estimating the residual roll (20); and updating the rotation matrix again (24). These steps are iterated until the corrective angles estimated by each module become negligible (22).
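A skeleton of that iterative loop is sketched below. The residual-angle estimators are left as hypothetical callables, since the abstract does not give their formulas; only the correct-estimate-update structure and the convergence test on the corrective angles are illustrated, with the step numerals from the abstract noted in comments. The axis conventions are assumptions.

```python
import numpy as np

def rot(axis, angle):
    """Elementary rotation matrix about 'x' (roll), 'y' (pitch) or 'z' (yaw)."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def calibrate(lane_edges, estimate_pitch_yaw, estimate_roll,
              tol=1e-4, max_iter=50):
    """Iterate pitch/yaw then roll corrections until they become negligible."""
    R = np.eye(3)                                        # camera-to-vehicle rotation
    for _ in range(max_iter):
        corrected = [R @ e for e in lane_edges]          # steps 10, 12
        d_pitch, d_yaw = estimate_pitch_yaw(corrected)   # step 16
        R = rot("y", d_pitch) @ rot("z", d_yaw) @ R      # step 18
        d_roll = estimate_roll([R @ e for e in lane_edges])  # step 20
        R = rot("x", d_roll) @ R                         # step 24
        if max(abs(d_pitch), abs(d_yaw), abs(d_roll)) < tol:  # step 22
            break
    return R
```

Here `lane_edges` stands for the lane-edge directions of the two side-by-side lanes expressed as 3D rays in camera coordinates; in practice the pitch/yaw estimate would exploit their vanishing point and the roll estimate their lateral symmetry.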