Abstract:
This drone includes an image sensor configured to take an image of a scene including a plurality of objects, and an electronic determination device including an electronic detection module configured to detect, via a neural network, in the image taken by the image sensor, a representation of a potential target from among the plurality of objects represented, an input variable of the neural network being an image depending on the image taken, at least one output variable of the neural network being an indication relative to the representation of the potential target. A first output variable of the neural network is a set of coordinates defining a contour of a zone surrounding the representation of the potential target.
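The following is a minimal sketch of how such a first output variable might be consumed, assuming the contour is an axis-aligned box expressed in normalised image coordinates; the function and field names are illustrative, not taken from the abstract.

```python
# Hypothetical post-processing of the detector output described above: the
# first output variable is assumed to be [x_min, y_min, x_max, y_max] in
# normalised coordinates, defining the zone surrounding the potential target.
from dataclasses import dataclass

@dataclass
class TargetZone:
    x_min: float  # left edge, pixels
    y_min: float  # top edge, pixels
    x_max: float  # right edge, pixels
    y_max: float  # bottom edge, pixels

def decode_target_zone(raw_output, image_width, image_height):
    """Convert a normalised [x_min, y_min, x_max, y_max] network output
    into pixel coordinates of the zone surrounding the potential target."""
    nx0, ny0, nx1, ny1 = raw_output[:4]
    return TargetZone(
        x_min=nx0 * image_width,
        y_min=ny0 * image_height,
        x_max=nx1 * image_width,
        y_max=ny1 * image_height,
    )

if __name__ == "__main__":
    # Example: a 1280x720 image and a fictitious network output.
    print(decode_target_zone([0.40, 0.30, 0.55, 0.60], 1280, 720))
```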
Abstract:
The system comprises a drone and a ground station. The ground station includes a console provided with a directional antenna adapted to be directed towards the drone to maintain the quality of the wireless link with the latter, and virtual reality glasses rendering images taken by a camera of the drone. The system comprises means for determining the position of the drone with respect to a heading of the console, and means for including in the images (I) rendered in the virtual reality glasses a visual indication (C, Ce, G, Id) of misalignment of the drone with respect to the console heading. Although isolated from the external real environment, the pilot is able, based on this visual indication, to reorient the console, typically by turning around, so that the directional antenna thereof points suitably towards the drone.
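A possible sketch of the misalignment computation behind such a visual indication, assuming the console heading and the two GPS positions are available; the thresholds, function names and the left/right cue are illustrative assumptions only.

```python
# Illustrative sketch (not the patented implementation): compute the
# misalignment between the console heading and the bearing of the drone,
# then choose which side of the rendered image should carry the visual cue.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from the console (lat1, lon1) to the drone (lat2, lon2),
    in degrees clockwise from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def misalignment_deg(console_heading_deg, drone_bearing_deg):
    """Signed misalignment in degrees; positive means the drone is to the right."""
    return (drone_bearing_deg - console_heading_deg + 180.0) % 360.0 - 180.0

def indicator_side(misalignment):
    """Pick the side of the rendered image on which to display the cue."""
    if abs(misalignment) < 10.0:
        return "centre"          # antenna roughly points at the drone
    return "right" if misalignment > 0 else "left"

if __name__ == "__main__":
    m = misalignment_deg(console_heading_deg=90.0,
                         drone_bearing_deg=bearing_deg(48.0, 2.0, 48.01, 2.02))
    print(round(m, 1), indicator_side(m))
```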
Abstract:
A rotary-wing drone includes a drone body including an electronic card controlling the piloting of the drone and one or more linking arms, one or more propulsion units mounted on respective ones of the linking arms, and at least one obstacle sensor integral with the drone body, whose main direction of detection lies in a substantially horizontal plane. The drone additionally includes logic, executed by a processor of the electronic card, adapted to perform the piloting control by correcting the orientation of the drone in flight, specifically its yaw orientation, so as to maintain one of the at least one obstacle sensor in the direction of displacement of the drone.
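A rough sketch of the yaw-correction idea, assuming a north-east horizontal velocity estimate and obstacle sensors mounted at fixed angles relative to the drone's front axis; the sensor count, mounting angles and function names are assumptions, not details from the abstract.

```python
# Sketch: choose a yaw set point that keeps the closest obstacle sensor
# aligned with the direction of displacement (all constants hypothetical).
import math

SENSOR_MOUNT_ANGLES_DEG = [0.0, 90.0, 180.0, 270.0]  # hypothetical four sensors

def course_deg(v_north, v_east):
    """Direction of displacement, degrees clockwise from north."""
    return math.degrees(math.atan2(v_east, v_north)) % 360.0

def yaw_setpoint_deg(v_north, v_east, current_yaw_deg):
    """Yaw set point keeping one obstacle sensor facing the direction of
    displacement while minimising the required rotation."""
    course = course_deg(v_north, v_east)
    best = None
    for mount in SENSOR_MOUNT_ANGLES_DEG:
        target_yaw = (course - mount) % 360.0
        error = (target_yaw - current_yaw_deg + 180.0) % 360.0 - 180.0
        if best is None or abs(error) < abs(best[1]):
            best = (target_yaw, error)
    return best[0]

if __name__ == "__main__":
    # Drone drifting towards the east while currently facing north-east.
    print(round(yaw_setpoint_deg(v_north=0.0, v_east=2.0, current_yaw_deg=45.0), 1))
```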
Abstract:
The invention relates to a method of dynamically encoding flight data in a video, implemented in a drone, the drone comprising a video sensor and attitude sensors and/or altitude sensors. This method comprises, for successive captured images, a step of capturing flight data (E22) of the drone from the attitude sensors and/or the altitude sensors and a step of encoding the captured image (E23). It further includes a step of storing (E24), in a data container, the encoded image, a step of adding (E25) to the encoded image, in the data container, all or part of the captured flight data, and a step of storing (E26) said data container in a memory of the drone (10) and/or of transmitting (E27), by the drone (10), said data container to a remote device (16). The encoding of the video images comprises MPEG-4 encoding (ISO/IEC 14496), and the data container is a track according to MPEG-4 Part 12, multiplexing, according to a common clock, said encoded image and said associated flight data.
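To make the multiplexing principle concrete, here is a simplified sketch of a timed track in which each encoded frame and the flight data captured with it share one clock; it is not an ISO/IEC 14496-12 box writer, and all class and field names are invented for the illustration.

```python
# Simplified illustration of the multiplexing principle only: each encoded
# frame and its flight data are stored as samples stamped with a common clock.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FlightData:
    roll_deg: float
    pitch_deg: float
    yaw_deg: float
    altitude_m: float

@dataclass
class Sample:
    timestamp_ticks: int      # common clock, e.g. 90 kHz ticks
    encoded_frame: bytes      # MPEG-4 encoded image (step E23)
    flight_data: FlightData   # flight data captured for this frame (step E22)

@dataclass
class DataContainerTrack:
    timescale: int = 90_000   # ticks per second of the common clock
    samples: List[Sample] = field(default_factory=list)

    def add(self, t_seconds, encoded_frame, flight_data):
        """Steps E24/E25: store the encoded image and attach the flight data,
        both stamped with the same clock value."""
        ticks = int(round(t_seconds * self.timescale))
        self.samples.append(Sample(ticks, encoded_frame, flight_data))

if __name__ == "__main__":
    track = DataContainerTrack()
    track.add(0.033, b"\x00\x00\x01", FlightData(1.2, -0.4, 87.0, 12.5))
    print(track.samples[0].timestamp_ticks)   # 2970
```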
Abstract:
Disclosed are embodiments of a rotary-wing drone that includes a drone body with two front linking arms and two rear linking arms extending from the drone body, with a propulsion unit located at the distal end of each linking arm. The points of fixation of the front linking arms and the points of fixation of the rear linking arms are located at different respective heights with respect to the horizontal median plane of the drone body. The two front linking arms of the drone may form a first angle of inclination with respect to the horizontal median plane of the drone body and the two rear linking arms may form a second angle of inclination. Additionally, the linking arms of the drone may further be adapted to be folded over along the drone body.
Abstract:
This camera unit (14) comprises a high-resolution rolling-shutter camera (16) and one or several low-resolution global-shutter cameras (18), for example monochrome spectral cameras. All the cameras are oriented in the same direction and can be triggered together to simultaneously collect a high-resolution image (I0) and at least one low-resolution image (I1-I4) of the same scene viewed by the drone. Image processing means (22) determine the wobble-type distortions present in the high-resolution image and absent from the low-resolution images, and combine the high-resolution image (I0) and the low-resolution images (I1-I4) to deliver as an output a high-resolution image (I0) corrected for these distortions.
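A very simplified sketch of the correction principle, under assumptions that are not taken from the abstract: purely horizontal wobble, grayscale images, and an integer scale ratio between the two sensors. The shift of each band of rows is estimated against the wobble-free global-shutter image and then undone in the high-resolution image.

```python
# Estimate, band of rows by band of rows, the horizontal shift between the
# rolling-shutter image and a global-shutter reference, then undo that shift.
import numpy as np

def estimate_row_shift(row_hr, row_ref, max_shift=8):
    """Best integer horizontal shift aligning a (downsampled) high-res band
    with the corresponding global-shutter reference row."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.mean((np.roll(row_hr, s) - row_ref) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def correct_wobble(img_hr, img_lr):
    """Correct horizontal wobble in img_hr using the global-shutter img_lr."""
    scale = img_hr.shape[1] // img_lr.shape[1]
    corrected = img_hr.copy()
    for y_lr in range(img_lr.shape[0]):
        band = img_hr[y_lr * scale:(y_lr + 1) * scale]
        # Block-average the band down to the low-resolution width.
        band_small = band.reshape(scale, -1, scale).mean(axis=(0, 2))
        shift = estimate_row_shift(band_small, img_lr[y_lr].astype(float))
        corrected[y_lr * scale:(y_lr + 1) * scale] = np.roll(band, -shift * scale, axis=1)
    return corrected

if __name__ == "__main__":
    hr = np.random.rand(64, 64)                        # stand-in for image I0
    lr = hr.reshape(16, 4, 16, 4).mean(axis=(1, 3))    # stand-in for image I1
    print(correct_wobble(hr, lr).shape)                # (64, 64)
```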
Abstract:
The invention relates to a method for capturing a video using a camera on board a fixed-wing drone, the camera comprising an image sensor, the drone having, during flight, a drift angle between the longitudinal axis of the drone and a flight direction of the drone. This method comprises: determining the drift angle of the drone; and obtaining video by image acquisition corresponding to a zone with reduced dimensions relative to those of the image sensor, the position of the zone being determined as a function of the drift angle of the drone.
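A minimal sketch of positioning the reduced acquisition zone as a function of the drift angle, under a pinhole-camera assumption; the field of view, sensor size and crop size below are illustrative values, not figures from the abstract.

```python
# Shift the crop zone horizontally so the cropped video stays aligned with
# the flight direction despite the drift angle (all constants illustrative).
import math

def crop_offset_px(drift_angle_deg, sensor_width_px, hfov_deg):
    """Horizontal shift, in pixels, of the crop centre for a given drift angle."""
    focal_px = (sensor_width_px / 2) / math.tan(math.radians(hfov_deg / 2))
    return focal_px * math.tan(math.radians(drift_angle_deg))

def crop_window(drift_angle_deg, sensor=(4000, 3000), crop=(1920, 1080), hfov_deg=90.0):
    """Top-left corner (x, y) of the crop zone inside the full sensor frame."""
    sw, sh = sensor
    cw, ch = crop
    cx = sw / 2 + crop_offset_px(drift_angle_deg, sw, hfov_deg)
    x = int(min(max(cx - cw / 2, 0), sw - cw))  # keep the zone inside the sensor
    y = int((sh - ch) / 2)
    return x, y

if __name__ == "__main__":
    print(crop_window(drift_angle_deg=8.0))  # crop shifted towards the flight direction
```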
Abstract:
A drone includes an automatic piloting system that receives internal and/or external piloting instructions, as well as instantaneous attitude (φ*, θ*), altitude (z*) and speed (V*) data delivered by sensors. Set-point calculation circuits calculate, as a function of a model of the aerodynamic behaviour of the drone in flight, roll (φ) and/or pitch (θ) set points and/or speed set points (V) and/or altitude set points (z) corresponding to the internal and/or external piloting instructions received. Correction and control circuits control the propulsion system and the drone's control-surface servomechanisms. The system further allows piloting instructions to be generated internally for autonomous flight modes such as automatic take-off or automatic landing.
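A hedged sketch of the cascade described above: a set point is computed from a piloting instruction, then a correction-and-control loop drives an actuator. The linear instruction-to-pitch mapping, the simple PID form and all gains are placeholder assumptions, not the patented aerodynamic model.

```python
# Cascade sketch: set-point calculation followed by a correction/control loop.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def pitch_setpoint_deg(speed_instruction_ms):
    """Toy behaviour model: a speed instruction mapped linearly to a pitch
    set point, capped to a safe envelope (placeholder constants)."""
    return max(-20.0, min(20.0, -2.5 * speed_instruction_ms))

if __name__ == "__main__":
    pitch_loop = PID(kp=0.8, ki=0.1, kd=0.05)
    theta_star = pitch_setpoint_deg(speed_instruction_ms=4.0)   # set point from V*
    actuator_cmd = pitch_loop.step(setpoint=theta_star, measurement=-6.0, dt=0.02)
    print(theta_star, round(actuator_cmd, 2))
```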
Abstract:
The displacements of the drone are defined by piloting commands so as to take moving images of a target carrying the ground station. The system comprises means for adjusting the sight angle of the camera during the displacements of the drone and of the target, so that the images remain centred on the target, and means for generating flying instructions so that the distance between the drone and the target fulfils determined rules, these means being based on a determination of the GPS geographical position of the target with respect to the GPS geographical position of the drone, and of the angular position of the target with respect to a main axis of the drone. These means are also based on the analysis of a non-geographical signal produced by the target and received by the drone. The system makes it possible to overcome the uncertainty of the GPS systems equipping this type of device.
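The sketch below illustrates only the geometric part of such a scheme: deriving camera sight angles and checking a distance rule from the two GPS positions, under a flat-earth approximation. The function names and distance bounds are assumptions, and the fusion with the non-geographical signal is not shown.

```python
# Derive camera sight angles and a distance-rule check from GPS positions.
import math

EARTH_R = 6_371_000.0  # metres

def local_offset_m(lat_drone, lon_drone, lat_target, lon_target):
    """Target position relative to the drone, metres (north, east)."""
    north = math.radians(lat_target - lat_drone) * EARTH_R
    east = math.radians(lon_target - lon_drone) * EARTH_R * math.cos(math.radians(lat_drone))
    return north, east

def camera_sight_angles(lat_d, lon_d, alt_d, lat_t, lon_t, alt_t, drone_yaw_deg):
    """Pan angle relative to the drone's main axis and tilt angle, in degrees."""
    north, east = local_offset_m(lat_d, lon_d, lat_t, lon_t)
    bearing = math.degrees(math.atan2(east, north))
    pan = (bearing - drone_yaw_deg + 180.0) % 360.0 - 180.0
    horiz = math.hypot(north, east)
    tilt = math.degrees(math.atan2(alt_t - alt_d, horiz))
    return pan, tilt

def distance_rule_ok(lat_d, lon_d, lat_t, lon_t, d_min=5.0, d_max=50.0):
    """True if the drone-to-target horizontal distance stays within the rule."""
    north, east = local_offset_m(lat_d, lon_d, lat_t, lon_t)
    return d_min <= math.hypot(north, east) <= d_max

if __name__ == "__main__":
    pan, tilt = camera_sight_angles(48.0000, 2.0000, 30.0, 48.0002, 2.0001, 1.5, 10.0)
    print(round(pan, 1), round(tilt, 1), distance_rule_ok(48.0000, 2.0000, 48.0002, 2.0001))
```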
Abstract:
An antenna includes one or more elementary antennas with non-coplanar planar loops extending about a main axis in respective inclined planes. Each elementary antenna is formed by tracks of a structure printed on a circuit support extending in the inclined plane, with two imbricated planar loops tuned to frequencies included in two respective distinct WiFi frequency bands. With a flexible circuit support, an antenna housing of the drone includes a conformed hollow cavity comprising a plurality of inclined planar faces, counterparts of the inclined planes of the elementary antennas, against which the latter bear after deformation of the flexible support.