Abstract:
With the improved path-finding method, a self-propelled mobile unit determines a path around obstacles. A heuristic avoidance strategy is used whereby the mobile unit, on encountering an obstacle, first evades in a first direction and pursues the avoidance maneuver up to a limit value if it cannot move around the obstacle. The unit then returns to the starting point of the avoidance maneuver and attempts to evade in the other direction. If the obstacle likewise cannot be circumvented in this direction before the limit value is reached, the limit on the deviation allowed during the avoidance maneuver is incremented. The unit then again attempts to move around the obstacle, beginning with the original avoidance direction, and the limit value is increased again after every two further abortive attempts. The procedure is repeated until a path around the obstacle has been found and the destination can be reached. This prevents a self-propelled mobile unit from becoming trapped in an endless loop in front of an extensive obstacle, repeatedly attempting to move around it to the right and to the left but being prevented by the evasion limit from covering enough distance to clear it.
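The incrementing-limit strategy lends itself to a compact control loop. The following is a minimal sketch, assuming hypothetical helpers `try_avoid` (drive around the obstacle in one direction, giving up once the deviation exceeds the current limit) and `return_to_start`, neither of which is named in the abstract:

```python
def circumvent_obstacle(unit, limit_step, max_limit):
    """Sketch of the incrementing-limit avoidance strategy."""
    limit = limit_step
    while limit <= max_limit:
        # Two attempts per limit value: original direction first, then the other.
        for direction in ("first", "other"):
            if unit.try_avoid(direction, limit):   # deviate up to the current limit
                return True                        # path around the obstacle found
            unit.return_to_start()                 # back to the point of departure
        limit += limit_step                        # raise the limit after two abortive attempts
    return False                                   # destination unreachable within max_limit
```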
Abstract:
Different bonus values and penalty values are allotted to each partial task of the unit, such as driving from A to B, keeping the positional uncertainty below a specific threshold, or drawing up a map of the surroundings and adding landmarks to it. Performance weightings for the individual tasks, together with the need to carry them out, are obtained from an analysis of the bonus and penalty values and are evaluated in a control unit. Furthermore, the method specifies a local planning horizon in which the surroundings of the unit are subdivided into grid cells. For these grid cells, preferred directions are stored that lead the unit by the shortest path to already known or unconfirmed landmarks, with the aim of reducing the positional uncertainty or of confirming a landmark. All routes possible within this grid are then investigated as to what contribution they make toward enabling the unit to reach its goal; in this process, the costs and benefits per partial task are summed along each path, and the route with the greatest benefit or the lowest loss is selected. Finally, a destination situated outside the local planning horizon is reached by carrying out the method cyclically.
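As an illustration of the route evaluation over the local planning horizon, the sketch below sums per-task bonus and penalty contributions along each candidate route and picks the best one; the grid-cell `contributions` dictionaries and the task weights are assumptions, not terms from the abstract:

```python
def select_route(routes, task_weights):
    """Pick the route with the greatest summed benefit (or lowest loss).

    `routes` is an iterable of paths, each path a list of grid cells; every
    cell is assumed to carry a dict `contributions` of per-task bonus (positive)
    and penalty (negative) values keyed like `task_weights`.
    """
    def route_value(path):
        total = 0.0
        for cell in path:
            for task, weight in task_weights.items():
                total += weight * cell.contributions.get(task, 0.0)
        return total

    return max(routes, key=route_value)
```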
Abstract:
A beacon navigation system for a vehicle including a plurality of navigation beacons distributed about a premises through which the vehicle is to navigate, and a detector assembly for sensing a beacon and for resolving the azimuthal angle between the beacon and the vehicle. The system further includes an element for defining an optimum azimuthal angle between that beacon and the vehicle, and a device for determining the difference between the resolved angle and the optimum angle to represent the deviation of the vehicle from a designated path. A method of establishing navigational paths among navigation nodes proximate beacons is also disclosed.
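The deviation of the vehicle from the designated path reduces to a signed difference between the resolved and optimum azimuthal angles. A minimal sketch, assuming angles in degrees:

```python
def path_deviation(resolved_angle_deg, optimum_angle_deg):
    """Signed deviation of the vehicle from the designated path.

    The difference is wrapped into [-180, 180) degrees so that its sign
    indicates the side of the path to which the vehicle has drifted.
    """
    diff = resolved_angle_deg - optimum_angle_deg
    return (diff + 180.0) % 360.0 - 180.0
```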
Abstract:
A system for inspecting surfaces that includes a mobile base, sensors for base navigation, sensors for surface inspection, a communication system, and a host computer that executes modules for base motion planning and navigation, localization, point cloud acquisition and processing, surface modelling and analysis, multi-module coordination, and user interfaces. During the inspection procedure the robot moves over the surface in a zigzag trajectory. After every fixed travel distance, a 3D point cloud of the surface is generated and the location of the point cloud with respect to the world coordinate system, obtained from SLAM-based spatial mapping, is recorded. At the same time, a high-resolution photo of the corresponding area of the surface is captured by the camera. Both the point cloud and the photo are transmitted to the host computer for processing and analysis, where a new 3D detection and image-processing algorithm uses this information to find flaws in the surface such as bumps or depressions. If irregular flaws are detected, the robot marks the problematic location.
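The zigzag coverage pattern with fixed-distance capture points can be sketched as a simple waypoint generator; the rectangular surface patch, lane spacing, and sample step are assumptions made for illustration:

```python
def zigzag_waypoints(width, length, lane_spacing, sample_step):
    """Generate (x, y) capture points for a zigzag sweep over a width x length patch.

    The robot sweeps along x, steps by `lane_spacing` in y between passes, and
    `sample_step` is the fixed travel distance after which a point cloud and a
    photo are captured.
    """
    waypoints = []
    y, forward = 0.0, True
    while y <= length:
        xs = [i * sample_step for i in range(int(width // sample_step) + 1)]
        if not forward:
            xs.reverse()                      # alternate sweep direction each pass
        waypoints.extend((x, y) for x in xs)
        y += lane_spacing
        forward = not forward
    return waypoints
```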
Abstract:
Disclosed herein is a robot that generates a map based on multiple sensors and artificial intelligence and moves based on that map. The robot according to an embodiment includes a controller that generates a pose graph comprising a LiDAR branch including one or more LiDAR frames, a visual branch including one or more visual frames, and a backbone including two or more frame nodes registered with any one or more of the LiDAR frames or the visual frames, and that generates odometry information while the robot is moving between the frame nodes.
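One way to picture the described pose graph is as a backbone of frame nodes referencing frames in the LiDAR and visual branches, with odometry recorded between consecutive nodes. The data structure below is an illustrative sketch; the class and field names are not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class FrameNode:
    """Backbone node; may be registered with a LiDAR frame, a visual frame, or both."""
    node_id: int
    pose: Tuple[float, float, float]          # (x, y, theta), planar pose assumed for the sketch
    lidar_frame_id: Optional[int] = None
    visual_frame_id: Optional[int] = None

@dataclass
class PoseGraph:
    lidar_frames: List[object] = field(default_factory=list)    # LiDAR branch
    visual_frames: List[object] = field(default_factory=list)   # visual branch
    backbone: List[FrameNode] = field(default_factory=list)
    odometry_edges: List[tuple] = field(default_factory=list)   # (node_a, node_b, relative_pose)

    def add_odometry(self, node_a: int, node_b: int, relative_pose: Tuple[float, float, float]):
        """Record odometry information generated while the robot moves between two frame nodes."""
        self.odometry_edges.append((node_a, node_b, relative_pose))
```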
Abstract:
Described herein are systems for roof scanning using an unmanned aerial vehicle. For example, some methods include capturing, using an unmanned aerial vehicle, an overview image of a roof of a building from above the roof; presenting a suggested bounding polygon overlaid on the overview image to a user; determining a bounding polygon based on the suggested bounding polygon and user edits; based on the bounding polygon, determining a flight path including a sequence of poses of the unmanned aerial vehicle with respective fields of view at a fixed height that collectively cover the bounding polygon; flying the unmanned aerial vehicle to a sequence of scan poses with horizontal positions matching respective poses of the flight path and vertical positions determined to maintain a consistent distance above the roof; and scanning the roof from the sequence of scan poses to generate a three-dimensional map of the roof.
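How the scan poses keep a consistent distance above the roof while reusing the horizontal positions of the flight path can be sketched as follows; `roof_height_at` is a hypothetical source of roof elevation (for example a rangefinder reading or a coarse roof model), not an element named in the abstract:

```python
def scan_poses(flight_path, roof_height_at, standoff):
    """Derive scan poses from the planned flight path.

    `flight_path` is a list of (x, y, z_fixed) planning poses; each scan pose
    keeps the horizontal position and replaces the fixed altitude with the
    local roof elevation plus a constant standoff.
    """
    poses = []
    for x, y, _z_fixed in flight_path:
        z = roof_height_at(x, y) + standoff
        poses.append((x, y, z))
    return poses
```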
Abstract:
A remotely-controlled (RC) and/or autonomously operated inspection device, such as a ground vehicle or drone, may capture one or more sets of imaging data indicative of at least a portion of an automotive vehicle, such as all or a portion of the undercarriage. The one or more sets of imaging data may be analyzed based upon data indicative of at least one of vehicle damage or a vehicle defect being shown in the one or more sets of imaging data. Based upon the analyzing of the one or more sets of imaging data, damage to the vehicle or a defect of the vehicle may be identified. The identified damage or defect may be compared to a claimed damage or defect to determine whether the claimed damage or defect occurred.
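The final comparison step can be sketched as a simple set check; the representation of damage items as (component, damage_type) tuples is an assumption for illustration:

```python
def claim_supported(identified_damage, claimed_damage):
    """Return True if every claimed damage or defect item was also identified in the imagery.

    Both arguments are iterables of (component, damage_type) tuples,
    e.g. ("undercarriage", "corrosion").
    """
    return set(claimed_damage) <= set(identified_damage)
```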
Abstract:
A system for measuring illumination intensity comprising a casing configured to hold a ball head; a motor physically connected to the ball head and configured to rotate the ball head; the ball head, physically encased within the casing, configured to rotate a telescoping arm; the telescoping arm extending from the ball head and configured to extend to an extended length; an illumination sensor physically connected to the telescoping arm, the illumination sensor configured to measure illumination intensity; a data processing unit positioned within the casing, the data processing unit being configured to handle functions selected from the group consisting of GPS programming, 2D and 3D virtual drawing and site schematic information, inspection and testing plans, data storage, illumination intensity analytics programming, and combinations of the same; and a transmitter positioned on the casing configured to transmit data from the data processing unit to a main control system.
Abstract:
An inspection robot may include an inspection chassis and a drive module with magnetic wheels coupled to the inspection chassis. The drive module may further include a motor and a gear box located between the motor and a magnetic wheel. The gear box may include a flex spline cup that interacts with a ring gear. The inspection robot may further include a magnetic shielding assembly to shield the motor and an associated electromagnetic sensor from electromagnetic interference generated by the magnetic wheels.
Abstract:
Provided is a system including at least two robots. A first robot includes a chassis, a set of wheels, a wheel suspension, sensors, a processor, and a machine-readable medium for storing instructions. A camera of the first robot captures images of an environment from which the processor generates or updates a map of the environment and determines a location of items within the environment. The processor extracts features of the environment from the images and determines a location of the first robot. The processor transmits information to a processor of a second robot and determines an action of the first robot and the second robot. A smart phone application is paired with at least the first robot and is configured to receive at least one user input specifying an instruction for at least the first robot and at least one user preference.
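A minimal sketch of the information flow from the first robot to the second; the message fields and the robot interface are invented for illustration and are not defined in the abstract:

```python
import json

def share_state(first_robot, second_robot):
    """Send the first robot's map update, estimated pose, and item locations to the second robot.

    `map_patch`, `pose`, `item_locations`, and `receive` are assumed attributes
    of a hypothetical robot interface.
    """
    message = json.dumps({
        "map_patch": first_robot.map_patch,        # region mapped from the camera images
        "pose": first_robot.pose,                  # location estimated from extracted features
        "items": first_robot.item_locations,       # detected item locations in the environment
    })
    second_robot.receive(message)
```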