Abstract:
An electronic controller for an autonomous mobile device includes a self-location estimation unit to estimate a self-location based on a local map that is created according to distance/angle information relative to an object in the vicinity and the travel distance of an omni wheel, an environmental map creation unit to create an environmental map of a mobile area based on the self-location and the local map during guided travel using a joystick, a registration switch to register the self-location of the autonomous mobile device as the position coordinates of the setting point when the autonomous mobile device reaches a predetermined setting point during the guided travel, a storage unit to store the environmental map and the setting point, a route planning unit to plan a travel route by using the setting point on the environmental map stored in the storage unit, and a travel control unit to control the autonomous mobile device to autonomously travel along the travel route.
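A minimal sketch of the setting-point workflow described above, assuming a simple 2D pose estimate and straight-line planning; the names `SettingPointMap`, `register`, and `plan_route` are illustrative, not taken from the abstract:

```python
import math

class SettingPointMap:
    """Stores setting points registered on the environmental map (illustrative)."""
    def __init__(self):
        self.points = {}          # name -> (x, y)

    def register(self, name, pose):
        # Called when the registration switch is pressed during guided travel:
        # the current self-location estimate becomes the point's coordinates.
        x, y, _theta = pose
        self.points[name] = (x, y)

def plan_route(setting_points, start_name, goal_name):
    """Very simple planner: connect two setting points with a straight segment.
    A real system would search the environmental map for a collision-free path."""
    start = setting_points.points[start_name]
    goal = setting_points.points[goal_name]
    dist = math.hypot(goal[0] - start[0], goal[1] - start[1])
    return [start, goal], dist

# Usage: register two points observed during joystick-guided travel, then plan.
m = SettingPointMap()
m.register("dock", (0.0, 0.0, 0.0))
m.register("room_a", (4.2, 1.5, math.pi / 2))
route, length = plan_route(m, "dock", "room_a")
print(route, round(length, 2))
```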
Abstract:
A method for mobile autonomous updating of GIS maps is provided. In the method, an autonomous mobile data collecting platform is provided with a map identifying one or more GIS features. The platform has at least one data collecting sensor for collecting data for at least one of the GIS features and patrols at least a portion of a region included in the map while updating its GIS position as it patrols. The autonomous mobile data collecting platform applies the at least one data collecting sensor during patrolling to collect data for at least one of the GIS features and updates the GIS map to reflect differential data collected for at least one GIS feature.
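A simplified sketch of the patrol-and-update loop: sensed values for GIS features are compared against the stored map, and only differential data is written back. The map contents, the `sense_feature` stand-in, and the height attribute are invented purely for illustration:

```python
# Hypothetical sketch: compare the value sensed for each GIS feature against
# the stored value and write back only the difference (differential data).
gis_map = {
    "hydrant_17": {"position": (35.001, -120.002), "height_m": 0.9},
    "sign_42":    {"position": (35.003, -120.004), "height_m": 2.1},
}

def sense_feature(feature_id):
    # Stand-in for the data collecting sensor; returns a newly measured height.
    measurements = {"hydrant_17": 0.9, "sign_42": 2.4}
    return measurements[feature_id]

def patrol_and_update(gis_map, tolerance=0.05):
    updates = {}
    for feature_id, attrs in gis_map.items():
        measured = sense_feature(feature_id)
        if abs(measured - attrs["height_m"]) > tolerance:
            updates[feature_id] = {"height_m": measured}   # differential data only
            attrs["height_m"] = measured                    # update the GIS map
    return updates

print(patrol_and_update(gis_map))   # {'sign_42': {'height_m': 2.4}}
```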
Abstract:
In one embodiment, an autonomously navigated mobile platform includes a support frame, a projector supported by the frame, a sensor supported by the frame, a memory including a plurality of program instructions stored therein for generating an encoded signal using a phase shifting algorithm, emitting the encoded signal with the projector, detecting the emitted signal with the sensor after the emitted signal is reflected by a detected body, associating the detected signal with the emitted signal, identifying an x-axis dimension, a y-axis dimension, and a z-axis dimension of the detected body, and one or more of a range and a bearing to the detected body, based upon the associated signal, identifying a present location of the mobile platform, navigating the mobile platform based upon the identified location, and a processor operably connected to the memory, to the sensor, and to the projector for executing the program instructions.
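A small sketch of an N-step phase-shifting scheme of the kind referenced above, assuming sinusoidal fringe patterns and the standard wrapped-phase recovery; the pattern frequency, step count, and triangulation step are assumptions, not details taken from the abstract:

```python
import numpy as np

def encode_patterns(width, height, steps=4, freq=8):
    """Generate `steps` sinusoidal fringe patterns, each shifted by 2*pi/steps."""
    x = np.linspace(0, 2 * np.pi * freq, width)
    patterns = []
    for k in range(steps):
        shift = 2 * np.pi * k / steps
        row = 0.5 + 0.5 * np.cos(x + shift)
        patterns.append(np.tile(row, (height, 1)))
    return patterns

def decode_phase(images):
    """Recover the wrapped phase from the detected (reflected) images."""
    steps = len(images)
    num = sum(img * np.sin(2 * np.pi * k / steps) for k, img in enumerate(images))
    den = sum(img * np.cos(2 * np.pi * k / steps) for k, img in enumerate(images))
    return -np.arctan2(num, den)   # phase encodes which projector column hit each pixel

# Usage: with projector/camera calibration, the wrapped phase yields the x-, y-
# and z-axis dimensions of, and the range to, the detected body via triangulation.
patterns = encode_patterns(640, 480)
phase = decode_phase(patterns)   # here the "camera" sees the patterns directly
print(phase.shape, float(phase.min()), float(phase.max()))
```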
Abstract:
The invention is related to methods and apparatus that use a visual sensor and dead reckoning sensors to process Simultaneous Localization and Mapping (SLAM). These techniques can be used in robot navigation. Advantageously, such visual techniques can be used to autonomously generate and update a map. Unlike with laser rangefinders, the visual techniques are economically practical in a wide range of applications and can be used in relatively dynamic environments, such as environments in which people move. One embodiment further advantageously uses multiple particles to maintain multiple hypotheses with respect to localization and mapping. Further advantageously, one embodiment maintains the particles in a relatively computationally-efficient manner, thereby permitting the SLAM processes to be performed in software using relatively inexpensive microprocessor-based computer systems.
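A minimal particle-filter sketch of the multiple-hypothesis idea: each particle carries a pose hypothesis, dead reckoning propagates it with noise, and a visually measured range to a landmark reweights it. The noise models and the landmark observation are illustrative assumptions, not the patented method:

```python
import math, random

class Particle:
    def __init__(self, x, y, theta, weight):
        self.x, self.y, self.theta, self.weight = x, y, theta, weight

def motion_update(particles, d, dtheta, noise=0.05):
    # Propagate each hypothesis with the dead-reckoning increment plus noise.
    for p in particles:
        p.theta += dtheta + random.gauss(0, noise)
        p.x += (d + random.gauss(0, noise)) * math.cos(p.theta)
        p.y += (d + random.gauss(0, noise)) * math.sin(p.theta)

def measurement_update(particles, landmark, measured_range, sigma=0.2):
    # Reweight hypotheses by how well they explain the visually measured range.
    lx, ly = landmark
    for p in particles:
        expected = math.hypot(lx - p.x, ly - p.y)
        err = measured_range - expected
        p.weight *= math.exp(-err * err / (2 * sigma * sigma))
    total = sum(p.weight for p in particles) or 1.0
    for p in particles:
        p.weight /= total

def resample(particles):
    # Keep the particle set focused on the most plausible hypotheses.
    weights = [p.weight for p in particles]
    chosen = random.choices(particles, weights=weights, k=len(particles))
    return [Particle(p.x, p.y, p.theta, 1.0 / len(particles)) for p in chosen]

particles = [Particle(0, 0, 0, 1.0 / 100) for _ in range(100)]
motion_update(particles, d=1.0, dtheta=0.1)
measurement_update(particles, landmark=(2.0, 0.5), measured_range=1.2)
particles = resample(particles)
```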
Abstract:
The disclosed terrain model is a generative, probabilistic approach to modeling terrain that exploits the 3D spatial structure inherent in outdoor domains and an array of noisy but abundant sensor data to simultaneously estimate ground height and vegetation height and to classify obstacles and other areas of interest, even in dense, non-penetrable vegetation. Joint inference of ground height, class height, and class identity over the whole model results in more accurate estimation of each quantity. Vertical spatial constraints are imposed on voxels within a column via a hidden semi-Markov model. Horizontal spatial constraints are enforced on neighboring columns of voxels via two interacting Markov random fields and a latent variable. Because of the rules governing abstracts, this abstract should not be used to construe the claims.
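A toy, per-column sketch only, not the joint HSMM/MRF inference described above: it labels a single column of voxel hit counts as ground, vegetation, or free space under the assumed vertical ordering, with thresholds chosen purely for illustration:

```python
# Toy per-column sketch (not the model's joint inference): classify a vertical
# column of lidar-hit counts into ground, vegetation, and free space, assuming
# the vertical ordering ground -> vegetation -> free space.
def classify_column(hit_counts, voxel_size=0.2, ground_thresh=5, veg_thresh=2):
    # Ground top = highest voxel with dense returns; vegetation = sparser returns above it.
    ground_top = max((i for i, h in enumerate(hit_counts) if h >= ground_thresh),
                     default=None)
    if ground_top is None:
        return None, 0.0
    veg_top = max((i for i, h in enumerate(hit_counts[ground_top + 1:], ground_top + 1)
                   if h >= veg_thresh), default=None)
    ground_height = (ground_top + 1) * voxel_size
    veg_height = (veg_top - ground_top) * voxel_size if veg_top is not None else 0.0
    return ground_height, veg_height

# Usage: a column with dense returns near the bottom and sparse returns above.
print(classify_column([9, 7, 3, 2, 0, 0]))   # (ground ~0.4 m, vegetation ~0.4 m)
```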
Abstract:
A system and method for recording the surroundings of a movable device. The system has at least one sensor for visually recording the surroundings, as well as at least one sensor each for recording the direction of motion and the orientation of the device, and is designed to process the information provided by the sensors.
Abstract:
A remotely controlled roving vehicle comprises a claw assembly and a video camera. The claw assembly includes a main body and a plurality of grasping members. A first end of each one of the grasping members is movably mounted on the main body for enabling a second end of each one of the grasping members to be moved between an open position and a closed position. The video camera includes an image-receiving portion that is mounted on the main body. A second end of each one of the grasping members is within a field of view of the video camera. A lens of the video camera is centrally located with respect to the first ends of the grasping members. The first end of each one of the grasping members is pivotally mounted on the main body. The main body is rotatable about a longitudinal axis thereof.
Abstract:
A landmark detection apparatus and method, the apparatus including a first detection unit that generates N first sample blocks in a first sampling region using a first weighted sampling method according to a first degree of dispersion, and that performs a first landmark detection by comparing a feature of each first sample block with a feature of a landmark model, where the first sampling region is set to the entirety of a current frame image; and a second detection unit that generates N second sample blocks in a second sampling region using a second weighted sampling method according to a second degree of dispersion, and that performs a second landmark detection by comparing a feature of each second sample block with the feature of the landmark model, where the second sampling region is set to an area smaller than the entirety of the current frame image.
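A rough sketch of the two-stage weighted sampling, assuming Gaussian sampling of block positions whose dispersion shrinks in the second stage and a deliberately crude mean-intensity block feature; all names and parameters here are illustrative assumptions:

```python
import random

def block_feature(image, x, y, size=16):
    # Stand-in feature: mean intensity of the block (a real system would use
    # a richer feature, e.g. a histogram or descriptor, for the comparison).
    patch = [image[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    return sum(patch) / len(patch)

def weighted_sampling_detection(image, landmark_feature, n=50,
                                center=None, dispersion=None, size=16):
    h, w = len(image), len(image[0])
    if center is None:                       # first stage: whole frame, wide dispersion
        center = ((w - size) / 2, (h - size) / 2)
        dispersion = (w / 2, h / 2)
    best = None
    for _ in range(n):
        x = int(min(max(random.gauss(center[0], dispersion[0]), 0), w - size))
        y = int(min(max(random.gauss(center[1], dispersion[1]), 0), h - size))
        score = abs(block_feature(image, x, y, size) - landmark_feature)
        if best is None or score < best[0]:
            best = (score, (x, y))
    return best

# Two-stage usage: coarse detection over the whole frame, then refined detection
# in a smaller region around the first hit with a tighter dispersion.
image = [[(i + j) % 256 for i in range(160)] for j in range(120)]
landmark_feature = 200.0
_, coarse = weighted_sampling_detection(image, landmark_feature, n=50)
score, fine = weighted_sampling_detection(image, landmark_feature, n=50,
                                          center=coarse, dispersion=(10, 10))
print(coarse, fine, round(score, 1))
```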
Abstract:
In a robot system composed of a superior controller and a robot, high-speed computation is required when a map is generated at the same time as the robot's posture is identified; the computing load becomes large and the robot system becomes expensive, so an object of the invention is to reduce the computing load. To achieve this object, there is provided a robot system composed of a controller having map data and a mobile robot, in which the robot is provided with a distance sensor that measures a plurality of distances to peripheral objects and an identifying apparatus that identifies the position and angle of the robot by collating the measurements with the map data, and the controller is provided with a map generating apparatus that generates or updates the map data on the basis of the position and angle of the robot and the measured distances to the objects. Accordingly, the computing load of the controller and the robot can be reduced, making a comparatively inexpensive robot system possible.
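A hypothetical sketch of the division of labor described above: the robot matches its range scan against the map to identify its position and angle, and the controller updates the map from the reported pose and scan. The pose matching and ray-casting here are drastically simplified placeholders, not the patented method:

```python
import math

def robot_localize(scan, map_points, candidates):
    """Pick the candidate (x, y, theta) whose predicted ranges best match the scan."""
    best_pose, best_err = None, float("inf")
    for pose in candidates:
        err = sum((measure_from(pose, angle, map_points) - r) ** 2
                  for angle, r in scan)
        if err < best_err:
            best_pose, best_err = pose, err
    return best_pose

def controller_update_map(map_points, pose, scan):
    """Insert scan endpoints, transformed by the robot's reported pose, into the map."""
    x, y, theta = pose
    for angle, r in scan:
        map_points.add((round(x + r * math.cos(theta + angle), 2),
                        round(y + r * math.sin(theta + angle), 2)))

def measure_from(pose, angle, map_points):
    # Placeholder ray-cast: distance to the nearest map point lying along the beam
    # (a real system would ray-cast into an occupancy grid).
    x, y, theta = pose
    best = 10.0
    for (px, py) in map_points:
        beam = math.atan2(py - y, px - x) - (theta + angle)
        if abs(math.atan2(math.sin(beam), math.cos(beam))) < 0.05:
            best = min(best, math.hypot(px - x, py - y))
    return best

# Usage: the robot localizes against the shared map, the controller updates it.
scan = [(0.0, 2.0), (math.pi / 2, 1.5)]
map_pts = {(2.0, 0.0), (0.0, 1.5)}
pose = robot_localize(scan, map_pts, candidates=[(0, 0, 0), (0.5, 0, 0)])
controller_update_map(map_pts, pose, scan)
print(pose, len(map_pts))
```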