Abstract:
A robot configured to navigate a surface, the robot comprising: a movement mechanism; a logical map representing data about the surface and associating locations with one or more properties observed during navigation; an initialization module configured to establish an initial pose comprising an initial location and an initial orientation; a region covering module configured to cause the robot to move so as to cover a region; an edge-following module configured to cause the robot to follow unfollowed edges; and a control module configured to invoke region covering on a first region defined at least in part based on the initial pose, to invoke region covering on at least one additional region, to invoke edge-following and cause the mapping module to mark followed edges as followed, and to invoke a further region covering on regions discovered during edge-following.
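The control sequence claimed above (cover a first region, cover additional regions, follow unfollowed edges while marking them, then cover regions discovered along the way) can be sketched as a simple loop. All names here (run_coverage, the region and edge structures) are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the claimed control flow: cover the initial
# region, cover additional regions, follow unfollowed edges (marking
# each as followed), then run a further covering pass over regions
# discovered during edge-following.

def run_coverage(initial_pose, regions, edges):
    """Return the order in which regions are covered and edges followed.

    regions: list of region names; the first is defined from initial_pose.
    edges: list of dicts with "name", "followed", and optional "discovers".
    """
    log = []
    # First region is defined at least in part by the initial pose.
    log.append(("cover", regions[0]))
    # Cover the additional known regions.
    for region in regions[1:]:
        log.append(("cover", region))
    # Follow unfollowed edges; mark each followed and note discoveries.
    discovered = []
    for edge in edges:
        if not edge["followed"]:
            edge["followed"] = True
            log.append(("follow", edge["name"]))
            discovered.extend(edge.get("discovers", []))
    # Further covering pass over regions discovered during edge-following.
    for region in discovered:
        log.append(("cover", region))
    return log
```

The log makes the three phases of the claim visible as an ordered trace.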
Abstract:
The invention is generally related to the estimation of position and orientation of an object (201) with respect to a local or a global coordinate system using reflected light sources (204, 205). A typical application of the method and apparatus includes estimation and tracking of the position of a mobile autonomous robot. Other applications include estimation and tracking of an object for position-aware, ubiquitous devices. Additional applications include tracking of the positions of people or pets in an indoor environment. The methods and apparatus comprise one or more optical emitters (203), one or more optical sensors (202), signal processing circuitry, and signal processing methods to determine the position and orientation of at least one of the optical sensors based at least in part on the detection of the signal of one or more emitted light sources (204, 205) reflected from a surface (206).
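One simple way such reflected-spot observations can pin down position (a sketch, not the patented signal-processing method): if the robot's heading is known, each detected spot at a known map position constrains the robot to a line, and two bearing constraints intersect at the robot's position.

```python
import math

# Illustrative 2D triangulation: each absolute bearing theta to a known
# spot gives the linear constraint
#   (spot_x - x) * sin(theta) - (spot_y - y) * cos(theta) = 0,
# and two such constraints are solved for (x, y) by Cramer's rule.

def localize_from_bearings(spot1, bearing1, spot2, bearing2):
    """spot*: known (x, y) of a reflected spot; bearing*: radians."""
    def row(spot, theta):
        s, c = math.sin(theta), math.cos(theta)
        return (s, -c, spot[0] * s - spot[1] * c)

    a1, b1, c1 = row(spot1, bearing1)
    a2, b2, c2 = row(spot2, bearing2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; position is ambiguous")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - c1 * a2) / det
    return x, y
```

A real system would also estimate orientation and fuse many noisy detections; this shows only the geometric core.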
Abstract:
The invention is related to methods and apparatus for programming and/or control of a computer system via a video camera or other imaging device. Objects in the environment, such as printed cards, can be placed within the field of view of the video camera or other imaging device. Indicia on the cards can be recognized and associated with one or more programming instructions or computer commands for control. In one embodiment, a computer system that is programmed and/or is controlled corresponds to a robot, such as a stationary robot or a mobile robot. In one embodiment, a process receives or monitors visual data (102) from a device such as a camera, recognizes indicia (104) from objects such as cards that are observed in the video images, associates the recognized indicia (106) with programming instructions, such as by reference to a data store, and arranges a computer program (108) based on the associated programming instructions.
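The recognize-associate-arrange pipeline can be sketched as a lookup into a data store followed by ordering the instructions as the cards were observed. The indicia and instruction names below are invented for illustration.

```python
# Hypothetical data store mapping each card indicium to a programming
# instruction, as in step (106) of the described process.
INSTRUCTION_STORE = {
    "arrow_up": "move_forward",
    "arrow_left": "turn_left",
    "loop_2": "repeat_twice",
}

def arrange_program(recognized_indicia):
    """Arrange a program (step 108) from recognized indicia (step 104),
    skipping indicia with no entry in the data store."""
    return [INSTRUCTION_STORE[i] for i in recognized_indicia if i in INSTRUCTION_STORE]
```

Unrecognized marks are simply dropped here; a real system might instead prompt the user or reject the program.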
Abstract:
Methods and computer readable media for recognizing and identifying items located on the belt of a counter and/or in a shopping cart of a store environment, for the purposes of reducing or preventing bottom-of-the-basket loss, checking out the items automatically, reducing checkout time, preventing consumer fraud, increasing revenue, and replacing a conventional UPC scanning system to enhance checkout speed. The images of the items taken by visual sensors may be analyzed to extract features using the scale-invariant feature transform (SIFT) method. The extracted features are then compared to those of trained images stored in a database to find a set of matches. Based on the set of matches, the items are recognized and associated with one or more instructions, commands or actions without the need for personnel to visually inspect the items, for example by coming out from behind or peering over a checkout counter.
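The database-matching step can be sketched with the SIFT stage abstracted away: each item is represented by feature descriptors (here plain number lists), and a query descriptor is accepted only when its nearest neighbour is clearly closer than the second nearest (a ratio test of the kind commonly paired with SIFT). The descriptors and the 0.8 threshold are illustrative assumptions.

```python
import math

# Minimal sketch of descriptor matching against a trained database:
# each accepted match votes for an item, and the item with the most
# votes is reported as the recognized item.

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_item(query_descriptors, database, ratio=0.8):
    """database: {item_name: [descriptor, ...]}. Returns best item or None."""
    votes = {}
    for q in query_descriptors:
        scored = sorted(
            (_dist(q, d), item) for item, descs in database.items() for d in descs
        )
        # Ratio test: nearest must be much closer than second nearest.
        if len(scored) >= 2 and scored[0][0] < ratio * scored[1][0]:
            item = scored[0][1]
            votes[item] = votes.get(item, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

Real SIFT descriptors are 128-dimensional and matching is accelerated with approximate nearest-neighbour search; the voting logic, however, is the same shape.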
Abstract:
The invention is related to methods and apparatus that detect motion by monitoring images from a video camera (104) mounted on a mobile robot (100), such as an autonomously navigated mobile robot (100). Examples of such robots include automated vacuum floor sweepers. Advantageously, embodiments of the invention can automatically sense a robot's motional state in a relatively reliable and cost-efficient manner. Many robots (100) are configured to include at least one video camera (104). Embodiments of the invention permit the use of a video camera (104) onboard a robot (100) to determine a motional state for the robot (100). This can advantageously permit the motional state of a robot to be determined at a fraction of the cost of additional sensors, such as laser, infrared, ultrasonic, or contact sensors.
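One minimal way a single onboard camera can indicate a motional state (a sketch under simplifying assumptions, not the patented method): if consecutive frames differ in more than a threshold fraction of pixels, the robot is presumed to be moving. A real system would compensate for lighting changes and independently moving objects.

```python
# Frame-differencing sketch: frames are equal-size 2D lists of
# grayscale values; the robot is presumed moving when the fraction of
# pixels whose value changed by more than pixel_delta exceeds fraction.

def is_moving(frame_a, frame_b, pixel_delta=10, fraction=0.05):
    changed = total = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            if abs(pa - pb) > pixel_delta:
                changed += 1
    return changed / total > fraction
```

The two thresholds are illustrative; in practice they would be tuned to the camera's noise floor and the robot's speed.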
Abstract:
Vector Field SLAM is a method for localizing a mobile robot in an unknown environment from continuous signals such as WiFi or active beacons. Disclosed is a technique for localizing a robot in relatively large and/or disparate areas. This is achieved by using and managing more signal sources to cover the larger area. One feature analyzes the complexity of Vector Field SLAM with respect to area size and number of signals, and then describes an approximation that decouples the localization map in order to keep memory and run-time requirements low. A tracking method for re-localizing the robot in areas already mapped is also disclosed. This allows the robot to resume operation after it has been paused or kidnapped, such as when picked up and moved by a user. Embodiments of the invention can comprise commercial low-cost products, including robots for the autonomous cleaning of floors.
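The decoupling idea can be sketched as follows (a rough illustration, not the disclosed algorithm): instead of one joint map over every signal source, each grid cell stores calibration only for the sources actually received there, so memory grows with the area covered rather than with area times total number of sources; re-localization then scores measured signals against the stored cells. All structures and names are assumptions.

```python
# Sketch of a decoupled per-cell signal map with a simple
# re-localization (tracking) query over the cells already mapped.

class DecoupledSignalMap:
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = {}  # (ix, iy) -> {source_id: running mean signal}

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def update(self, x, y, readings):
        """Fold new {source_id: value} readings into the local cell only."""
        cell = self.cells.setdefault(self._key(x, y), {})
        for src, val in readings.items():
            cell[src] = (cell[src] + val) / 2 if src in cell else val

    def relocalize(self, readings):
        """Return the mapped cell whose stored signals best match readings."""
        def score(cell):
            shared = set(cell) & set(readings)
            if not shared:
                return float("inf")
            return sum((cell[s] - readings[s]) ** 2 for s in shared) / len(shared)
        return min(self.cells, key=lambda k: score(self.cells[k]), default=None)
```

The actual disclosure interpolates continuous signal vectors and runs within a SLAM filter; this sketch only conveys why per-cell, per-visible-source storage keeps memory low.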
Abstract:
Systems and methods for recognizing and identifying items located on the lower shelf of a shopping cart in a checkout lane of a retail store environment, for the purpose of reducing or preventing loss or fraud and increasing the efficiency of a checkout process. The system includes one or more visual sensors that can take images of items and a computer system that receives the images from the one or more visual sensors and automatically identifies the items. The system can be trained to recognize the items using images taken of the items. The system matches visual features from training images against features extracted from images taken at the checkout lane. Using the scale-invariant feature transform (SIFT) method, for example, the system can compare the visual features of the images to the features stored in a database to find one or more matches, where the found matches are used to identify the items.
Abstract:
Systems and methods for automatically checking out items located on a moving conveyor belt, for the purpose of increasing the efficiency of a checkout process and revenue at a point-of-sale. The system includes a conveyor subsystem for moving the items, a housing that encloses a portion of the conveyor subsystem, a lighting subsystem that illuminates an area within the housing, visual sensors that can take images of the items including UPCs, and a checkout system that receives the images from the visual sensors and automatically identifies the items. The system may include a scale subsystem located under the conveyor subsystem to measure the weights of the items, where the weight of each item is used to check whether the corresponding item was identified correctly. The system matches visual features stored in a database against features extracted from images taken by the visual sensors.
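The weight cross-check mentioned above can be sketched simply: once the visual match proposes an item identity, the weight measured by the scale under the belt is compared with that item's catalogued weight within a tolerance. The catalogue values and the 10% tolerance are invented for illustration.

```python
# Hypothetical catalogue of expected item weights in grams.
CATALOG = {"cereal": 450.0, "soup": 320.0}

def weight_confirms(item_id, measured_grams, tolerance=0.1):
    """True if the measured weight is within tolerance (fractional)
    of the catalogued weight for the visually identified item."""
    expected = CATALOG.get(item_id)
    if expected is None:
        return False
    return abs(measured_grams - expected) <= tolerance * expected
```

A failed check could flag the item for manual review rather than rejecting the sale outright.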