Abstract:
A human interface robot (100) that includes a controller (500), a camera (320, 320a, 320b, 450, 450a, 450b) in communication with the controller, and a display (310, 310a, 310b, 312) in communication with the controller. The controller displays received image data on the display as an image (1602), identifies at least one shape (1610, 1610a, 1610b) in the image, and displays a shape specific label (1620) on the image at least near the shape.
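The abstract above describes a controller pipeline: receive image data, show it on the display, identify shapes in it, and render a shape-specific label near each shape. A minimal sketch of that label-placement step, with all class and function names being illustrative assumptions (the patent does not specify an implementation):

```python
# Hypothetical sketch of shape labeling: given detected shapes with
# bounding boxes, build a label for each shape and position it just
# above the shape, as the abstract's "label ... at least near the
# shape" suggests.

from dataclasses import dataclass

@dataclass
class Shape:
    kind: str          # e.g. "rectangle", "circle"
    x: int             # top-left corner of the bounding box
    y: int
    w: int             # bounding-box width
    h: int             # bounding-box height

def label_for(shape: Shape) -> str:
    """Return a shape-specific label string."""
    return f"{shape.kind} ({shape.w}x{shape.h})"

def place_labels(shapes: list[Shape]) -> list[tuple[str, int, int]]:
    """Position each label 12 px above its shape, clamped to the image."""
    labels = []
    for s in shapes:
        labels.append((label_for(s), s.x, max(0, s.y - 12)))
    return labels

shapes = [Shape("rectangle", 40, 30, 100, 60), Shape("circle", 200, 5, 50, 50)]
print(place_labels(shapes))
# → [('rectangle (100x60)', 40, 18), ('circle (50x50)', 200, 0)]
```

In practice the shape-identification step would use a vision library, but the overlay logic reduces to this mapping from detections to positioned labels.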
Abstract:
A mobile robot includes a processor connected to a memory and a wireless network circuit, for executing routines stored in the memory and commands generated by the routines and received via the wireless network circuit. The processor drives the mobile robot to a multiplicity of accessible two dimensional locations within a household, and commands an end effector, including at least one motorized actuator, to perform mechanical work in the household. A plurality of routines include a first routine which monitors a wireless local network and detects a presence of a network entity on the wireless local network, a second routine which receives a signal from a sensor detecting an action state of one of the network entities, the action state changeable between waiting and active, and a third routine which commands the end effector to change state of performing mechanical work based on the presence and on the action state.
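The three routines in the abstract above decompose cleanly: one detects a network entity's presence on the wireless local network, one reads a sensor-reported action state (waiting or active), and one switches the end effector's work state based on both. A hedged sketch, where the function names and the specific decision rule (work only when present and active) are assumptions rather than the patent's own:

```python
# Illustrative decomposition of the abstract's three routines.
# The start/pause rule is an assumed policy; the abstract only says the
# end effector changes state "based on the presence and on the
# action state".

from enum import Enum

class ActionState(Enum):
    WAITING = "waiting"
    ACTIVE = "active"

def routine_presence(known_entities: set[str], network_scan: set[str]) -> bool:
    """First routine: is any known network entity on the local network?"""
    return bool(known_entities & network_scan)

def routine_action_state(sensor_reading: str) -> ActionState:
    """Second routine: interpret a sensor signal as an action state."""
    return ActionState(sensor_reading)

def routine_command_effector(present: bool, state: ActionState) -> str:
    """Third routine: command the end effector from presence + state."""
    if present and state is ActionState.ACTIVE:
        return "start_work"
    return "pause_work"

present = routine_presence({"phone-01"}, {"phone-01", "laptop-02"})
state = routine_action_state("active")
print(routine_command_effector(present, state))  # → start_work
```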
Abstract:
A computer-implemented method for receiving user commands for a remote cleaning robot and sending the user commands to the remote cleaning robot, the remote cleaning robot including a drive motor and a cleaning motor, includes displaying a user interface including a control area, and within the control area: a user-manipulable launch control group including a plurality of control elements, the launch control group having a deferred launch control state and an immediate launch control state; at least one user-manipulable cleaning strategy control element having a primary cleaning strategy control state and an alternative cleaning strategy control state; and a physical recall control group including a plurality of control elements, the physical recall control group having an immediate recall control state and a remote audible locator control state. The method further includes: receiving user input via the user-manipulable control elements; responsive to the user inputs, displaying simultaneously within the same control area a real-time robot state reflecting a unique combination of control states; and commanding the remote cleaning robot to actuate the drive motor and cleaning motor to clean a surface based on the received input and unique combination of control states.
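The method above hinges on a "unique combination of control states": launch (deferred vs. immediate), cleaning strategy (primary vs. alternative), and recall (immediate recall vs. audible locator), displayed together and translated into motor commands. A small sketch of that state model, with all identifiers being assumed names, not the patent's:

```python
# Hypothetical model of the abstract's control states. The mapping
# from the state combination to a command payload is an illustrative
# assumption.

from dataclasses import dataclass
from enum import Enum

class Launch(Enum):
    DEFERRED = "deferred"
    IMMEDIATE = "immediate"

class Strategy(Enum):
    PRIMARY = "primary"
    ALTERNATIVE = "alternative"

class Recall(Enum):
    IMMEDIATE_RECALL = "immediate_recall"
    AUDIBLE_LOCATOR = "audible_locator"

@dataclass(frozen=True)
class RobotState:
    """The unique combination of control states shown in the UI."""
    launch: Launch
    strategy: Strategy
    recall: Recall

    def command(self) -> dict:
        """Translate the state combination into a payload commanding
        the drive and cleaning motors."""
        run_now = self.launch is Launch.IMMEDIATE
        return {
            "drive_motor": run_now,
            "cleaning_motor": run_now,
            "strategy": self.strategy.value,
            "recall": self.recall.value,
        }

state = RobotState(Launch.IMMEDIATE, Strategy.PRIMARY, Recall.AUDIBLE_LOCATOR)
print(state.command())
```

Making the state object frozen mirrors the abstract's framing: the UI displays one immutable combination at a time, and each user input produces a new combination rather than mutating the old one.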