Abstract:
An autonomous vehicle comprises at least one image sensor to provide measurements of landmark position for a plurality of landmarks; and processing functionality to estimate the position of the plurality of landmarks in a global frame and in the autonomous vehicle's frame, and to estimate the kinematic state of the autonomous vehicle in a global frame based, at least in part, on the measurements of landmark position from the at least one image sensor. The processing functionality is further operable to calculate errors in the estimated positions of the plurality of landmarks in the global frame and in the estimate of the kinematic state of the autonomous vehicle in the global frame by using a plurality of unit projection vectors between the estimated positions of the plurality of landmarks in the autonomous vehicle's frame and a plurality of unit projection vectors between the estimated positions of the plurality of landmarks in the global frame.
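The abstract does not spell out the error computation itself; the Python sketch below shows one hedged reading, in which unit projection vectors are formed between every pair of estimated landmark positions in each frame and the pairwise angles between them (which should match across frames when the estimates are consistent) are compared. Function and variable names are illustrative and do not come from the patent.

import numpy as np

def unit_projection_vectors(landmarks):
    """Unit vectors between every ordered pair of estimated landmark positions."""
    landmarks = np.asarray(landmarks, dtype=float)
    vecs = []
    for i in range(len(landmarks)):
        for j in range(len(landmarks)):
            if i != j:
                d = landmarks[j] - landmarks[i]
                vecs.append(d / np.linalg.norm(d))
    return np.array(vecs)

def consistency_error(landmarks_vehicle, landmarks_global):
    """Mean mismatch of inter-vector angles between the two frames.

    Angles between landmark directions are invariant to the unknown rotation
    between frames, so any residual reflects error in the landmark and
    vehicle-state estimates rather than the frame change itself.
    """
    u_v = unit_projection_vectors(landmarks_vehicle)
    u_g = unit_projection_vectors(landmarks_global)
    ang_v = np.arccos(np.clip(u_v @ u_v.T, -1.0, 1.0))
    ang_g = np.arccos(np.clip(u_g @ u_g.T, -1.0, 1.0))
    return float(np.mean(np.abs(ang_v - ang_g)))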
Abstract:
A multi-agent autonomous system for exploration of hazardous or inaccessible locations is disclosed. The multi-agent autonomous system includes simple surface-based agents or craft controlled by an airborne tracking and command system. The airborne tracking and command system includes an instrument suite used to image an operational area and any craft deployed within the operational area. The image data is used to identify the craft, targets for exploration, and obstacles in the operational area. The tracking and command system determines paths for the surface-based craft using the identified targets and obstacles and commands the craft, using simple movement commands, to move through the operational area to the targets while avoiding the obstacles. Each craft includes its own instrument suite to collect information about the operational area, which is transmitted back to the tracking and command system. The tracking and command system may be further coupled to a satellite system that provides additional image information about the operational area and provides operational and location commands to the tracking and command system.
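As an illustration only (the abstract does not describe the planner), the sketch below shows one way a tracking and command system could turn an obstacle map and an identified target into the kind of simple movement commands described above; the occupancy-grid representation and command names are assumptions.

from collections import deque

def plan_path(grid, start, target):
    """Breadth-first search over a 2-D occupancy grid; True cells are obstacles."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == target:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # target unreachable; no movement commands are issued

def to_commands(path):
    """Convert consecutive grid cells into simple movement commands."""
    moves = {(1, 0): "MOVE_SOUTH", (-1, 0): "MOVE_NORTH", (0, 1): "MOVE_EAST", (0, -1): "MOVE_WEST"}
    return [moves[(b[0] - a[0], b[1] - a[1])] for a, b in zip(path, path[1:])]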
Abstract:
A robot for accomplishing a mission in a physical environment includes a body; and an operating system coupled to the body and configured to operate the body. The operating system is divided into a plurality of partitions, including a simulation partition configured to receive inputs and simulate the mission in a simulated environment corresponding to the physical environment based on the inputs to produce a simulated result, and a mission partition configured to receive the simulated result and determine actions to accomplish the mission based on the simulated result.
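A minimal sketch of that partitioning, assuming a Python-style interface in which the simulation partition scores a candidate plan against a model of the environment and the mission partition acts on the score; none of these class or field names come from the patent.

from dataclasses import dataclass

@dataclass
class SimulatedResult:
    predicted_states: list      # simulated robot states over the planning horizon
    success_probability: float  # how likely the candidate plan completes the mission

class SimulationPartition:
    def run(self, inputs, candidate_plan):
        """Simulate the mission in a model of the physical environment."""
        # Placeholder dynamics: assume every step of the plan succeeds.
        states = [inputs["initial_state"]] + [f"after:{action}" for action in candidate_plan]
        return SimulatedResult(predicted_states=states, success_probability=0.9)

class MissionPartition:
    def decide(self, simulated, candidate_plan):
        """Accept the plan only if the simulation predicts likely success."""
        return candidate_plan if simulated.success_probability > 0.5 else ["HOLD"]

sim, mission = SimulationPartition(), MissionPartition()
plan = ["drive_to_sample", "collect_sample", "return_to_base"]
result = sim.run({"initial_state": "docked"}, plan)
actions = mission.decide(result, plan)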
Abstract:
Various examples are provided for object identification and tracking, traverse optimization, and/or trajectory optimization. In one example, a method includes determining a terrain map including at least one associated terrain type; and determining a recommended traverse along the terrain map based upon at least one defined constraint associated with the at least one associated terrain type. In another example, a method includes determining a transformation operator corresponding to a reference frame based upon at least one fiducial marker in a captured image comprising a tracked object; converting the captured image to a standardized image based upon the transformation operator, the standardized image corresponding to the reference frame; and determining a current position of the tracked object from the standardized image.
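For the second example method, the sketch below assumes square fiducial markers whose corner locations are known in the reference frame, and uses an OpenCV homography to standardize the captured image and to map the tracked object's pixel position into that frame; the library choice and all names are assumptions, as the abstract does not prescribe an implementation.

import cv2
import numpy as np

def standardize(image, marker_corners_px, marker_corners_ref, out_size=(1000, 1000)):
    """Map the captured image into the standardized reference frame."""
    H = cv2.getPerspectiveTransform(
        np.float32(marker_corners_px),   # fiducial corners detected in the captured image
        np.float32(marker_corners_ref))  # the same corners expressed in the reference frame
    return H, cv2.warpPerspective(image, H, out_size)

def tracked_position(H, object_px):
    """Project the tracked object's pixel location into the reference frame."""
    pt = np.float32([[object_px]])       # shape (1, 1, 2) as expected by perspectiveTransform
    return cv2.perspectiveTransform(pt, H)[0, 0]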
Abstract:
A robotic system includes a plurality of robotic elements, each having at least one processing component, at least one memory component, and an I/O interface; and a virtual backplane coupling the plurality of robotic elements.
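The abstract leaves the backplane's transport unspecified; the following sketch models it as a simple in-process publish/subscribe bus coupling robotic elements that each hold their own memory and expose an I/O interface to the bus. All names are illustrative.

from collections import defaultdict

class VirtualBackplane:
    """Software bus coupling the robotic elements."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

class RoboticElement:
    def __init__(self, name, backplane):
        self.name = name
        self.memory = {}                                # element-local memory component
        self.backplane = backplane                      # I/O interface to the virtual backplane
        backplane.subscribe("status", self.on_status)

    def on_status(self, message):
        self.memory["last_status"] = message            # processing: react to bus traffic

bus = VirtualBackplane()
arm, base = RoboticElement("arm", bus), RoboticElement("base", bus)
bus.publish("status", {"source": "base", "state": "ready"})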
Abstract:
A method of controlling a plurality of crafts in an operational area includes providing a command system, a first craft in the operational area coupled to the command system, and a second craft in the operational area coupled to the command system. The method further includes determining a first desired destination and a first trajectory to the first desired destination, sending a first command from the command system to the first craft to move a first distance along the first trajectory, and moving the first craft according to the first command. A second desired destination and a second trajectory to the second desired destination are determined and a second command is sent from the command system to the second craft to move a second distance along the second trajectory.
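A hedged sketch of that control flow, assuming straight-line trajectories in a 2-D plane and an illustrative command format; nothing here beyond the sequencing comes from the claim language.

import math

def trajectory(start, destination):
    """Unit direction vector from a craft's position toward its desired destination."""
    dx, dy = destination[0] - start[0], destination[1] - start[1]
    norm = math.hypot(dx, dy)
    return (dx / norm, dy / norm)

def command(craft_id, start, destination, distance):
    """Build a simple move command: travel `distance` along the trajectory."""
    ux, uy = trajectory(start, destination)
    return {"craft": craft_id, "move_to": (start[0] + distance * ux, start[1] + distance * uy)}

# First craft is commanded a first distance along its trajectory, then the
# second craft a second distance along its own trajectory.
cmd1 = command("craft-1", start=(0.0, 0.0), destination=(10.0, 0.0), distance=2.0)
cmd2 = command("craft-2", start=(5.0, 5.0), destination=(5.0, -5.0), distance=3.0)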