Abstract:
During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features are extracted from a current frame of the captured images. The extracted features are matched, and feature information is tracked between consecutive frames. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If none of the extracted features matches a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information. If at least one of the extracted features matches a geo-referenced visual feature, a pose is determined for the platform device using location information associated with the matched, geo-referenced visual feature and relative motion information between consecutive frames.
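The abstract's branching logic might be sketched as follows; the function names, the descriptor-matching rule, and the pose composition are all illustrative assumptions, not the patented implementation.

```python
import numpy as np

def estimate_pose(prev_pose, frame_rel_motion, frame_descriptors, geo_db,
                  match_threshold=0.8):
    # prev_pose:         4x4 homogeneous pose from the previous frame.
    # frame_rel_motion:  4x4 relative motion between consecutive frames,
    #                    derived from tracked features and IMU propagation.
    # frame_descriptors: unit-norm feature descriptors, current frame.
    # geo_db:            list of (descriptor, geo_pose) pairs of stored,
    #                    geo-referenced visual features.
    for desc in frame_descriptors:
        for ref_desc, geo_pose in geo_db:
            if float(np.dot(desc, ref_desc)) > match_threshold:
                # At least one feature matches a geo-referenced feature:
                # anchor the pose to its absolute location, refined by
                # the relative motion between consecutive frames.
                return geo_pose @ frame_rel_motion
    # No feature matches: dead-reckon by propagating the previous pose
    # with the IMU/feature-derived relative motion.
    return prev_pose @ frame_rel_motion
```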
Abstract:
A method for providing a real time, three-dimensional (3D) navigational map for platforms includes integrating at least two sources of multi-modal and multi-dimensional platform sensor information to produce a more accurate 3D navigational map. The method receives both a 3D point cloud from a first sensor on a platform with a first modality and a 2D image from a second sensor on the platform with a second modality different from the first modality, generates a semantic label and a semantic label uncertainty associated with a first space point in the 3D point cloud, generates a semantic label and a semantic label uncertainty associated with a second space point in the 2D image, and fuses the first space point's semantic label and semantic label uncertainty with the second space point's semantic label and semantic label uncertainty to create fused 3D spatial information that enhances the 3D navigational map.
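One plausible reading of the fusion step is a naive-Bayes combination of per-point class distributions, with entropy serving as the fused uncertainty; the names and the fusion rule below are assumptions, not the patent's method.

```python
import numpy as np

def fuse_semantic_labels(p_cloud, p_image):
    # p_cloud, p_image: per-point class probability vectors produced by
    # the 3D (point cloud) branch and the 2D image branch respectively.
    fused = p_cloud * p_image              # naive-Bayes product of beliefs
    fused /= fused.sum()                   # renormalize to a distribution
    label = int(np.argmax(fused))          # fused semantic label
    uncertainty = float(-(fused * np.log(fused + 1e-12)).sum())  # entropy
    return label, uncertainty

# Example with three classes: both modalities lean toward class 0, so the
# fused label is 0 with lower entropy than either input alone.
label, u = fuse_semantic_labels(np.array([0.7, 0.2, 0.1]),
                                np.array([0.6, 0.3, 0.1]))
```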
Abstract:
Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
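A biofeedback update of this kind might look like the following toy rule; the physiological signal, target value, and step sizes are illustrative assumptions only.

```python
def next_intensity(intensity, heart_rate, target_hr=55.0, step=0.02):
    # Fade the immersive scene as measured physiology approaches a
    # restful target; gently re-engage if arousal rises again.
    if heart_rate <= target_hr:
        intensity -= step          # user is calming: dim/slow the scene
    else:
        intensity += step / 4.0    # arousal rising: mild re-stimulation
    return max(0.0, min(1.0, intensity))

# Example: scene intensity drifts downward as the heart rate settles.
level = 1.0
for hr in [72, 68, 63, 58, 54, 53]:
    level = next_intensity(level, hr)
```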
Abstract:
A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
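The rendering step can be illustrated with a standard pinhole projection of a world-anchored synthetic point into the user's view; the pose convention and intrinsics below are assumptions for the sketch.

```python
import numpy as np

def project_synthetic_object(obj_world, user_pose, K):
    # obj_world: 3-vector, synthetic object position in world coordinates.
    # user_pose: 4x4 world-from-head transform estimated from the
    #            user-worn sensors (pose and location).
    # K:         3x3 display/camera intrinsic matrix.
    head_from_world = np.linalg.inv(user_pose)
    p = head_from_world @ np.append(obj_world, 1.0)  # to the head frame
    if p[2] <= 0:
        return None                                  # behind the user
    uv = K @ (p[:3] / p[2])                          # pinhole projection
    return uv[:2]                                    # pixel to render at
```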
Abstract:
In general, techniques are described for coordinating actions of a plurality of agents or subsystems using a machine learning system that implements a Capability Graph Network (CGN). In an example, a method includes generating a control policy model comprising a plurality of nodes and a plurality of edges interconnecting the plurality of nodes, wherein the plurality of nodes represents a plurality of agents or subsystems and the plurality of edges represents information exchange between the plurality of agents or subsystems; and encoding an agent behavior control policy within the control policy model for execution to coordinate a plurality of the actions of the plurality of agents or subsystems.
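A minimal sketch of such a control policy graph, assuming nodes hold per-agent policies and edges carry one-way messages (the structure and names are illustrative, not the CGN's actual design):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class ControlPolicyGraph:
    policies: Dict[str, Callable[[dict], str]] = field(default_factory=dict)
    edges: List[Tuple[str, str]] = field(default_factory=list)

    def add_agent(self, name, policy):
        self.policies[name] = policy          # node: agent plus its policy

    def connect(self, src, dst):
        self.edges.append((src, dst))         # edge: information exchange

    def step(self, observations):
        # Each agent acts on its own observation plus peer messages.
        msgs = {dst: observations.get(src) for src, dst in self.edges}
        return {n: pol({"obs": observations.get(n), "msg": msgs.get(n)})
                for n, pol in self.policies.items()}

g = ControlPolicyGraph()
g.add_agent("scout", lambda io: "report" if io["obs"] else "search")
g.add_agent("striker", lambda io: "engage" if io["msg"] == "target" else "wait")
g.connect("scout", "striker")
actions = g.step({"scout": "target"})  # {'scout': 'report', 'striker': 'engage'}
```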
Abstract:
A method, apparatus and system for artificial intelligence-based HDRL planning and control for coordinating a team of platforms includes implementing a global planning layer for determining a collective goal and for determining, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one platform; implementing a platform planning layer for determining, by applying at least one machine learning process, at least one respective action to be performed by the at least one platform to achieve the respective platform goal; and implementing a platform control layer for determining at least one respective function to be performed by the at least one platform. In the method, apparatus and system, although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
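The three-layer decomposition might be sketched as below; the interfaces and the trivial stand-in policies are assumptions, standing in for the separately trained machine learning processes.

```python
# Each layer is trained separately but shares information at run time.

def global_planning_layer(team_positions, collective_goal):
    # Assign each platform its own goal toward the collective goal.
    return {name: collective_goal for name in team_positions}

def platform_planning_layer(position, platform_goal):
    # Choose a high-level action toward this platform's goal.
    return "advance" if position != platform_goal else "hold"

def platform_control_layer(action):
    # Map the planned action to a low-level platform function.
    return {"advance": "set_velocity", "hold": "apply_brake"}[action]

team = {"uav1": (0, 0), "ugv2": (3, 1)}
goals = global_planning_layer(team, (5, 5))
commands = {name: platform_control_layer(
                platform_planning_layer(pos, goals[name]))
            for name, pos in team.items()}
```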
Abstract:
A method, apparatus and system for determining a change in pose of a mobile device include determining, from first ranging information received at a first receiver and a second receiver on the mobile device from a stationary node during a first time instance, a distance from the stationary node to each of the first receiver and the second receiver; determining, from second ranging information received at the first receiver and the second receiver from the stationary node during a second time instance, a distance from the stationary node to each of the first receiver and the second receiver; and determining, from the distances determined during the first time instance and the second time instance, how far and in which direction the first receiver and the second receiver moved between the first time instance and the second time instance, thereby determining a change in pose of the mobile device, where a position of the stationary node is unknown.
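In two dimensions the geometry can be worked through directly: placing the (unknown) stationary node at the origin of its own frame, each receiver lies on a range circle, and the rigid baseline between the two receivers pins down their layout up to a reflection. A sketch under those assumptions, which are illustrative rather than the patent's solver:

```python
import numpy as np

def receiver_positions(d_a, d_b, baseline):
    # Put the node at the origin and receiver A on the x-axis at range
    # d_a. Receiver B lies at range d_b from the node and `baseline`
    # from A: the intersection of two circles (up to a reflection).
    a = np.array([d_a, 0.0])
    x = (d_a**2 + d_b**2 - baseline**2) / (2 * d_a)
    y = np.sqrt(max(d_b**2 - x**2, 0.0))
    return a, np.array([x, y])

def pose_change(d1a, d1b, d2a, d2b, baseline):
    a1, b1 = receiver_positions(d1a, d1b, baseline)
    a2, b2 = receiver_positions(d2a, d2b, baseline)
    # The baseline midpoint's translation and the baseline direction's
    # rotation give how far and in which direction the device moved.
    translation = (a2 + b2) / 2 - (a1 + b1) / 2
    heading1 = np.arctan2(*(b1 - a1)[::-1])
    heading2 = np.arctan2(*(b2 - a2)[::-1])
    return translation, heading2 - heading1

# Example ranges at the two time instances, in the node's local frame.
t, dtheta = pose_change(5.0, 5.39, 6.0, 6.32, baseline=1.0)
```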
Abstract:
Computer aided inspection systems (CAIS) and methods for inspection, error analysis and comparison of structures are presented herein. In some embodiments, a CAIS may include: a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and to align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine level measurements of the structure; and a model recognition system configured to compare the fine level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
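The final comparison step might be sketched as a nearest-point tolerance check against the indexed model region; the data layout and the threshold value are assumptions for illustration.

```python
import numpy as np

def flag_deviations(measured_points, model_points, tolerance=0.005):
    # measured_points: Nx3 fine-level measurements from the second
    #                  sensor package, already aligned to the model.
    # model_points:    Mx3 points sampled from the local area of the 3D
    #                  computer model indexed by the SLAM localization.
    flagged = []
    for p in measured_points:
        d = np.min(np.linalg.norm(model_points - p, axis=1))  # nearest model point
        if d > tolerance:
            flagged.append((p, d))    # out-of-tolerance: candidate defect
    return flagged
```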
Abstract:
Methods and apparatuses for tracking objects comprise: one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
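Geolocating a detected object from the apparatus location can be illustrated with a planar bearing-and-range calculation; the conventions and names below are assumptions, not the patented pipeline.

```python
import numpy as np

def locate_object(device_east_north, device_heading_rad, bearing_rad, range_m):
    # device_east_north:  apparatus location from the localization module.
    # device_heading_rad: apparatus heading (0 = north, clockwise positive).
    # bearing_rad:        object bearing relative to the heading, e.g.
    #                     detected in the wide field of view and refined
    #                     in the corresponding narrow field of view.
    # range_m:            estimated range to the object.
    theta = device_heading_rad + bearing_rad
    offset = range_m * np.array([np.sin(theta), np.cos(theta)])  # east, north
    return device_east_north + offset

# The augmented reality module can then highlight the returned location
# when rendering the enhanced view of the scene on the display.
obj = locate_object(np.array([100.0, 250.0]), 0.10, -0.05, 40.0)
```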