Abstract:
Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine indicating or describing an object in the world. In one example, the user may gesture toward the object, which is detected by a visual sensor. In another example, the user may verbally describe the object, which is detected by an audio sensor. The system receiving the input may then determine which object near the user's location the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or of the autonomous or mobile machine.
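A minimal sketch of one way such a determination could work, assuming a hypothetical set of known landmarks with geographic coordinates and a gesture reduced to a compass bearing from the user; the names (`Landmark`, `resolve_indicated_object`) and thresholds are illustrative, not taken from the abstract.

```python
import math
from dataclasses import dataclass

@dataclass
class Landmark:
    name: str
    lat: float
    lon: float

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate initial bearing from point 1 to point 2, in degrees."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return math.degrees(math.atan2(y, x)) % 360

def resolve_indicated_object(user_lat, user_lon, gesture_bearing, known_objects, tolerance_deg=20.0):
    """Return the known nearby object whose bearing best matches the gesture direction."""
    best, best_err = None, tolerance_deg
    for obj in known_objects:
        err = abs((bearing_deg(user_lat, user_lon, obj.lat, obj.lon) - gesture_bearing + 180) % 360 - 180)
        if err < best_err:
            best, best_err = obj, err
    return best

# Hypothetical known objects near the user's geographic location.
landmarks = [Landmark("coffee shop", 37.3350, -122.0091),
             Landmark("gas station", 37.3346, -122.0120)]
print(resolve_indicated_object(37.3349, -122.0110, gesture_bearing=90.0, known_objects=landmarks))
```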
Abstract:
An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage) and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors). This allows the system to determine much more information about a scene, including information about occluded or distant regions, than is available from the local data alone.
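A hedged sketch of the augmentation step: locally sensed points are merged with pre-generated tiles covering the region around the vehicle. The tile keying scheme, tile size, and the `remote_store` lookup are assumptions made for illustration, not the system's actual data format.

```python
def tile_key(x, y, tile_size=100.0):
    """Map a world-space (x, y) position onto an integer tile index."""
    return (int(x // tile_size), int(y // tile_size))

def tiles_for_view(vehicle_xy, view_distance=300.0, tile_size=100.0):
    """Tile keys covering a square region around the vehicle's position."""
    cx, cy = tile_key(*vehicle_xy, tile_size)
    radius = int(view_distance // tile_size)
    return {(cx + dx, cy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}

def build_scene(local_points, vehicle_xy, remote_store):
    """Augment the sparse local point cloud with pre-generated tile geometry."""
    scene = {"local_points": local_points, "tiles": []}
    for key in tiles_for_view(vehicle_xy):
        tile = remote_store.get(key)      # e.g., cloud-backed tile cache lookup
        if tile is not None:              # adds mesh/texture data for occluded
            scene["tiles"].append(tile)   # or distant regions of the scene
    return scene

# Hypothetical remote store holding one pre-generated tile.
remote_store = {(0, 0): {"mesh": "...", "textures": "..."}}
print(build_scene([(1.0, 2.0, 0.1)], vehicle_xy=(10.0, 20.0), remote_store=remote_store))
```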
Abstract:
Methods, systems, and apparatus are described to provide a three-dimensional transition for a map view change. Various embodiments may display a map view. Embodiments may obtain input selecting another map view for display. Input may be obtained through touch, auditory, or other well-known input technologies. In response to the input selecting a map view, embodiments may then display a transition animation that illustrates moving from the displayed map view to the selected map view in virtual space. Embodiments may then display the selected map view.
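An illustrative sketch of such a transition animation, interpolating a virtual camera from the displayed view to the selected view; the `MapView` fields and frame count are assumptions for the example, not the described embodiments' actual API.

```python
from dataclasses import dataclass

@dataclass
class MapView:
    lat: float
    lon: float
    altitude: float   # camera height above the map
    heading: float    # degrees

def lerp(a, b, t):
    return a + (b - a) * t

def transition_frames(current: MapView, target: MapView, frames=30):
    """Yield intermediate camera views moving from the current to the selected view."""
    for i in range(1, frames + 1):
        t = i / frames
        yield MapView(lerp(current.lat, target.lat, t),
                      lerp(current.lon, target.lon, t),
                      lerp(current.altitude, target.altitude, t),
                      lerp(current.heading, target.heading, t))

for view in transition_frames(MapView(37.33, -122.01, 500.0, 0.0),
                              MapView(37.78, -122.41, 800.0, 45.0), frames=3):
    print(view)
```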
Abstract:
A mobile device including a touchscreen display presents an image of a three-dimensional object. The display can concurrently present a user interface element in the form of a virtual button. While the device's user touches and maintains fingertip contact with the virtual button via the touchscreen, the mobile device can operate in a special mode in which physically tilting the device about spatial axes adjusts the presentation of the image of the three-dimensional object on the display, causing the object to be rendered from different viewpoints in the virtual space that the object virtually occupies. The mobile device can detect such physical tilting based on feedback from a gyroscope and an accelerometer contained within the device.
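A minimal sketch of this interaction: while the virtual button is held, gyroscope rotation rates are integrated into the yaw and pitch of the rendering viewpoint. The sensor sampling interface and the clamping limits are assumptions for illustration.

```python
class TiltViewController:
    def __init__(self):
        self.button_held = False
        self.yaw = 0.0    # degrees, rotation about the vertical axis
        self.pitch = 0.0  # degrees, rotation about the horizontal axis

    def on_button(self, pressed: bool):
        """Track whether the user is maintaining contact with the virtual button."""
        self.button_held = pressed

    def on_gyro_sample(self, yaw_rate, pitch_rate, dt):
        """Integrate rotation rates (deg/s) into the viewpoint only while the button is held."""
        if not self.button_held:
            return
        self.yaw = (self.yaw + yaw_rate * dt) % 360
        self.pitch = max(-89.0, min(89.0, self.pitch + pitch_rate * dt))

controller = TiltViewController()
controller.on_button(True)
controller.on_gyro_sample(yaw_rate=30.0, pitch_rate=-10.0, dt=0.5)
print(controller.yaw, controller.pitch)   # 15.0 -5.0
```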
Abstract:
A mobile device including a touchscreen display can detect multiple points of fingertip contact made against the touchscreen concurrently. The device can distinguish this multi-touch gesture from other gestures based on the duration, immobility, and concurrency of the contacts. In response to detecting such a multi-touch gesture, the device can send a multi-touch event to an application executing on the device. The application can respond to the multi-touch event in a variety of ways. For example, the application can determine the distance of a path between points on a map that a user has concurrently touched with his fingertips. The application can display this distance to the user.
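A sketch of the gesture test and the path-distance response described above; the duration and movement thresholds, touch record fields, and the haversine distance helper are illustrative assumptions.

```python
import math

def is_measure_gesture(touches, min_duration=0.5, max_movement_px=10.0):
    """Require two or more concurrent, long-lived, essentially immobile contacts."""
    return (len(touches) >= 2
            and all(t["duration"] >= min_duration for t in touches)
            and all(t["movement_px"] <= max_movement_px for t in touches))

def haversine_m(p1, p2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical concurrent contacts, each mapped to a geographic point.
touches = [
    {"duration": 0.8, "movement_px": 2.0, "map_point": (37.3349, -122.0090)},
    {"duration": 0.7, "movement_px": 3.5, "map_point": (37.3318, -122.0312)},
]
if is_measure_gesture(touches):
    points = [t["map_point"] for t in touches]
    distance = sum(haversine_m(points[i], points[i + 1]) for i in range(len(points) - 1))
    print(f"Path distance: {distance:.0f} m")
```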
Abstract:
Methods, systems, and apparatus are described to provide visual feedback of a change in map view. Various embodiments may display a map view of a map in a two-dimensional map view mode. Embodiments may obtain input indicating a change to a three-dimensional map view mode. Input may be obtained through touch, auditory, or other well-known input technologies. Some embodiments may allow the input to request a specific display position. In response to the input indicating a change to a three-dimensional map view mode, embodiments may then display an animation that moves a virtual camera for the map display to different virtual camera positions, illustrating that the map view mode has changed to a three-dimensional map view mode.
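A small sketch of that visual feedback, animating the virtual camera's tilt and altitude from an overhead two-dimensional pose toward a three-dimensional perspective pose; the frame count and pose parameters are assumptions for the example.

```python
def animate_2d_to_3d(start_tilt=0.0, end_tilt=45.0,
                     start_alt=1000.0, end_alt=600.0, frames=20):
    """Yield (tilt_degrees, altitude) camera poses for each frame of the transition."""
    for i in range(1, frames + 1):
        t = i / frames
        yield (start_tilt + (end_tilt - start_tilt) * t,
               start_alt + (end_alt - start_alt) * t)

for tilt, alt in animate_2d_to_3d(frames=4):
    print(f"tilt={tilt:.1f} deg, altitude={alt:.0f}")
```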
Abstract:
The described embodiments provide a system for performing an action based on a change in a status of a wired or wireless network connection for the system. During operation, the system detects the change in the status of the network connection. In response to detecting the change, the system determines a state of the system. The system then performs one or more actions using the determined state.
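A hedged sketch of the described flow: on a connection status change, the system inspects its current state and dispatches matching actions. The state fields and the action table below are hypothetical examples, not the embodiments' actual set of actions.

```python
def on_connection_change(new_status, system_state, actions):
    """Pick and run actions keyed by (connection status, relevant system state)."""
    key = (new_status, system_state.get("pending_uploads", 0) > 0)
    for action in actions.get(key, []):
        action()

# Hypothetical action table keyed by status and whether uploads are queued.
actions = {
    ("connected", True):    [lambda: print("resume queued uploads")],
    ("connected", False):   [lambda: print("check for updates")],
    ("disconnected", True): [lambda: print("pause uploads and cache locally")],
}
on_connection_change("connected", {"pending_uploads": 3}, actions)
```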
Abstract:
Techniques for performing context-sensitive actions in response to touch input are provided. A user interface of an application can be displayed. Touch input can be received in a region of the displayed user interface, and a context can be determined. A first action may be performed if the context is a first context, and a second action may instead be performed if the context is a second context different from the first context. In some embodiments, the same action may be performed if the context is a first context and the touch input is a first touch input, and also if the context is a second context and the touch input is a second touch input.
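A sketch of context-sensitive dispatch: the same touch input maps to different actions depending on the determined context. The context names and actions in the table are illustrative assumptions.

```python
def perform_action(touch_input, context):
    """Return the action for a (touch input, context) pair; unknown pairs are a no-op."""
    table = {
        ("swipe", "editing"):  "delete item",
        ("swipe", "browsing"): "navigate back",
        ("long_press", "map"): "drop pin",
        ("tap", "map"):        "show place details",
    }
    return table.get((touch_input, context), "no-op")

print(perform_action("swipe", "editing"))   # delete item
print(perform_action("swipe", "browsing"))  # navigate back
```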