Abstract:
Systems and methods are provided for presenting an intuitive preview of upcoming navigational instructions. One example method for providing navigational instructions includes obtaining, by one or more computing devices, navigational information describing a sequence of navigational maneuvers associated with a route. The method includes determining, by the one or more computing devices, a distance between each navigational maneuver and the previous sequential navigational maneuver. The method includes displaying, by the one or more computing devices, a user interface providing a sequence of indicators that respectively represent the sequence of navigational maneuvers. The space between each indicator and the previous sequential indicator is proportional to the distance between the maneuver represented by that indicator and the maneuver represented by the previous sequential indicator.
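The following is a minimal sketch, not taken from the patent, of how the proportional-spacing idea could be computed: given the distance from each maneuver to the previous one, indicator offsets along a display strip are scaled so that the gap between consecutive indicators is proportional to the route distance between the corresponding maneuvers. The names Maneuver, indicatorOffsets, and stripHeightPx are assumptions introduced for illustration.

```kotlin
// Sketch: space maneuver indicators proportionally to the distance between maneuvers.
data class Maneuver(val description: String, val distanceFromPreviousMeters: Double)

fun indicatorOffsets(maneuvers: List<Maneuver>, stripHeightPx: Double): List<Double> {
    val totalMeters = maneuvers.sumOf { it.distanceFromPreviousMeters }
    if (totalMeters == 0.0) return maneuvers.map { 0.0 }
    var cumulative = 0.0
    return maneuvers.map { m ->
        cumulative += m.distanceFromPreviousMeters
        // Offset in pixels grows linearly with cumulative route distance,
        // so the gap between indicators mirrors the gap between maneuvers.
        stripHeightPx * (cumulative / totalMeters)
    }
}

fun main() {
    val route = listOf(
        Maneuver("Turn right onto Main St", 0.0),
        Maneuver("Turn left onto Oak Ave", 400.0),
        Maneuver("Merge onto Highway 7", 1600.0),
    )
    println(indicatorOffsets(route, stripHeightPx = 600.0)) // [0.0, 120.0, 600.0]
}
```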
Abstract:
Disclosed are techniques for detecting a first gesture performed at a first distance and at a second distance. A first aspect of a target may be manipulated according to the first gesture when it is performed at the first distance, and a second aspect of the target may be manipulated according to the first gesture when it is performed at the second distance.
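As a rough illustration of this distance-dependent dispatch, and not the patent's implementation, the sketch below maps the same gesture to different aspects of a target depending on the sensed distance. The names Target, applyRotateGesture, and NEAR_THRESHOLD_CM, and the choice of zoom and rotation as the two aspects, are assumptions for the example.

```kotlin
// Sketch: one gesture, two effects, selected by the distance at which it is performed.
data class Target(var zoom: Double = 1.0, var rotationDeg: Double = 0.0)

const val NEAR_THRESHOLD_CM = 15.0  // assumed boundary between "first" and "second" distance

fun applyRotateGesture(target: Target, gestureAngleDeg: Double, distanceCm: Double) {
    if (distanceCm < NEAR_THRESHOLD_CM) {
        target.zoom *= 1.0 + gestureAngleDeg / 360.0   // first aspect: zoom, when near
    } else {
        target.rotationDeg += gestureAngleDeg          // second aspect: rotation, when far
    }
}

fun main() {
    val photo = Target()
    applyRotateGesture(photo, gestureAngleDeg = 90.0, distanceCm = 5.0)   // near: zooms
    applyRotateGesture(photo, gestureAngleDeg = 90.0, distanceCm = 40.0)  // far: rotates
    println(photo) // Target(zoom=1.25, rotationDeg=90.0)
}
```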
Abstract:
A computing device is described that outputs, for display, an initial speech recognition graphical user interface (GUI) having at least one element. The computing device receives audio data and determines, based on the audio data, a voice-initiated action. Responsive to determining the voice-initiated action, the computing device outputs, for display, an updated speech recognition GUI having an animation of a change in a position of the at least one element to indicate that the voice-initiated action has been determined.
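A minimal sketch of the animation idea follows, assuming a simple per-frame linear interpolation of an element's position once a voice-initiated action has been determined; the names Element, animateElement, and the frame count are hypothetical and not drawn from the patent.

```kotlin
// Sketch: animate a GUI element toward a new position to signal that a
// voice-initiated action was recognized.
data class Element(var x: Double, var y: Double)

fun interpolate(start: Double, end: Double, fraction: Double) = start + (end - start) * fraction

// Produces the sequence of animation frames moving an element to a target position.
fun animateElement(element: Element, targetX: Double, targetY: Double, frames: Int): List<Element> {
    val startX = element.x
    val startY = element.y
    return (1..frames).map { f ->
        val t = f.toDouble() / frames
        Element(interpolate(startX, targetX, t), interpolate(startY, targetY, t))
    }
}

fun main() {
    val micIcon = Element(x = 0.0, y = 100.0)
    // Suppose the recognizer has just determined a voice-initiated action from the audio data.
    val frames = animateElement(micIcon, targetX = 0.0, targetY = 0.0, frames = 4)
    println(frames.map { it.y }) // [75.0, 50.0, 25.0, 0.0]
}
```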