Abstract:
Embodiments may be implemented by a computing device, such as a head-mountable display, in order to use a single guard phrase to enable different voice commands in different interface modes. An example device includes an audio sensor and a computing system configured to analyze audio data captured by the audio sensor to detect speech that includes a predefined guard phrase, and to operate in a plurality of different interface modes comprising at least a first and a second interface mode. During operation in the first interface mode, the computing system may initially disable one or more first-mode speech commands, and respond to detection of the guard phrase by enabling the one or more first-mode speech commands. During operation in the second interface mode, the computing system may initially disable a second-mode speech command, and respond to detection of the guard phrase by enabling the second-mode speech command.
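The gating described above can be pictured as a small per-mode state machine. The following Python sketch is illustrative only; the mode names, command sets, guard phrase, and GuardPhraseController class are assumptions made for the example, not the claimed implementation.

```python
# Minimal, hypothetical sketch of guard-phrase gating across interface modes.
# Mode names, command sets, and the class below are illustrative assumptions.

GUARD_PHRASE = "ok device"

MODE_COMMANDS = {
    "first_mode": {"take photo", "record video"},
    "second_mode": {"read message"},
}


class GuardPhraseController:
    def __init__(self, mode: str):
        self.mode = mode
        self.commands_enabled = False  # mode-specific commands start disabled

    def on_speech(self, utterance: str):
        utterance = utterance.lower().strip()
        if not self.commands_enabled:
            # Until the guard phrase is detected, only listen for the guard phrase.
            if GUARD_PHRASE in utterance:
                self.commands_enabled = True
            return None
        # After the guard phrase, accept commands valid in the current mode.
        if utterance in MODE_COMMANDS[self.mode]:
            return utterance
        return None


controller = GuardPhraseController("first_mode")
controller.on_speech("ok device")           # enables first-mode commands
print(controller.on_speech("take photo"))   # -> "take photo"
```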
Abstract:
Methods and systems for hands-free browsing in a wearable computing device are provided. A wearable computing device may provide for display a view of a first card of a plurality of cards, which include respective virtual displays of content. The wearable computing device may determine a first rotation of the wearable computing device about a first axis and one or more eye gestures. Based on a combination of the first rotation and the eye gestures, the wearable computing device may provide for display a navigable menu, which may include an alternate view of the first card and at least a portion of one or more other cards. Then, based on a determined second rotation of the wearable computing device about a second axis and the direction of the second rotation, the wearable computing device may generate a display indicative of navigation through the navigable menu.
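As a rough illustration, this flow can be modeled in two stages: a rotation about one axis plus an eye gesture opens the menu, and the direction of a rotation about a second axis scrolls through the cards. The thresholds, the wink gesture, and the CardMenu class below are assumptions made for this sketch.

```python
# Hypothetical sketch of menu navigation driven by head rotation and eye gestures.
# Threshold values, gesture names, and card contents are illustrative assumptions.

OPEN_TILT_THRESHOLD_DEG = 15.0


class CardMenu:
    def __init__(self, cards):
        self.cards = cards          # virtual displays of content
        self.index = 0              # currently displayed card
        self.menu_open = False

    def update(self, tilt_x_deg, eye_gesture, tilt_y_deg=0.0):
        if not self.menu_open:
            # Open only when rotation about the first axis and an eye gesture coincide.
            if tilt_x_deg > OPEN_TILT_THRESHOLD_DEG and eye_gesture == "wink":
                self.menu_open = True
            return self.cards[self.index]
        # While open, the sign of rotation about the second axis sets the direction.
        if tilt_y_deg > OPEN_TILT_THRESHOLD_DEG:
            self.index = min(self.index + 1, len(self.cards) - 1)
        elif tilt_y_deg < -OPEN_TILT_THRESHOLD_DEG:
            self.index = max(self.index - 1, 0)
        return self.cards[self.index]


menu = CardMenu(["weather", "messages", "photos"])
menu.update(20.0, "wink")               # opens the navigable menu
print(menu.update(0.0, None, 25.0))     # -> "messages"
```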
Abstract:
Provided are systems, methods, and computer-readable media for providing recommended entities responsive to a search query based on a query-specific subset of contacts from a user's social graph. A search query is received from a user that includes an identifier identifying a subset of the user's social graph. The entities responsive to the search query are identified, and those entities having evaluations by or other associations with contacts from the identified subset are also identified. In response to the search query, those entities having associations with the contacts in the identified subset of the user's social graph are provided as recommended entities in the search results.
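A toy Python example of the filtering step might look like the following; the subset identifier, social-graph contents, and entity associations are invented for illustration.

```python
# Hypothetical illustration of filtering search results by a subset of the user's
# social graph. All data below is made up for the example.

SOCIAL_GRAPH = {
    "coworkers": {"alice", "bob"},
    "family": {"carol"},
}

# entity -> contacts who evaluated or are otherwise associated with it
ENTITY_ASSOCIATIONS = {
    "Cafe Luna": {"alice"},
    "Pizza Corner": {"carol"},
    "Sushi Bar": {"bob", "carol"},
}


def recommend(query_results, subset_identifier):
    contacts = SOCIAL_GRAPH.get(subset_identifier, set())
    # Keep only entities associated with a contact in the identified subset.
    return [
        entity for entity in query_results
        if ENTITY_ASSOCIATIONS.get(entity, set()) & contacts
    ]


print(recommend(["Cafe Luna", "Pizza Corner", "Sushi Bar"], "coworkers"))
# -> ['Cafe Luna', 'Sushi Bar']
```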
Abstract:
Methods, apparatus, and computer-readable media are described herein related to recognizing a look up gesture. Level-indication data from at least an accelerometer associated with a wearable computing device (WCD) can be received. The WCD can be worn by a wearer. The WCD can determine whether a head of the wearer is level based on the level-indication data. In response to determining that the head of the wearer is level, the WCD can receive lookup-indication data from at least the accelerometer. The WCD can determine whether the head of the wearer is tilted up based on the lookup-indication data. In response to determining that the head of the wearer is tilted up, the WCD can generate a gesture-recognition trigger, where the gesture-recognition trigger indicates that the head of the wearer has moved up from level.
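The two-stage detection can be sketched as a simple pass over accelerometer-derived pitch samples; the tolerance and threshold values below are assumptions, not taken from the description.

```python
# Hypothetical two-stage look-up detection: confirm the head is level, then
# watch for an upward tilt and emit a gesture-recognition trigger.
# Threshold values are illustrative assumptions.

LEVEL_TOLERANCE_DEG = 5.0
LOOKUP_THRESHOLD_DEG = 20.0


def detect_lookup(pitch_samples_deg):
    """Return True once a level-then-tilted-up sequence is observed."""
    head_was_level = False
    for pitch in pitch_samples_deg:
        if not head_was_level:
            # Stage 1: level-indication data shows the head is roughly level.
            if abs(pitch) <= LEVEL_TOLERANCE_DEG:
                head_was_level = True
        else:
            # Stage 2: lookup-indication data shows the head tilted up from level.
            if pitch >= LOOKUP_THRESHOLD_DEG:
                return True  # gesture-recognition trigger
    return False


print(detect_lookup([2.0, 1.0, 10.0, 25.0]))   # -> True
print(detect_lookup([30.0, 28.0, 26.0]))       # -> False (never level first)
```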
Abstract:
Example methods and systems use multiple sensors to determine whether a speaker is speaking. Audio data in an audio-channel speech band detected by a microphone can be received. Vibration data in a vibration-channel speech band representative of vibrations detected by a sensor other than the microphone can be received. The microphone and the sensor can be associated with a head-mountable device (HMD). It is determined whether the audio data is causally related to the vibration data. If the audio data and the vibration data are causally related, an indication can be generated that the audio data contains HMD-wearer speech. Causally related audio and vibration data can be used to increase accuracy of text transcription of the HMD-wearer speech. If the audio data and the vibration data are not causally related, an indication can be generated that the audio data does not contain HMD-wearer speech.
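One simple way to approximate the causal-relationship test is a normalized correlation between the audio and vibration speech-band envelopes, as in the sketch below; the correlation threshold, the envelope inputs, and the use of correlation itself are assumptions made for illustration.

```python
# Hypothetical check of whether microphone audio and vibration data are related,
# approximated as a normalized correlation between speech-band envelopes.
# The threshold and the envelope representation are illustrative assumptions.

import math

CORRELATION_THRESHOLD = 0.7


def normalized_correlation(audio_envelope, vibration_envelope):
    n = min(len(audio_envelope), len(vibration_envelope))
    a, v = audio_envelope[:n], vibration_envelope[:n]
    mean_a, mean_v = sum(a) / n, sum(v) / n
    num = sum((x - mean_a) * (y - mean_v) for x, y in zip(a, v))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a) *
                    sum((y - mean_v) ** 2 for y in v))
    return num / den if den else 0.0


def is_wearer_speech(audio_envelope, vibration_envelope):
    # High correlation suggests the vibrations were caused by the wearer speaking,
    # so the audio is labeled as HMD-wearer speech.
    return normalized_correlation(audio_envelope, vibration_envelope) >= CORRELATION_THRESHOLD


print(is_wearer_speech([0.1, 0.8, 0.9, 0.2], [0.2, 0.7, 0.8, 0.1]))  # True
print(is_wearer_speech([0.1, 0.8, 0.9, 0.2], [0.9, 0.1, 0.2, 0.8]))  # False
```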
Abstract:
The present description discloses systems and methods for changing the state of a device. One embodiment may include a device configured to operate in a first state, receive a signal indicative of first angular data of the device, and compare the first angular data to a first threshold. The device may then execute instructions to initiate a timer when the first angular data is greater than the first threshold, receive a signal indicative of second angular data of the device, and compare the second angular data to a second threshold. When the second angular data is less than the second threshold and the elapsed time is within a predetermined time period, the device may execute instructions to transition the device to a second state.
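The threshold-and-timer logic might be sketched as follows, with assumed threshold values, an assumed two-second window, and placeholder state names.

```python
# Hypothetical sketch of the threshold-and-timer state change: exceeding a first
# angular threshold starts a timer, and dropping below a second threshold within
# a predetermined window transitions the device to a second state.
# All numeric values and state names are illustrative assumptions.

import time

FIRST_THRESHOLD_DEG = 30.0
SECOND_THRESHOLD_DEG = 10.0
WINDOW_SECONDS = 2.0


class DeviceState:
    def __init__(self):
        self.state = "first"       # placeholder for the first state
        self.timer_start = None

    def on_angular_sample(self, angle_deg):
        now = time.monotonic()
        if self.timer_start is None:
            # Compare against the first threshold; start the timer when exceeded.
            if angle_deg > FIRST_THRESHOLD_DEG:
                self.timer_start = now
        else:
            # Compare against the second threshold inside the time window.
            if angle_deg < SECOND_THRESHOLD_DEG and now - self.timer_start <= WINDOW_SECONDS:
                self.state = "second"    # placeholder for the second state
                self.timer_start = None
            elif now - self.timer_start > WINDOW_SECONDS:
                self.timer_start = None  # window expired, reset
        return self.state


device = DeviceState()
device.on_angular_sample(35.0)        # exceeds first threshold, timer starts
print(device.on_angular_sample(5.0))  # below second threshold in time -> "second"
```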
Abstract:
Methods and systems are described herein related to enabling service providers to address voice-activated commands. An example method may involve: receiving a first utterance on a computing device, where the first utterance includes a first command; selecting a service action corresponding to the first command; determining a selected service provider for the selected service action, where the selected service provider is selected from a plurality of service providers; and sending a service fulfillment request to the selected service provider to execute the selected service action.
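A hypothetical dispatch flow for this method is sketched below; the command-to-action mapping, the provider names, and the send_request() helper are invented for the example.

```python
# Illustrative, hypothetical dispatch flow: map a spoken command to a service
# action, select a provider for that action, and send a fulfillment request.
# Commands, providers, and helpers are assumptions made for this sketch.

COMMAND_TO_ACTION = {
    "get me a ride": "ride_request",
    "order dinner": "food_delivery",
}

ACTION_TO_PROVIDERS = {
    "ride_request": ["ProviderA", "ProviderB"],
    "food_delivery": ["ProviderC"],
}


def send_request(provider, action):
    # Stand-in for the actual service fulfillment request.
    print(f"sending {action!r} fulfillment request to {provider}")


def handle_utterance(utterance, preferred_providers=None):
    action = COMMAND_TO_ACTION.get(utterance.lower().strip())
    if action is None:
        return
    providers = ACTION_TO_PROVIDERS[action]
    # Selection could reflect user preference, availability, and so on; here we
    # simply prefer a provider the user has chosen, falling back to the first.
    preferred = (preferred_providers or {}).get(action)
    provider = preferred if preferred in providers else providers[0]
    send_request(provider, action)


handle_utterance("Get me a ride", {"ride_request": "ProviderB"})
```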
Abstract:
Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) that can be implemented on a head-mountable device (HMD). The UI can include a voice-navigable UI. The voice-navigable UI can include a voice-navigable menu that includes one or more menu items comprising an original menu item and an added command menu item. The original menu item can be associated with one or more original commands, and the added command menu item can be associated with one or more added commands, including a first added command. The interface can also present a first visible menu that includes at least a portion of the voice-navigable menu. Responsive to a first utterance comprising the first added command, the interface can invoke the first added command. In some embodiments, the interface can display a second visible menu, wherein the first added command can be displayed above other menu items.
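A minimal, hypothetical model of such a menu is sketched below; the item names, command identifiers, and the promotion of the added command in a second visible menu are illustrative assumptions.

```python
# Hypothetical model of a voice-navigable menu that mixes original menu items
# with an added command item and can list the added command above other items.
# Item and command names are illustrative assumptions.

class VoiceMenu:
    def __init__(self):
        # Original items with their commands, plus an added command item.
        self.original_items = {"take a picture": "CAMERA", "send a message": "MESSAGE"}
        self.added_items = {"play music": "MUSIC"}   # first added command

    def visible_menu(self, promote_added=False):
        items = list(self.original_items)
        added = list(self.added_items)
        # In the second visible menu, the added command is displayed above others.
        return added + items if promote_added else items + added

    def invoke(self, utterance):
        utterance = utterance.lower().strip()
        commands = {**self.original_items, **self.added_items}
        return commands.get(utterance)


menu = VoiceMenu()
print(menu.visible_menu())                    # first visible menu
print(menu.visible_menu(promote_added=True))  # added command shown first
print(menu.invoke("play music"))              # -> "MUSIC"
```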
Abstract:
Provided are methods and computer-readable media for providing recommended entities based on a user's external social graph, such as an asymmetric social graph of a social networking service. In some embodiments, entities responsive to a search query or other request may be obtained. Each entity may be evaluated to determine whether the entity is associated with a contact from the user's social graph. The association may include an evaluation (e.g., a rating, a review, another evaluation, or a combination thereof) of the entity by the contact. Additionally, the contacts having associations with an entity may be ranked based on a relationship score with the user. The entities having associations with the contacts from the user's social graph may be provided as recommended entities to the user, and the association may be annotated to the recommended entity for viewing by the user.
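The ranking and annotation steps might be illustrated as follows; the relationship scores, evaluations, and data shapes are assumptions made for this sketch.

```python
# Hypothetical sketch of the ranking and annotation step: for each candidate
# entity, collect contacts who evaluated it, rank them by relationship score,
# and attach the top contact's evaluation to the recommendation.
# All scores and evaluations below are invented for the example.

RELATIONSHIP_SCORES = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

# entity -> {contact: evaluation}
EVALUATIONS = {
    "Cafe Luna": {"alice": "5 stars", "bob": "3 stars"},
    "Sushi Bar": {"carol": "loved it"},
    "Taco Stand": {},
}


def recommend_with_annotations(entities):
    recommendations = []
    for entity in entities:
        contacts = EVALUATIONS.get(entity, {})
        if not contacts:
            continue  # no association with the user's social graph
        # Rank associated contacts by relationship score, highest first.
        ranked = sorted(contacts, key=lambda c: RELATIONSHIP_SCORES.get(c, 0.0),
                        reverse=True)
        top = ranked[0]
        recommendations.append({
            "entity": entity,
            "annotated_by": top,
            "evaluation": contacts[top],
        })
    return recommendations


for rec in recommend_with_annotations(["Cafe Luna", "Sushi Bar", "Taco Stand"]):
    print(rec)
```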