Abstract:
Systems and processes for operating an intelligent automated assistant to perform intelligent list reading are provided. In accordance with one example, a method includes, at an electronic device having one or more processors and a touch-sensitive display, receiving a first user input of a first input type, the first user input including a plurality of words; displaying, on the touch-sensitive display, the plurality of words; receiving a second user input of a second input type indicating a selection of a word of the plurality of words, the second input type being different from the first input type; receiving a third user input; modifying the selected word based on the third user input to provide a modified one or more words; and displaying, on the touch-sensitive display, the modified one or more words.
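As a rough sketch of the flow described above, the Swift snippet below models the three inputs and the in-place word modification. The TranscriptEditor type, the InputType cases, and the method names are illustrative assumptions, not terms from the abstract.

```swift
import Foundation

// Hypothetical input types; the abstract only requires that the first and
// second input types differ (e.g., speech vs. touch).
enum InputType { case speech, touch, keyboard }

struct UserInput {
    let type: InputType
    let payload: String
}

final class TranscriptEditor {
    private(set) var words: [String] = []
    private var firstType: InputType?
    private var selectedIndex: Int?

    // First user input: a plurality of words (e.g., a dictated sentence).
    func receiveFirstInput(_ input: UserInput) {
        firstType = input.type
        words = input.payload.split(separator: " ").map(String.init)
        display()
    }

    // Second user input of a different type: selects one displayed word.
    func receiveSelection(_ input: UserInput, at index: Int) {
        guard input.type != firstType, words.indices.contains(index) else { return }
        selectedIndex = index
    }

    // Third user input: modifies the selected word, then redisplays.
    func receiveModification(_ input: UserInput) {
        guard let index = selectedIndex else { return }
        words[index] = input.payload
        display()
    }

    private func display() {
        // Stand-in for rendering the words on a touch-sensitive display.
        print(words.joined(separator: " "))
    }
}
```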
Abstract:
In some implementations, a mobile device can adjust an alarm setting based on the sleep onset latency duration detected for a user of the mobile device. For example, sleep onset latency can be the amount of time it takes for the user to fall asleep after the user attempts to go to sleep (e.g., goes to bed). The mobile device can determine when the user intends or attempts to go to sleep based on detected sleep ritual activities. Sleep ritual activities can include those activities a user performs in preparation for sleep. The mobile device can determine when the user is asleep based on detected sleep signals (e.g., biometric data, sounds, etc.). In some implementations, the mobile device can determine recurring patterns of long or short sleep onset latency and present suggestions that might help the user sleep better or feel more rested.
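A minimal sketch of the alarm adjustment, assuming the SleepSession fields, the default sleep target, and the adjustment policy shown here; the abstract does not prescribe a specific policy.

```swift
import Foundation

// Adjust a wake-up alarm by the measured sleep onset latency (the time
// between detected sleep-ritual activity and detected sleep).
struct SleepSession {
    let ritualStart: Date   // user begins pre-sleep activities
    let sleepOnset: Date    // sleep signals (biometrics, sound) indicate sleep
}

func sleepOnsetLatency(for session: SleepSession) -> TimeInterval {
    return session.sleepOnset.timeIntervalSince(session.ritualStart)
}

func adjustedAlarm(original: Date, session: SleepSession,
                   targetSleep: TimeInterval = 8 * 3600) -> Date {
    let latency = sleepOnsetLatency(for: session)
    // Assumed policy: push the alarm later, up to the measured latency,
    // so the user still gets close to the target amount of sleep.
    let achievable = original.timeIntervalSince(session.sleepOnset)
    let deficit = max(0, targetSleep - achievable)
    return original.addingTimeInterval(min(deficit, latency))
}
```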
Abstract:
The method is performed at an electronic device with one or more processors and memory storing one or more programs for execution by the one or more processors. A first speech input including at least one word is received. A first phonetic representation of the at least one word is determined, the first phonetic representation comprising a first set of phonemes selected from a speech recognition phonetic alphabet. The first set of phonemes is mapped to a second set of phonemes to generate a second phonetic representation, where the second set of phonemes is selected from a speech synthesis phonetic alphabet. The second phonetic representation is stored in association with a text string corresponding to the at least one word.
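The following sketch illustrates the phoneme-mapping step, assuming a toy mapping table and store; the actual recognition and synthesis alphabets are not specified in the abstract.

```swift
import Foundation

// Map phonemes from a speech-recognition alphabet to a speech-synthesis
// alphabet and store the result with the spelled word. The alphabets and
// the mapping table below are illustrative (ARPAbet-like -> IPA-like).
let recognitionToSynthesis: [String: String] = [
    "AA": "ɑ", "IY": "i", "K": "k", "T": "t"
]

struct PronunciationStore {
    private var entries: [String: [String]] = [:]

    mutating func learn(word: String, recognizedPhonemes: [String]) {
        // Map each recognition phoneme to its synthesis counterpart,
        // dropping any phoneme with no known mapping.
        let synthesisPhonemes = recognizedPhonemes.compactMap { recognitionToSynthesis[$0] }
        entries[word] = synthesisPhonemes
    }

    func pronunciation(for word: String) -> [String]? {
        return entries[word]
    }
}

var store = PronunciationStore()
store.learn(word: "cot", recognizedPhonemes: ["K", "AA", "T"])
print(store.pronunciation(for: "cot") ?? [])   // ["k", "ɑ", "t"]
```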
Abstract:
In some implementations, a computing device can confirm a sleep determination for a user based on secondary indicia of user activity. For example, the computing device can be a user's primary computing device. The primary device can predict and/or determine when the user is sleeping based on the user's use (e.g., primary indicia), or lack of use, of the primary device. After the primary device determines that the user is sleeping, the primary device can confirm that the user is asleep based on secondary indicia of user activity. In some implementations, the secondary indicia can include user activity reported to the primary computing device by other secondary computing devices (e.g., a second user device, a household appliance, etc.). In some implementations, the secondary indicia can include user activity detected by sensors of the primary computing device (e.g., sound, light, movement, etc.).
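A small sketch of the confirmation step, assuming the SecondaryIndicia fields and the sound threshold below; these specifics are not taken from the abstract.

```swift
import Foundation

// Confirm a primary-device sleep prediction against secondary indicia
// (activity reported by other devices or sensed locally).
struct SecondaryIndicia {
    let otherDeviceActivity: Bool   // e.g., a second device or appliance in use
    let soundLevel: Double          // ambient sound, arbitrary units
    let motionDetected: Bool
}

func confirmSleep(primaryPredictsSleep: Bool, indicia: SecondaryIndicia) -> Bool {
    guard primaryPredictsSleep else { return false }
    // Any recent secondary activity contradicts the sleep prediction.
    if indicia.otherDeviceActivity || indicia.motionDetected { return false }
    return indicia.soundLevel < 0.2   // assumed threshold
}
```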
Abstract:
A user request is received from a mobile client device, where the user request includes at least a speech input and seeks an informational answer or performance of a task. A failure to provide a satisfactory response to the user request is detected. In response to detection of the failure, information relevant to the user request is crowd-sourced by querying one or more crowd-sourcing information sources. One or more answers are received from the crowd-sourcing information sources, and the response to the user request is generated based on at least one of the one or more answers received from the one or more crowd-sourcing information sources.
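A sketch of the fallback flow, assuming the CrowdSource protocol and localHandler closure shown here; these are illustrative interfaces, not those of any particular assistant.

```swift
import Foundation

protocol CrowdSource {
    func answer(for request: String) -> String?
}

func respond(to request: String,
             localHandler: (String) -> String?,
             crowdSources: [any CrowdSource]) -> String {
    // Attempt to satisfy the request directly.
    if let response = localHandler(request) {
        return response
    }
    // Failure detected: query the crowd-sourcing information sources.
    let answers = crowdSources.compactMap { $0.answer(for: request) }
    // Generate the response from at least one of the received answers.
    return answers.first ?? "No answer is available yet."
}
```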
Abstract:
The method includes automatically, without user input and without regard to whether a digital assistant application has been separately invoked by a user, determining that the electronic device is in a vehicle. In some implementations, determining that the electronic device is in a vehicle comprises detecting that the electronic device is in communication with the vehicle (e.g., via wired or wireless communication techniques and/or protocols). The method also includes, responsive to the determining, invoking a listening mode of a virtual assistant implemented by the electronic device. In some implementations, the method also includes limiting the ability of a user to view visual output presented by the electronic device, to provide typed input to the electronic device, and the like.
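A minimal sketch of this determination, assuming the connection checks and flags below; the abstract does not specify which signals indicate a vehicle.

```swift
import Foundation

struct DeviceState {
    var connectedToVehicleBluetooth: Bool
    var connectedToVehicleUSB: Bool
}

final class Assistant {
    private(set) var listening = false
    private(set) var visualOutputLimited = false

    func evaluate(_ state: DeviceState) {
        // The determination is made without the user separately invoking the assistant.
        let inVehicle = state.connectedToVehicleBluetooth || state.connectedToVehicleUSB
        guard inVehicle else { return }
        listening = true            // invoke the listening mode
        visualOutputLimited = true  // restrict visual output and typed input
    }
}
```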
Abstract:
Systems and processes for operating an intelligent automated assistant to provide extension of digital assistant services are provided. An example method includes, at a first electronic device having one or more processors, receiving, from a first user, a first speech input representing a user request. The method further includes obtaining an identity of the first user and, in accordance with the user identity, providing a representation of the user request to at least one of a second electronic device or a third electronic device. The method further includes receiving the response to the user request from the second electronic device or the third electronic device, based on a determination of whether the second electronic device, the third electronic device, or both are to provide the response to the first electronic device. The method further includes providing a representation of the response to the first user.
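A rough sketch of the routing step, assuming the identity check and companion-device protocol shown here; none of these names come from the abstract.

```swift
import Foundation

protocol CompanionDevice {
    func respond(to request: String) -> String?
}

func handleRequest(_ speech: String,
                   identify: (String) -> String?,
                   second: any CompanionDevice,
                   third: any CompanionDevice) -> String? {
    // Obtain the identity of the first user from the speech input.
    guard identify(speech) != nil else { return nil }
    // Provide a representation of the request to the other devices; whichever
    // device is determined to answer supplies the response.
    for device in [second, third] {
        if let response = device.respond(to: speech) {
            return response   // relayed back to the first user
        }
    }
    return nil
}
```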
Abstract:
The intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions. The system can be implemented using any of a number of different platforms, such as the web, email, smartphone, and the like, or any combination thereof. In one embodiment, the system is based on sets of interrelated domains and tasks, and employs additional functionality powered by external services with which the system can interact.
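A minimal sketch of the domain-and-task organization, assuming the keyword-based matching and the placeholder service call below; the real system's domain matching is not described at this level of detail in the abstract.

```swift
import Foundation

// A domain bundles a task that may invoke an external service.
struct Domain {
    let name: String
    let keywords: [String]
    let performTask: (String) -> String
}

func dispatch(_ utterance: String, domains: [Domain]) -> String {
    let lowered = utterance.lowercased()
    for domain in domains where domain.keywords.contains(where: { lowered.contains($0) }) {
        return domain.performTask(utterance)
    }
    return "I'm not sure how to help with that."
}

let weather = Domain(name: "weather", keywords: ["weather", "rain"]) { _ in
    "Calling an external weather service..."   // placeholder for a service invocation
}
print(dispatch("Will it rain tomorrow?", domains: [weather]))
```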
Abstract:
Systems and processes for operating an intelligent automated assistant in a messaging environment are provided. In one example process, a graphical user interface (GUI) having a plurality of previous messages between a user of the electronic device and the digital assistant can be displayed on a display. The plurality of previous messages can be presented in a conversational view. User input can be received, and in response to receiving the user input, the user input can be displayed as a first message in the GUI. A contextual state of the electronic device corresponding to the displayed user input can be stored. The process can cause an action to be performed in accordance with a user intent derived from the user input. A response based on the action can be displayed as a second message in the GUI.
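A sketch of the conversational flow, assuming the Message and ContextualState types and the performAction closure below; the context fields are illustrative assumptions.

```swift
import Foundation

struct Message {
    enum Sender { case user, assistant }
    let sender: Sender
    let text: String
}

struct ContextualState {
    let timestamp: Date
    let location: String?
}

final class AssistantConversation {
    private(set) var messages: [Message] = []
    private var contexts: [Int: ContextualState] = [:]   // message index -> stored state

    func handle(userInput: String,
                currentState: ContextualState,
                performAction: (String) -> String) {
        // Display the user input as the first message and store its context.
        messages.append(Message(sender: .user, text: userInput))
        contexts[messages.count - 1] = currentState
        // Perform an action for the derived intent and display the response
        // as a second message in the conversational view.
        let response = performAction(userInput)
        messages.append(Message(sender: .assistant, text: response))
    }
}
```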