Abstract:
Systems and processes for selectively processing and responding to a spoken user input are provided. In one example, audio input containing a spoken user input can be received at a user device. The spoken user input can be identified from the audio input by identifying a start-point and an end-point of the spoken user input. It can be determined whether the spoken user input was intended for a virtual assistant based on contextual information. The determination can be made using a rule-based system or a probabilistic system. If it is determined that the spoken user input was intended for the virtual assistant, the spoken user input can be processed and an appropriate response can be generated. If it is instead determined that the spoken user input was not intended for the virtual assistant, the spoken user input can be ignored and/or no response can be generated.
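The rule-based determination described above might be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the function name, the contextual keys (`trigger_phrase_detected`, `seconds_since_assistant_prompt`, `user_facing_device`, `device_raised`), and the specific rules are all hypothetical examples of contextual signals.

```python
def intended_for_assistant(utterance: str, context: dict) -> bool:
    """Hypothetical rule-based check of whether a spoken input was
    directed at the virtual assistant, based on contextual information."""
    # Rule 1: the assistant was explicitly invoked by a trigger phrase.
    if context.get("trigger_phrase_detected"):
        return True
    # Rule 2: the input immediately follows a prompt from the assistant.
    if context.get("seconds_since_assistant_prompt", float("inf")) < 5.0:
        return True
    # Rule 3: the user was facing and holding the device when speaking.
    if context.get("user_facing_device") and context.get("device_raised"):
        return True
    # No rule matched: ignore the input / generate no response.
    return False
```

A probabilistic variant would instead weight such signals (for example, via a trained classifier) and compare the resulting likelihood against a threshold rather than applying hard rules.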
Abstract:
Systems and processes for animating an avatar are provided. An example process of animating an avatar includes at an electronic device having one or more processors and memory, receiving text, determining an emotional state, and generating, using a neural network, a speech data set representing the received text and a set of parameters representing one or more movements of an avatar based on the received text and the determined emotional state.
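The avatar-animation process above maps received text plus a determined emotional state to a speech data set and a set of movement parameters. The sketch below is a hypothetical stand-in for the neural network described in the abstract: the type names, sample rates, and intensity table are assumptions for illustration only, and a real system would use a trained speech-synthesis and animation model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AvatarOutput:
    speech_samples: List[float]   # speech data set representing the text
    movement_params: List[float]  # e.g. per-frame avatar movement weights

def animate_avatar(text: str, emotional_state: str) -> AvatarOutput:
    """Placeholder for the neural network: generates a speech data set
    and avatar movement parameters from text and an emotional state."""
    # Hypothetical mapping from emotional state to movement intensity.
    intensity = {"neutral": 0.2, "happy": 0.8, "sad": 0.4}.get(emotional_state, 0.2)
    n = len(text)
    speech = [0.0] * (n * 160)          # placeholder audio samples
    movements = [intensity] * (n * 4)   # placeholder movement parameters
    return AvatarOutput(speech, movements)
```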
Abstract:
The present disclosure generally relates to using voice interaction to access call functionality of a companion device. In an example process, a user utterance is received. Based on the user utterance and contextual information, the process causes a server to determine a user intent corresponding to the user utterance. The contextual information is based on a signal received from the companion device. In accordance with the user intent corresponding to an actionable intent of answering an incoming call at the companion device, a command is received. Based on the command, instructions are provided to the companion device, which cause the companion device to answer the incoming call and provide audio data of the answered incoming call. Audio is outputted according to the audio data of the answered incoming call.
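The call-answering flow above can be sketched as a single control function. This is an illustrative sketch only: the `server` and `companion` interfaces (`determine_intent`, `get_command`, `answer_call`), the intent name `"answer_call"`, and the `incoming_call` signal key are hypothetical names, not the actual protocol.

```python
def handle_incoming_call_utterance(utterance, companion_signal, server, companion):
    """Hypothetical flow: a server resolves the user intent from the
    utterance and contextual information, then the companion device is
    instructed to answer the call and relay its audio for output."""
    # Contextual information is derived from a signal from the companion device.
    context = {"incoming_call": companion_signal.get("incoming_call", False)}
    intent = server.determine_intent(utterance, context)
    if intent == "answer_call" and context["incoming_call"]:
        # Receive a command, then instruct the companion device to answer.
        command = server.get_command(intent)
        audio_data = companion.answer_call(command)
        return audio_data  # the receiving device outputs this audio
    return None  # no actionable answer-call intent
```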