Abstract:
Techniques are described for shared audio functionality between multiple computing devices, based on grouping the multiple computing devices into a device set. The devices may provide audio output, audio input, or both audio output and input. The devices may discover each other via transmitted radio signals, and the devices may be organized into one or more device sets based on location, supported functions, or other criteria. The shared audio functionality may enable a voice command received at one device in the device set to be employed for controlling audio output or other operations of other device(s) in the device set. Shared audio functionality between devices in a device set may also enable synchronized audio output using multiple devices in the device set.
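
For illustration only (this is not the implementation described in the abstract), the grouping and command-routing idea might be sketched as follows in Python; the Device and DeviceSet names and the capability labels are hypothetical.

    # Hypothetical sketch: a device set that routes a voice command heard
    # at one member device to the output-capable devices in the same set.
    from dataclasses import dataclass, field

    @dataclass
    class Device:
        name: str
        capabilities: set   # e.g. {"audio_in"}, {"audio_out"}, or both
        location: str

    @dataclass
    class DeviceSet:
        devices: list = field(default_factory=list)

        def add(self, device: Device) -> None:
            self.devices.append(device)

        def handle_voice_command(self, source: Device, command: str) -> None:
            # A command received at one device controls every
            # output-capable device in the set.
            for target in self.devices:
                if "audio_out" in target.capabilities:
                    print(f"{source.name} -> {target.name}: {command}")

    kitchen = DeviceSet()
    kitchen.add(Device("puck", {"audio_in"}, "kitchen"))
    kitchen.add(Device("speaker", {"audio_out"}, "kitchen"))
    kitchen.handle_voice_command(kitchen.devices[0], "play jazz")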
Abstract:
Voice-controlled devices are described that include one or more speakers for outputting audio. In some instances, the device includes at least one speaker within a cylindrical housing, with the speaker aimed or pointed away from a microphone coupled to the housing. For instance, if the microphone resides at or near the top of the cylindrical housing, then the speaker may point downwards along the longitudinal axis of the housing and away from the microphone. By pointing the speaker away from the microphone, the microphone will receive less sound from the speaker than if the speaker were pointed toward the microphone. Because the voice-controlled device may perform speech recognition on audio signals generated by the microphone, less sound from the speaker represented in the audio signal may result in more accurate speech recognition and/or a lesser need to perform acoustic echo cancellation (AEC) on the generated audio signals.
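
The closing point, that less speaker sound reaching the microphone means less need for AEC, can be made concrete with a minimal adaptive echo canceller. The NLMS sketch below is a generic illustration, not the device's actual processing; the tap count and step size are arbitrary assumptions.

    # Generic NLMS echo canceller (illustrative only): subtract an adaptive
    # estimate of the speaker ("far end") signal from the microphone signal.
    import numpy as np

    def nlms_aec(mic, far_end, taps=64, mu=0.5, eps=1e-8):
        mic = np.asarray(mic, dtype=float)
        far_end = np.asarray(far_end, dtype=float)
        w = np.zeros(taps)                      # adaptive echo-path estimate
        out = np.zeros_like(mic)
        padded = np.concatenate([np.zeros(taps - 1), far_end])
        for n in range(len(mic)):
            x = padded[n:n + taps][::-1]        # most recent far-end samples
            e = mic[n] - w @ x                  # residual after echo removal
            w += mu * e * x / (x @ x + eps)     # normalized LMS update
            out[n] = e
        return out

    # Example: remove a synthetic delayed echo of the far-end signal.
    far = np.random.randn(8000)
    mic = 0.3 * np.roll(far, 5) + 0.01 * np.random.randn(8000)
    cleaned = nlms_aec(mic, far)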
Abstract:
Devices are described that include light assemblies for providing visual feedback to users who operate the devices. In some instances, the devices comprise voice-controlled devices and, therefore, include one or more microphones for receiving audible commands from the users. After receiving a command, for instance, one such voice-controlled device may cause a corresponding light assembly of the device to illuminate in some predefined manner. This illumination may indicate to the user that the device has received the command. In other instances, the devices may illuminate the light assembly for an array of other purposes. For instance, one such device may illuminate the corresponding light assembly when powering on or off, playing music, outputting information to a user (e.g., via a speaker or display), or the like.
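
As a hypothetical sketch of the event-to-illumination mapping described above (the event names and the LightRing interface are invented for illustration, not taken from the patent):

    # Hypothetical event-to-pattern mapping for a light assembly.
    LIGHT_PATTERNS = {
        "wake_word_heard": "spin",
        "command_received": "solid",
        "powering_on": "pulse",
        "playing_music": "slow_fade",
    }

    class LightRing:
        def show(self, pattern: str) -> None:
            print(f"light assembly -> {pattern}")

    def on_event(ring: LightRing, event: str) -> None:
        pattern = LIGHT_PATTERNS.get(event)
        if pattern is not None:
            ring.show(pattern)

    on_event(LightRing(), "command_received")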
Abstract:
A distributed voice-controlled system has a primary assistant and at least one secondary assistant. The primary assistant has a housing that holds one or more microphones, one or more speakers, and various computing components. The secondary assistant is similar in structure but is devoid of speakers. The voice-controlled assistants perform transactions and other functions primarily based on verbal interactions with a user. The assistants within the system are coordinated and synchronized to perform acoustic echo cancellation, selection of the best audio input from among the assistants, and distributed processing.
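
One simple way to picture "selection of the best audio input from among the assistants" is an energy-based signal-to-noise comparison; the sketch below is a stand-in for whatever selection criterion the system actually uses, and all names are assumptions.

    # Illustrative best-input selection: choose the assistant whose capture
    # has the highest estimated signal power relative to a fixed noise floor.
    import numpy as np

    def snr_db(frame, noise_floor=1e-4):
        power = np.mean(np.asarray(frame, dtype=float) ** 2)
        return 10 * np.log10(power / noise_floor + 1e-12)

    def select_best_input(captures):
        """captures: dict mapping assistant name -> 1-D audio frame."""
        return max(captures, key=lambda name: snr_db(captures[name]))

    captures = {
        "primary": np.random.randn(1600) * 0.2,
        "secondary": np.random.randn(1600) * 0.05,
    }
    print(select_best_input(captures))   # likely "primary"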
Abstract:
Techniques are described for shared audio functionality between multiple computing devices, based on identifying computing devices in a device set. The devices may provide audio output, audio input, or both audio output and input. The devices may be organized into one or more device sets based on location, supported functions, or other criteria. The shared audio functionality may enable a voice command received at one device to be employed for controlling audio output or other operations of other device(s) in the device set. Shared audio functionality between devices may also enable synchronized audio output using multiple devices.
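
Synchronized output across a device set can be pictured as scheduling playback for a shared future start time, assuming the devices' clocks are already synchronized. This sketch is illustrative only and is not the protocol described in the abstract.

    # Illustrative synchronized playback: every device in the set schedules
    # output for one shared wall-clock start time (clock sync is assumed).
    import time

    class Device:
        def __init__(self, name: str):
            self.name = name

        def schedule_playback(self, track: str, start_at: float) -> None:
            delay = max(0.0, start_at - time.time())
            print(f"{self.name}: starting '{track}' in {delay:.3f} s")

    def play_synchronized(devices, track, lead_time=0.5):
        start_at = time.time() + lead_time   # common start instant
        for device in devices:
            device.schedule_playback(track, start_at)

    play_synchronized([Device("kitchen"), Device("living room")], "jazz")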
Abstract:
A portable audio input/output device may include an assembly enclosure that contains electrical and mechanical components of the device. A substantially cylindrical frame may encircle the assembly enclosure and may be surrounded by a tube of seamless material. A top end of the tube of seamless material may fold over a top end of the substantially cylindrical frame, and a bottom end of the tube of seamless material may fold over a bottom end of the substantially cylindrical frame. A cover assembly may couple to a top end of the assembly enclosure and secure the top end of the seamless fabric. A charging foot may be coupled to a bottom end of the assembly enclosure and secure the bottom end of the seamless fabric.
Abstract:
A voice interaction architecture has a hands-free, electronic voice-controlled assistant that permits users to verbally request information from cloud services. Since the assistant relies primarily, if not exclusively, on voice interactions, configuring the assistant for the first time may pose a challenge, particularly to a novice user who is unfamiliar with network settings (such as Wi-Fi access keys). The architecture supports several approaches to configuring the voice-controlled assistant that may be accomplished with little or no user input, thereby promoting a positive out-of-box experience for the user. More particularly, these approaches involve the use of audible or optical signals to configure the voice-controlled assistant.
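
As a rough illustration of carrying network settings in an audible signal, the sketch below encodes a configuration string as a two-tone (FSK-style) waveform; the frequencies, bit rate, and string format are arbitrary assumptions rather than the architecture's actual encoding.

    # Illustrative two-tone (FSK-style) encoding of a configuration string
    # into an audio waveform; frequencies and bit rate are arbitrary.
    import numpy as np

    RATE = 16000          # samples per second
    BIT_DUR = 0.05        # seconds per bit
    F0, F1 = 1000, 2000   # tone frequencies for bits 0 and 1

    def encode_config(text: str) -> np.ndarray:
        bits = [int(b) for byte in text.encode() for b in f"{byte:08b}"]
        t = np.linspace(0, BIT_DUR, int(RATE * BIT_DUR), endpoint=False)
        tones = [np.sin(2 * np.pi * (F1 if bit else F0) * t) for bit in bits]
        return np.concatenate(tones)

    waveform = encode_config("ssid=HomeNet;key=example")
    print(len(waveform) / RATE, "seconds of audio")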
Abstract:
Techniques are described for grouping multiple computing devices into a device set to enable shared audio functionality, or other types of shared functionality, between the devices in the device set. The devices may provide audio output, audio input, or both audio output and input. The devices may discover each other via transmitted radio signals, and the devices may be organized into one or more device sets based on location, supported functions, or other criteria. A voice command received at one device in the device set may be employed to control operations of other device(s) in the device set. Shared audio functionality between devices in a device set may also enable synchronized audio output using multiple devices in the device set.
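
Discovery and grouping might be pictured as devices broadcasting small advertisements that a coordinator buckets into device sets by location; the advertisement fields below are hypothetical, not a real radio payload format.

    # Illustrative discovery-and-grouping step: advertisements broadcast by
    # nearby devices are bucketed into device sets by their location field.
    from collections import defaultdict

    advertisements = [
        {"id": "dev-1", "location": "kitchen", "functions": ["audio_in"]},
        {"id": "dev-2", "location": "kitchen", "functions": ["audio_out"]},
        {"id": "dev-3", "location": "bedroom", "functions": ["audio_in", "audio_out"]},
    ]

    def group_into_device_sets(ads, key="location"):
        sets = defaultdict(list)
        for ad in ads:
            sets[ad[key]].append(ad["id"])
        return dict(sets)

    print(group_into_device_sets(advertisements))
    # {'kitchen': ['dev-1', 'dev-2'], 'bedroom': ['dev-3']}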