Abstract:
A device for communication includes one or more processors configured to receive, during an online meeting, a speech audio stream representing speech of a first user. The one or more processors are also configured to receive a text stream representing the speech of the first user. The one or more processors are further configured to selectively generate an output based on the text stream in response to an interruption in the speech audio stream.
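A minimal sketch, assuming a simple polling receiver and hypothetical names (MeetingReceiver, AUDIO_GAP_THRESHOLD_S, render_text), of how a client might fall back to the parallel text stream when the speech audio stream is interrupted; it illustrates the idea rather than the patented implementation.

# Sketch: render buffered text when the speech audio stream stalls.
import time

AUDIO_GAP_THRESHOLD_S = 0.5  # assumed gap length that counts as an interruption

class MeetingReceiver:
    def __init__(self):
        self.last_audio_time = time.monotonic()
        self.pending_text = []

    def on_audio_frame(self, frame):
        # Audio is flowing normally; note the arrival time and play the frame.
        self.last_audio_time = time.monotonic()
        self.play_audio(frame)

    def on_text_segment(self, text):
        # Buffer the parallel text stream (e.g., a live transcription of the first user).
        self.pending_text.append(text)

    def poll(self):
        # Called periodically; if audio has stalled, generate output from the text stream.
        if time.monotonic() - self.last_audio_time > AUDIO_GAP_THRESHOLD_S:
            while self.pending_text:
                self.render_text(self.pending_text.pop(0))

    def play_audio(self, frame):
        pass  # hand the frame to the audio output path

    def render_text(self, text):
        print(f"[captions] {text}")  # could instead drive text-to-speech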
Abstract:
An example device includes a memory device and a processor coupled to the memory device. The memory device is configured to store audio spatial metadata associated with a soundfield, as well as video data. The processor is configured to identify one or more foreground audio objects of the soundfield using the audio spatial metadata stored in the memory device, and to select, based on the identified one or more foreground audio objects, one or more viewports associated with the video data. Display hardware coupled to the processor and the memory device is configured to output a portion of the video data that is associated with the one or more viewports selected by the processor.
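One way to illustrate the viewport selection is a sketch that treats the highest-energy audio object as the foreground object and picks the viewport closest to its direction; the data layout (azimuth/energy dictionaries) and function names are assumptions, not the device's actual metadata format.

# Sketch: choose the viewport nearest the dominant foreground audio object.
def select_viewport(audio_objects, viewports):
    """audio_objects: list of dicts with 'azimuth' (degrees) and 'energy'.
    viewports: list of dicts with 'name' and 'azimuth' (degrees)."""
    # Treat the highest-energy object as the foreground object of interest.
    foreground = max(audio_objects, key=lambda o: o["energy"])

    def angular_distance(a, b):
        # Smallest absolute angle between two azimuths, in degrees.
        return abs((a - b + 180.0) % 360.0 - 180.0)

    # Select the viewport with the smallest angular distance to that object.
    return min(viewports, key=lambda v: angular_distance(v["azimuth"], foreground["azimuth"]))

objects = [{"azimuth": 95.0, "energy": 0.8}, {"azimuth": -30.0, "energy": 0.2}]
views = [{"name": "front", "azimuth": 0.0}, {"name": "right", "azimuth": 90.0}]
print(select_viewport(objects, views)["name"])  # -> "right"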
Abstract:
A device includes one or more processors configured to, during a call, receive a sequence of audio frames from a first device. The one or more processors are configured to, in response to determining that no audio frame of the sequence has been received for a threshold duration since a last received audio frame of the sequence, initiate transmission of a frame loss indication to the first device. The one or more processors are also configured to, responsive to the frame loss indication, receive a set of audio frames of the sequence and an indication of a second playback speed from the first device. The one or more processors are configured to initiate playback, via a speaker, of the set of audio frames based on the second playback speed. The second playback speed is greater than a first playback speed of a first set of audio frames of the sequence.
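The receiver-side behavior could look roughly like the following sketch, where FRAME_LOSS_THRESHOLD_S, CATCH_UP_SPEED, and the message format are illustrative assumptions; a real implementation would apply time-scale modification (e.g., WSOLA) rather than the placeholder play method.

# Sketch: signal frame loss after a silent interval, then play recovered frames faster.
import time

FRAME_LOSS_THRESHOLD_S = 0.2  # assumed silence interval that triggers the indication
NORMAL_SPEED = 1.0            # first playback speed
CATCH_UP_SPEED = 1.5          # second playback speed, greater than the first

class CallReceiver:
    def __init__(self, send_to_first_device):
        self.send = send_to_first_device
        self.last_frame_time = time.monotonic()
        self.playback_speed = NORMAL_SPEED

    def on_audio_frame(self, frame, speed_indication=None):
        self.last_frame_time = time.monotonic()
        if speed_indication is not None:
            # The first device indicated how fast to play the recovered frames.
            self.playback_speed = speed_indication
        self.play(frame, self.playback_speed)

    def check_for_loss(self):
        # If no frame has arrived within the threshold, tell the first device.
        if time.monotonic() - self.last_frame_time > FRAME_LOSS_THRESHOLD_S:
            self.send({"type": "frame_loss_indication"})

    def play(self, frame, speed):
        pass  # time-scale modification and speaker output would run here

receiver = CallReceiver(send_to_first_device=lambda msg: print("sent", msg))
receiver.check_for_loss()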
Abstract:
A device includes one or more processors configured to obtain audio signals representing sound captured by at least three microphones and determine spatial audio data based on the audio signals. The one or more processors are further configured to determine a metric indicative of wind noise in the audio signals. The metric is based on a comparison of a first value and a second value. The first value corresponds to an aggregate signal based on the spatial audio data, and the second value corresponds to a differential signal based on the spatial audio data.
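A rough sketch of such a metric, assuming a first-order ambisonic representation in which the W channel serves as the aggregate signal and the X, Y, Z channels serve as the differential signals; the exact signals and comparison used by the device may differ.

# Sketch: wind-noise metric from aggregate vs. differential signal energy.
import numpy as np

def wind_noise_metric(spatial_audio):
    """spatial_audio: array of shape (4, n_samples) holding W, X, Y, Z channels."""
    w, x, y, z = spatial_audio
    first_value = np.mean(w ** 2)                      # aggregate (omni-like) energy
    second_value = np.mean(x ** 2 + y ** 2 + z ** 2)   # differential (gradient-like) energy
    # Wind noise is largely uncorrelated across microphones, so it inflates the
    # differential energy relative to the aggregate energy; a larger ratio
    # therefore suggests more wind noise.
    return second_value / (first_value + 1e-12)

rng = np.random.default_rng(0)
example = rng.standard_normal((4, 4800)) * np.array([[1.0], [0.3], [0.3], [0.3]])
print(round(wind_noise_metric(example), 2))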
Abstract:
In a particular aspect, a multimedia device includes one or more sensors configured to generate first sensor data and second sensor data. The first sensor data is indicative of a first position at a first time and the second sensor data is indicative of a second position at a second time. The multimedia device further includes a processor coupled to the one or more sensors. The processor is configured to generate a first version of a spatialized audio signal, determine a cumulative value based on an offset, the first position, and the second position, and generate a second version of the spatialized audio signal based on the cumulative value.
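As one loose interpretation, the positions could be head orientations and the cumulative value a running rotation that folds in a calibration offset; the sketch below uses a toy stereo panner in place of a real spatial renderer, and all names and formulas are assumptions rather than the device's method.

# Sketch: accumulate orientation change plus an offset, then re-render.
import numpy as np

def cumulative_rotation(offset_deg, first_position_deg, second_position_deg):
    # Cumulative value combines the offset with the change between the two positions.
    return offset_deg + (second_position_deg - first_position_deg)

def render_spatialized(mono, azimuth_deg):
    # Toy stereo panner standing in for a real binaural/spatial renderer.
    pan = np.sin(np.radians(azimuth_deg))
    left = mono * (1.0 - pan) * 0.5
    right = mono * (1.0 + pan) * 0.5
    return np.stack([left, right])

mono = np.ones(480)
first_version = render_spatialized(mono, azimuth_deg=30.0)
cumulative = cumulative_rotation(offset_deg=5.0, first_position_deg=30.0, second_position_deg=45.0)
second_version = render_spatialized(mono, azimuth_deg=30.0 + cumulative)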
Abstract:
Methods, systems, computer-readable media, and apparatuses for manipulating a soundfield are presented. Some configurations include receiving a bitstream that comprises metadata and a soundfield description; parsing the metadata to obtain an effect identifier and at least one effect parameter value; and applying, to the soundfield description, an effect identified by the effect identifier. The applying may include using the at least one effect parameter value to apply the identified effect to the soundfield description.
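A minimal sketch of the parse-and-apply flow, using JSON as a stand-in for real bitstream parsing and a small table mapping effect identifiers to handler functions; the identifiers, parameters, and handlers shown are hypothetical.

# Sketch: parse effect metadata and apply the identified effect to the soundfield.
import json
import numpy as np

def apply_gain(soundfield, value):
    return soundfield * value

def apply_rotation(soundfield, value):
    return soundfield  # placeholder for a soundfield rotation by `value` degrees

EFFECTS = {1: apply_gain, 2: apply_rotation}  # effect identifier -> handler

def process_bitstream(payload, soundfield):
    metadata = json.loads(payload)      # stand-in for parsing the bitstream metadata
    effect_id = metadata["effect_id"]
    params = metadata["params"]         # at least one effect parameter value
    return EFFECTS[effect_id](soundfield, *params)

soundfield = np.zeros((4, 480))         # e.g., a first-order ambisonic description
out = process_bitstream('{"effect_id": 1, "params": [0.5]}', soundfield)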
Abstract:
The techniques disclosed herein include a first device that includes one or more processors configured to detect a selection of at least one target object external to the first device and to initiate a channel of communication between the first device and a second device associated with the at least one target object external to the first device. The one or more processors may be configured to receive audio packets from the second device in response to the selection of the at least one target object external to the first device, and to decode the audio packets received from the second device to generate an audio signal. The one or more processors may be configured to output the audio signal based on the selection of the at least one target object external to the first device. The first device includes a memory, coupled to the one or more processors, that is configured to store the audio packets.
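The flow on the first device might be sketched as follows, with a hypothetical directory lookup standing in for however the second device associated with the selected target object is discovered, and trivial decoder/speaker callbacks in place of real codec and audio-output paths.

# Sketch: select a target object, open a channel, then decode and output its audio.
class FirstDevice:
    def __init__(self, decoder, speaker):
        self.decoder = decoder          # e.g., a speech-codec decode callback
        self.speaker = speaker          # audio output callback
        self.packet_memory = []         # memory storing the received audio packets
        self.channel = None

    def select_target(self, target_object, directory):
        # Look up the second device associated with the selected target object
        # and initiate a channel of communication to it.
        second_device = directory[target_object]
        self.channel = second_device.open_channel(self)
        return self.channel

    def on_audio_packet(self, packet):
        self.packet_memory.append(packet)
        audio = self.decoder(packet)    # decode the packet into an audio signal
        self.speaker(audio)             # output the signal for the selected target

class SecondDevice:
    def open_channel(self, peer):
        return ("channel", self, peer)

directory = {"vehicle_123": SecondDevice()}   # hypothetical target-object registry
device = FirstDevice(decoder=lambda p: p, speaker=lambda a: None)
device.select_target("vehicle_123", directory)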