SUPPRESSING OR REDUCING EFFECTS OF WIND TURBULENCE
    1. Invention application (status: pending, published)

    Publication No.: WO2017209838A1

    Publication Date: 2017-12-07

    Application No.: PCT/US2017/026526

    Filing Date: 2017-04-07

    Abstract: A method of operation of a device includes receiving an input signal at the device. The input signal is generated using at least one microphone. The input signal includes a first signal component having a first amount of wind turbulence noise and a second signal component having a second amount of wind turbulence noise that is greater than the first amount of wind turbulence noise. The method further includes generating, based on the input signal, an output signal at the device. The output signal includes the first signal component and a third signal component that replaces the second signal component. A first frequency response of the input signal corresponds to a second frequency response of the output signal.

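The substitution scheme in the abstract can be sketched as frame-wise replacement, with a crude low-frequency energy ratio standing in for the wind-turbulence detector. The function name, the 8-bin cutoff, and the 0.5 threshold are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def suppress_wind_noise(signal, frame_len=256, wind_ratio=0.5):
    """Replace wind-dominated frames with a gain-matched copy of the
    cleanest frame. The low-frequency energy ratio is a crude stand-in
    for the wind-turbulence detector; thresholds are illustrative."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]

    def low_ratio(frame):
        # wind turbulence concentrates energy in the lowest spectral bins
        spec = np.abs(np.fft.rfft(frame))
        return spec[:8].sum() / (spec.sum() + 1e-12)

    ratios = [low_ratio(f) for f in frames]
    clean = frames[int(np.argmin(ratios))]        # least wind-corrupted frame
    out = []
    for f, r in zip(frames, ratios):
        if r > wind_ratio:                        # wind-dominated: substitute
            gain = np.sqrt((f ** 2).mean() / ((clean ** 2).mean() + 1e-12))
            out.append(clean * gain)              # match the frame's energy
        else:
            out.append(f)
    return np.concatenate(out)
```

Gain matching keeps the substituted frame's energy close to the original frame's, which is one simple way to keep the output's frequency response corresponding to the input's, as the abstract requires.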

COLLABORATIVE AUDIO PROCESSING
    2. Invention application (status: pending, published)

    Publication No.: WO2017048375A1

    Publication Date: 2017-03-23

    Application No.: PCT/US2016/044558

    Filing Date: 2016-07-28

    Abstract: A method of performing noise reduction includes capturing a first audio signal at a first microphone of a first device. The method also includes receiving, at the first device, audio data representative of a second audio signal from a second device. The second audio signal is captured by a second microphone of the second device. The method further includes performing noise reduction on the first audio signal based at least in part on the audio data representative of the second audio signal.

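One common way to use a second device's capture as a noise reference is spectral subtraction. The sketch below assumes the second microphone picks up mostly the noise field, a simplification of the collaborative scheme described:

```python
import numpy as np

def collaborative_denoise(primary, reference, alpha=1.0):
    """Spectral subtraction using a second device's microphone as the
    noise reference. `alpha` is an over-subtraction factor; treating the
    reference as noise-only is an assumption for this sketch."""
    P = np.fft.rfft(primary)
    R = np.fft.rfft(reference)
    mag = np.maximum(np.abs(P) - alpha * np.abs(R), 0.0)  # subtract noise magnitude
    return np.fft.irfft(mag * np.exp(1j * np.angle(P)), n=len(primary))
```

Keeping the primary signal's phase and modifying only the magnitude is the standard spectral-subtraction design choice.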

VIRTUAL, AUGMENTED, AND MIXED REALITY
    3. Invention application (status: pending, published)

    Publication No.: WO2018013237A1

    Publication Date: 2018-01-18

    Application No.: PCT/US2017/034522

    Filing Date: 2017-05-25

    Abstract: A method for outputting virtual sound includes detecting an audio signal in an environment at one or more microphones. The method also includes determining, at a processor, a location of a sound source of the audio signal and estimating one or more acoustical characteristics of the environment based on the audio signal. The method further includes inserting a virtual sound into the environment based on the one or more acoustical characteristics. The virtual sound has one or more audio properties of a sound generated from the location of the sound source.

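A minimal stand-in for giving the virtual sound "audio properties of a sound generated from the location of the sound source" is to impose the interaural time and level differences of that location. The stereo simplification and all constants are assumptions, not the disclosed acoustic-property matching:

```python
import numpy as np

def spatialize_virtual_sound(virtual, azimuth_deg, fs=48000,
                             ear_dist=0.18, c=343.0):
    """Give a mono virtual sound the interaural time difference (ITD) and
    a crude level difference (ILD) of a source at the given azimuth.
    A two-channel sketch; the 0.8 far-ear gain is illustrative."""
    itd = ear_dist * np.sin(np.deg2rad(azimuth_deg)) / c   # interaural delay (s)
    delay = int(round(abs(itd) * fs))                      # delay in samples
    near = np.asarray(virtual, dtype=float)
    far = np.concatenate([np.zeros(delay), near])[:len(near)] * 0.8  # ITD + ILD
    left, right = (near, far) if itd >= 0 else (far, near)
    return np.stack([left, right])
```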

DEVICE FOR GENERATING AUDIO OUTPUT
    6. Invention application (status: pending, published)

    Publication No.: WO2017200646A1

    Publication Date: 2017-11-23

    Application No.: PCT/US2017/025051

    Filing Date: 2017-03-30

    Abstract: A headset device (100) includes a first earpiece (108) configured to receive a reference sound and to generate a first reference audio signal (111) based on the reference sound. The headset device (100) further includes a second earpiece (118) configured to receive the reference sound and to generate a second reference audio signal (121) based on the reference sound. The headset device (100) further includes a controller (102) coupled to the first earpiece (108) and to the second earpiece (118). The controller (102) is configured to generate a first signal (107) and a second signal (117) based on a phase relationship between the first reference audio signal (111) and the second reference audio signal (121). The controller (102) is further configured to output the first signal (107) to the first earpiece (108) and output the second signal (117) to the second earpiece (118).

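One plausible reading of the phase-relationship processing is inter-ear delay estimation by cross-correlation followed by per-ear anti-phase (anti-noise) output. The anti-noise interpretation is an assumption; the abstract does not fix how the phase relationship is used:

```python
import numpy as np

def generate_earpiece_signals(ref_left, ref_right):
    """Estimate the lag of the left reference relative to the right by
    cross-correlation, then emit phase-inverted signals aligned to each
    ear. A sketch of one possible use of the phase relationship."""
    corr = np.correlate(ref_left, ref_right, mode="full")
    lag = int(np.argmax(corr)) - (len(ref_right) - 1)  # samples left lags right
    out_left = -np.asarray(ref_left, dtype=float)      # anti-phase at left ear
    out_right = -np.roll(ref_left, -lag)               # re-align, then invert
    return out_left, out_right, lag
```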

CLOUD-BASED PROCESSING USING LOCAL DEVICE PROVIDED SENSOR DATA AND LABELS
    7. Invention application (status: pending, published)

    Publication No.: WO2017160453A1

    Publication Date: 2017-09-21

    Application No.: PCT/US2017/017991

    Filing Date: 2017-02-15

    CPC classification number: G06N3/08 G06N3/04 G06N3/0454

    Abstract: A method of training a device specific cloud-based audio processor includes receiving sensor data captured from multiple sensors at a local device. The method also includes receiving spatial information labels computed on the local device using local configuration information. The spatial information labels are associated with the captured sensor data. Lower layers of a first neural network are trained based on the spatial information labels and sensor data. The trained lower layers are incorporated into a second, larger neural network for audio classification. The second, larger neural network may be retrained using the trained lower layers of the first neural network.

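The layer-transfer mechanics can be sketched as follows. Actual training of the lower layers against the device-provided spatial-information labels is elided, and all shapes, layer counts, and the four-class head are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Lower layers of the first network: in the method these would be trained
# on the spatial-information labels computed on the local device.
W1 = rng.normal(scale=0.1, size=(8, 16))    # sensor features -> hidden
W2 = rng.normal(scale=0.1, size=(16, 16))   # hidden -> hidden

def lower_layers(x):
    return relu(relu(x @ W1) @ W2)

# Second, larger network for audio classification: it incorporates the
# trained lower layers unchanged and adds a new classification head,
# which is what gets (re)trained on the cloud side.
W_head = rng.normal(scale=0.1, size=(16, 4))  # hidden -> 4 audio classes

def audio_classifier(x):
    return lower_layers(x) @ W_head   # logits for the audio classes
```

Freezing `W1`/`W2` while retraining `W_head` (or retraining the whole stack from this initialization) corresponds to the retraining step the abstract mentions.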

METHOD, SYSTEM AND ARTICLE OF MANUFACTURE FOR PROCESSING SPATIAL AUDIO
    8. Invention application (status: pending, published)

    Publication No.: WO2016109065A1

    Publication Date: 2016-07-07

    Application No.: PCT/US2015/062642

    Filing Date: 2015-11-25

    Abstract: Techniques for processing directionally-encoded audio to account for spatial characteristics of a listener playback environment are disclosed. The directionally-encoded audio data includes spatial information indicative of one or more directions of sound sources in an audio scene. The audio data is modified based on input data identifying the spatial characteristics of the playback environment. The spatial characteristics may correspond to actual loudspeaker locations in the playback environment. The directionally-encoded audio may also be processed to permit focusing/defocusing on sound sources or particular directions in an audio scene. The disclosed techniques may allow a recorded audio scene to be more accurately reproduced at playback time, regardless of the output loudspeaker setup. Another advantage is that a user may dynamically configure audio data so that it better conforms to the user's particular loudspeaker layouts and/or the user's desired focus on particular subjects or areas in an audio scene.

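A simplified decoder for directionally encoded sources over an arbitrary loudspeaker layout, with an optional focus/defocus gain, might look like the sketch below. The cosine panning law and the 2.0/0.5 focus gains are stand-ins, not the disclosed technique:

```python
import numpy as np

def render_scene(sources, speaker_azimuths_deg, focus_deg=None, focus_width=60.0):
    """Render directionally encoded sources, given as (signal, azimuth)
    pairs, to loudspeakers at arbitrary azimuths by gain panning,
    optionally emphasizing a focus direction."""
    spk = np.deg2rad(np.asarray(speaker_azimuths_deg, dtype=float))
    n = len(sources[0][0])
    out = np.zeros((len(spk), n))
    for sig, az_deg in sources:
        az = np.deg2rad(az_deg)
        g = np.maximum(np.cos(spk - az), 0.0)   # gain from angular proximity
        g = g / (g.sum() + 1e-12)               # normalize across speakers
        if focus_deg is not None:
            d = abs((az_deg - focus_deg + 180.0) % 360.0 - 180.0)
            g = g * (2.0 if d < focus_width / 2 else 0.5)  # focus/defocus
        out += g[:, None] * np.asarray(sig, dtype=float)
    return out
```

Because the gains are computed against the actual speaker azimuths, the same encoded scene adapts to whatever layout the listener reports, which is the configurability the abstract highlights.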

COLLABORATIVE AUDIO PROCESSING
    10. Invention application (status: pending, published)

    Publication No.: WO2017048376A1

    Publication Date: 2017-03-23

    Application No.: PCT/US2016/044563

    Filing Date: 2016-07-28

    Abstract: A method of generating audio output includes displaying a graphical user interface (GUI) 800 at a user device 810. The GUI represents an area having multiple regions 801-809, and multiple audio capture devices 810, 820, 830 are located in the area. The method also includes receiving audio data from the multiple audio capture devices. The method further includes receiving an input indicating a selected region of the multiple regions. The method also includes generating, at the user device, audio output based on audio data from a subset of the multiple audio capture devices. Each audio capture device in the subset is located in the selected region.

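The region-based selection can be sketched with plain data structures. The flat mapping of device id to (region, samples) is an assumption about how the GUI state might be represented:

```python
def mix_selected_region(devices, selected_region):
    """Mix audio only from capture devices located in the GUI-selected
    region. `devices` maps device id -> (region, samples); averaging is
    an illustrative stand-in for the output generation step."""
    subset = [samples for region, samples in devices.values()
              if region == selected_region]
    if not subset:
        return []
    n = min(len(s) for s in subset)           # align to the shortest capture
    return [sum(s[i] for s in subset) / len(subset) for i in range(n)]
```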
