DEEP LEARNING DRIVEN MULTI-CHANNEL FILTERING FOR SPEECH ENHANCEMENT

    Publication No.: US20190172476A1

    Publication Date: 2019-06-06

    Application No.: US15830955

    Filing Date: 2017-12-04

    Applicant: Apple Inc.

    Abstract: A number of features are extracted from a current frame of a multi-channel speech pickup and from side information that is a linear echo estimate, a diffuse signal component, or a noise estimate of the multi-channel speech pickup. A deep neural network (DNN) based speech presence probability (SPP) value is produced for the current frame, in response to the extracted features being input to the DNN. The DNN-based SPP value is applied to configure a multi-channel filter whose input is the multi-channel speech pickup and whose output is a single audio signal. In one aspect, the system is designed to run online, at low enough latency for real-time applications such as voice trigger detection. Other aspects are also described and claimed.
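The abstract describes using a DNN-derived SPP value to steer a multi-channel filter. A minimal numpy sketch of one common way this can work, assuming an MVDR beamformer and using a single logistic layer (`dnn_spp` with hypothetical weights `W`, `b`) as a stand-in for the trained network; the patent does not specify this exact filter or update rule:

```python
import numpy as np

def dnn_spp(features, W, b):
    # Stand-in for the trained DNN: one logistic layer mapping the
    # extracted per-frame features to a speech presence probability.
    return 1.0 / (1.0 + np.exp(-(features @ W + b)))

def mvdr_weights(Rn, steering):
    # MVDR beamformer: w = Rn^-1 d / (d^H Rn^-1 d)
    Rinv_d = np.linalg.solve(Rn, steering)
    return Rinv_d / (steering.conj() @ Rinv_d)

def enhance_frame(x, Rn, steering, spp, alpha=0.95):
    # Recursively update the noise covariance, weighted by (1 - SPP) so
    # that speech-dominated frames do not contaminate the noise estimate.
    lam = alpha + (1.0 - alpha) * spp  # SPP=1 -> hold Rn, SPP=0 -> full update
    Rn = lam * Rn + (1.0 - lam) * np.outer(x, x.conj())
    w = mvdr_weights(Rn, steering)
    return w.conj() @ x, Rn  # single-channel output, updated covariance
```

Running this frame by frame (one multi-channel STFT bin vector `x` at a time) keeps the latency at one frame, consistent with the online, real-time use the abstract mentions.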

    Extracting Ambience From A Stereo Input
    Invention Publication

    Publication No.: US20240314509A1

    Publication Date: 2024-09-19

    Application No.: US18605701

    Filing Date: 2024-03-14

    Applicant: Apple Inc.

    CPC classification number: H04S7/30 H04S1/007 H04S2420/11

    Abstract: A sound scene is represented as first order Ambisonics (FOA) audio. A processor formats each signal of the FOA audio into a stream of audio frames, provides the formatted FOA audio to a machine learning model that reformats it into a target or desired higher order Ambisonics (HOA) format, and obtains output audio of the sound scene in the desired HOA format from the machine learning model. The output audio in the desired HOA format may then be rendered according to a playback audio format of choice. Other aspects are also described and claimed.
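The pipeline in this abstract has two steps: framing each FOA channel, then mapping the framed FOA audio through a learned model to the target HOA order. A minimal numpy sketch, assuming second-order HOA (9 channels) as the target and a per-sample linear map with random weights `W` as a hypothetical stand-in for the trained model:

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    # Format one FOA channel as a stream of overlapping audio frames.
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def upmix_model(foa_frames, W):
    # Stand-in for the trained model: a per-sample linear map from the
    # 4 FOA channels (W, X, Y, Z) to 9 second-order HOA channels.
    # foa_frames: (4, n_frames, frame_len); W: (9, 4), hypothetical weights.
    return np.einsum('oc,cnt->ont', W, foa_frames)

# Usage: 4-channel FOA input, 1 s at 16 kHz.
foa = np.random.default_rng(0).standard_normal((4, 16000))
frames = np.stack([frame_signal(ch) for ch in foa])  # (4, n_frames, 256)
W = np.random.default_rng(1).standard_normal((9, 4))  # stand-in for learned weights
hoa_frames = upmix_model(frames, W)                   # (9, n_frames, 256)
```

The `hoa_frames` output would then go to whatever HOA renderer matches the playback format of choice; a real model would of course be nonlinear and trained, not a fixed matrix.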

    End-to-end time-domain multitask learning for ML-based speech enhancement

    Publication No.: US11996114B2

    Publication Date: 2024-05-28

    Application No.: US17321411

    Filing Date: 2021-05-15

    Applicant: Apple Inc.

    CPC classification number: G10L21/0216 G06N20/00 G10L15/16 G10L2021/02166

    Abstract: Disclosed is a multi-task machine learning model, such as a time-domain deep neural network (DNN), that jointly generates an enhanced target speech signal and target audio parameters from a mixed signal of target speech and an interference signal. The DNN may encode the mixed signal, determine masks for jointly estimating the target signal and the target audio parameters based on the encoded mixed signal, apply the masks to separate the target speech from the interference signal, and decode the masked features to enhance the target speech signal and to estimate the target audio parameters. The target audio parameters may include a voice activity detection (VAD) flag for the target speech. The DNN may leverage multi-channel audio signals and multi-modal signals, such as video signals of the target speaker, to improve the robustness of the enhanced target speech signal.
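The encode / mask / decode structure with a shared encoder and two task heads can be sketched in a few lines. This is a toy numpy illustration of the multi-task layout only, with random matrices (`enc`, `dec`, `mask_head`, `vad_head`, all hypothetical) standing in for the trained time-domain DNN:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultiTaskEnhancer:
    # Sketch of the multi-task structure: a shared encoder, a mask head
    # for the target speech, and a VAD head. All weights are random
    # stand-ins, not a trained model.
    def __init__(self, frame_len=128, latent=64, seed=0):
        rng = np.random.default_rng(seed)
        self.enc = rng.standard_normal((frame_len, latent)) * 0.1
        self.dec = rng.standard_normal((latent, frame_len)) * 0.1
        self.mask_head = rng.standard_normal((latent, latent)) * 0.1
        self.vad_head = rng.standard_normal(latent) * 0.1

    def forward(self, mix_frames):
        z = relu(mix_frames @ self.enc)           # encode the mixed signal
        mask = sigmoid(z @ self.mask_head)        # mask over encoded features
        speech = (z * mask) @ self.dec            # decode masked features -> waveform frames
        vad = sigmoid(z @ self.vad_head)          # per-frame VAD flag (second task)
        return speech, vad

# Usage: 10 time-domain frames of 128 samples each.
enh = MultiTaskEnhancer()
mix = np.random.default_rng(1).standard_normal((10, 128))
speech, vad = enh.forward(mix)
```

Because both heads read the same latent `z`, the enhancement and VAD tasks share representations, which is the point of the joint (multi-task) estimation the abstract describes.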
