SPEAKER SPECIFIC SPEECH ENHANCEMENT

    Publication Number: US20220084509A1

    Publication Date: 2022-03-17

    Application Number: US17475226

    Application Date: 2021-09-14

    Abstract: Embodiments described herein provide for a machine-learning architecture system that enhances the speech audio of a user-defined target speaker by suppressing interfering speakers, as well as background noise and reverberations. The machine-learning architecture includes a speech separation engine for separating the speech signal of a target speaker from a mixture of multiple speakers' speech, and a noise suppression engine for suppressing various types of noise in the input audio signal. The speaker-specific speech enhancement architecture performs speaker mixture separation and background noise suppression to enhance the perceptual quality of the speech audio. The output of the machine-learning architecture is an enhanced audio signal improving the voice quality of a target speaker on a single-channel audio input containing a mixture of speaker speech signals and various types of noise.
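The patent does not disclose source code; a minimal numpy sketch of the two-stage idea it describes — a separation stage that estimates a time-frequency mask conditioned on a target-speaker embedding, followed by a noise-suppression stage — might look like this. All function names, shapes, and the linear mask model are illustrative assumptions, not the patented architecture:

```python
import numpy as np

def separate_target(mix_spec, spk_embedding, W):
    """Toy speaker-conditioned mask: sigmoid over a linear projection of the
    mixture magnitudes concatenated with the target-speaker embedding."""
    T, _ = mix_spec.shape
    emb = np.tile(spk_embedding, (T, 1))          # broadcast embedding over frames
    feats = np.concatenate([mix_spec, emb], axis=1)
    mask = 1.0 / (1.0 + np.exp(-feats @ W))       # (T, F) mask in (0, 1)
    return mask * mix_spec

def suppress_noise(spec, noise_floor=0.1):
    """Toy spectral-gating noise suppressor: attenuate bins below a floor."""
    gain = np.where(spec > noise_floor, 1.0, 0.2)
    return gain * spec

rng = np.random.default_rng(0)
mix = np.abs(rng.normal(size=(100, 64)))          # fake magnitude spectrogram
emb = rng.normal(size=16)                          # fake target-speaker embedding
W = rng.normal(size=(64 + 16, 64)) * 0.01          # untrained mask weights
enhanced = suppress_noise(separate_target(mix, emb, W))
print(enhanced.shape)                              # (100, 64)
```

In the patented system the mask estimator and suppressor would be trained neural networks operating on a single-channel input; here both are placeholders that only show the data flow.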

    CROSS-CHANNEL ENROLLMENT AND AUTHENTICATION OF VOICE BIOMETRICS

    Publication Number: US20210241776A1

    Publication Date: 2021-08-05

    Application Number: US17165180

    Application Date: 2021-02-02

    Abstract: Embodiments described herein provide for systems and methods for voice-based cross-channel enrollment and authentication. The systems control for and mitigate against variations in audio signals received across any number of communications channels by training and employing a neural network architecture comprising a speaker verification neural network and a bandwidth expansion neural network. The bandwidth expansion neural network is trained on narrowband audio signals to produce and generate estimated wideband audio signals corresponding to the narrowband audio signals. These estimated wideband audio signals may be fed into one or more downstream applications, such as the speaker verification neural network or embedding extraction neural network. The speaker verification neural network can then compare and score inbound embeddings for a current call against enrolled embeddings, regardless of the channel used to receive the inbound signal or enrollment signal.
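The cross-channel flow the abstract describes — expand a narrowband call to an estimated wideband signal, extract an embedding, then cosine-score it against an enrolled embedding — can be sketched as below. The spectral-mirroring "expansion" and the mean-pooling "embedding extractor" are crude stand-ins for the neural networks in the patent; every name and shape here is an assumption:

```python
import numpy as np

def expand_bandwidth(narrow_spec):
    """Toy stand-in for the bandwidth-expansion network: mirror the
    narrowband spectrum into the missing upper band at reduced energy."""
    upper = narrow_spec[:, ::-1] * 0.5
    return np.concatenate([narrow_spec, upper], axis=1)

def extract_embedding(spec):
    """Toy stand-in for the embedding extractor: frame average, L2-normed."""
    v = spec.mean(axis=0)
    return v / np.linalg.norm(v)

def verify(inbound_spec, enrolled_emb):
    """Cosine score between the inbound and enrolled embeddings."""
    return float(extract_embedding(inbound_spec) @ enrolled_emb)

rng = np.random.default_rng(1)
wide_enroll = np.abs(rng.normal(size=(50, 128)))   # wideband enrollment audio
# Narrowband call: same speaker, lower half of the band, a little noise.
narrow_call = wide_enroll[:, :64] + 0.05 * np.abs(rng.normal(size=(50, 64)))
enrolled = extract_embedding(wide_enroll)
score = verify(expand_bandwidth(narrow_call), enrolled)
print(round(score, 3))
```

The point of the sketch is only the interface: after expansion, both sides of the comparison live in the same wideband feature space, so the scorer need not know which channel the audio arrived on.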

    SYSTEM AND METHOD FOR CLUSTER-BASED AUDIO EVENT DETECTION

    Publication Number: US20210134316A1

    Publication Date: 2021-05-06

    Application Number: US17121291

    Application Date: 2020-12-14

    Abstract: Methods, systems, and apparatuses are provided for audio event detection, in which the determination of a type of sound data is made at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed by using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
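The cluster-level decision the abstract emphasizes can be illustrated with a minimal numpy sketch: K-means groups audio frames, and then each *cluster centroid* (not each frame) is assigned a class. Nearest-template classification here is a toy stand-in for the GMM / i-vector classifiers named in the abstract; all data and shapes are invented:

```python
import numpy as np

def kmeans(X, k=2, iters=20):
    """Minimal K-means over audio-frame features (deterministic init:
    evenly spaced frames as starting centers)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def classify_clusters(centers, class_templates):
    """Classify each cluster, not each frame: nearest class template.
    A toy stand-in for the GMM / i-vector classification in the abstract."""
    d = ((centers[:, None] - class_templates[None]) ** 2).sum(-1)
    return np.argmin(d, axis=1)

rng = np.random.default_rng(2)
speech = rng.normal(loc=0.0, scale=1.0, size=(80, 8))   # fake "speech" frames
noise = rng.normal(loc=5.0, scale=1.0, size=(80, 8))    # fake "noise" frames
frames = np.vstack([speech, noise])
labels, centers = kmeans(frames, k=2)
templates = np.stack([np.zeros(8), np.full(8, 5.0)])    # class 0 / class 1
cluster_classes = classify_clusters(centers, templates)
print(cluster_classes)                                   # [0 1]
```

Because the label is decided once per cluster, a few outlier frames inside a cluster cannot flip the decision — the robustness-to-local-behavior property the abstract claims.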

    CHANNEL-COMPENSATED LOW-LEVEL FEATURES FOR SPEAKER RECOGNITION

    Publication Number: US20190333521A1

    Publication Date: 2019-10-31

    Application Number: US16505452

    Application Date: 2019-07-08

    Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
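The training loop the abstract describes — degrade a clean signal, extract features from the degraded version, and update weights against a loss computed from handcrafted features of the clean signal — can be sketched with a toy linear "network" in place of the CNN. The degradation model, feature dimensions, and learning rate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
F_BANK = rng.normal(size=(32, 8)) * 0.3    # toy "handcrafted" feature transform
W = np.zeros((32, 8))                      # learned weights (linear stand-in for the CNN)

def degrade(x, rng):
    """Toy channel-noise simulator: gain change plus additive noise."""
    return 0.8 * x + 0.1 * rng.normal(size=x.shape)

def handcrafted(x):
    """Toy stand-in for handcrafted features of the raw (clean) signal."""
    return x @ F_BANK

for step in range(200):                    # loss-driven weight updates
    clean = rng.normal(size=(16, 32))      # batch of raw frames
    noisy = degrade(clean, rng)
    pred, target = noisy @ W, handcrafted(clean)
    grad = noisy.T @ (pred - target) / len(clean)
    W -= 0.05 * grad                       # gradient step on the MSE loss

loss = float(((noisy @ W - handcrafted(clean)) ** 2).mean())
print(round(loss, 3))
```

After training, features extracted from the *degraded* signal approximate the handcrafted features of the *clean* signal — the channel-compensation property the patent targets. In the actual system the linear map would be a feed-forward CNN trained until a threshold loss is satisfied, then reused as a front-end for the speaker-recognition DNN.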

    CHANNEL-COMPENSATED LOW-LEVEL FEATURES FOR SPEAKER RECOGNITION

    Publication Number: US20180082692A1

    Publication Date: 2018-03-22

    Application Number: US15709024

    Application Date: 2017-09-19

    CPC classification number: G10L17/20 G10L17/02 G10L17/04 G10L17/18 G10L19/028

    Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.

    AUDIOVISUAL DEEPFAKE DETECTION
    Invention Application

    Publication Number: US20220121868A1

    Publication Date: 2022-04-21

    Application Number: US17503152

    Application Date: 2021-10-15

    Abstract: The embodiments execute machine-learning architectures for biometric-based identity recognition (e.g., speaker recognition, facial recognition) and deepfake detection (e.g., speaker deepfake detection, facial deepfake detection). The machine-learning architecture includes layers defining multiple scoring components, including sub-architectures for speaker deepfake detection, speaker recognition, facial deepfake detection, facial recognition, and a lip-sync estimation engine. The machine-learning architecture extracts and analyzes various types of low-level features from both audio data and visual data, combines the various scores, and uses the scores to determine the likelihood that the audiovisual data contains deepfake content and the likelihood that a claimed identity of a person in the video matches the identity of an expected or enrolled person. This enables the machine-learning architecture to perform identity recognition and verification, and deepfake detection, in an integrated fashion, for both audio data and visual data.
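The score-combination step the abstract describes can be sketched as a simple late fusion: each sub-architecture emits a score, and a weighted sum squashed through a sigmoid yields an overall genuineness probability. The component names, score values, and equal weights below are made-up illustrations, not values from the patent:

```python
import numpy as np

def fuse_scores(scores, weights):
    """Toy late fusion of the component scores: weighted sum, squashed
    to (0, 1) with a sigmoid. Real fusion weights would be learned."""
    z = float(np.dot(scores, weights))
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative component scores in [-1, 1]: positive = genuine / matching.
components = {
    "speaker_deepfake": 0.9,   # audio sounds genuine
    "speaker_match": 0.8,      # voice matches the enrolled speaker
    "face_deepfake": -0.7,     # visuals look synthetic
    "face_match": 0.6,
    "lip_sync": -0.8,          # audio and lip motion disagree
}
weights = np.ones(5) / 5.0
scores = np.array(list(components.values()))
genuine_prob = fuse_scores(scores, weights)
print(round(genuine_prob, 2))   # prints 0.54
```

Here strong audio scores are pulled down by the facial-deepfake and lip-sync components, leaving a near-chance overall probability — the kind of cross-modal disagreement an integrated audiovisual detector is designed to surface.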
