-
Publication number: US20240233709A1
Publication date: 2024-07-11
Application number: US18585366
Application date: 2024-02-23
Applicant: PINDROP SECURITY, INC.
Inventor: Kedar PHATAK , Elie KHOURY
CPC classification number: G10L15/063 , G06N3/045 , G06N20/00 , G10L15/16 , G10L25/27
Abstract: Embodiments described herein provide for audio processing operations that evaluate characteristics of audio signals that are independent of the speaker's voice. A neural network architecture trains and applies discriminatory neural networks tasked with modeling and classifying speaker-independent characteristics. The task-specific models generate or extract feature vectors from input audio data based on the trained embedding extraction models. The embeddings from the task-specific models are concatenated to form a deep-phoneprint (DP) vector for the input audio signal. The DP vector is a low-dimensional representation of each of the speaker-independent characteristics of the audio signal and is applied in various downstream operations.
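A minimal sketch of the embedding-concatenation step described above, assuming several task-specific extractor models are available as callables; the extractor interface and the optional L2 normalization are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def build_dp_vector(audio: np.ndarray, task_models: list) -> np.ndarray:
    """Concatenate per-task embeddings into a single deep-phoneprint (DP) vector."""
    # Each hypothetical task-specific model maps the audio to a fixed-size embedding.
    embeddings = [model(audio) for model in task_models]
    dp_vector = np.concatenate(embeddings)
    # Optional L2 normalization so downstream scoring is scale-invariant.
    return dp_vector / (np.linalg.norm(dp_vector) + 1e-9)
```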
-
Publication number: US20220084509A1
Publication date: 2022-03-17
Application number: US17475226
Application date: 2021-09-14
Applicant: PINDROP SECURITY, INC.
Inventor: Ganesh SIVARAMAN , Avrosh KUMAR , Elie KHOURY
Abstract: Embodiments described herein provide for a machine-learning architecture system that enhances the speech audio of a user-defined target speaker by suppressing interfering speakers, as well as background noise and reverberations. The machine-learning architecture includes a speech separation engine for separating the speech signal of a target speaker from a mixture of multiple speakers' speech, and a noise suppression engine for suppressing various types of noise in the input audio signal. The speaker-specific speech enhancement architecture performs speaker mixture separation and background noise suppression to enhance the perceptual quality of the speech audio. The output of the machine-learning architecture is an enhanced audio signal improving the voice quality of a target speaker on a single-channel audio input containing a mixture of speaker speech signals and various types of noise.
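An illustrative two-stage enhancement pipeline along the lines sketched in the abstract, assuming a trained speech separation model and a noise-suppression model are available as callables; both names are hypothetical placeholders rather than the patented architecture:

```python
import numpy as np

def enhance_target_speaker(mixture: np.ndarray,
                           target_embedding: np.ndarray,
                           separation_model,
                           noise_suppressor) -> np.ndarray:
    # Stage 1: separate the target speaker's speech from the multi-speaker mixture,
    # conditioned on the target speaker's enrollment embedding.
    separated = separation_model(mixture, target_embedding)
    # Stage 2: suppress background noise and reverberation in the separated signal.
    return noise_suppressor(separated)
```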
-
Publication number: US20210241776A1
Publication date: 2021-08-05
Application number: US17165180
Application date: 2021-02-02
Applicant: PINDROP SECURITY, INC.
Inventor: Ganesh SIVARAMAN , Elie KHOURY , Avrosh KUMAR
Abstract: Embodiments described herein provide for systems and methods for voice-based cross-channel enrollment and authentication. The systems control for and mitigate variations in audio signals received across any number of communications channels by training and employing a neural network architecture comprising a speaker verification neural network and a bandwidth expansion neural network. The bandwidth expansion neural network is trained on narrowband audio signals to generate estimated wideband audio signals corresponding to the narrowband audio signals. These estimated wideband audio signals may be fed into one or more downstream applications, such as the speaker verification neural network or embedding extraction neural network. The speaker verification neural network can then compare and score inbound embeddings for a current call against enrolled embeddings, regardless of the channel used to receive the inbound signal or enrollment signal.
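A rough sketch of the cross-channel scoring flow, assuming a bandwidth-expansion model and an embedding extractor are available as callables; the cosine scoring rule and the threshold value are illustrative assumptions:

```python
import numpy as np

def cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def verify_caller(narrowband_audio: np.ndarray,
                  enrolled_embedding: np.ndarray,
                  bwe_model, embedder,
                  threshold: float = 0.7) -> bool:
    wideband_estimate = bwe_model(narrowband_audio)   # estimated wideband signal
    inbound_embedding = embedder(wideband_estimate)   # speaker embedding for the call
    return cosine_score(inbound_embedding, enrolled_embedding) >= threshold
```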
-
Publication number: US20210134316A1
Publication date: 2021-05-06
Application number: US17121291
Application date: 2020-12-14
Applicant: PINDROP SECURITY, INC.
Inventor: Elie KHOURY , Matthew GARLAND
Abstract: Methods, systems, and apparatuses for audio event detection, where the determination of a type of sound data is made at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed by using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
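A rough sketch of cluster-level classification using off-the-shelf scikit-learn components, assuming per-frame features (e.g., MFCCs) and per-class GMMs are already available; it illustrates classifying whole clusters rather than individual frames, not the exact segmentation or i-vector pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def classify_clusters(frames: np.ndarray, n_clusters: int,
                      class_gmms: dict[str, GaussianMixture]) -> dict[int, str]:
    # frames: (num_frames, feature_dim) matrix of per-frame audio features.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(frames)
    decisions = {}
    for c in range(n_clusters):
        cluster = frames[labels == c]
        # Score the whole cluster under each class GMM and keep the best class,
        # so the decision is made per cluster rather than per frame.
        scores = {name: gmm.score(cluster) for name, gmm in class_gmms.items()}
        decisions[c] = max(scores, key=scores.get)
    return decisions
```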
-
Publication number: US20190333521A1
Publication date: 2019-10-31
Application number: US16505452
Application date: 2019-07-08
Applicant: PINDROP SECURITY, INC.
Inventor: Elie KHOURY , Matthew GARLAND
IPC: G10L17/20 , G10L17/18 , G10L19/028 , G10L17/02 , G10L17/04
Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
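A hedged PyTorch sketch of the training idea: the CNN front-end is optimized so that its features on a channel-degraded signal match handcrafted features computed on the clean signal. The degradation function, feature function, and the choice of MSE loss are assumptions for illustration:

```python
import torch
import torch.nn as nn

def train_step(cnn: nn.Module, clean_batch: torch.Tensor,
               degrade, handcrafted_features, optimizer) -> float:
    degraded = degrade(clean_batch)                   # simulated channel noise/degradation
    predicted = cnn(degraded)                         # channel-compensated features
    target = handcrafted_features(clean_batch)        # handcrafted features of the clean audio
    loss = nn.functional.mse_loss(predicted, target)  # difference between the two feature sets
    optimizer.zero_grad()
    loss.backward()                                   # update CNN connection weights
    optimizer.step()
    return loss.item()                                # compare against the stopping threshold
```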
-
Publication number: US20180082692A1
Publication date: 2018-03-22
Application number: US15709024
Application date: 2017-09-19
Applicant: PINDROP SECURITY, INC.
Inventor: Elie KHOURY , Matthew GARLAND
IPC: G10L17/20 , G10L17/18 , G10L17/04 , G10L17/02 , G10L19/028
CPC classification number: G10L17/20 , G10L17/02 , G10L17/04 , G10L17/18 , G10L19/028
Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
-
Publication number: US20180082691A1
Publication date: 2018-03-22
Application number: US15709232
Application date: 2017-09-19
Applicant: PINDROP SECURITY, INC.
Inventor: Elie KHOURY , Matthew GARLAND
Abstract: In a speaker recognition apparatus, audio features are extracted from a received recognition speech signal, and first-order Gaussian mixture model (GMM) statistics are generated therefrom based on a universal background model that includes a plurality of speaker models. The first-order GMM statistics are normalized with regard to the duration of the received speech signal. A deep neural network reduces the dimensionality of the normalized first-order GMM statistics and outputs a voiceprint corresponding to the recognition speech signal.
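An illustrative computation of duration-normalized first-order GMM statistics against a universal background model (UBM), which a DNN would then compress into a voiceprint; the scikit-learn UBM and the DNN callable are stand-ins, not the patented apparatus:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def first_order_stats(frames: np.ndarray, ubm: GaussianMixture) -> np.ndarray:
    # frames: (num_frames, feature_dim) acoustic features of the recognition signal.
    posteriors = ubm.predict_proba(frames)   # (num_frames, num_components)
    f_stats = posteriors.T @ frames          # first-order statistics per UBM component
    f_stats /= frames.shape[0]               # normalize with regard to signal duration
    return f_stats.ravel()                   # supervector handed to the DNN

def extract_voiceprint(frames: np.ndarray, ubm: GaussianMixture, dnn) -> np.ndarray:
    # The DNN reduces the dimensionality of the normalized statistics to a voiceprint.
    return dnn(first_order_stats(frames, ubm))
```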
-
Publication number: US20240363124A1
Publication date: 2024-10-31
Application number: US18646431
Application date: 2024-04-25
Applicant: Pindrop Security, Inc.
Inventor: Elie KHOURY , Ganesh SIVARAMAN , Tianxiang CHEN , Nikolay GAUBITCH , David LOONEY , Amit GUPTA , Vijay BALASUBRAMANIYAN , Nicholas KLEIN , Anthony STANKUS
Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. Embodiments include systems and methods for detecting fraudulent presentation attacks using multiple functional engines that implement various fraud-detection techniques to produce calibrated scores and/or fused scores. A computer may, for example, evaluate the audio quality of the speech signals within the audio signals, where the speech signals are the portions of the audio containing speaker utterances.
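A minimal sketch of fusing scores from several fraud-detection engines into one calibrated score; the engine interface, linear weights, and sigmoid calibration are illustrative assumptions rather than the disclosed method:

```python
import numpy as np

def fused_fraud_score(audio: np.ndarray, engines: list,
                      weights: np.ndarray, bias: float = 0.0) -> float:
    scores = np.array([engine(audio) for engine in engines])  # per-engine raw scores
    z = float(np.dot(weights, scores) + bias)                  # linear score fusion
    return 1.0 / (1.0 + np.exp(-z))                            # calibrated to [0, 1]
```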
-
Publication number: US20240355336A1
Publication date: 2024-10-24
Application number: US18439049
Application date: 2024-02-12
Applicant: PINDROP SECURITY, INC.
Inventor: Umair ALTAF , Sai Pradeep PERI , Lakshay PHATELA , Payas GUPTA , Yitao SUN , Svetlana AFANASEVA , Kailash PATIL , Elie KHOURY , Bradley MAGNETTA , Vijay BALASUBRAMANIYAN , Tianxiang CHEN
Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. The server applies an NLP engine to transcribe the call audio and analyze the text for anomalous patterns that indicate synthetic speech. Additionally or alternatively, the server executes a voice “liveness” detection system for detecting machine speech, such as synthetic speech or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection in call audio signals to detect the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
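A sketch of combining the three liveness checks named above, with each detector assumed to be a callable returning a score in [0, 1]; the averaging rule and threshold are placeholders for illustration:

```python
def liveness_decision(call_audio, transcript,
                      phrase_repetition_detector,
                      background_change_detector,
                      passive_liveness_detector,
                      threshold: float = 0.5) -> bool:
    scores = [
        phrase_repetition_detector(transcript),   # repeated phrases can suggest replayed audio
        background_change_detector(call_audio),   # abrupt background changes can suggest splicing
        passive_liveness_detector(call_audio),    # model-based machine-speech score
    ]
    # Flag the utterance as machine speech if the combined evidence crosses the threshold.
    return (sum(scores) / len(scores)) >= threshold
```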
-
Publication number: US20220121868A1
Publication date: 2022-04-21
Application number: US17503152
Application date: 2021-10-15
Applicant: Pindrop Security, Inc.
Inventor: Tianxiang CHEN , Elie KHOURY
Abstract: The embodiments execute machine-learning architectures for biometric-based identity recognition (e.g., speaker recognition, facial recognition) and deepfake detection (e.g., speaker deepfake detection, facial deepfake detection). The machine-learning architecture includes layers defining multiple scoring components, including sub-architectures for speaker deepfake detection, speaker recognition, facial deepfake detection, facial recognition, and a lip-sync estimation engine. The machine-learning architecture extracts and analyzes various types of low-level features from both audio data and visual data, combines the various scores, and uses the scores to determine the likelihood that the audiovisual data contains deepfake content and the likelihood that a claimed identity of a person in the video matches the identity of an expected or enrolled person. This enables the machine-learning architecture to perform identity recognition and verification, and deepfake detection, in an integrated fashion for both audio data and visual data.
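A rough sketch of combining the per-modality scoring components into an identity-match score and a deepfake score; the component names, equal weighting, and max-based deepfake rule are assumptions, not the patented fusion:

```python
from dataclasses import dataclass

@dataclass
class AudioVisualResult:
    identity_match_score: float
    deepfake_score: float

def score_audiovisual(audio, video, claimed_identity, components: dict) -> AudioVisualResult:
    # Average the audio and visual identity scores for the claimed identity.
    identity = 0.5 * (components["speaker_recognition"](audio, claimed_identity)
                      + components["face_recognition"](video, claimed_identity))
    # Take the strongest deepfake indication across the audio, visual, and lip-sync detectors.
    deepfake = max(components["speaker_deepfake"](audio),
                   components["face_deepfake"](video),
                   components["lip_sync_mismatch"](audio, video))
    return AudioVisualResult(identity_match_score=identity, deepfake_score=deepfake)
```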
-