Abstract:
Embodiments provide methods and systems of text independent speaker recognition with complexity comparable to that of a text dependent speaker recognition system. These methods and systems exploit the fact that speech is a quasi-stationary signal and simplify the recognition process on that basis. The speaker modeling allows a speaker profile to be updated progressively with new speech samples acquired as the speaker uses the system.
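One common way to realize such a progressive profile update in a GMM-based speaker model is relevance-MAP adaptation of the component means; the abstract does not specify the exact update rule, so the Python sketch below is only one plausible reading, and the names (adapt_means, the relevance factor r) are illustrative.

import numpy as np

def adapt_means(frames, means, covars, weights, r=16.0):
    """One MAP-style update of GMM means from a new batch of feature frames.

    frames:  (T, D) feature vectors from newly acquired speech
    means:   (M, D) current speaker-profile component means
    covars:  (M, D) diagonal covariances
    weights: (M,)   component weights
    """
    D = frames.shape[1]
    # per-component log Gaussian likelihood of every frame
    log_norm = -0.5 * (D * np.log(2 * np.pi) + np.sum(np.log(covars), axis=1))
    diff = frames[:, None, :] - means[None, :, :]
    log_post = np.log(weights) + log_norm - 0.5 * np.sum(diff**2 / covars, axis=2)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)            # responsibilities, shape (T, M)

    n = post.sum(axis=0)                                # soft frame counts per component
    first = post.T @ frames                             # first-order statistics, shape (M, D)
    alpha = (n / (n + r))[:, None]                      # data-dependent adaptation weight
    new_means = alpha * (first / np.maximum(n, 1e-10)[:, None]) + (1 - alpha) * means
    return new_means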
Abstract:
Embodiments reduce the complexity of speaker dependent speech recognition systems and methods by representing the code phrase (i.e., the word or words to be recognized) with a single Gaussian Mixture Model (GMM) adapted from a Universal Background Model (UBM). Only the parameters of the GMM need to be stored. Computation is further reduced by checking only the GMM component that is relevant to the keyword template. In this scheme, the keyword template is represented by the sequence of indices of the best performing components of the keyword model's GMM. Only one template is saved, formed by combining the registration templates using the Longest Common Subsequence algorithm. The quality of the word model is continuously improved by performing expectation maximization iterations using test words that are accepted as instances of the keyword.
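The two template operations described above can be illustrated with a minimal Python sketch: mapping each feature frame to the index of the best scoring GMM component, and merging registration templates with a Longest Common Subsequence (LCS). The function names and the diagonal-covariance assumption are illustrative, not taken from the embodiments themselves.

import numpy as np

def best_component_sequence(frames, means, covars, weights):
    """Map each feature frame to the index of the most likely GMM component.

    frames:  (T, D) feature vectors (e.g. MFCCs)
    means:   (M, D) component means of the keyword GMM (adapted from a UBM)
    covars:  (M, D) diagonal covariances
    weights: (M,)   component weights
    """
    T, D = frames.shape
    # log N(x | mu_m, diag(sigma_m^2)) + log w_m for every frame/component pair
    log_norm = -0.5 * (D * np.log(2 * np.pi) + np.sum(np.log(covars), axis=1))
    diff = frames[:, None, :] - means[None, :, :]            # (T, M, D)
    log_like = log_norm - 0.5 * np.sum(diff**2 / covars, axis=2)
    log_post = np.log(weights) + log_like                    # (T, M)
    return np.argmax(log_post, axis=1).tolist()              # best component per frame

def lcs(a, b):
    """Longest common subsequence of two index sequences (classic DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    # backtrack to recover one common subsequence
    out, i, j = [], len(a), len(b)
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

Under this reading, combining several registration templates into the single stored template could be a left fold of pairwise LCS, e.g. functools.reduce(lcs, registration_templates).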