Abstract:
A technique is disclosed for improving recognition accuracy when transcribing speech data that contains data from a wide range of environments. In many situations the input contains data from a variety of sources in different environments. Such classes include: clean speech, speech corrupted by noise (e.g., music), non-speech (e.g., pure music with no speech), telephone speech, and the identity of a speaker. A technique is described whereby the different classes of data are first automatically identified, and each class is then transcribed by a system tailored specifically to it. The invention also describes a segmentation algorithm based on building an acoustic model that characterizes the data in each class, and then using a dynamic programming algorithm (the Viterbi algorithm) to automatically identify the segments that belong to each class. The acoustic models are built in a particular feature space, and the invention further describes different feature spaces for use with different classes.
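As a rough illustration of this segmentation step, the sketch below builds a one-Gaussian acoustic model per class over a one-dimensional feature and runs a Viterbi search with a class-switch penalty to label each frame. The feature, the penalty, and all numbers are illustrative assumptions, not the patent's actual models or feature spaces.

    # Hypothetical sketch: label each frame with the most likely class
    # (e.g., speech vs. music) using per-class Gaussian models and a
    # Viterbi search that penalizes switching between classes.
    import numpy as np

    def viterbi_segment(frames, means, variances, switch_penalty=5.0):
        """frames: (T,) features; means/variances: per-class parameters."""
        T, K = len(frames), len(means)
        # Frame log-likelihood under each class's Gaussian model.
        ll = -0.5 * (np.log(2 * np.pi * variances)
                     + (frames[:, None] - means) ** 2 / variances)
        cost = ll[0].copy()
        back = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            # Staying in a class is free; switching costs a penalty.
            trans = cost[:, None] - switch_penalty * (1 - np.eye(K))
            back[t] = trans.argmax(axis=0)
            cost = trans.max(axis=0) + ll[t]
        # Trace back the best class sequence.
        path = [int(cost.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    # Toy data: low-valued frames ~ class 0 ("speech"), high ~ class 1 ("music").
    frames = np.array([0.1, 0.2, 0.0, 3.1, 2.9, 3.2, 0.1])
    print(viterbi_segment(frames, means=np.array([0.0, 3.0]),
                          variances=np.array([1.0, 1.0])))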
Abstract:
In a speech recognition system, a technique is disclosed for segmenting multiple utterances of a vocabulary word in a consistent manner, in order to determine a Markov model representation for each segment. Plural utterances of a word are converted to label strings. One is selected as the prototype and represented by a sequence of Markov models. All other strings are aligned against the prototype, using stored probabilities, thereby determining substrings, and thus segments, which correspond to the labels of the prototype sequence. Corresponding segments of all strings are evaluated jointly to finally determine a suitable Markov model representation for each segment. The concatenation of these models represents the baseform for the word.
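The alignment step might be sketched as below, with the stored Markov-model probabilities replaced by simple unit match/mismatch costs (an assumption made to keep the example self-contained). The dynamic program maps each label of the prototype to a substring, i.e. a segment, of the other utterance's label string.

    # Hypothetical sketch: align an utterance's label string against a
    # prototype so every prototype label maps to a segment of the other
    # string. Real systems score alignments with stored probabilities;
    # simple unit edit costs are used here instead.
    def align(prototype, other):
        n, m = len(prototype), len(other)
        # dp[i][j] = min cost of aligning prototype[:i] with other[:j]
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            for j in range(m + 1):
                if i == 0:
                    dp[i][j] = j
                elif j == 0:
                    dp[i][j] = i
                else:
                    sub = dp[i-1][j-1] + (prototype[i-1] != other[j-1])
                    dp[i][j] = min(sub, dp[i-1][j] + 1, dp[i][j-1] + 1)
        # Trace back, collecting the substring of `other` consumed by
        # each prototype position: these are the per-label segments.
        segments = [[] for _ in prototype]
        i, j = n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and dp[i][j] == dp[i-1][j-1] + (prototype[i-1] != other[j-1]):
                segments[i-1].append(other[j-1])
                i, j = i - 1, j - 1
            elif j > 0 and dp[i][j] == dp[i][j-1] + 1:
                segments[max(i - 1, 0)].append(other[j-1])
                j -= 1
            else:
                i -= 1
        return [seg[::-1] for seg in segments]

    print(align("ABCD", "ABXCD"))  # [['A'], ['B', 'X'], ['C'], ['D']]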
Abstract:
The method of performing an acoustic match between phones and a string of labels produced by an acoustic processor in response to a speech input involves forming simplified phone machines. This includes the operation of replacing, by a specific value, the actual label probabilities for a given label at all transitions at which the given label may be generated in a particular phone machine. The probability of a phone generating the labels in the string is determined based on the simplified phone machine corresponding to it.
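A minimal sketch of this simplification is given below, assuming the specific value chosen for a label is its maximum probability over all transitions (an assumption; the abstract does not fix the choice). Scoring a label string then reduces to a product of per-label values, independent of the transition structure.

    # Hypothetical sketch of a simplified ("fast match") phone machine:
    # each label's per-transition probabilities are replaced by a single
    # value (here their maximum over transitions), so scoring a label
    # string reduces to a product of per-label values.
    import math

    def simplify(phone_machine):
        """phone_machine: {transition: {label: prob}} -> {label: prob}."""
        simplified = {}
        for dist in phone_machine.values():
            for label, p in dist.items():
                simplified[label] = max(simplified.get(label, 0.0), p)
        return simplified

    def fast_match_score(simplified, labels):
        """Approximate log-probability of the phone producing `labels`."""
        return sum(math.log(simplified.get(l, 1e-10)) for l in labels)

    # Toy phone machine with two transitions and three labels.
    machine = {("s0", "s1"): {"A": 0.6, "B": 0.3, "C": 0.1},
               ("s1", "s2"): {"A": 0.2, "B": 0.7, "C": 0.1}}
    print(fast_match_score(simplify(machine), ["A", "B", "A"]))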
Abstract:
In a word, or speech, recognition system for decoding a vocabulary word from outputs selected from an alphabet of outputs in response to a communicated word input, wherein each word in the vocabulary is represented by a baseform of at least one probabilistic finite state model, wherein each probabilistic model has transition probability items and output probability items, and wherein a value is stored for each of at least some probability items, the present invention relates to apparatus and method for determining probability values for probability items. This is done by biasing at least some of the stored values to enhance the likelihood that outputs generated in response to communication of a known word input are produced by the baseform for the known word, relative to the respective likelihood of the generated outputs being produced by the baseform for at least one other word. Specifically, the current values of counts, from which probability items are derived, are adjusted by uttering a known word and determining how often probability events occur relative to (a) the model corresponding to the known uttered "correct" word and (b) the model of at least one other "incorrect" word. The current count values are increased based on the event occurrences relating to the correct word and are reduced based on the event occurrences relating to the incorrect word or words.
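The count adjustment might look like the sketch below, where the actual counting of probability events over the baseforms is stubbed out with precomputed dictionaries and `step` is an assumed tuning constant; only the boost/reduce logic of the abstract is shown.

    # Hypothetical sketch of the count-adjustment step: counts of events
    # observed under the correct word's model are increased, counts of
    # the same events under a confusable "incorrect" word's model are
    # reduced, and a small floor keeps the derived probabilities valid.
    def adjust_counts(counts, correct_events, incorrect_events, step=1.0):
        """counts: {event: value}; *_events: {event: times observed}."""
        for event, n in correct_events.items():
            counts[event] = counts.get(event, 0.0) + step * n
        for event, n in incorrect_events.items():
            counts[event] = max(counts.get(event, 0.0) - step * n, 1e-3)
        return counts

    counts = {("t1", "A"): 5.0, ("t1", "B"): 5.0}
    counts = adjust_counts(counts,
                           correct_events={("t1", "A"): 2},
                           incorrect_events={("t1", "B"): 2})
    print(counts)  # A's count rises to 7.0, B's falls to 3.0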
Abstract:
The speech recognition system has a circuit for generating an alphabet of standard labels in response to a first speech input. Each standard label represents a sound type corresponding to a given interval of time. A circuit produces a respective sequence of standard labels from the alphabet in response to the uttering of each word from a vocabulary of words. A circuit selects a set of personalized labels, each representing a sound type corresponding to an interval of time. A circuit forms a respective probabilistic model for each standard label. Each model includes a number of states and at least one transition extending from a state to a state. It also includes a transition probability for each transition and, for at least one transition, a number of output probabilities. Each output probability at a given transition in the model of a given standard label represents the likelihood of a respective personalized label being produced at the given transition.
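A toy version of such a per-standard-label model and its scoring is sketched below: states, transitions with transition probabilities, and per-transition output probabilities over a personalized label alphabet, scored with a forward pass. The structure and all numbers are illustrative assumptions.

    # Hypothetical sketch: one standard label's probabilistic model,
    # with a self-loop and an exit transition, each carrying its own
    # output distribution over personalized labels; a forward pass
    # sums probability over all paths producing the observed sequence.
    def forward_score(model, outputs):
        """model: {'init': {state: p}, 'arcs': {state: [(next, p, {out: p})]}}"""
        alpha = dict(model["init"])  # probability mass per state
        for out in outputs:
            new_alpha = {}
            for state, p in alpha.items():
                for nxt, tp, emit in model["arcs"].get(state, []):
                    # transition prob * output prob of the personalized label
                    mass = p * tp * emit.get(out, 0.0)
                    new_alpha[nxt] = new_alpha.get(nxt, 0.0) + mass
            alpha = new_alpha
        return sum(alpha.values())

    # Model of one standard label: state s0 loops or advances to s1.
    model = {"init": {"s0": 1.0},
             "arcs": {"s0": [("s0", 0.4, {"p1": 0.7, "p2": 0.3}),
                             ("s1", 0.6, {"p1": 0.2, "p2": 0.8})]}}
    print(forward_score(model, ["p1", "p2"]))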
Abstract:
A continuous speech recognition system includes an automatic phonological rules generator which determines variations in the pronunciation of phonemes based on the context in which they occur. This phonological rules generator associates sequences of labels derived from vocalizations of a training text with respective phonemes inferred from the training text. These sequences are then annotated with their phoneme context from the training text and clustered into groups representing similar pronunciations of each phoneme. A decision tree is generated using the context information of the sequences to predict the clusters to which the sequences belong. The training data is processed by the decision tree to divide the sequences into leaf-groups representing similar pronunciations of each phoneme. The sequences in each leaf-group are clustered into sub-groups representing respectively different pronunciations of their corresponding phoneme in a given context. A Markov model is generated for each sub-group. The various Markov models of a leaf-group are combined into a single compound model by assigning common initial and final states to each model. The compound Markov models are used by a speech recognition system to analyze an unknown sequence of labels given its context.
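One way to sketch the decision-tree step: each node greedily picks the yes/no context question with the largest entropy reduction in predicting the cluster assignments, and the leaves collect the sequences that form the leaf-groups. The questions, contexts, and cluster labels below are toy stand-ins for the patent's phoneme-context annotations.

    # Hypothetical sketch of growing the context decision tree: each
    # node selects the question that best predicts a sample's cluster
    # (by entropy reduction); leaves gather similarly pronounced samples.
    import math
    from collections import Counter

    def entropy(labels):
        total = len(labels)
        return -sum(n / total * math.log2(n / total)
                    for n in Counter(labels).values())

    def grow(samples, questions, min_size=2):
        """samples: [(context, cluster)]; questions: [(name, predicate)]."""
        clusters = [c for _, c in samples]
        if len(samples) < min_size or len(set(clusters)) == 1 or not questions:
            return ("leaf", clusters)
        def gain(q):
            _, pred = q
            yes = [c for ctx, c in samples if pred(ctx)]
            no = [c for ctx, c in samples if not pred(ctx)]
            if not yes or not no:
                return -1.0
            w = len(yes) / len(samples)
            return entropy(clusters) - w * entropy(yes) - (1 - w) * entropy(no)
        best = max(questions, key=gain)
        if gain(best) <= 0:
            return ("leaf", clusters)
        name, pred = best
        rest = [q for q in questions if q[0] != name]
        yes = [s for s in samples if pred(s[0])]
        no = [s for s in samples if not pred(s[0])]
        return ("node", name, grow(yes, rest), grow(no, rest))

    # Toy data: context = (left phoneme, right phoneme), cluster = 0 or 1.
    samples = [(("t", "ax"), 0), (("t", "iy"), 0),
               (("s", "ax"), 1), (("s", "iy"), 1)]
    questions = [("left==t", lambda ctx: ctx[0] == "t"),
                 ("right==ax", lambda ctx: ctx[1] == "ax")]
    print(grow(samples, questions))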
Abstract:
In order to determine a next event based upon available data, a binary decision tree is constructed having a true-or-false question at each node and, at each leaf, a probability distribution of the unknown next event given the available data. Starting at the root of the tree, the process proceeds from node to node toward a leaf by answering the question at each node encountered and following either the true or the false path depending upon the answer. The questions are phrased in terms of the available data and are designed to provide as much information as possible about the next unknown event. The process is particularly useful in speech recognition when the next word to be spoken is determined on the basis of the previously spoken words.
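At prediction time the tree is simply walked from the root, as in the sketch below: each node's question is answered from the previously spoken words until a leaf's distribution over the next word is reached. The tree shape, questions, and probabilities are invented for illustration.

    # Hypothetical sketch: traverse the decision tree by answering the
    # true/false question at each node from the word history, returning
    # the probability distribution stored at the leaf that is reached.
    def predict(tree, history):
        node = tree
        while node[0] == "node":
            _, question, true_branch, false_branch = node
            node = true_branch if question(history) else false_branch
        return node[1]  # leaf: probability distribution over next words

    tree = ("node", lambda h: h[-1] == "of",
            ("leaf", {"the": 0.6, "a": 0.2, "course": 0.2}),
            ("node", lambda h: h[-1] == "the",
             ("leaf", {"cat": 0.3, "end": 0.3, "best": 0.4}),
             ("leaf", {"of": 0.1, "the": 0.5, "and": 0.4})))

    print(predict(tree, ["the", "history", "of"]))  # {'the': 0.6, ...}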
Abstract:
An apparatus is disclosed for compressing a p x q image array of two-valued (black/white) sample points. The image array points are serially applied to the apparatus in consecutive raster scan lines. In response, the apparatus simultaneously forms two matrices respectively representing a high order p x q predictive error array and a p x q array of location events (such as the raster leading edges of all objects in the image). Improved compression is achieved by selecting between the more compression-efficient of two methods for encoding the position of errors in the prediction error array. These alternative methods are conventional run-length coding and a novel form of reference encoding, used selectively but to significant advantage. Thus, a run-length compression codeword is formed from the count C of non-errors between consecutive errors (in response to the occurrence of each error in the jth bit position of the ith scan line of the predictive error array) upon either C≤T, or C>T and there being no occurrence of a line difference encoding for the error (where i, j, C and T are positive integers). A line difference codeword with difference value v is generated upon the joint event of C>T and either the single or multiple occurrence of location events in the (i-1)th scan line of the location event array within the bit position range of B
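A rough sketch of the mode selection follows, with the truncated bit-position range replaced by an assumed symmetric window around position j: each error is coded as a run-length count C when C≤T or no reference is available, and as a small line-difference value v when C>T and a location event lies nearby on the previous scan line. All arrays, the threshold, and the window are toy assumptions.

    # Hypothetical sketch: choose, per error in the prediction-error
    # array, between a run-length codeword (count of non-errors since
    # the previous error) and a line-difference codeword (offset from a
    # nearby location event on the previous scan line).
    def encode(errors, events, T=4, window=2):
        """errors, events: lists of scan lines (lists of 0/1 bits)."""
        codewords, run = [], 0
        for i, line in enumerate(errors):
            for j, bit in enumerate(line):
                if not bit:
                    run += 1
                    continue
                ref = None
                if run > T and i > 0:
                    # Nearest location event on the previous line, within window.
                    nearby = [k for k in range(max(0, j - window),
                                               min(len(line), j + window + 1))
                              if events[i - 1][k]]
                    if nearby:
                        ref = min(nearby, key=lambda k: abs(k - j))
                if ref is not None:
                    codewords.append(("DIFF", j - ref))   # line-difference codeword
                else:
                    codewords.append(("RUN", run))        # run-length codeword
                run = 0
        return codewords

    errors = [[0, 1, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 1, 0]]
    events = [[0, 0, 0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0]]
    print(encode(errors, events))  # [('RUN', 1), ('DIFF', 1)]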