Abstract:
A computer-based system and method for translation and communication of messages include sending a message in a source language, using a transmission protocol, from a first client device to a second client device. Either the first or the second client device transmits the source-language message, using a transmission protocol, to a translation server, which translates the message from the source language into a destination language. The message, in its destination-language form, is displayed and stored. The translation server is either on-line or off-line. Text-to-voice and voice-to-text converters are used to convert original text messages to voice, or original voice to text, for transmission to the second client device.
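Below is a minimal Python sketch of the client-to-server leg of this flow. The abstract does not name a particular transmission protocol or server interface, so the HTTP endpoint, URL, and JSON payload format are illustrative assumptions only.

import json
import urllib.request

# Hypothetical translation-server endpoint; the abstract does not specify a
# protocol or address, so this URL is purely illustrative.
TRANSLATION_SERVER_URL = "http://translation.example.com/translate"

def translate_message(text, source_lang, dest_lang):
    """Send a source-language message to the translation server and return
    its destination-language form for display and storage."""
    payload = json.dumps({
        "text": text,
        "source": source_lang,
        "target": dest_lang,
    }).encode("utf-8")
    request = urllib.request.Request(
        TRANSLATION_SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["text"]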
Abstract:
Call audio of a call between a source user speaking a source language and a target user speaking a target language is received from a remote source user device of the source user via a communication network of a communication system, the call audio comprising speech of the source user in the source language. An automatic speech recognition procedure is performed on the call audio. A translation of the source user's speech is generated in the target language using the results of the speech recognition procedure. A translated synthetic speech audio version of the source user's speech is mixed with the source user's call audio and/or with translated audio of the target user's speech in the source language. The mixed audio signal is transmitted to a remote target user device of the target user via the communication network for output to at least the target user during the call.
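The order of the processing stages described here can be summarised in a short Python sketch; the recognizer, translator, synthesizer, and mixer below are caller-supplied placeholders, not components named in the abstract.

def translate_call_audio(source_audio, recognizer, translator, synthesizer, mixer):
    """Return a mixed audio signal containing the source speech together with
    a translated synthetic-speech version of it.

    All four processing arguments are assumed callables, used only to
    illustrate the order of the stages."""
    # 1. Automatic speech recognition on the incoming call audio.
    source_text = recognizer(source_audio)
    # 2. Translate the recognised text into the target language.
    target_text = translator(source_text)
    # 3. Synthesise speech for the translation in the target language.
    translated_audio = synthesizer(target_text)
    # 4. Mix the synthetic translation with the original call audio before it
    #    is transmitted to the target user's device.
    return mixer(source_audio, translated_audio)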
Abstract:
Usage data associated with a user of a telephonic device is accessed by a remote learning engine. A service or a product that is likely to be of interest to the user is identified by the remote learning engine based on the accessed usage data. A recommended voice bundle application for the user is determined by the remote learning engine based on the accessed usage data, the recommended voice bundle application being a voice application that, when executed by the telephonic device, results in a simulated multi-step spoken conversation between the telephonic device and the user to enable the user to receive the identified service or the identified product. A recommendation associated with the recommended voice bundle application is transmitted from the remote learning engine to the telephonic device. The recommendation is presented by the telephonic device to the user through voice communications. Whether the user has accepted the recommendation through voice communications is determined by the telephonic device. In response to determining that the user has accepted the recommendation, the recommended voice bundle application is executed by the telephonic device.
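As an illustration of the recommendation step, the following Python sketch assumes a simple count-based interest model and an invented catalogue mapping services to voice bundle identifiers; neither is part of the described method.

# Hypothetical catalogue of services/products mapped to voice bundle
# application identifiers; names are illustrative only.
VOICE_BUNDLES = {
    "airtime_topup": "voice_bundle_topup",
    "weather_alerts": "voice_bundle_weather",
}

def recommend_voice_bundle(usage_data):
    """Identify the service most likely to interest the user from usage data
    (assumed here to be a dict of service name -> interaction count) and map
    it to a recommended voice bundle application."""
    if not usage_data:
        return None
    likely_service = max(usage_data, key=usage_data.get)
    return VOICE_BUNDLES.get(likely_service)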
Abstract:
A voice-enabled document system facilitates execution of service delivery operations by eliminating the need for manual or visual interaction during information retrieval by an operator. Access to voice-enabled documents (400) can facilitate operations for mobile vendors, on-site or field-service repairs, medical service providers, food service providers, and the like. Service providers can access the voice-enabled documents by using a client device to retrieve the document, display it on a screen, and, via voice commands, initiate playback of selected audio files containing information derived from text data objects selected from the document. Data structures (402, 406, 408) that are components of a voice-enabled document include audio playback files (406) and a logical association (408) that links the audio playback files (406) to user-selectable fields and to a set of voice commands.
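One possible in-memory representation of the voice-enabled document (400) and its component data structures is sketched below using Python dataclasses; the class and field names are assumptions for illustration, not terms from the document.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AudioPlaybackFile:
    """Audio playback file (406) derived from a text data object."""
    field_name: str   # the user-selectable field this audio describes
    file_path: str    # location of the audio data

@dataclass
class VoiceEnabledDocument:
    """Voice-enabled document (400) combining text fields, playback files
    (406), and the logical association (408) to voice commands."""
    text_fields: Dict[str, str]
    playback_files: List[AudioPlaybackFile] = field(default_factory=list)
    # Logical association (408): spoken command -> user-selectable field name.
    voice_commands: Dict[str, str] = field(default_factory=dict)

    def audio_for_command(self, command: str):
        """Return the playback file linked to a recognised voice command."""
        target_field = self.voice_commands.get(command)
        for playback in self.playback_files:
            if playback.field_name == target_field:
                return playback
        return None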
Abstract:
Methods and systems are described in which spoken voice prompts can be produced in a manner such that they will most likely have the desired effect, for example to indicate empathy or to produce a desired follow-up action from a call recipient. The prompts can be produced with specific optimized speech parameters, including duration, gender of speaker, and pitch, so as to encourage participation and promote comprehension among a wide range of patients or listeners. Upon hearing such voice prompts, patients/listeners can know immediately when they are being asked questions that they are expected to answer, and when they are being given information, as well as which information is considered sensitive.
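As a small illustration, the optimized speech parameters named above (duration, gender of speaker, pitch) could be grouped per prompt as in the following Python sketch; the field names and example values are placeholders, not figures from the described methods.

from dataclasses import dataclass

@dataclass
class VoicePromptParameters:
    """Per-prompt speech parameters, as discussed above; values illustrative."""
    text: str
    duration_seconds: float   # overall prompt duration
    speaker_gender: str       # e.g. "female" or "male"
    pitch_hz: float           # average fundamental frequency
    is_question: bool         # the listener is expected to answer
    is_sensitive: bool        # the information given is considered sensitive

example_prompt = VoicePromptParameters(
    text="Have you taken your medication today?",
    duration_seconds=3.5,
    speaker_gender="female",
    pitch_hz=210.0,
    is_question=True,
    is_sensitive=True,
)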
Abstract:
Methods, devices and systems for sharing content as part of a voice telephony session are provided. More specifically, content can be added to a voice communication session by selecting, dragging, and dropping a representation of that content onto a representation of the voice communication session. Where the selected content comprises an audio file, that content is played over the voice communication channel. Where the selected content comprises text, the text is converted to speech, and then played over the voice communication channel.
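The branching behaviour described here (play audio files directly, convert text to speech first) can be sketched in a few lines of Python; the channel object and the play_audio and text_to_speech callables are assumed stand-ins, since the abstract does not define concrete interfaces.

def share_content(content, channel, play_audio, text_to_speech):
    """Handle a representation of content dropped onto a voice session.

    content is assumed to be a dict with "type" and "data" keys; play_audio
    and text_to_speech are caller-supplied callables."""
    if content["type"] == "audio":
        # Audio files are played directly over the voice communication channel.
        play_audio(channel, content["data"])
    elif content["type"] == "text":
        # Text is first converted to speech, then played over the channel.
        play_audio(channel, text_to_speech(content["data"]))
    else:
        raise ValueError("unsupported content type: %r" % content["type"])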
Abstract:
In a speech synthesis technique used in a network (110, 115), a set of text words is accepted by a speech engine software function (210) in a client device (105). From the set of text words, an invalid subset of text words is determined for which the text words are not in a word synthesis dictionary of the client device. The invalid subset of text words is transmitted over the network to a server device (120), which generates a set of word pronunciations including at least a portion of the text words of the invalid subset and the pronunciations associated with each of those text words. The client device uses the pronunciations for speech synthesis and may store them in a local word synthesis dictionary (220) held in a memory (150) of the client device.
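The client-side logic can be outlined in Python as follows; the local dictionary is assumed to be a plain dict, and fetch_pronunciations stands in for the network request to the server device (120).

def synthesize_pronunciations(words, local_dictionary, fetch_pronunciations):
    """Return pronunciations for the given words, fetching any that are
    missing from the local word synthesis dictionary (220).

    local_dictionary: dict mapping word -> pronunciation.
    fetch_pronunciations: assumed callable that sends the invalid subset over
    the network and returns a dict of word -> pronunciation."""
    # Determine the invalid subset: words absent from the local dictionary.
    invalid_subset = [w for w in words if w not in local_dictionary]
    if invalid_subset:
        # Obtain pronunciations from the server and cache them locally.
        local_dictionary.update(fetch_pronunciations(invalid_subset))
    return {w: local_dictionary[w] for w in words if w in local_dictionary}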
Abstract:
A wireless local area network system and a headset for the system. The headset uses voice-input information to set up the parameters needed to connect the headset to the corresponding access point and then to start the connection process. When the connection fails or succeeds, an appropriate voice prompt or visible signal tells the user the headset's connection status.
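The connection procedure can be illustrated with a short Python sketch; listen_for_parameters, connect_to_access_point, and announce are invented placeholders, since the abstract does not describe a concrete headset API.

def connect_headset(listen_for_parameters, connect_to_access_point, announce):
    """Set up connection parameters by voice, start the connection, and report
    the result with a voice prompt.

    All three arguments are assumed caller-supplied callables."""
    # 1. Collect the parameters needed to reach the access point by voice input.
    params = listen_for_parameters()
    # 2. Start the connection process to the corresponding access point.
    connected = connect_to_access_point(params)
    # 3. Tell the user the headset's connection status.
    announce("Connection succeeded." if connected else "Connection failed.")
    return connected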