Abstract:
PROBLEM TO BE SOLVED: To provide a system and method for concurrent multimodal communication session persistence. SOLUTION: The method (FIG. 6) and apparatus (600) maintain, during non-session conditions and on a per-user basis, concurrent multimodal session status (e.g., states) information (604) of user agent programs configured for different concurrent modality communication during the same session (700), and re-establish a concurrent multimodal session in response to accessing the concurrent multimodal session status information (702). COPYRIGHT: (C) 2010, JPO&INPIT
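The persistence scheme above can be sketched in code: a per-user store keeps the last known state of each modality's user agent program while no session is active, and accessing that stored status information re-establishes the session. This is a minimal illustrative sketch; all names (`SessionStore`, `ModalityState`, and so on) are assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of per-user concurrent multimodal session persistence.
# All class and field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModalityState:
    """Last known state of one user agent program (e.g. voice or visual browser)."""
    modality: str     # e.g. "voice", "html"
    current_url: str  # document the agent was rendering
    form_state: dict  # partially filled fields

@dataclass
class SessionRecord:
    """Per-user record kept during non-session conditions."""
    user_id: str
    states: dict = field(default_factory=dict)  # modality -> ModalityState

class SessionStore:
    def __init__(self):
        self._records = {}

    def save(self, user_id, state):
        """Maintain session status on a per-user basis, one entry per modality."""
        rec = self._records.setdefault(user_id, SessionRecord(user_id))
        rec.states[state.modality] = state

    def reestablish(self, user_id):
        """Accessing the stored status information re-establishes the session:
        every modality state is returned so each user agent can resume."""
        rec = self._records.get(user_id)
        return list(rec.states.values()) if rec else []
```

A user who was filling a form by voice and on screen would get both agent states back when the session resumes.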
Abstract:
A multimodal network element (14) comprises a plurality of proxies (38a ... 38n) that each send a request for concurrent multimodal input information corresponding to multiple input modalities associated with a plurality of user agent programs (30, 34) operating during a same session, and a multimodal fusion engine (44). The multimodal fusion engine (44) is operatively responsive to concurrent multimodal input information sent from the plurality of user agent programs (30, 34) in response to the request for concurrent different multimodal information, and is operative to fuse the different multimodal input information from the plurality of user agent programs (30, 34) to provide concurrent multimodal communication from differing user agent programs during a same session.
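The proxy/fusion arrangement above can be illustrated with a short sketch: each proxy requests input from one user agent program, and a fusion engine merges the per-modality results into one combined answer for the session. The names and the fill-the-gaps merge policy are assumptions for illustration, not details from the patent.

```python
# Illustrative sketch of a multimodal fusion engine; names are assumptions.
class ModalityProxy:
    """Stands in for one user agent program; requests and relays its input."""
    def __init__(self, modality, agent):
        self.modality = modality
        self._agent = agent  # callable returning this modality's field values

    def request_input(self):
        return self.modality, self._agent()

class FusionEngine:
    """Fuses input received from all proxies during the same session."""
    def fuse(self, proxies):
        fused = {}
        for proxy in proxies:
            modality, fields = proxy.request_input()
            # Each modality contributes the fields it captured; on a
            # conflict the first-received value is kept (assumed policy).
            for name, value in fields.items():
                fused.setdefault(name, (value, modality))
        return fused
```

In use, a voice agent might supply a destination while a keypad agent supplies a passenger count, and the engine yields one fused record for the session.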
Abstract:
A voice browser dialog enabler for multimodal dialog uses a multimodal markup document (22) with fields having markup-based forms associated with each field and defining fragments (45). A voice browser driver (43) resides on a communication device (10) and provides the fragments (45) and identifiers (48) that identify the fragments (45). A voice browser implementation (46) resides on a remote voice server (38) and receives the fragments (45) from the driver (43) and downloads a plurality of speech grammars. Input speech is matched against those speech grammars associated with the corresponding identifiers (48) received in a recognition request from the voice browser driver (43).
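The split arrangement above can be sketched as follows: a lightweight driver on the device derives fragments from the multimodal markup and assigns identifiers, while the server-side implementation keeps grammars keyed by those identifiers and matches input speech only against the grammar named in the recognition request. This is a hedged sketch; the class names, identifier format, and word-set "grammar" are all simplifying assumptions.

```python
# Hedged sketch of the device/server voice browser split; all names are
# illustrative. Real grammars (e.g. SRGS) are far richer than word sets.
class VoiceBrowserDriver:
    """Runs on the communication device; derives fragments from the markup."""
    def __init__(self):
        self._next_id = 0

    def make_fragment(self, field_markup):
        """Return an identifier and the fragment it identifies."""
        self._next_id += 1
        return f"frag-{self._next_id}", field_markup

class VoiceBrowserImplementation:
    """Runs on the remote voice server; stores fragments' speech grammars."""
    def __init__(self):
        self._grammars = {}

    def load_fragment(self, ident, grammar_words):
        self._grammars[ident] = set(grammar_words)

    def recognize(self, ident, utterance):
        """Match input speech only against the grammar for the identifier
        received in the recognition request."""
        grammar = self._grammars.get(ident, set())
        return [w for w in utterance.split() if w in grammar]
```

The point of the split is that only identifiers and fragments cross the network; the speech engine and grammars stay on the server.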
Abstract:
A method and apparatus, during a session, analyze fetched modality-specific instructions for at least one modality associated with a first user agent program to determine whether the instructions include a concurrent multimodal tag (CMMT); if the tag is detected, modality-specific instructions are provided, based on the concurrent multimodal tag, for at least a second user agent program operating in a different modality. Output from the first and second user agent programs is synchronized based on the modality-specific instructions.
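The detection step above can be sketched with a small parser: scan the first agent's fetched markup for a CMMT, and if one is found, derive fetch instructions for the second user agent in the other modality. The `<cmmt .../>` element syntax and attribute names used here are assumptions for illustration; the patent does not specify a concrete tag format.

```python
# Minimal sketch of CMMT detection in fetched modality-specific markup.
# The <cmmt modality="..." src="..."/> syntax is an illustrative assumption.
import re

CMMT_PATTERN = re.compile(r'<cmmt\s+modality="([^"]+)"\s+src="([^"]+)"\s*/>')

def analyze_instructions(markup):
    """Return (modality, src) for each CMMT found in the first agent's markup."""
    return CMMT_PATTERN.findall(markup)

def instructions_for_second_agent(markup):
    """If a CMMT is detected, provide instructions for a second user agent
    operating in the other modality; outputs are then synchronized."""
    hits = analyze_instructions(markup)
    return [{"modality": m, "fetch": src} for m, src in hits]
```

For example, a VoiceXML page carrying a CMMT that points at an HTML counterpart would cause the visual agent to fetch that page, so both modalities render in step.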