Abstract:
A voice browser dialog enabler for multimodal dialog uses a multimodal markup document (22) whose fields each have an associated markup-based form defining a fragment (45). A voice browser driver (43) resides on a communication device (10) and provides the fragments (45) along with identifiers (48) that identify them. A voice browser implementation (46) resides on a remote voice server (38), receives the fragments (45) from the driver (43), and downloads a plurality of speech grammars. Input speech is matched against those speech grammars associated with the identifiers (48) received in a recognition request from the voice browser driver (43).
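The driver/server split described above can be illustrated with a minimal sketch. All class and method names, and the modeling of a speech grammar as a set of allowed phrases, are assumptions for illustration only, not the patent's actual implementation.

```python
# Illustrative sketch only: names and the grammar representation are
# assumptions; the patent does not specify an implementation.

class VoiceBrowserDriver:
    """Device-side driver: turns a field's markup-based form into a
    fragment tagged with an identifier."""
    def __init__(self):
        self._next_id = 0

    def make_fragment(self, field_markup):
        frag_id = f"frag-{self._next_id}"
        self._next_id += 1
        return frag_id, field_markup


class VoiceBrowserImplementation:
    """Server-side implementation: stores a speech grammar per
    fragment identifier and matches input only against the grammar
    named in the recognition request."""
    def __init__(self):
        self._grammars = {}

    def load_fragment(self, frag_id, fragment, grammar):
        # The patent downloads a grammar per fragment; here a grammar
        # is modeled as a set of recognizable phrases.
        self._grammars[frag_id] = set(grammar)

    def recognize(self, frag_id, utterance):
        # Match input speech against the grammar associated with the
        # identifier received in the recognition request.
        return utterance in self._grammars.get(frag_id, set())


driver = VoiceBrowserDriver()
server = VoiceBrowserImplementation()

frag_id, fragment = driver.make_fragment("<field name='city'/>")
server.load_fragment(frag_id, fragment, ["boston", "chicago"])
print(server.recognize(frag_id, "boston"))  # True
print(server.recognize(frag_id, "dallas"))  # False
```

The key point the sketch captures is that recognition is scoped: the server consults only the grammar keyed by the identifier in the request, not every grammar it holds.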
Abstract:
A method and apparatus are disclosed whereby the context of user activity can be used to tailor an ambient information system. The method and apparatus use both short-term context, such as recent activity, and long-term context, such as historical patterns, to highlight content on channels or widgets that is likely to be of most immediate interest to the user. The contextual information provided by the framework can also be used to make intelligent decisions about how to tailor the user experience after the user has interacted with the item in question. Additionally, context information accumulated on one device, such as a mobile phone, can be broadcast to other devices to influence the ambient information display application on a second device, such as a desktop, by enabling remote access to the local context repository.
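The combination of short-term and long-term context can be sketched as follows. The class names, the five-item recency window, and the scoring weights are all assumptions made for illustration; the patent does not prescribe a particular scoring scheme.

```python
# Illustrative sketch only: names, window size, and weights are assumptions.
from collections import Counter


class ContextRepository:
    """Holds short-term context (recent activity) and long-term
    context (historical usage counts) for one device."""
    def __init__(self):
        self.recent = []          # short-term: most recent topics first
        self.history = Counter()  # long-term: cumulative topic counts

    def record(self, topic):
        self.recent.insert(0, topic)
        self.recent = self.recent[:5]  # keep a small recency window
        self.history[topic] += 1

    def score(self, topic):
        # Weight recent activity above historical frequency.
        recency = 2.0 if topic in self.recent else 0.0
        return recency + self.history[topic]


def highlight(repo, widget_topics):
    """Order widget topics so the most contextually relevant come first."""
    return sorted(widget_topics, key=repo.score, reverse=True)


phone = ContextRepository()
for topic in ["weather", "stocks", "weather", "news"]:
    phone.record(topic)

# A second device tailors its display from the phone's context; here
# remote access to the repository is modeled as sharing the object.
desktop_order = highlight(phone, ["stocks", "sports", "weather"])
print(desktop_order)  # ['weather', 'stocks', 'sports']
```

Sharing the repository object stands in for the remote-access mechanism: a desktop reading the phone's context repository reorders its widgets the same way the phone would.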