Abstract:
A system and method for providing conversational computing via a protocol for automatic dialog management and arbitration between a plurality of conversational applications, and a framework for supporting such protocol, in a multi-modal and/or multi-channel environment. A DMAF (dialog manager and arbitrator facade) interfaces with one or more applications, and a hierarchical DMA architecture enables arbitration across the applications and within the same application between various sub-dialogs.
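As a rough illustration of the hierarchical arbitration idea described above, the following Python sketch models a tree of dialog managers in which an arbitrator picks the application or sub-dialog best suited to an utterance. All names (DialogNode, arbitrate, score) and the keyword-count scoring rule are hypothetical, not taken from the patent.

    class DialogNode:
        """One dialog manager in the hierarchy (an application or sub-dialog)."""
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []

        def score(self, utterance):
            # Placeholder relevance score: number of name keywords in the utterance.
            return sum(word in utterance.lower() for word in self.name.split("_"))

    def arbitrate(node, utterance):
        """Recursively pick the dialog (or sub-dialog) best suited to the input."""
        best = (node.score(utterance), node)
        for child in node.children:
            candidate = arbitrate(child, utterance)
            if candidate[0] > best[0]:
                best = candidate
        return best

    # The DMAF is the single facade the applications see; arbitration happens
    # across applications and, within one application, across its sub-dialogs.
    root = DialogNode("dmaf", [
        DialogNode("banking", [DialogNode("banking_transfer"), DialogNode("banking_balance")]),
        DialogNode("travel", [DialogNode("travel_flight")]),
    ])
    _, winner = arbitrate(root, "transfer money to my banking account")
    print(winner.name)  # banking_transfer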
Abstract:
A system and method for providing automatic and coordinated sharing of conversational resources, e.g., functions and arguments, between network-connected servers and devices and their corresponding applications. In one aspect, a system for providing automatic and coordinated sharing of conversational resources comprises: a network comprising a first (100) and second (106) network device; the first (100) and second (106) network device each comprising a set of conversational resources (102, 107), a dialog manager (103, 108) for managing a conversation and executing calls requesting a conversational service, and a communication stack (111, 115) for communicating messages over a network using conversational protocols, wherein the conversational protocols establish coordinated network communication between the dialog managers of the first and second device to automatically share the set of conversational resources of the first and second network device, when necessary, to perform their respective requested conversational service.
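A minimal Python sketch of the resource-sharing idea follows: a dialog manager serves a requested conversational service locally when it can, and otherwise forwards the call to a peer device. The JSON wire format and all names (DialogManager, make_request, handle) are assumptions for illustration, not the patent's actual protocol.

    import json

    def make_request(service, arguments):
        """Hypothetical wire format for a conversational-protocol service request."""
        return json.dumps({"type": "service_request", "service": service, "args": arguments})

    class DialogManager:
        def __init__(self, name, local_resources):
            self.name = name
            self.local_resources = local_resources  # service name -> engine callback
            self.peer = None  # another DialogManager reachable over the network

        def execute(self, service, arguments):
            # Use a local conversational resource when one is available ...
            if service in self.local_resources:
                return self.local_resources[service](arguments)
            # ... otherwise forward the call to the peer device, as coordinated
            # by the conversational protocols.
            return self.peer.handle(make_request(service, arguments))

        def handle(self, request):
            msg = json.loads(request)
            return self.execute(msg["service"], msg["args"])

    # A thin client with no speech recognizer borrows the server's engine.
    server = DialogManager("server", {"speech_recognition": lambda a: f"recognized: {a}"})
    client = DialogManager("client", {})
    client.peer = server
    print(client.execute("speech_recognition", "audio-frames"))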
Abstract:
A conversational computing system that provides a universal coordinated multi-modal conversational user interface (CUI)(10) across a plurality of conversationally aware applications (11) (i.e., applications that "speak" conversational protocols) and conventional applications (12). The conversationally aware applications (11) communicate with a conversational kernel (14) via conversational application APIs (13). The conversational kernel (14) controls the dialog across applications and devices (local and networked) on the basis of their registered conversational capabilities and requirements and provides a unified conversational user interface and conversational services and behaviors. The conversational computing system may be built on top of a conventional operating system and APIs (15) and conventional device hardware (16). The conversational kernel (14) handles all I/O processing and controls conversational engines (18). The conversational kernel (14) converts voice requests into queries and converts outputs and results into spoken messages using conversational engines (18) and conversational arguments (17). The conversational application API (13) conveys all the information for the conversational kernel (14) to transform queries into application calls and conversely convert output into speech, appropriately sorted before being provided to the user.
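To make the kernel's role concrete, here is a toy Python sketch of applications registering their conversational capabilities with a kernel that converts a voice request into an application call and the result back into a spoken message. The registration-by-command scheme and all names are illustrative assumptions, not the patent's API.

    class ConversationalKernel:
        """Toy kernel: routes recognized utterances to registered applications."""
        def __init__(self):
            self.apps = {}  # command keyword -> application callback

        def register(self, command, callback):
            # Applications declare their conversational capabilities on registration.
            self.apps[command] = callback

        def on_voice_request(self, utterance):
            # Convert the voice request into a query against a registered app,
            # then turn the application's result back into a spoken message.
            for command, callback in self.apps.items():
                if command in utterance:
                    return self.speak(callback(utterance))
            return self.speak("Sorry, no application handles that request.")

        def speak(self, text):
            return f"[TTS] {text}"  # stand-in for a text-to-speech engine

    kernel = ConversationalKernel()
    kernel.register("weather", lambda u: "It is sunny today.")
    print(kernel.on_voice_request("what is the weather like"))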
Abstract:
A system and method for providing fast and efficient conversation navigation via a hierarchical structure (structure skeleton) which fully describes the functions and services supported by a dialog (conversational) system. In one aspect, a conversational system and method is provided to pre-load dialog menus and target addresses to their associated dialog managing procedures in order to handle multiple or complex modes, contexts or applications. For instance, a content server (web site) (106) can download a skeleton or tree structure (109) describing the content (page) (107) or service provided by the server (106) when the client (100) connects to the server (106). The skeleton is hidden (not spoken) to the user, but the user can advance to a page of interest, or to a particular dialog service, by uttering a voice command which is recognized by the conversational system, which then reacts appropriately (as per the user's command) using the information contained within the skeleton. The skeleton (109) provides the necessary information to allow a user to quickly browse through multiple pages, dialog components, or NLU dialog forms to find information of interest without having to follow and listen to every possible page or form leading to a desired service or conversational transaction.
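A short Python sketch of the skeleton idea: the server-provided tree maps spoken labels to target dialog addresses, so a voice command can jump straight to a deep page without traversing intermediate ones. The dictionary layout, labels, and addresses are hypothetical.

    # Hypothetical skeleton: maps spoken labels to target dialog addresses.
    skeleton = {
        "label": "bank", "target": "/",
        "children": [
            {"label": "accounts", "target": "/accounts", "children": [
                {"label": "checking balance", "target": "/accounts/checking", "children": []},
            ]},
            {"label": "loans", "target": "/loans", "children": []},
        ],
    }

    def find_target(node, command):
        """Depth-first search: return the address of the first node whose label
        appears in the user's voice command."""
        if node["label"] in command:
            return node["target"]
        for child in node["children"]:
            hit = find_target(child, command)
            if hit:
                return hit
        return None

    print(find_target(skeleton, "read me my checking balance"))  # /accounts/checking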
Abstract:
A method for conversational computing includes executing code embodying a conversational virtual machine, registering a plurality of input/output resources with a conversational kernel, providing an interface between a plurality of active applications and the conversational kernel processing input/output data, receiving input queries and input events of a multi-modal dialog across a plurality of user interface modalities of the plurality of active applications, generating output messages and output events of the multi-modal dialog in connection with the plurality of active applications, and managing, by the conversational kernel, a context stack associated with the plurality of active applications and the multi-modal dialog to transform the input queries into application calls for the plurality of active applications and convert the output messages into speech, wherein the context stack accumulates a context of each of the plurality of active applications.
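The context-stack idea can be sketched in a few lines of Python: each active application pushes its accumulated context, and an ambiguous follow-up query is resolved against the most recently active context that can fill the requested slot. The ContextStack class and slot names are illustrative assumptions.

    class ContextStack:
        """Accumulates a context per active application; most recent on top."""
        def __init__(self):
            self.stack = []  # list of (app_name, context_dict)

        def push(self, app_name, context):
            self.stack.append((app_name, context))

        def resolve(self, query_slot):
            # Walk from the most recently active application downward until
            # some accumulated context can fill the requested slot.
            for app_name, context in reversed(self.stack):
                if query_slot in context:
                    return app_name, context[query_slot]
            return None, None

    contexts = ContextStack()
    contexts.push("calendar", {"date": "tomorrow"})
    contexts.push("email", {"recipient": "alice"})
    # An ambiguous follow-up ("send it to her on that date") is resolved
    # against the accumulated contexts of the active applications.
    print(contexts.resolve("recipient"))  # ('email', 'alice')
    print(contexts.resolve("date"))       # ('calendar', 'tomorrow')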
Abstract:
A system (100) for intelligent storage and network management, comprising: contextual information representing a user's needs; a contextual system that determines scenarios based on the contextual information, and determines services and devices available to the user according to the contextual information; a prediction module (104) that receives the contextual information, the scenarios, the available services and the available devices, and predicts the user's needs to make resources available to the user according to the predictions; event and time information representing the user's schedule; and a location database (106) that includes information about the devices at the destination location and the capabilities of those devices; wherein the prediction module receives the event and time information and the destination-location device information and capabilities to predict the user's location and the resources needed at that location, such that the resources are transferred to the user at a location when and where the resources are needed.
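As a rough Python sketch of the prediction module (104), the following combines schedule information with a location database to decide which resources must be transferred ahead of the user. The field names, capabilities, and transfer rule are all hypothetical.

    # Hypothetical inputs to the prediction module (104).
    schedule = [{"time": "09:00", "event": "design review", "location": "room_a"}]
    location_db = {  # database (106): device capabilities per location
        "room_a": {"projector": True, "speakerphone": True},
        "room_b": {"projector": False, "speakerphone": True},
    }

    def predict_resources(schedule, location_db, needed):
        """Predict where the user will be and which needed resources must be
        transferred there because the location cannot supply them locally."""
        plans = []
        for entry in schedule:
            capabilities = location_db.get(entry["location"], {})
            missing = [r for r in needed if not capabilities.get(r, False)]
            plans.append({"time": entry["time"], "location": entry["location"],
                          "transfer": missing})
        return plans

    # The design review needs a projector and a whiteboard camera; room_a has a
    # projector, so only the camera must be transferred ahead of time.
    print(predict_resources(schedule, location_db, ["projector", "whiteboard_camera"]))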
Abstract:
A method is provided for performing focus detection, ambiguity resolution and mood classification (815) in accordance with multi-modal input data, under varying operating conditions, in order to provide an effective conversational computing environment (418, 422) for one or more users (812).
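A toy Python sketch of two of these tasks follows: fusing two modality cues into a focus decision, and classifying mood from an utterance. The fusion weights, threshold, and keyword lexicon are purely illustrative; a real system would use trained models over the multi-modal input.

    def detect_focus(gaze_on_screen_ratio, speech_energy, threshold=0.5):
        """Fuse two modality cues into a focus decision (illustrative weights)."""
        score = 0.6 * gaze_on_screen_ratio + 0.4 * speech_energy
        return score >= threshold

    MOOD_LEXICON = {"great": "positive", "thanks": "positive",
                    "wrong": "negative", "frustrated": "negative"}

    def classify_mood(utterance):
        """Trivial keyword vote; stands in for a trained mood classifier."""
        votes = [MOOD_LEXICON[w] for w in utterance.lower().split() if w in MOOD_LEXICON]
        if not votes:
            return "neutral"
        return max(set(votes), key=votes.count)

    print(detect_focus(gaze_on_screen_ratio=0.8, speech_energy=0.6))  # True
    print(classify_mood("this is wrong and I am frustrated"))         # negative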