Abstract:
A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in a computer processing system. A set of run-time specifications, comprising one or more models specific to a domain, is provided to the generic language understanding module and the generic task reasoning module. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
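The following Python sketch is an illustration only, not taken from the disclosure; the names (DomainSpec, GenericLanguageUnderstanding, GenericTaskReasoner) and the keyword-matching logic are assumptions used to show how generic, domain-agnostic modules might be specialized at run time by domain-specific specifications, as the abstract describes.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class DomainSpec:
    """Run-time specification: domain-specific models handed to the generic modules."""
    intent_keywords: dict[str, list[str]]                            # intent name -> trigger phrases
    task_steps: dict[str, list[str]] = field(default_factory=dict)   # intent name -> assistance steps


class GenericLanguageUnderstanding:
    """Domain-agnostic language understanding, specialized only by its DomainSpec."""

    def configure(self, spec: DomainSpec) -> None:
        self.spec = spec

    def determine_intention(self, language_input: str) -> str | None:
        text = language_input.lower()
        for intent, phrases in self.spec.intent_keywords.items():
            if any(phrase in text for phrase in phrases):
                return intent
        return None


class GenericTaskReasoner:
    """Maps a determined intention to the assistance steps defined by the domain."""

    def configure(self, spec: DomainSpec) -> None:
        self.spec = spec

    def assist(self, intention: str | None) -> list[str]:
        return self.spec.task_steps.get(intention, ["No assistance is defined for this request."])


if __name__ == "__main__":
    # The same generic modules, specialized here for a hypothetical travel domain.
    travel = DomainSpec(
        intent_keywords={"book_flight": ["flight", "fly to"]},
        task_steps={"book_flight": ["Ask for destination", "Ask for dates", "Search flights"]},
    )
    nlu, reasoner = GenericLanguageUnderstanding(), GenericTaskReasoner()
    nlu.configure(travel)
    reasoner.configure(travel)

    intention = nlu.determine_intention("I need to fly to Boston next week")
    print(intention, "->", reasoner.assist(intention))
```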
Abstract:
A virtual personal assistant (VPA) application analyzes intents to, among other things, enhance or personalize a user's dialog experience with the VPA application. A set of intents, or multiple sets of intents, are maintained over the course of one or more user-specific dialog sessions with the VPA. Inferences may be derived from the set or sets of intents and incorporated into a current or future dialog session between the VPA and a user of the VPA application. In some embodiments, the inferences are only made available through the systemic understanding of natural language discourse by the VPA.
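As a rough sketch of the idea of maintaining intent sets across sessions and deriving inferences from them, the Python below uses hypothetical structures (IntentRecord, IntentHistory) and a toy frequency-based inference; none of these names or rules come from the disclosure itself.

```python
from __future__ import annotations

from collections import Counter
from dataclasses import dataclass, field


@dataclass
class IntentRecord:
    session_id: str
    intent: str
    slots: dict


@dataclass
class IntentHistory:
    """Set(s) of intents maintained over one or more user-specific dialog sessions."""
    records: list[IntentRecord] = field(default_factory=list)

    def add(self, record: IntentRecord) -> None:
        self.records.append(record)

    def derive_inferences(self) -> list[str]:
        # Toy inference: an intent that recurs across sessions suggests a standing preference.
        sessions_per_intent: Counter[str] = Counter()
        seen: set[tuple[str, str]] = set()
        for record in self.records:
            key = (record.intent, record.session_id)
            if key not in seen:
                seen.add(key)
                sessions_per_intent[record.intent] += 1
        return [f"user frequently requests '{intent}'"
                for intent, count in sessions_per_intent.items() if count >= 2]


if __name__ == "__main__":
    history = IntentHistory()
    history.add(IntentRecord("session-1", "order_coffee", {"size": "large"}))
    history.add(IntentRecord("session-2", "order_coffee", {"size": "large"}))
    history.add(IntentRecord("session-2", "check_weather", {}))

    # Inferences derived from past sessions can be incorporated into the current dialog.
    for inference in history.derive_inferences():
        print("Incorporating into current dialog:", inference)
```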
Abstract:
A platform for developing a virtual personal assistant (“VPA”) application includes an ontology that defines a computerized structure for representing knowledge relating to one or more domains. A domain may refer to a category of information and/or activities in relation to which the VPA application may engage in a conversational natural language dialog with a computing device user. Re-usable VPA components may be linked to or included in the ontology. An ontology populating agent may at least partially automate the process of populating the ontology with domain-specific information. The re-usable VPA components may be linked with the domain-specific information through the ontology. A VPA application created with the platform may include domain-adapted re-usable VPA components that may be called upon by an executable VPA engine to determine a likely intended meaning of conversational natural language input of the user and/or initiate an appropriate system response to the input.
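To make the relationships among the platform pieces concrete, the sketch below uses hypothetical names (Ontology, ontology_populating_agent, reusable_lookup_component) and a trivial lookup in place of real knowledge representation; it only illustrates how a re-usable component could become domain-adapted by being linked to ontology entries, under these assumptions.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Ontology:
    """Computerized structure representing knowledge relating to one or more domains."""
    concepts: dict[str, dict] = field(default_factory=dict)          # concept name -> properties
    components: dict[str, Callable] = field(default_factory=dict)    # concept name -> re-usable handler

    def link_component(self, concept: str, component: Callable) -> None:
        self.components[concept] = component


def ontology_populating_agent(ontology: Ontology, domain_records: list[dict]) -> None:
    """Partially automates populating the ontology with domain-specific information."""
    for record in domain_records:
        ontology.concepts[record["name"]] = record


def reusable_lookup_component(ontology: Ontology, concept: str) -> str:
    """A re-usable VPA component that becomes domain-adapted through the ontology."""
    properties = ontology.concepts.get(concept, {})
    return properties.get("response", f"I have no information about {concept}.")


if __name__ == "__main__":
    onto = Ontology()
    # Hypothetical retail-domain data supplied by the populating agent.
    ontology_populating_agent(
        onto, [{"name": "running shoes", "response": "Running shoes start at $60."}]
    )
    onto.link_component("running shoes", reusable_lookup_component)

    # A VPA engine would map the user's likely intended meaning to a concept, then
    # call the domain-adapted component linked through the ontology to respond.
    concept = "running shoes"
    print(onto.components[concept](onto, concept))
```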
Abstract:
A dialog assistant embodied in a computing system can present a clarification question based on a machine-readable version of human-generated conversational natural language input. Some versions of the dialog assistant identify a clarification target in the machine-readable version, determine a clarification type relating to the clarification target, present the clarification question in a conversational natural language manner, and process a human-generated conversational natural language response to the clarification question.
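The Python below is a minimal sketch of the clarification flow named in the abstract, assuming a placeholder lexicon of ambiguous terms, a single question template, and naive string substitution; a real dialog assistant would use far richer language models for each of these steps.

```python
from __future__ import annotations

# Assumed placeholder lexicon and question templates.
AMBIGUOUS_TERMS = {"it", "that", "there"}
CLARIFICATION_TEMPLATES = {
    "reference": "When you say '{target}', what are you referring to?",
}


def identify_clarification_target(tokens: list[str]) -> str | None:
    for token in tokens:
        if token.lower() in AMBIGUOUS_TERMS:
            return token
    return None


def determine_clarification_type(target: str) -> str:
    # Toy rule: any ambiguous reference gets a "reference" clarification question.
    return "reference"


def clarify(utterance: str, answer_fn) -> str:
    tokens = utterance.split()
    target = identify_clarification_target(tokens)
    if target is None:
        return utterance                                   # nothing needs clarification
    clarification_type = determine_clarification_type(target)
    question = CLARIFICATION_TEMPLATES[clarification_type].format(target=target)
    response = answer_fn(question)                         # conversational NL answer from the user
    # Naive resolution by string substitution; adequate for this toy example only.
    return utterance.replace(target, response, 1)


if __name__ == "__main__":
    def simulated_user(question: str) -> str:
        print("VPA:", question)
        return "the Q3 report"

    print("Resolved input:", clarify("Please send it to the team", simulated_user))
```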
Abstract:
A method and apparatus for training and guiding users are disclosed, comprising: generating a scene understanding based on video and audio input of a scene in which a user is performing a task; correlating the scene understanding with a knowledge base to produce a task understanding, comprising one or more goals, of the user's current activity; reasoning, based on the task understanding and the user's current state, a next step for advancing the user toward completing one of the one or more goals of the task understanding; and overlaying the scene with an augmented reality view comprising one or more visual and audio representations of the next step to the user.
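The loop described in the abstract can be sketched as below; the perception step, the knowledge base, and all names (SceneUnderstanding, TaskUnderstanding, overlay_instruction) are hypothetical stand-ins, so this only shows the ordering of the stages, not the actual perception or AR rendering.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class SceneUnderstanding:
    detected_objects: list[str]
    detected_actions: list[str]


@dataclass
class TaskUnderstanding:
    task: str
    goals: list[str]
    completed: list[str]


# Toy knowledge base: a set of observed objects maps to a task with ordered goals.
KNOWLEDGE_BASE = {
    ("coffee maker", "mug"): ("brew coffee", ["add water", "add grounds", "start machine"]),
}


def understand_scene(video_frames, audio_samples) -> SceneUnderstanding:
    # Stand-in for perception; a real system would run detectors over the video and audio.
    return SceneUnderstanding(["coffee maker", "mug"], ["add water"])


def correlate(scene: SceneUnderstanding, knowledge_base: dict) -> TaskUnderstanding:
    for objects, (task, goals) in knowledge_base.items():
        if set(objects) <= set(scene.detected_objects):
            completed = [goal for goal in goals if goal in scene.detected_actions]
            return TaskUnderstanding(task, goals, completed)
    return TaskUnderstanding("unknown", [], [])


def reason_next_step(task: TaskUnderstanding) -> str | None:
    remaining = [goal for goal in task.goals if goal not in task.completed]
    return remaining[0] if remaining else None


def overlay_instruction(step: str) -> str:
    # Stand-in for rendering visual and audio guidance into the augmented reality view.
    return f"[AR overlay] Next step: {step}"


if __name__ == "__main__":
    scene = understand_scene(video_frames=None, audio_samples=None)
    task = correlate(scene, KNOWLEDGE_BASE)
    next_step = reason_next_step(task)
    if next_step is not None:
        print(overlay_instruction(next_step))
```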