Abstract:
A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in a computer processing system. A set of run-time specifications, comprising one or more models specific to a domain, is provided to the generic language understanding module and the generic task reasoning module. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks in accordance with the intention of the user.
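A minimal sketch of this architecture follows: generic understanding and reasoning modules whose behavior is specialized at run time by a domain-specific model. The class names, the regex-based intent matching, and the travel-domain specification are illustrative assumptions, not the disclosed implementation.

```python
import re

class GenericLanguageUnderstanding:
    """Generic module; intent patterns arrive as a run-time specification."""
    def __init__(self, domain_model):
        # domain_model["intents"]: intent name -> regex pattern (assumed format)
        self.patterns = {intent: re.compile(p, re.I)
                         for intent, p in domain_model["intents"].items()}

    def interpret(self, utterance):
        for intent, pattern in self.patterns.items():
            if pattern.search(utterance):
                return intent
        return None

class GenericTaskReasoning:
    """Generic module; task plans arrive as a run-time specification."""
    def __init__(self, domain_model):
        # domain_model["tasks"]: intent name -> ordered task steps
        self.tasks = domain_model["tasks"]

    def plan(self, intent):
        return self.tasks.get(intent, [])

# Hypothetical domain-specific run-time specification (travel domain).
spec = {
    "intents": {"book_flight": r"\b(fly|flight|book)\b"},
    "tasks": {"book_flight": ["ask_destination", "ask_dates", "search_flights"]},
}

nlu = GenericLanguageUnderstanding(spec)
reasoner = GenericTaskReasoning(spec)
intent = nlu.interpret("I want to book a flight to Oslo")
steps = reasoner.plan(intent)
```

Swapping in a different domain model retargets both generic modules without changing their code, which is the point of separating the executables from the run-time specifications.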
Abstract:
A computing system for virtual personal assistance includes technologies to, among other things, correlate an external representation of an object with a real-world view of the object and display virtual elements on the external representation, on the real-world view, or on both, in order to provide virtual personal assistance in a multi-step activity, or another activity, that involves observing or handling an object and a reference document.
Abstract:
Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information. The virtual personal assistant can further be configured to determine an action using the current intent and the current input state.
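The described flow can be sketched as three stages: semantic information plus a context-specific framework yields a current intent; semantic information plus a behavioral model (holding interpretations of earlier input) yields a current input state; intent and state together select an action. The data shapes, the keyword-to-intent framework, and the sentiment-averaging rule below are assumptions for illustration.

```python
def determine_intent(semantic, framework):
    # framework: maps recognized keywords to intents for this context
    for word in semantic["words"]:
        if word in framework:
            return framework[word]
    return "unknown"

def determine_input_state(semantic, behavioral_model):
    # behavioral_model accumulates interpretations of previously-provided
    # semantic information, here a running sentiment history (assumed).
    history = behavioral_model.setdefault("sentiments", [])
    history.append(semantic.get("sentiment", 0.0))
    return "frustrated" if sum(history) / len(history) < 0 else "neutral"

def determine_action(intent, state):
    # Action depends on both the current intent and the current input state.
    if state == "frustrated":
        return "offer_human_help"
    return f"handle_{intent}"

semantic = {"words": ["balance"], "sentiment": 0.5}
framework = {"balance": "check_balance"}   # hypothetical banking context
behavior = {}
intent = determine_intent(semantic, framework)
state = determine_input_state(semantic, behavior)
action = determine_action(intent, state)
```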
Abstract:
An identification system includes a radar sensor configured to generate a time-domain or frequency-domain signal representative of electromagnetic waves reflected from one or more objects within a three-dimensional space over a period of time and a computation engine executing on one or more processors. The computation engine is configured to process the time-domain or frequency-domain signal to generate range and velocity data indicating motion by a living subject within the three-dimensional space. The computation engine is further configured to identify, based at least on the range and velocity data indicating the motion by the living subject, the living subject and output an indication of an identity of the living subject.
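Range and velocity data of the kind described are commonly derived from a 2-D FFT over fast-time (range) and slow-time (Doppler/velocity) radar samples. The sketch below demonstrates that generic range-Doppler step on a synthetic single-reflector signal; the data cube dimensions and bin indices are invented for demonstration and say nothing about the disclosed system's specific processing.

```python
import numpy as np

def range_doppler_map(chirp_matrix):
    # rows = chirps (slow time), cols = samples per chirp (fast time);
    # 2-D FFT resolves range along columns and velocity along rows.
    rd = np.fft.fft2(chirp_matrix)
    return np.abs(np.fft.fftshift(rd, axes=0))  # center zero Doppler

# Synthesize one reflector: a fast-time tone sets the range bin, a
# slow-time phase progression sets the Doppler (velocity) bin.
n_chirps, n_samples = 64, 128
range_bin, doppler_bin = 20, 5
fast = np.arange(n_samples)
slow = np.arange(n_chirps)[:, None]
signal = np.exp(2j * np.pi * (range_bin * fast / n_samples
                              + doppler_bin * slow / n_chirps))

rd_map = range_doppler_map(signal)
d_idx, r_idx = np.unravel_index(np.argmax(rd_map), rd_map.shape)
detected_range = r_idx
detected_doppler = d_idx - n_chirps // 2  # undo the shift to signed Doppler
```

A downstream classifier could then identify the living subject from the motion signature in such maps, e.g. gait periodicity across successive frames.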
Abstract:
A conversational assistant for a conversational engagement platform can contain various modules, including a user-model augmentation module, a dialogue management module, and a user-state analysis input/output module. The dialogue management module receives metrics tied to a user from the other modules, using the user-state analysis input/output module to understand the current topic and the user's emotions regarding that topic, and then adapts its dialogue to the user based on dialogue rules that factor in these metrics. The dialogue rules also factor in both i) the duration of a conversational engagement with the user and ii) an attempt to maintain a positive experience for the user during the conversational engagement. A flexible ontology relationship representation about the user is built and stores learned metrics about the user over time with each conversational engagement; in combination with the dialogue rules, it drives the conversations with the user.
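The dialogue-rule idea above can be illustrated as follows: the manager combines the current topic, the user's emotion toward it, and the elapsed engagement duration to pick its next move, while a plain dictionary stands in for the flexible ontology that accumulates learned metrics across engagements. Every rule and threshold here is invented for illustration.

```python
def next_move(topic, emotion, minutes_elapsed, user_model):
    # Persist a learned metric about the user for future engagements
    # (stand-in for the flexible ontology relationship representation).
    user_model.setdefault("topic_affinity", {})[topic] = emotion
    if emotion == "negative":
        return "change_topic"      # rule: preserve a positive experience
    if minutes_elapsed > 15:
        return "wind_down"         # rule: factor in engagement duration
    return "deepen_topic"

user_model = {}
move = next_move("gardening", "positive", 3, user_model)
```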
Abstract:
Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language. Input in the computer-based language can be processed, and the multi-lingual device can take an action based on the result of the processing.
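The pipeline reads naturally as three stages with confidence values threaded through: ASR yields first-language text plus a confidence, machine translation yields second-language text plus a confidence, and natural-language processing maps the translation into a computer-based representation. The stub engines below and the multiplicative confidence-combination rule are assumptions, not the disclosed implementation.

```python
def asr(audio):
    # Stub: pretend recognition of Spanish speech with a confidence score.
    return "¿cuál es el clima?", 0.9

def translate(text, src="es", dst="en"):
    # Stub bilingual table standing in for a machine-translation engine,
    # which also reports a confidence for the translation.
    table = {"¿cuál es el clima?": "what is the weather?"}
    return table[text], 0.8

def nlu(text):
    # Translate second-language text into a computer-based language
    # (here, a simple intent frame).
    if "weather" in text:
        return {"intent": "get_weather"}
    return {"intent": "unknown"}

text_es, asr_conf = asr(b"...")
text_en, mt_conf = translate(text_es)
frame = nlu(text_en)
overall_conf = asr_conf * mt_conf       # assumed combination rule
action = frame["intent"] if overall_conf > 0.5 else "ask_to_repeat"
```

Carrying per-stage confidences forward lets the device fall back (e.g. asking the user to repeat) when either recognition or translation is unreliable.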
Abstract:
Disclosed techniques can generate content object summaries. The content of a content object can be parsed into a set of word groups. For each word group, at least one topic to which the word group pertains can be identified, and at least one weight corresponding to the topic(s) can be determined via a user model. A score can then be determined for each word group based on the weight(s). A subset of the word groups can be selected based on their scores, and a summary of the content object can be generated that includes the subset but excludes the word groups that are not in the subset. At least part of the summary of the content object can be output.
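The flow above can be sketched end to end: split content into word groups (sentences, in this sketch), score each group by the user-model weights of the topics it touches, keep the top-scoring subset, and emit only that subset. Keyword-based topic identification and the example weights are simplifying assumptions.

```python
def summarize(content, topic_keywords, user_weights, keep=2):
    # Parse content into word groups (here: sentences).
    groups = [s.strip() for s in content.split(".") if s.strip()]
    scored = []
    for group in groups:
        # Identify topics the word group pertains to, then score the
        # group by the sum of the user-model weights for those topics.
        topics = [t for t, kws in topic_keywords.items()
                  if any(k in group.lower() for k in kws)]
        score = sum(user_weights.get(t, 0.0) for t in topics)
        scored.append((score, group))
    # Select the top-scoring subset, preserving original order.
    top = sorted(scored, key=lambda p: p[0], reverse=True)[:keep]
    chosen = {g for _, g in top}
    return ". ".join(g for g in groups if g in chosen) + "."

content = ("The team shipped the release. Lunch was pasta. "
           "Latency dropped by forty percent")
summary = summarize(content,
                    {"engineering": ["release", "latency"], "food": ["pasta"]},
                    {"engineering": 1.0, "food": 0.1})
```

With a user model that weights "engineering" heavily, the low-weight lunch sentence is excluded from the summary while both engineering sentences survive.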