Abstract:
A computing system is configured to obtain a video that includes text elements and visual elements. The computing system is further configured to generate a plurality of text tokens representative of audio spoken in the video and a plurality of frame tokens representative of one or more frames of the video. The computing system is further configured to generate a set of features that includes a text feature, a frame feature, and a multi-modal feature, wherein the multi-modal feature is representative of multi-modal elements of the video, and wherein generating the set of features is based on the plurality of text tokens and the plurality of frame tokens. The computing system is further configured to associate the set of features with one or more labels to generate a multi-label classification of the video. The computing system is further configured to output an indication of the multi-label classification of the video.
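The pipeline this abstract describes can be illustrated with a minimal sketch. The embedding tables, feature dimensions, elementwise-product fusion, and label names below are assumptions for illustration; the abstract does not specify the model architecture or the fusion method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding tables and classifier weights; a real system
# would learn these. All dimensions are illustrative assumptions.
VOCAB, PATCHES, DIM, NUM_LABELS = 1000, 196, 64, 5
text_embed = rng.normal(size=(VOCAB, DIM))
frame_embed = rng.normal(size=(PATCHES, DIM))
W = rng.normal(size=(3 * DIM, NUM_LABELS))
LABELS = ["sports", "news", "music", "cooking", "travel"]  # hypothetical

def classify_video(text_tokens, frame_tokens, threshold=0.5):
    """Fuse text and frame tokens and emit a multi-label classification."""
    # Text feature: pooled embeddings of tokens from the spoken audio.
    text_feat = text_embed[text_tokens].mean(axis=0)
    # Frame feature: pooled embeddings of tokens from sampled frames.
    frame_feat = frame_embed[frame_tokens].mean(axis=0)
    # Multi-modal feature: one simple choice is an elementwise product of
    # the two unimodal features; the abstract leaves the fusion unspecified.
    mm_feat = text_feat * frame_feat
    # Score every label independently from the three concatenated features.
    features = np.concatenate([text_feat, frame_feat, mm_feat])
    probs = 1.0 / (1.0 + np.exp(-features @ W / DIM))
    return [(LABELS[i], float(p)) for i, p in enumerate(probs) if p > threshold]

print(classify_video(rng.integers(0, VOCAB, 20), rng.integers(0, PATCHES, 8)))
```

Each label gets an independent sigmoid score, which is what makes the output a multi-label (rather than single-class) classification.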
Abstract:
A computing system comprising a memory configured to store an artificial intelligence (AI) model and an image, and a computation engine executing on one or more processors, may be configured to perform the techniques for error-based explanations of AI behavior. The computation engine may execute the AI model to analyze the image to output a result. The AI model may, when analyzing the image to output the result, process, based on data indicative of the result, the image to assign an error score to each image feature extracted from the image, and obtain, based on the error scores, an error map. The AI model may next update, based on the error map and to obtain a first updated image, the image to visually indicate the error score assigned to each of the image features, and output one or more of the error scores, the error map, and the first updated image.
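A minimal sketch of the error-map flow follows. The patch-level features and the squared-deviation error score are stand-in assumptions; the abstract fixes neither the feature extractor nor the scoring function.

```python
import numpy as np

def explain_with_error_map(image, result, patch=8):
    """Assign an error score to each patch-level image feature, build an
    error map, and return an image updated to visually indicate the scores."""
    h, w = image.shape
    scores = np.zeros((h // patch, w // patch))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            region = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            # Error score: how far this feature deviates from the reported
            # result (a stand-in metric: squared deviation of mean intensity).
            scores[i, j] = (region.mean() - result) ** 2
    # Error map: per-pixel map obtained by upsampling the patch scores.
    error_map = np.kron(scores / (scores.max() + 1e-9), np.ones((patch, patch)))
    # First updated image: darken pixels in proportion to their error score.
    updated = image * (1.0 - 0.5 * error_map)
    return scores, error_map, updated

img = np.random.default_rng(1).random((32, 32))
scores, emap, updated = explain_with_error_map(img, result=0.5)
print(scores.shape, emap.shape, updated.shape)  # (4, 4) (32, 32) (32, 32)
```

Returning all three artifacts (scores, map, updated image) mirrors the abstract's "output one or more of" clause.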
Abstract:
Technologies to detect persuasive multimedia content by using affective and semantic concepts extracted from the audio-visual content as well as the sentiment of associated comments are disclosed. The multimedia content is analyzed and compared with a persuasiveness model.
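A minimal sketch of the comparison step, assuming the persuasiveness model is a weighted linear score over affective/semantic concept activations plus comment sentiment. All concept names, weights, and the threshold are hypothetical.

```python
# Hypothetical persuasiveness model: weights over extracted concepts.
AFFECT_WEIGHTS = {"urgency": 0.6, "fear": 0.4, "pride": 0.3}
SEMANTIC_WEIGHTS = {"call_to_action": 0.7, "authority": 0.5}

def persuasiveness_score(affect, semantic, comment_sentiment):
    """Score audio-visual concepts plus comment sentiment against the model."""
    s = sum(AFFECT_WEIGHTS.get(c, 0.0) * v for c, v in affect.items())
    s += sum(SEMANTIC_WEIGHTS.get(c, 0.0) * v for c, v in semantic.items())
    s += 0.5 * comment_sentiment  # sentiment of associated comments
    return s

def is_persuasive(affect, semantic, comment_sentiment, threshold=1.0):
    return persuasiveness_score(affect, semantic, comment_sentiment) > threshold

print(is_persuasive({"urgency": 0.9}, {"call_to_action": 0.8}, 0.6))  # True
```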
Abstract:
A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
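A minimal sketch of the recognition-to-nutrition flow follows. The detector is stubbed out, and the nutrition table and user-habit prior are illustrative assumptions; the abstract specifies neither.

```python
NUTRITION = {"rice": 130, "chicken": 165, "broccoli": 34}  # kcal per 100 g

def detect_foods(image):
    """Stand-in for automatic detection in a cluttered, real-life photo:
    returns (food label, estimated grams) pairs with no user input."""
    return [("rice", 180.0), ("chicken", 120.0)]

def personalize(detections, user_habits):
    """Re-rank labels using user-specific habits (here, a frequency prior)."""
    return sorted(detections, key=lambda d: -user_habits.get(d[0], 0))

def estimate_calories(image, user_habits):
    """Combine portion-size estimates with nutrition data per detected food."""
    total = 0.0
    for food, grams in personalize(detect_foods(image), user_habits):
        total += NUTRITION[food] * grams / 100.0
    return total

print(estimate_calories(image=None, user_habits={"rice": 12, "chicken": 3}))
# -> 432.0
```

The personalization hook illustrates the abstract's claim of biasing classification toward user-specific habits rather than a global prior.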
Abstract:
Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information. The virtual personal assistant can further be configured to determine an action using the current intent and the current input state.
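A minimal sketch of the decision flow follows, assuming dictionary-based semantic extraction and lookup tables for the context-specific framework and the behavioral model; all names here are hypothetical.

```python
def determine_semantics(speech, gaze):
    """Fuse two sensory input types (speech text and gaze target)."""
    return {"utterance": speech.lower(), "focus": gaze}

def determine_intent(semantics, framework):
    """Current intent from semantics plus the context-specific framework."""
    for keyword, intent in framework.items():
        if keyword in semantics["utterance"]:
            return intent
    return "unknown"

def determine_input_state(semantics, behavioral_model):
    """Current input state from a behavioral model holding interpretations
    of previously provided semantic information."""
    return behavioral_model.get(semantics["focus"], "neutral")

def determine_action(intent, state):
    return f"{intent}/{state}"

framework = {"play": "start_media", "stop": "halt_media"}  # context-specific
behavior = {"screen": "engaged", "away": "distracted"}
sem = determine_semantics("Play some jazz", gaze="screen")
print(determine_action(determine_intent(sem, framework),
                       determine_input_state(sem, behavior)))
# -> start_media/engaged
```

The key point the sketch preserves is that the action depends jointly on intent (what the user wants) and input state (how the user is behaving), not on either alone.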
Abstract:
Embodiments of the present invention are directed to methods and apparatus for generating a common operating picture of an event based on event-specific information extracted from data collected from a plurality of electronic information sources. In some embodiments, a method for generating a common operating picture of an event includes collecting data, comprising image data and textual data, from a plurality of electronic information sources; extracting, by applying statistical analysis and semantic analysis, information related to an event from the data, said extracted information comprising image descriptors, visual features, and categorization tags; aligning the extracted information to generate aligned information; recognizing event-specific information for the event based on the aligned information; and generating a common operating picture of the event based on the event-specific information.
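A minimal sketch of the alignment step follows, assuming extracted records carry categorization tags and that alignment groups records sharing a tag. The record fields and the event summary format are assumptions.

```python
from collections import defaultdict

def extract(record):
    """Stand-in for statistical/semantic analysis of one source record."""
    return {"source": record["source"],
            "tags": set(record["text"].lower().split())}

def align(records):
    """Align extracted information by grouping records on shared tags."""
    by_tag = defaultdict(list)
    for rec in map(extract, records):
        for tag in rec["tags"]:
            by_tag[tag].append(rec["source"])
    return by_tag

def common_operating_picture(records, event_tag):
    """Recognize event-specific information and summarize it per event."""
    sources = align(records).get(event_tag, [])
    return {"event": event_tag, "corroborating_sources": sorted(set(sources))}

records = [{"source": "camera-3", "text": "flood on Main"},
           {"source": "tweet-81", "text": "Main street flood rising"}]
print(common_operating_picture(records, "flood"))
# -> {'event': 'flood', 'corroborating_sources': ['camera-3', 'tweet-81']}
```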
Abstract:
A computer-implemented method comprising: collecting data from a plurality of information sources; identifying a geographic location associated with the data and forming a corresponding event according to the geographic location; correlating the data and the event with one or more topics based at least partly on the identified geographic location; storing the correlated data and event; and inferring the associated geographic location when the data does not comprise explicit location information, including matching the data against a database of geo-referenced data.
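A minimal sketch of the inference step follows, assuming the geo-referenced database maps place-name strings to coordinates; the place names and coordinates here are illustrative.

```python
GEO_DB = {"eiffel tower": (48.858, 2.294), "times square": (40.758, -73.985)}

def infer_location(item):
    """Use explicit coordinates when present; otherwise infer the location
    by matching the text against the database of geo-referenced data."""
    if item.get("lat") is not None and item.get("lon") is not None:
        return (item["lat"], item["lon"])
    text = item.get("text", "").lower()
    for place, coords in GEO_DB.items():
        if place in text:
            return coords
    return None  # no explicit or inferable location

def correlate(items):
    """Form events keyed by (possibly inferred) location and attach topics."""
    events = {}
    for item in items:
        loc = infer_location(item)
        if loc is not None:
            events.setdefault(loc, []).append(item.get("topic", "general"))
    return events

print(correlate([{"text": "Crowds near the Eiffel Tower", "topic": "tourism"},
                 {"lat": 40.758, "lon": -73.985, "topic": "traffic"}]))
```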
Abstract:
In general, the disclosure describes techniques for characterizing a dynamical system and a neural ordinary differential equation (NODE)-based controller for the dynamical system. An example analysis system is configured to: obtain a set of parameters of a NODE model used to implement the NODE-based controller, the NODE model trained to control the dynamical system; determine, based on the set of parameters, a system property of a combined system comprising the dynamical system and the NODE-based controller, the system property comprising one or more of an accuracy, safety, reliability, reachability, or controllability of the combined system; and output the system property to modify one or more of the dynamical system or the NODE-based controller to meet a required specification for the combined system.
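A minimal sketch of the analysis step follows, assuming a linear plant dx/dt = Ax + Bu under a single-layer NODE controller u = tanh(Kx), with a crude simulation-based stability check standing in for a formal system-property analysis. The plant matrices and controller parameters are illustrative assumptions.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # dynamical system
B = np.array([[0.0], [1.0]])
K = np.array([[-1.5, -0.8]])               # parameters of the NODE model

def combined_rhs(x):
    """Right-hand side of the combined plant + NODE-controller system."""
    u = np.tanh(K @ x)                      # NODE-based control law
    return A @ x + B @ u

def check_stability(x0, dt=0.01, steps=5000, tol=1e-3):
    """System property: does the closed loop drive the state to the origin?
    (A simulation-based stand-in for formal reachability/safety analysis.)"""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * combined_rhs(x)        # forward-Euler integration
    return bool(np.linalg.norm(x) < tol)

print(check_stability([1.0, -0.5]))  # True if the combined system settles
```

If the check fails, the output would flag the need to modify the plant or retrain the NODE-based controller, matching the abstract's use of the property against a required specification.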
Abstract:
A method, apparatus, and system for zero-shot object detection includes, in a semantic embedding space having embedded object class labels: training the space by embedding extracted features of bounding boxes and object class labels of labeled bounding boxes of known object classes into the space; determining regions in an image having unknown object classes on which to perform object detection as proposed bounding boxes; extracting features of the proposed bounding boxes; projecting the extracted features of the proposed bounding boxes into the space; computing similarity measures between the projected features of the proposed bounding boxes and the embedded, extracted features of the bounding boxes of the known object classes in the space; and predicting an object class label for the proposed bounding boxes by determining the nearest embedded object class label to the projected features of the proposed bounding boxes in the space based on the similarity measures.
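A minimal sketch of the prediction step follows. The label embeddings, the learned projection matrix, and cosine similarity are assumptions; the abstract fixes neither the similarity measure nor the embedding source.

```python
import numpy as np

rng = np.random.default_rng(2)

DIM_VIS, DIM_SEM = 128, 32
label_embeds = {"zebra": rng.normal(size=DIM_SEM),   # class labels embedded
                "tiger": rng.normal(size=DIM_SEM)}   # in the semantic space
P = rng.normal(size=(DIM_VIS, DIM_SEM))              # trained projection

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_label(box_feature):
    """Project a proposed bounding box's extracted feature into the semantic
    space and return the nearest embedded object class label."""
    projected = box_feature @ P
    sims = {lbl: cosine(projected, emb) for lbl, emb in label_embeds.items()}
    return max(sims, key=sims.get)

# Feature extracted from one proposed bounding box in a test image.
print(predict_label(rng.normal(size=DIM_VIS)))
```

The zero-shot property comes from the nearest-neighbor lookup in the shared space: a box can be assigned a label whose class contributed no training boxes, as long as the label itself is embedded.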