GENERATIVE MEMORY FOR LIFELONG MACHINE LEARNING

    Publication No.: US20200302339A1

    Publication Date: 2020-09-24

    Application No.: US16825953

    Application Date: 2020-03-20

    Abstract: Techniques are disclosed for training machine learning systems. An input device receives training data comprising pairs of training inputs and training labels. A generative memory assigns training inputs to each archetype task of a plurality of archetype tasks, each archetype task representative of a cluster of related tasks within a task space, and assigns a skill to each archetype task. The generative memory generates, from each archetype task, auxiliary data comprising pairs of auxiliary inputs and auxiliary labels. A machine learning system trains a machine learning model to apply the skill assigned to an archetype task to the training and auxiliary inputs assigned to that archetype task, obtaining output labels that correspond to the associated training and auxiliary labels. This enables scalable learning: the model can obtain labels for new tasks on which it has not previously been trained.
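As a rough illustration only (the archetype centroids, the nearest-centroid assignment rule, and the perturbation-based auxiliary sampler below are invented for this sketch, not taken from the patent), the assign-and-generate steps described in the abstract might look like:

```python
import random

random.seed(0)

# Hypothetical archetype tasks: each summarizes a cluster of related
# tasks as a centroid in a 2-D task space (values are illustrative).
archetypes = {"A": (0.0, 0.0), "B": (5.0, 5.0)}

def assign_archetype(x):
    """Assign a training input to the nearest archetype task."""
    return min(archetypes,
               key=lambda k: (x[0] - archetypes[k][0]) ** 2
                           + (x[1] - archetypes[k][1]) ** 2)

def generate_auxiliary(archetype, n=3, noise=0.5):
    """Generate auxiliary (input, label) pairs by sampling around the
    archetype centroid; here the label is simply the archetype id."""
    cx, cy = archetypes[archetype]
    return [((cx + random.uniform(-noise, noise),
              cy + random.uniform(-noise, noise)), archetype)
            for _ in range(n)]

# Training pairs (input, label); auxiliary data augments each archetype.
training = [((0.2, -0.1), "A"), ((4.8, 5.3), "B")]
for x, label in training:
    task = assign_archetype(x)
    aux = generate_auxiliary(task)
    # A real system would now train the model on `x` together with `aux`.
```

In the patented system the generative memory and the downstream model are learned components; the fixed centroids here merely stand in for the clustered task space.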

    CAUSAL ANALYSIS WITH TIME SERIES DATA

    Publication No.: US20250110989A1

    Publication Date: 2025-04-03

    Application No.: US18895080

    Application Date: 2024-09-24

    Abstract: In general, various aspects of the techniques are directed to causal analysis using large-scale time series data. A computing system may convert large-scale time series data to first time period records and second time period records according to a multi-scale time resolution. The computing system may implement a hierarchical machine learning model to generate embeddings that capture temporal characteristics of features of the large-scale time series data. The computing system may generate a graph data structure indicating cause-and-effect correlations between features of the large-scale time series data based on temporal dynamics captured in the first and second time period records and/or the embeddings.
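A minimal sketch of the two ideas in the abstract, under invented assumptions: period records are built by simple windowed averaging at two resolutions, and the cause-and-effect graph is approximated by lagged Pearson correlation (a crude stand-in for the patent's hierarchical model and learned temporal dynamics):

```python
def to_period_records(series, period):
    """Average a series over non-overlapping windows of the given
    period, yielding one record per window (multi-scale resolution)."""
    return [sum(series[i:i + period]) / period
            for i in range(0, len(series) - period + 1, period)]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def lagged_edges(features, lag=1, threshold=0.9):
    """Build a graph (edge list) of candidate cause -> effect pairs
    whose lagged correlation exceeds a threshold."""
    edges = []
    for a, xs in features.items():
        for b, ys in features.items():
            if a != b and abs(pearson(xs[:-lag], ys[lag:])) > threshold:
                edges.append((a, b))
    return edges

series = list(range(12))
coarse = to_period_records(series, 4)  # first (coarse) time period records
fine = to_period_records(series, 2)    # second (fine) time period records

features = {
    "load": [1, 2, 3, 4, 5, 6],
    "temp": [0, 1, 2, 3, 4, 5],  # tracks "load" one step later
}
edges = lagged_edges(features)
```

Lagged correlation cannot distinguish true causation from shared trends (both directions appear for these toy series); the patent's embeddings are meant to capture richer temporal structure than this.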

    Runtime-throttleable neural networks

    Publication No.: US11494626B2

    Publication Date: 2022-11-08

    Application No.: US16600154

    Application Date: 2019-10-11

    Abstract: In general, the disclosure describes techniques for creating runtime-throttleable neural networks (TNNs) that can adaptively balance performance and resource use in response to a control signal. For example, TNNs may be trained to be throttled via a gating scheme in which a set of disjoint components of the neural network can be individually “turned off” at runtime without significantly affecting the accuracy of NN inferences. A separate gating neural network may be trained to determine which trained components of the NN to turn off to obtain operable performance for a given level of use of computational, power, or other resources by the neural network. This level can then be specified by the control signal at runtime, adapting the NN to operate at the specified level and thereby balancing performance and resource use across different operating conditions.
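The gating idea can be sketched with a toy "network" of disjoint components, each of which a binary gate can switch off at runtime (the components, weights, and averaging rule here are invented for illustration; the patent's gates are chosen by a trained gating network rather than hand-set):

```python
def component(w):
    """A stand-in for one disjoint component of the network."""
    return lambda x: w * x

# Three gateable components of a toy network.
components = [component(1.0), component(2.0), component(3.0)]

def throttled_forward(x, gates):
    """Run only the components whose gate is on and average their
    outputs; `gates` plays the role of the runtime control signal."""
    active = [c(x) for c, g in zip(components, gates) if g]
    return sum(active) / len(active)

full = throttled_forward(2.0, [1, 1, 1])       # all components active
throttled = throttled_forward(2.0, [1, 0, 0])  # low-resource mode
```

Turning gates off skips those components entirely, which is what saves computation; the training scheme in the patent is what keeps accuracy acceptable under such partial execution.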

    System and method for content comprehension and response

    Publication No.: US11934793B2

    Publication Date: 2024-03-19

    Application No.: US17516409

    Application Date: 2021-11-01

    CPC classification number: G06F40/35 G06F16/3335 G06N5/04

    Abstract: A method, apparatus and system for training an embedding space for content comprehension and response includes, for each layer of a hierarchical taxonomy having at least two layers of respective words of varying complexity: determining a set of words associated with a layer of the hierarchical taxonomy; determining a question answer pair based on a question generated using at least one word of the set of words and at least one content domain; determining a vector representation for the generated question and for content related to the at least one content domain of the question answer pair; and embedding the question vector representation and the content vector representations into a common embedding space in which related vector representations lie closer together than unrelated ones. Requests for content can then be fulfilled using the trained, common embedding space.
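The "related vectors lie closer" property can be illustrated with a deliberately crude embedding: bag-of-words counts over a fixed vocabulary compared by cosine similarity (the vocabulary, texts, and embedding function are invented here; the patent trains the embedding space rather than hand-crafting it):

```python
from collections import Counter

def embed(text, vocab):
    """Toy embedding: bag-of-words counts over a fixed vocabulary,
    standing in for the trained common embedding space."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity; higher means closer in the embedding space."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

vocab = ["gravity", "mass", "force", "poetry", "rhyme"]
question = embed("what force does mass exert gravity", vocab)
physics_doc = embed("gravity is a force between mass and mass", vocab)
poetry_doc = embed("rhyme gives poetry its music", vocab)
# The question lands nearer the physics content than the poetry content,
# so a content request would be fulfilled from the physics document.
```

A request is answered by embedding the question and returning the nearest content vector, which is the retrieval step the abstract's final sentence describes.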

    Generative memory for lifelong machine learning

    Publication No.: US11494597B2

    Publication Date: 2022-11-08

    Application No.: US16825953

    Application Date: 2020-03-20

    Abstract: Techniques are disclosed for training machine learning systems. An input device receives training data comprising pairs of training inputs and training labels. A generative memory assigns training inputs to each archetype task of a plurality of archetype tasks, each archetype task representative of a cluster of related tasks within a task space, and assigns a skill to each archetype task. The generative memory generates, from each archetype task, auxiliary data comprising pairs of auxiliary inputs and auxiliary labels. A machine learning system trains a machine learning model to apply the skill assigned to an archetype task to the training and auxiliary inputs assigned to that archetype task, obtaining output labels that correspond to the associated training and auxiliary labels. This enables scalable learning: the model can obtain labels for new tasks on which it has not previously been trained.
