NEURAL NETWORK TRAINING SYSTEM
    Invention Application

    Publication No.: US20220114449A1

    Publication Date: 2022-04-14

    Application No.: US17499972

    Application Date: 2021-10-13

    Abstract: A computing device trains a neural network machine learning model. A forward propagation of a first neural network is executed. A backward propagation of the first neural network is executed from a last layer to a last convolution layer to compute a gradient vector. A discriminative localization map is computed for each observation vector with the computed gradient vector using a discriminative localization map function. An activation threshold value is selected for each observation vector from at least two different values based on a prediction error of the first neural network. A biased feature map is computed for each observation vector based on the activation threshold value selected for each observation vector. A masked observation vector is computed for each observation vector using the biased feature map. A forward and a backward propagation of a second neural network is executed for a predefined number of iterations using the masked observation vectors.
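The pipeline this abstract describes (a gradient-weighted localization map, a per-observation activation threshold chosen from two candidate values, and a masked input) can be sketched with NumPy. The array shapes, threshold values, and helper names below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def discriminative_localization_map(activations, gradients):
    """Grad-CAM-style map: channel weights from globally pooled gradients,
    then a ReLU-rectified weighted sum over channels."""
    # activations, gradients: (channels, H, W) for one observation
    weights = gradients.mean(axis=(1, 2))             # pooled gradient per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum -> (H, W)
    return np.maximum(cam, 0.0)                       # keep positively contributing regions

def masked_observation(obs, cam, prediction_error, low_thr=0.2, high_thr=0.6):
    """Select a threshold per observation from two candidate values based on
    prediction error, then zero out low-activation regions of the input."""
    thr = high_thr if prediction_error > 0.5 else low_thr
    cam_norm = cam / (cam.max() + 1e-12)              # scale map to [0, 1]
    biased_map = (cam_norm >= thr).astype(obs.dtype)  # biased feature map
    return obs * biased_map                           # masked observation

rng = np.random.default_rng(0)
acts = rng.random((8, 4, 4))
grads = rng.standard_normal((8, 4, 4))
cam = discriminative_localization_map(acts, grads)
masked = masked_observation(rng.random((4, 4)), cam, prediction_error=0.7)
```

The masked observations would then feed the second network's forward and backward passes for the predefined number of iterations.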

    DISTRIBUTABLE CLUSTERING MODEL TRAINING SYSTEM

    Publication No.: US20200372387A1

    Publication Date: 2020-11-26

    Application No.: US16880551

    Application Date: 2020-05-21

    Abstract: A computing system trains a clustering model. A responsibility parameter vector is initialized for each observation vector and includes a probability value of a cluster membership. The observation vectors include a plurality of classified observation vectors and a plurality of unclassified observation vectors. (A) Beta distribution parameter values are computed for each cluster. (B) Parameter values are computed for a normal-Wishart distribution for each cluster. (C) Each responsibility parameter vector is updated using the beta distribution parameter values, the parameter values, and a respective observation vector. (D) A convergence parameter value is computed. (E) (A) to (D) are repeated until the computed convergence parameter value indicates the responsibility parameter vector defined for each observation vector of the plurality of unclassified observation vectors is converged. A cluster membership is determined and output for each observation vector using a respective, updated responsibility parameter vector.
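A heavily simplified sketch of the responsibility update in steps (A) through (C): here the normal-Wishart likelihood terms are replaced by fixed spherical Gaussian components, and the stick-breaking mixture weights use point estimates of the beta parameters, so this illustrates only the shape of the update, not the patented computation:

```python
import numpy as np

def stick_breaking_pi(a, b):
    """Point-estimate mixture weights from per-cluster beta parameters:
    v_k = a_k / (a_k + b_k), pi_k = v_k * prod_{j<k} (1 - v_j)."""
    v = a / (a + b)
    stick_left = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * stick_left

def update_responsibilities(X, means, a, b, var=1.0):
    """One step-(C)-style update: combine stick-breaking weights with a
    spherical Gaussian log-likelihood (a stand-in for the normal-Wishart
    terms) and renormalize each observation's responsibility vector."""
    pi = stick_breaking_pi(a, b)                                 # (K,)
    sq = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)  # (N, K)
    log_r = np.log(pi + 1e-12) - 0.5 * sq / var
    log_r -= log_r.max(axis=1, keepdims=True)                    # numerical stability
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
means = np.array([[0.0, 0.0], [3.0, 3.0]])
resp = update_responsibilities(X, means, a=np.array([1.0, 1.0]), b=np.array([1.0, 1.0]))
```

In the full algorithm, steps (A) through (D) repeat until the convergence parameter indicates the responsibility vectors of the unclassified observations have stabilized.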

    Advanced training of machine-learning models usable in control systems and other systems

    Publication No.: US10832087B1

    Publication Date: 2020-11-10

    Application No.: US16921417

    Application Date: 2020-07-06

    Abstract: Machine-learning models (MLM) can be configured more rapidly and accurately according to some examples. For example, a system can receive a first training dataset that includes (i) independent-variable values corresponding to independent variables and (ii) dependent-variable values corresponding to a dependent variable that is influenced by the independent variables. The independent-variable values can include nonlinear-variable values corresponding to at least one nonlinear independent variable. The system can then determine cluster assignments for the nonlinear-variable values, generate a second training dataset based on the cluster assignments, and train a model based on the second training dataset. The trained machine-learning model may then be used in various applications, such as control-system applications.
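A toy rendering of the described flow: cluster the nonlinear independent variable's values, encode the cluster assignments as new features in a second training dataset, and train a model on it. The 1-D k-means, the one-hot encoding, and the least-squares fit are illustrative stand-ins, not the patented method:

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Tiny 1-D k-means producing cluster assignments for a nonlinear variable."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        assign = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = values[assign == j].mean()
    return assign, centers

# First training dataset: one linear and one nonlinear independent variable.
rng = np.random.default_rng(2)
x_lin = rng.uniform(-1, 1, 200)
x_nonlin = rng.uniform(-3, 3, 200)
y = 2.0 * x_lin + np.sin(x_nonlin) + rng.normal(0, 0.05, 200)

# Second training dataset: replace the raw nonlinear values with one-hot
# cluster indicators, which even a linear model can fit piecewise.
assign, _ = kmeans_1d(x_nonlin, k=8)
X2 = np.column_stack([x_lin, np.eye(8)[assign]])
coef, *_ = np.linalg.lstsq(X2, y, rcond=None)
rmse = float(np.sqrt(np.mean((X2 @ coef - y) ** 2)))
```

The point of the transformation is that the clustered encoding lets a simple model capture the nonlinear variable's influence without hand-crafted basis functions.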

    Distributed classification computing system

    Publication No.: US11227223B1

    Publication Date: 2022-01-18

    Application No.: US17368941

    Application Date: 2021-07-07

    Inventor: Yingjian Wang

    Abstract: A computing system trains a classification model using distributed training data. In response to receipt of a first request, a training data subset is accessed and sent to each higher index worker computing device, the training data subset sent by each lower index worker computing device is received, and a first kernel matrix block and a second kernel matrix block are computed using a kernel function and the accessed or received training data subsets. (A) In response to receipt of a second request from the controller device, a first vector is computed using the first and second kernel matrix blocks, a latent function vector and an objective function value are computed, and the objective function value is sent to the controller device. (A) is repeated until the controller device determines training of a classification model is complete. Model parameters for the trained classification model are output.
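Simulated in a single process, the block structure described here (each worker computes a diagonal kernel block from its own training data subset and off-diagonal blocks from subsets received from lower-index workers) can be sketched with an RBF kernel. The kernel choice, subset sizes, and worker count are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF kernel matrix between row-vector sets A (m, d) and B (n, d)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

# Simulate three workers, each holding one training data subset.
rng = np.random.default_rng(3)
subsets = [rng.standard_normal((4, 2)) for _ in range(3)]

# Worker i computes its diagonal block K(X_i, X_i) and, for each subset
# received from a lower-index worker j, the off-diagonal block K(X_i, X_j).
blocks = {}
for i, Xi in enumerate(subsets):
    blocks[(i, i)] = rbf_kernel(Xi, Xi)
    for j in range(i):  # subsets sent by lower-index workers
        blocks[(i, j)] = rbf_kernel(Xi, subsets[j])

# Sanity check: the blocks tile the full Gram matrix on the stacked data,
# using symmetry K(X_j, X_i) = K(X_i, X_j)^T for the upper triangle.
X_all = np.vstack(subsets)
K_full = rbf_kernel(X_all, X_all)
K_tiled = np.block([[blocks[(i, j)] if i >= j else blocks[(j, i)].T
                     for j in range(3)] for i in range(3)])
```

Because the Gram matrix is symmetric, each off-diagonal block only needs to be computed once, which is what makes the lower-index/higher-index exchange economical.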

    DISTRIBUTABLE CLUSTERING MODEL TRAINING SYSTEM

    Publication No.: US20210142192A1

    Publication Date: 2021-05-13

    Application No.: US16950041

    Application Date: 2020-11-17

    Abstract: A computing system trains a clustering model. (A) Beta distribution parameter values are computed for each cluster using a mass parameter value and a responsibility parameter vector of each observation vector. (B) Parameter values are computed for a normal-Wishart distribution for each observation vector included in a batch of a plurality of observation vectors. (C) Each responsibility parameter vector defined for each observation vector of the batch is updated using the beta distribution parameter values, the parameter values for the normal-Wishart distribution, and a respective observation vector of the selected batch of the plurality of observation vectors. (D) A convergence parameter value is computed. (E) (A) to (D) are repeated until the convergence parameter value indicates the responsibility parameter vector defined for each observation vector is converged. A cluster membership is determined for each observation vector using the responsibility parameter vector. The determined cluster membership is output for each observation vector.

    Distributed Gaussian process classification computing system

    Publication No.: US12175374B1

    Publication Date: 2024-12-24

    Application No.: US18635410

    Application Date: 2024-04-15

    Abstract: A computing system trains a classification model using distributed training data. A first worker index and a second worker index are received from a controller device and together uniquely identify a segment of a lower triangular matrix. The first and second worker indices have values from one to a predefined block size value. In response to receipt of a first computation request from the controller device, a first kernel matrix block is computed at each computing device based on the first worker index and the second worker index. In response to receipt of a second computation request from the controller device, an objective function value is computed for each observation vector included in an accessed training data subset. The computed objective function value is sent to the controller device. Model parameters for a trained classification model are output.
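The abstract's indexing scheme (two one-based worker indices with values from one to a predefined block size, together uniquely identifying a segment of a lower triangular matrix) admits a simple row-major mapping over the lower triangle. The mapping below is one consistent possibility, not necessarily the patented scheme:

```python
B = 4  # predefined block size value (illustrative)

def segment_id(i, j):
    """Map one-based indices (i, j) with j <= i to a unique zero-based
    segment number, enumerating the lower triangle in row-major order."""
    assert 1 <= j <= i <= B
    return i * (i - 1) // 2 + (j - 1)

# All valid (first worker index, second worker index) pairs and their segments.
pairs = [(i, j) for i in range(1, B + 1) for j in range(1, i + 1)]
ids = [segment_id(i, j) for (i, j) in pairs]
```

Since a lower triangular matrix partitioned into B block-rows has B(B + 1)/2 blocks, each index pair lands on exactly one segment and every segment is covered.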

    Distributable clustering model training system

    Publication No.: US10586165B1

    Publication Date: 2020-03-10

    Application No.: US16562607

    Application Date: 2019-09-06

    Inventor: Yingjian Wang

    Abstract: A computing system trains a clustering model. A responsibility parameter vector is initialized for each observation vector that includes a probability value of a cluster membership in each cluster. (A) Beta distribution parameter values are computed for each cluster. (B) Parameter values are computed for a normal-Wishart distribution for each cluster. (C) Each responsibility parameter vector defined for each observation vector is updated using the computed beta distribution parameter values, the computed parameter values for the normal-Wishart distribution, and a respective observation vector of the plurality of observation vectors. (D) A convergence parameter value is computed. (E) (A) to (D) are repeated until the computed convergence parameter value indicates the responsibility parameter vector defined for each observation vector is converged. A cluster membership is determined for each observation vector using a respective, updated responsibility parameter vector. The determined cluster membership is output for each observation vector.
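Step (A)'s beta-parameter computation for a stick-breaking (Dirichlet process) mixture is commonly written as a_k = 1 + Σ_n r_nk and b_k = mass + Σ_{j>k} Σ_n r_nj, where r_nk are the responsibility values. A sketch under that assumption (the formula is the standard variational update, offered here only as a plausible reading of the abstract):

```python
import numpy as np

def beta_parameters(resp, mass=1.0):
    """Step (A): per-cluster beta distribution parameters from the
    responsibility vectors.  a_k = 1 + sum_n r_nk ;
    b_k = mass + sum over clusters after k of their expected counts."""
    nk = resp.sum(axis=0)                # expected count per cluster, shape (K,)
    # tail[k] = sum_{j > k} nk[j], computed via a reversed cumulative sum
    tail = np.concatenate([np.cumsum(nk[::-1])[-2::-1], [0.0]])
    return 1.0 + nk, mass + tail

rng = np.random.default_rng(4)
raw = rng.random((50, 3))
resp = raw / raw.sum(axis=1, keepdims=True)  # valid responsibility vectors
a, b = beta_parameters(resp)
```

The mass parameter controls how readily the stick-breaking construction opens new clusters: a larger mass leaves more probability for later clusters.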

    Neural network training system
    Invention Grant

    Publication No.: US11195084B1

    Publication Date: 2021-12-07

    Application No.: US17198737

    Application Date: 2021-03-11

    Abstract: A computing device trains a neural network machine learning model. A forward propagation of a first neural network is executed. A backward propagation of the first neural network is executed from a last layer to a last convolution layer of a plurality of convolutional layers to compute a gradient vector for first weight values of the last convolution layer using observation vectors. A discriminative localization map is computed for each observation vector with the gradient vector using a discriminative localization map function. A forward and a backward propagation of a second neural network is executed to compute a second weight value for each neuron of the second neural network using the discriminative localization map computed for each observation vector. A predefined number of iterations of the forward and the backward propagation of the second neural network is repeated.

    Distributable clustering model training system

    Publication No.: US11055620B2

    Publication Date: 2021-07-06

    Application No.: US16950041

    Application Date: 2020-11-17

    Abstract: A computing system trains a clustering model. (A) Beta distribution parameter values are computed for each cluster using a mass parameter value and a responsibility parameter vector of each observation vector. (B) Parameter values are computed for a normal-Wishart distribution for each observation vector included in a batch of a plurality of observation vectors. (C) Each responsibility parameter vector defined for each observation vector of the batch is updated using the beta distribution parameter values, the parameter values for the normal-Wishart distribution, and a respective observation vector of the selected batch of the plurality of observation vectors. (D) A convergence parameter value is computed. (E) (A) to (D) are repeated until the convergence parameter value indicates the responsibility parameter vector defined for each observation vector is converged. A cluster membership is determined for each observation vector using the responsibility parameter vector. The determined cluster membership is output for each observation vector.
