Training a model using parameter server shards
    1.
    Invention Grant, In Force

    Publication number: US09218573B1

    Publication date: 2015-12-22

    Application number: US13826327

    Application date: 2013-03-14

    Applicant: Google Inc.

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a model using parameter server shards. One of the methods includes receiving, at a parameter server shard configured to maintain values of a disjoint partition of the parameters of the model, a succession of respective requests for parameter values from each of a plurality of replicas of the model; in response to each request, downloading a current value of each requested parameter to the replica from which the request was received; receiving a succession of uploads, each upload including respective delta values for each of the parameters in the partition maintained by the shard; and updating values of the parameters in the partition maintained by the parameter server shard repeatedly based on the uploads of delta values to generate current parameter values.
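    The shard-side flow the abstract describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the class name ParameterServerShard, the method names get_parameters and apply_deltas, and the use of scalar parameters are not from the patent.

```python
import threading
from typing import Dict, Iterable

class ParameterServerShard:
    """Maintains current values for one disjoint partition of the model's parameters."""

    def __init__(self, partition: Dict[str, float]):
        self._params = dict(partition)  # parameter name -> current value
        self._lock = threading.Lock()   # requests and uploads may arrive concurrently

    def get_parameters(self, names: Iterable[str]) -> Dict[str, float]:
        # A replica's download request: return the current value of each
        # requested parameter in this shard's partition.
        with self._lock:
            return {name: self._params[name] for name in names}

    def apply_deltas(self, deltas: Dict[str, float]) -> None:
        # One upload from a replica: add each delta value to the corresponding
        # parameter, repeatedly producing new current parameter values.
        with self._lock:
            for name, delta in deltas.items():
                self._params[name] += delta
```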

    Training a model using parameter server shards
    2.
    Invention Grant, In Force

    Publication number: US08768870B1

    Publication date: 2014-07-01

    Application number: US13968019

    Application date: 2013-08-15

    Applicant: Google Inc.

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a model using parameter server shards. One of the methods includes receiving, at a parameter server shard configured to maintain values of a disjoint partition of the parameters of the model, a succession of respective requests for parameter values from each of a plurality of replicas of the model; in response to each request, downloading a current value of each requested parameter to the replica from which the request was received; receiving a succession of uploads, each upload including respective delta values for each of the parameters in the partition maintained by the shard; and updating values of the parameters in the partition maintained by the parameter server shard repeatedly based on the uploads of delta values to generate current parameter values.
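    This second grant shares the abstract above, so rather than repeat the shard-side sketch, here is the complementary replica-side loop the same abstract implies. The function name replica_step, the gradient_fn stand-in, and the plain gradient-descent delta (-learning_rate * gradient) are illustrative assumptions; the sketch reuses the hypothetical ParameterServerShard class from the earlier block.

```python
from typing import Callable, Dict, List

def replica_step(
    shards: List["ParameterServerShard"],  # shard objects from the sketch above
    partition_names: List[List[str]],      # the parameter names each shard maintains
    gradient_fn: Callable[[Dict[str, float]], Dict[str, float]],
    learning_rate: float = 0.01,
) -> None:
    # Download: request the current values from every shard that owns a needed parameter.
    params: Dict[str, float] = {}
    for shard, names in zip(shards, partition_names):
        params.update(shard.get_parameters(names))

    # Local computation on this replica's share of the training data
    # (gradient_fn stands in for the model's forward/backward pass).
    grads = gradient_fn(params)

    # Upload: send each shard the delta values for the parameters in its partition.
    for shard, names in zip(shards, partition_names):
        shard.apply_deltas({name: -learning_rate * grads[name] for name in names})
```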

    Distribution of parameter calculation for iterative optimization methods
    3.
    Invention Grant

    公开(公告)号:US09355067B1

    公开(公告)日:2016-05-31

    申请号:US14691362

    申请日:2015-04-20

    Applicant: Google Inc.

    Abstract: Systems and methods are disclosed for distributed first- or higher-order model fitting algorithms. Determination of the parameter set for the objective function is divided into a plurality of sub-processes, each performed by one of a plurality of worker computers. A master computer coordinates the operation of the plurality of worker computers, each operating on a portion of the parameter set such that no two worker computers contain exactly the same parameter subset nor the complete parameter set. Each worker computer performs its sub-processes on its parameter subset, together with training data. For maximum efficiency, the sub-processes are performed using a compact set of instruction primitives. The results are evaluated by the master computer, which may coordinate additional sub-process operations to perform higher-order optimization or terminate the optimization method and proceed to formulation of a model function.
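    A minimal sketch of the master loop this abstract describes, under stated assumptions: the names distributed_fit, local_update, and objective are hypothetical, the worker sub-processes run serially in-process rather than on separate worker computers, and convergence is judged by the change in the objective value.

```python
from typing import Callable, Dict, List

def distributed_fit(
    init: Dict[str, float],
    worker_partitions: List[List[str]],  # disjoint, incomplete parameter subsets, one per worker
    local_update: Callable[[Dict[str, float], List[str]], Dict[str, float]],
    objective: Callable[[Dict[str, float]], float],
    tolerance: float = 1e-6,
    max_rounds: int = 100,
) -> Dict[str, float]:
    """Master computer: coordinate worker sub-processes and decide when to stop."""
    params = dict(init)
    previous = objective(params)
    for _ in range(max_rounds):
        # Each worker runs its sub-process on its own parameter subset together
        # with the training data; these calls stand in for remote worker computers.
        for subset in worker_partitions:
            params.update(local_update(params, subset))
        # The master evaluates the results and either coordinates another round
        # of sub-processes or terminates the optimization method.
        current = objective(params)
        if abs(previous - current) < tolerance:
            break
        previous = current
    return params
```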
