ASYNCHRONOUSLY TRAINING MACHINE LEARNING MODELS ACROSS CLIENT DEVICES FOR ADAPTIVE INTELLIGENCE

    Publication No.: US20190385043A1

    Publication Date: 2019-12-19

    Application No.: US16012356

    Application Date: 2018-06-19

    Applicant: Adobe Inc.

    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that asynchronously train a machine learning model across client devices that implement local versions of the model while preserving client data privacy. To train the model across devices, in some embodiments, the disclosed systems send global parameters for a global machine learning model from a server device to client devices. A subset of the client devices uses local machine learning models corresponding to the global model and client training data to modify the global parameters. Based on those modifications, the subset of client devices sends modified parameter indicators to the server device for the server device to use in adjusting the global parameters. By utilizing the modified parameter indicators (and not client training data), in certain implementations, the disclosed systems accurately train a machine learning model without exposing training data from the client device.
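The round trip described above (global parameters out, modified parameter indicators back, server-side aggregation) can be sketched as follows. This is an illustrative stand-in, not the patented implementation: the tiny linear model, the `client_update`/`server_round` names, and the plain averaging of client deltas are all assumptions.

```python
# Sketch of one training round: clients refine the global parameters on
# private data and return only parameter deltas ("modified parameter
# indicators"); raw training data never leaves the client device.

def client_update(global_params, local_data, lr=0.1):
    """Refine the global parameters on one client's private data and
    return only the parameter deltas, never the raw samples."""
    w, b = global_params
    for x, y in local_data:
        err = w * x + b - y          # tiny linear model: y_hat = w*x + b
        w -= lr * err * x
        b -= lr * err
    return (w - global_params[0], b - global_params[1])

def server_round(global_params, client_datasets):
    """Server adjusts the global parameters from averaged client deltas."""
    deltas = [client_update(global_params, d) for d in client_datasets]
    avg = [sum(col) / len(deltas) for col in zip(*deltas)]
    return [g + a for g, a in zip(global_params, avg)]

# Two clients whose private samples follow y = 2x; no data is pooled.
params = [0.0, 0.0]
client_data = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
for _ in range(500):
    params = server_round(params, client_data)
```

After enough rounds the global parameters approach the model both clients agree on (here, slope 2 and intercept 0), even though the server only ever saw deltas.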

    Managing machine learning model reconstruction

    Publication No.: US11829239B2

    Publication Date: 2023-11-28

    Application No.: US17455364

    Application Date: 2021-11-17

    Applicant: ADOBE INC.

    CPC classification number: G06F11/1004 G06F11/1088

    Abstract: A method performed by one or more processors that preserves a machine learning model comprises accessing model parameters associated with a machine learning model. The model parameters are determined responsive to training the machine learning model. The method comprises generating a plurality of model parameter sets, where each of the plurality of model parameter sets comprises a separate portion of the set of model parameters. The method comprises determining one or more parity sets comprising values calculated from the plurality of model parameter sets. The method comprises distributing the plurality of model parameter sets and the one or more parity sets among a plurality of computing devices, where each of the plurality of computing devices stores a model parameter set of the plurality of model parameter sets or a parity set of the one or more parity sets. The method comprises accessing, from the plurality of computing devices, a number of sets comprising model parameter sets and at least one parity set. The method comprises reconstructing the machine learning model from the number of sets accessed from the plurality of computing devices.
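The split-and-parity flow above can be illustrated with simple XOR parity (as in RAID-5): split the parameters into sets, XOR the sets into one parity set, and recover any single lost set from the survivors plus the parity. The function names and the XOR scheme are assumptions for illustration; the patent's parity values need not be XOR.

```python
# Illustrative sketch: distribute parameter sets plus an XOR parity set,
# then reconstruct a lost set from the surviving sets and the parity.
import struct

def split_params(params, n_shards):
    """Partition the flat parameter list into n_shards equal sets."""
    k = -(-len(params) // n_shards)  # ceiling division
    return [params[i * k:(i + 1) * k] for i in range(n_shards)]

def to_bytes(shard, width):
    """Serialize a shard of doubles, zero-padded to a fixed width."""
    return struct.pack(f"{len(shard)}d", *shard).ljust(width, b"\0")

def parity(shards, width):
    """XOR all shard byte strings into one parity set."""
    out = bytearray(width)
    for s in shards:
        for i, b in enumerate(to_bytes(s, width)):
            out[i] ^= b
    return bytes(out)

def reconstruct(missing_idx, shards, par, width):
    """Recover the lost shard: parity XOR all surviving shards."""
    out = bytearray(par)
    for j, s in enumerate(shards):
        if j == missing_idx or s is None:
            continue
        for i, b in enumerate(to_bytes(s, width)):
            out[i] ^= b
    return list(struct.unpack(f"{width // 8}d", bytes(out)))

params = [0.5, -1.25, 3.0, 2.0, 7.5, -0.5]
shards = split_params(params, 3)   # three parameter sets of two values each
width = 16                         # two 8-byte doubles per shard
par = parity(shards, width)
shards[1] = None                   # simulate losing one computing device
recovered = reconstruct(1, shards, par, width)
```

Because XOR is exact on the serialized bytes, the recovered doubles are bit-identical to the lost ones, which matches the abstract's goal of reconstructing the model from a subset of devices plus at least one parity set.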

    SCHEDULING JOBS ON INTERRUPTIBLE CLOUD COMPUTING INSTANCES

    Publication No.: US20220374276A1

    Publication Date: 2022-11-24

    Application No.: US17324692

    Application Date: 2021-05-19

    Applicant: Adobe Inc.

    Abstract: Techniques are provided for scheduling multiple jobs on one or more cloud computing instances. These techniques select a job for execution from among a plurality of jobs, and further select a designated instance from among a plurality of cloud computing instances for executing the selected job. The job and the designated instance are each selected based on a probability distribution that the cost of executing the job on the designated instance does not exceed a given budget. The probability distribution is based on several factors, including the cost of prior executions of other jobs on the designated instance and a utility function that represents the value associated with the progress of each job. By scheduling select jobs on discounted cloud computing instances, the aggregate utility of the jobs can be maximized or otherwise improved for the given budget.
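A minimal sketch of the selection rule described above: score each (job, instance) pair by expected utility, keeping only pairs whose estimated probability of staying within the remaining budget is acceptable. The normal cost model fitted from prior executions, the threshold, and all names here are illustrative assumptions, not the patented probability distribution.

```python
# Hypothetical sketch: pick the (job, instance) pair with the highest
# utility among pairs likely to stay within the remaining budget.
import statistics
from math import erf, sqrt

def prob_within_budget(prior_costs, budget_left):
    """Estimate P(cost <= budget_left) from prior executions on this
    instance, assuming costs are roughly normal (an illustrative choice)."""
    mu = statistics.mean(prior_costs)
    sd = statistics.pstdev(prior_costs) or 1e-9
    z = (budget_left - mu) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def select(jobs, instances, budget_left, min_prob=0.8):
    """jobs: {name: utility of progress}; instances: {name: prior costs}.
    Returns the (job, instance) pair maximizing probability-weighted
    utility, or None if no pair is likely to fit the budget."""
    best, best_score = None, float("-inf")
    for job, utility in jobs.items():
        for inst, costs in instances.items():
            p = prob_within_budget(costs, budget_left)
            if p < min_prob:
                continue  # too likely to blow the budget on this instance
            score = p * utility
            if score > best_score:
                best, best_score = (job, inst), score
    return best

jobs = {"train": 5.0, "etl": 2.0}
instances = {"spot-a": [1.0, 1.2, 0.9], "spot-b": [3.0, 3.5, 2.8]}
choice = select(jobs, instances, budget_left=2.0)
```

Here "spot-b" is filtered out because its historical costs make exceeding the $2 budget almost certain, so the high-utility job is placed on the cheap instance.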

    Updating machine learning models on edge servers

    Publication No.: US11170320B2

    Publication Date: 2021-11-09

    Application No.: US16040057

    Application Date: 2018-07-19

    Applicant: Adobe Inc.

    Abstract: Systems and techniques are described herein for updating a machine learning model on edge servers. Local parameters of the machine learning model are updated at a plurality of edge servers using fresh data on the edge servers, rather than waiting for the data to reach a global server to update the machine learning model. Hence, latency is significantly reduced, making the systems and techniques described herein suitable for real-time services that support streaming data. Moreover, by updating global parameters of the machine learning model at a global server in a deterministic manner based on parameter updates from the edge servers, rather than by including randomization steps, the global parameters of the model converge quickly to their optimal values. The global parameters are sent from the global server to the plurality of edge servers at each iteration, thereby synchronizing the machine learning model on the edge servers.
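The distinguishing step here is the deterministic global update: each iteration, every edge server computes an update from its fresh streaming data, the global server averages those updates with no randomization, and the new global parameters are broadcast back so all edges stay synchronized. A minimal sketch, assuming averaged gradients and a one-parameter model (both illustrative):

```python
# Sketch of one synchronized iteration: edge servers compute updates from
# fresh data; the global server aggregates deterministically (a plain
# average, no randomization) and broadcasts the result back.

def edge_gradient(params, fresh_data):
    """Each edge server computes a gradient from freshly arrived data."""
    w, = params
    g = sum(2 * (w * x - y) * x for x, y in fresh_data) / len(fresh_data)
    return [g]

def global_step(params, edge_batches, lr=0.05):
    """Deterministic aggregation at the global server: average the edge
    gradients, take one step, and return the synchronized parameters."""
    grads = [edge_gradient(params, batch) for batch in edge_batches]
    avg = [sum(col) / len(grads) for col in zip(*grads)]
    return [p - lr * g for p, g in zip(params, avg)]

# Two edge servers observing streams that follow y = 3x.
params = [0.0]
streams = [[(1.0, 3.0), (2.0, 6.0)], [(4.0, 12.0)]]
for _ in range(300):
    params = global_step(params, streams)  # broadcast happens each iteration
```

Because the aggregation is deterministic, repeated runs follow the identical trajectory to the optimum (here, slope 3), which is what lets the edges stay in lockstep after each broadcast.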

    COOPERATIVE PLATFORM FOR GENERATING, SECURING, AND VERIFYING DEVICE GRAPHS AND CONTRIBUTIONS TO DEVICE GRAPHS

    Publication No.: US20190190701A1

    Publication Date: 2019-06-20

    Application No.: US15845948

    Application Date: 2017-12-18

    Applicant: ADOBE INC.

    CPC classification number: H04L9/088 G06N5/022 G06N7/005

    Abstract: Graphing services are provided to a device cooperative that includes data contributors, e.g., website hosts. Anonymized user data, provided by the data contributors, is accessed, via a blockchain, decrypted, and aggregated. A device graph is generated based on the aggregated user data. Contribution metrics are provided to the data contributors. A first contribution metric for a first data contributor indicates a contribution to the device graph of a first portion of the user data that was provided by the first data contributor. In response to receiving a request for a verification of the first contribution metric, a zero knowledge proof of the first contribution metric is generated and provided to the first data contributor. The first data contributor is enabled to evaluate the zero knowledge proof independent of access to a second portion of the user data that was provided by a second data contributor of the device cooperative.
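A real zero-knowledge proof is far beyond a short sketch, so the stand-in below shows only the surrounding flow: computing each contributor's contribution metric (here, its share of device-graph edges, an illustrative choice) and committing to it with a salted hash the contributor can later verify without seeing any other contributor's data. A salted commitment is not zero-knowledge; it merely illustrates the commit-then-verify shape, and every name here is hypothetical.

```python
# Illustrative stand-in for the contribution-metric and verification flow;
# the salted hash commitment replaces the abstract's zero-knowledge proof.
import hashlib
import os

def contribution_metrics(edges_by_contributor):
    """Fraction of device-graph edges each data contributor supplied."""
    total = sum(len(e) for e in edges_by_contributor.values())
    return {c: len(e) / total for c, e in edges_by_contributor.items()}

def commit(metric, salt=None):
    """Salted hash commitment to one contributor's metric."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + repr(metric).encode()).hexdigest()
    return digest, salt

def verify(metric, salt, digest):
    """A contributor checks its own metric against the commitment,
    without access to any other contributor's portion of the data."""
    return hashlib.sha256(salt + repr(metric).encode()).hexdigest() == digest

# Two website hosts contribute anonymized device-link edges.
edges = {"hostA": [("d1", "d2"), ("d2", "d3")], "hostB": [("d4", "d5")]}
metrics = contribution_metrics(edges)
digest, salt = commit(metrics["hostA"])
ok = verify(metrics["hostA"], salt, digest)
```

In the patented system this verification step would be a zero-knowledge proof evaluated by the first data contributor, so even the opened value would reveal nothing about the second contributor's portion of the user data.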
