Distributed learning preserving model security
Abstract:
Distributed machine learning employs a central fusion server that coordinates the distributed learning process. Preferably, each of a set of learning agents, which are typically distributed from one another, initially obtains initial parameters for a model from the fusion server. Each agent trains on a dataset local to that agent. The parameters that result from this local training (for the current iteration) are then passed back to the fusion server in a secure manner, with a partial homomorphic encryption scheme applied to protect them. In particular, the fusion server fuses the parameters from all of the agents and then shares the result with the agents for the next iteration. In this approach, the model parameters are secured using the encryption scheme, thereby protecting the privacy of the training data, even from the fusion server itself.
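The following is a minimal sketch of one round of the kind of protocol the abstract describes, using an additively (partially) homomorphic Paillier scheme so that the fusion server can average encrypted parameter updates without decrypting them. It assumes the third-party python-paillier package (`phe`); the agent count, parameter dimension, the stubbed-out local training step, and helper names such as `local_train` and `fuse` are illustrative assumptions, not taken from the patent.

```python
# Sketch: one fusion round with additively homomorphic (Paillier) encryption.
# Assumptions: the agents share a Paillier keypair generated out of band, the
# fusion server never holds the private key, and model parameters are floats.
from phe import paillier  # third-party python-paillier package (assumed)
import random

NUM_AGENTS = 3
PARAM_DIM = 4

# Keypair shared among the agents only; the server sees ciphertexts exclusively.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def local_train(params, seed):
    """Stand-in for an agent's local training pass over its private dataset."""
    rng = random.Random(seed)
    return [p + rng.uniform(-0.1, 0.1) for p in params]

def encrypt_params(params):
    """Agent side: encrypt updated parameters before sending them to the server."""
    return [public_key.encrypt(p) for p in params]

def fuse(encrypted_updates):
    """Fusion server: average the ciphertexts without decrypting them.
    Paillier supports ciphertext + ciphertext and ciphertext * plaintext scalar,
    which is all that element-wise averaging requires."""
    n = len(encrypted_updates)
    fused = []
    for dim in range(PARAM_DIM):
        total = encrypted_updates[0][dim]
        for update in encrypted_updates[1:]:
            total = total + update[dim]
        fused.append(total * (1.0 / n))
    return fused

def decrypt_params(encrypted_params):
    """Agent side: recover the fused model to start the next iteration."""
    return [private_key.decrypt(c) for c in encrypted_params]

# One iteration: agents train locally, encrypt, the server fuses, agents decrypt.
global_params = [0.0] * PARAM_DIM
updates = [encrypt_params(local_train(global_params, seed=i)) for i in range(NUM_AGENTS)]
global_params = decrypt_params(fuse(updates))
print(global_params)
```

Because the private key stays with the agents, the fusion server operates only on ciphertexts, which is the property the abstract relies on to keep the training data private even from the server itself.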