-
Publication No.: US10755172B2
Publication Date: 2020-08-25
Application No.: US15630944
Filing Date: 2017-06-22
Applicant: Massachusetts Institute of Technology
Inventor: Otkrist Gupta, Ramesh Raskar
Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
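The split training round in the abstract can be sketched in plain NumPy. Everything concrete here is an assumption for illustration, not a detail from the patent: layer sizes, ReLU/softmax choices, the learning rate, and the simplification that Bob also holds the labels. Only the structure mirrors the claim: three layers on Alice's side, two on Bob's, with a single activation tensor crossing to Bob and a single gradient tensor crossing back.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical sizes: Alice holds three layers, Bob holds two, as the claim requires.
alice_W = [rng.normal(0, 0.5, s) for s in [(8, 16), (16, 16), (16, 8)]]
bob_W   = [rng.normal(0, 0.5, s) for s in [(8, 8), (8, 2)]]
lr = 0.05

x = rng.normal(size=(4, 8))   # Alice's raw data; never sent to Bob
y = np.array([0, 1, 0, 1])    # labels (given to Bob here for simplicity)

# Alice forward-propagates through her part and sends only the split activations.
a = [x]
for W in alice_W:
    a.append(relu(a[-1] @ W))
split = a[-1]                 # the one tensor that crosses the trust boundary

# Bob forward-propagates through his part and computes a softmax cross-entropy loss.
h = relu(split @ bob_W[0])
logits = h @ bob_W[1]
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
loss_before = -np.log(p[np.arange(len(y)), y]).mean()

# Bob backpropagates through his two layers and returns grad_split to Alice.
g_logits = p.copy()
g_logits[np.arange(len(y)), y] -= 1.0
g_logits /= len(y)
g_bobW1 = h.T @ g_logits
g_h = (g_logits @ bob_W[1].T) * (h > 0)
g_bobW0 = split.T @ g_h
grad_split = g_h @ bob_W[0].T          # the one gradient that crosses back
bob_W[1] -= lr * g_bobW1
bob_W[0] -= lr * g_bobW0

# Alice continues backpropagation through her own layers, never revealing x.
g = grad_split
for i in reversed(range(len(alice_W))):
    g = g * (a[i + 1] > 0)
    gW = a[i].T @ g
    g = g @ alice_W[i].T
    alice_W[i] -= lr * gW
```

Note that only `split` and `grad_split` ever cross between the two parties; Alice's raw data `x` and her weight matrices stay on her side throughout.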
-
Publication No.: US11669737B2
Publication Date: 2023-06-06
Application No.: US16934685
Filing Date: 2020-07-21
Applicant: Massachusetts Institute of Technology
Inventor: Otkrist Gupta, Ramesh Raskar
Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
-
Publication No.: US11481635B2
Publication Date: 2022-10-25
Application No.: US16862494
Filing Date: 2020-04-29
Applicant: Massachusetts Institute of Technology, Otkrist Gupta
Inventor: Praneeth Vepakomma, Abhishek Singh, Otkrist Gupta, Ramesh Raskar
IPC: G06N3/08
Abstract: A distributed deep learning network may prevent an attacker from reconstructing raw data from activation outputs of an intermediate layer of the network. To achieve this, the loss function of the network may tend to reduce distance correlation between raw data and the activation outputs. For instance, the loss function may be the sum of two terms, where the first term is weighted distance correlation between raw data and activation outputs of a split layer of the network, and the second term is weighted categorical cross entropy of actual labels and label predictions. Distance correlation with the entire raw data may be minimized. Alternatively, distance correlation with only certain features of the raw data may be minimized, in order to ensure attribute-level privacy. In some cases, a client computer calculates decorrelated representations of raw data before sharing information about the data with external computers.
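The two-term loss described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: `dcor` is the standard sample distance correlation (products of double-centered pairwise-distance matrices), while the weights `alpha`/`beta`, the toy data, and the placeholder predictions are assumptions.

```python
import numpy as np

def dcor(X, Y):
    """Sample distance correlation between two data matrices (rows = samples)."""
    def centered_dist(A):
        # Pairwise Euclidean distances, double-centered (row, column, grand means).
        D = np.sqrt(((A[:, None, :] - A[None, :, :]) ** 2).sum(-1))
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    A, B = centered_dist(X), centered_dist(Y)
    dcov2 = (A * B).mean()                       # squared sample distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else np.sqrt(max(dcov2, 0.0) / denom)

def privacy_loss(x, split_act, y_true, y_prob, alpha=0.5, beta=1.0):
    """First term penalizes statistical dependence between the raw data and the
    split-layer activations; second term keeps label predictions accurate."""
    ce = -np.log(y_prob[np.arange(len(y_true)), y_true] + 1e-12).mean()
    return alpha * dcor(x, split_act) + beta * ce

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 8))            # raw data
split_act = rng.normal(size=(32, 4))    # hypothetical split-layer activations
y_true = rng.integers(0, 2, size=32)
y_prob = np.full((32, 2), 0.5)          # uninformative label predictions
total = privacy_loss(x, split_act, y_true, y_prob)
```

For the attribute-level variant mentioned in the abstract, `x` in the first term would be replaced by only the sensitive feature columns, leaving the remaining features unconstrained.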
-
Publication No.: US20200349443A1
Publication Date: 2020-11-05
Application No.: US16862494
Filing Date: 2020-04-29
Applicant: Otkrist Gupta, Massachusetts Institute of Technology
Inventor: Praneeth Vepakomma, Abhishek Singh, Otkrist Gupta, Ramesh Raskar
IPC: G06N3/08
Abstract: A distributed deep learning network may prevent an attacker from reconstructing raw data from activation outputs of an intermediate layer of the network. To achieve this, the loss function of the network may tend to reduce distance correlation between raw data and the activation outputs. For instance, the loss function may be the sum of two terms, where the first term is weighted distance correlation between raw data and activation outputs of a split layer of the network, and the second term is weighted categorical cross entropy of actual labels and label predictions. Distance correlation with the entire raw data may be minimized. Alternatively, distance correlation with only certain features of the raw data may be minimized, in order to ensure attribute-level privacy. In some cases, a client computer calculates decorrelated representations of raw data before sharing information about the data with external computers.
-
Publication No.: US20170372201A1
Publication Date: 2017-12-28
Application No.: US15630944
Filing Date: 2017-06-22
Applicant: Massachusetts Institute of Technology
Inventor: Otkrist Gupta, Ramesh Raskar
CPC classification number: G06N3/08, G06N3/0454, G06N3/084, G06N20/00
Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
-
Publication No.: US20200349435A1
Publication Date: 2020-11-05
Application No.: US16934685
Filing Date: 2020-07-21
Applicant: Massachusetts Institute of Technology
Inventor: Otkrist Gupta, Ramesh Raskar
Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.