Facilitating neural network efficiency

    Publication No.: GB2581728A

    Publication Date: 2020-08-26

    Application No.: GB202006969

    Application Date: 2018-10-04

    Applicant: IBM

    Abstract: Techniques that facilitate improving an efficiency of a neural network are described. In one embodiment, a system is provided that comprises a memory that stores computer-executable components and a processor that executes computer-executable components stored in the memory. In one implementation, the computer-executable components comprise an initialization component that selects an initial value of an output limit, wherein the output limit indicates a range for an output of an activation function of a neural network. The computer-executable components further comprise a training component that modifies the initial value of the output limit during training to a second value of the output limit, the second value of the output limit being provided as a parameter to the activation function. The computer-executable components further comprise an activation function component that determines the output of the activation function based on the second value of the output limit as the parameter.
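
    The mechanism can be illustrated with a short sketch in PyTorch. This is a minimal sketch, not the patent's implementation: the class name ClippedReLU and the parameter name alpha are invented for illustration. The output limit is a learnable scalar, the activation clamps its output to [0, alpha], and training moves alpha from its initial value to a trained second value alongside the other network parameters.

    ```python
    import torch
    import torch.nn as nn

    class ClippedReLU(nn.Module):
        """Activation whose upper output limit is a trainable parameter."""

        def __init__(self, initial_limit: float = 6.0):
            super().__init__()
            # Initialization component: select an initial value of the output limit.
            self.alpha = nn.Parameter(torch.tensor(initial_limit))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Clamp the output to [0, alpha], written so gradients reach alpha:
            # below 0 -> 0; between 0 and alpha -> x; above alpha -> alpha.
            return torch.relu(x) - torch.relu(x - self.alpha)

    act = ClippedReLU(initial_limit=6.0)
    loss = act(torch.randn(8) * 10).sum()
    loss.backward()
    print(act.alpha.grad)  # non-zero whenever some input exceeds the limit
    ```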

    Low precision deep neural network enabled by compensation instructions

    Publication No.: GB2590000A

    Publication Date: 2021-06-16

    Application No.: GB202100363

    Application Date: 2019-06-13

    Applicant: IBM

    Abstract: A compensated deep neural network (compensated-DNN) is provided. A first vector having a set of components and a second vector having a set of corresponding components are received. A component of the first vector includes a first quantized value and a first compensation instruction, and a corresponding component of the second vector includes a second quantized value and a second compensation instruction. The first quantized value is multiplied with the second quantized value to compute a raw product value. The raw product value is compensated for a quantization error according to the first and second compensation instructions to produce a compensated product value. The compensated product value is added into an accumulated value for the dot product. The accumulated value is converted into an output vector of the dot product. The output vector includes an output quantized value and an output compensation instruction.
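
    The compensated multiply-accumulate can be sketched in a few lines. This is a simplification under stated assumptions: the patent encodes each compensation instruction in a few bits, whereas here a plain floating-point error term stands in for it, and a fixed quantization step is assumed.

    ```python
    STEP = 2.0 ** -4  # fixed quantization step (an assumption for this sketch)

    def quantize(v: float):
        """Split a value into (quantized value, compensation term).

        The compensation term stands in for the patent's compensation
        instruction: a record of the quantization error that is used to
        correct products downstream.
        """
        q = round(v / STEP) * STEP
        return q, v - q

    def compensated_dot(xs, ys):
        """Dot product over (quantized value, compensation) component pairs."""
        acc = 0.0
        for (q1, c1), (q2, c2) in zip(xs, ys):
            raw = q1 * q2              # raw product of the two quantized values
            raw += q1 * c2 + q2 * c1   # compensate for the quantization error
            acc += raw                 # add into the accumulated value
        return quantize(acc)           # output: (quantized value, compensation)

    a = [quantize(v) for v in (0.37, -1.22, 0.05)]
    b = [quantize(v) for v in (1.01, 0.48, -0.90)]
    print(compensated_dot(a, b))
    ```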

    Low precision deep neural network enabled by compensation instructions

    Publication No.: GB2590000B

    Publication Date: 2022-12-07

    Application No.: GB202100363

    Application Date: 2019-06-13

    Applicant: IBM

    Abstract: A compensated deep neural network (compensated-DNN) is provided. A first vector having a set of components and a second vector having a set of corresponding components are received. A component of the first vector includes a first quantized value and a first compensation instruction, and a corresponding component of the second vector includes a second quantized value and a second compensation instruction. The first quantized value is multiplied with the second quantized value to compute a raw product value. The raw product value is compensated for a quantization error according to the first and second compensation instructions to produce a compensated product value. The compensated product value is added into an accumulated value for the dot product. The accumulated value is converted into an output vector of the dot product. The output vector includes an output quantized value and an output compensation instruction.

Robust gradient weight compression schemes for deep learning applications

    Publication No.: GB2582232A

    Publication Date: 2020-09-16

    Application No.: GB202009717

    Application Date: 2018-11-30

    Applicant: IBM

    Abstract: Embodiments of the present invention provide a computer-implemented method for adaptive residual gradient compression for training of a deep learning neural network (DNN). The method includes obtaining, by a first learner of a plurality of learners, a current gradient vector for a neural network layer of the DNN, in which the current gradient vector includes gradient weights of parameters of the neural network layer that are calculated from a mini-batch of training data. A current residue vector is generated that includes residual gradient weights for the mini-batch. A compressed current residue vector is generated based on dividing the residual gradient weights of the current residue vector into a plurality of bins of a uniform size and quantizing a subset of the residual gradient weights of one or more bins of the plurality of bins. The compressed current residue vector is then transmitted to a second learner of the plurality of learners or to a parameter server.
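
    The bin-and-quantize round can be sketched as follows. The function name, bin size, and one-value-per-bin selection rule are assumptions chosen to illustrate the flow (accumulate residue, divide into uniform bins, quantize a subset, transmit, carry the remainder forward), not the patent's exact scheme.

    ```python
    import numpy as np

    def compress_residue(gradients, residue, bin_size=4):
        """One compression round for a single learner (illustrative sketch)."""
        current = residue + gradients              # current residue vector
        sent = np.zeros_like(current)
        for start in range(0, len(current), bin_size):  # uniform-size bins
            window = current[start:start + bin_size]
            i = int(np.argmax(np.abs(window)))     # subset to quantize: bin maximum
            sent[start + i] = window[i]            # keep one value per bin
        return sent, current - sent                # (to transmit, residual weights)

    grads = np.array([0.02, -0.50, 0.10, 0.03, 0.40, -0.01, 0.00, 0.20])
    residue = np.zeros_like(grads)
    packet, residue = compress_residue(grads, residue)
    print(packet)   # sparse vector sent to another learner or the parameter server
    print(residue)  # residual gradient weights retained for the next mini-batch
    ```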

    System-aware selective quantization for performance optimized distributed deep learning

    Publication No.: GB2600872A

    Publication Date: 2022-05-11

    Application No.: GB202201906

    Application Date: 2020-07-17

    Applicant: IBM

    Abstract: A convolutional neural network includes a front layer, a back layer, and a plurality of other layers that are connected between the front layer and the back layer. One of the other layers is a transition layer. A first precision is assigned to activations of neurons from the front layer back to the transition layer and a second precision is assigned to activations of the neurons from the transition layer back to the back layer. A third precision is assigned to weights of inputs to neurons from the front layer back to the transition layer and a fourth precision is assigned to weights of inputs to the neurons from the transition layer back to the back layer. In some embodiments the layers forward of the transition layer have a different convolutional kernel than the layers rearward of the transition layer.
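
    A configuration-level sketch of the precision assignment follows; the layer names, bit widths, and transition index are invented for illustration. The transition layer splits the network into a front segment and a back segment, each with its own activation precision and weight precision.

    ```python
    LAYERS = ["conv1", "conv2", "conv3", "conv4", "conv5"]
    TRANSITION = 2  # index of the transition layer (an assumption)

    ACT_BITS_FRONT, ACT_BITS_BACK = 8, 4  # first and second precisions (activations)
    WGT_BITS_FRONT, WGT_BITS_BACK = 8, 2  # third and fourth precisions (weights)

    def precision_plan(layers, transition):
        """Map each layer to (activation bits, weight bits)."""
        plan = {}
        for i, name in enumerate(layers):
            front = i <= transition  # front layer up to the transition layer
            plan[name] = (ACT_BITS_FRONT if front else ACT_BITS_BACK,
                          WGT_BITS_FRONT if front else WGT_BITS_BACK)
        return plan

    for layer, (a_bits, w_bits) in precision_plan(LAYERS, TRANSITION).items():
        print(f"{layer}: activations {a_bits}-bit, weights {w_bits}-bit")
    ```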

    Machine learning hardware having reduced precision parameter components for efficient parameter update

    Publication No.: GB2600871A

    Publication Date: 2022-05-11

    Application No.: GB202201893

    Application Date: 2020-08-17

    Applicant: IBM

    Abstract: An apparatus for training and inferencing a neural network includes circuitry that is configured to generate a first weight having a first format including a first number of bits based at least in part on a second weight having a second format including a second number of bits and a residual having a third format including a third number of bits. The second number of bits and the third number of bits are each less than the first number of bits. The circuitry is further configured to update the second weight based at least in part on the first weight and to update the residual based at least in part on the updated second weight and the first weight. The circuitry is further configured to update the first weight based at least in part on the updated second weight and the updated residual.
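
    The weight decomposition can be sketched in NumPy. Using float32 as the wide first format and float16 for the two narrow components is an assumption for illustration; the patent only requires that the second and third formats each use fewer bits than the first.

    ```python
    import numpy as np

    def split(w_full):
        """Decompose a wide weight into a reduced-precision weight plus a
        residual, both narrower than the original (float16 here)."""
        w_low = w_full.astype(np.float16)                         # second weight
        residual = (w_full - w_low.astype(np.float32)).astype(np.float16)
        return w_low, residual

    def recompose(w_low, residual):
        """Generate the first (wide) weight from the two narrow components."""
        return w_low.astype(np.float32) + residual.astype(np.float32)

    w = np.array([0.123456789, -1.00000123], dtype=np.float32)
    w_low, res = split(w)

    # One update step: apply a gradient to the recomposed wide weight, then
    # refresh both reduced-precision components from the updated value.
    grad = np.array([0.001, -0.002], dtype=np.float32)
    w_wide = recompose(w_low, res) - 0.1 * grad
    w_low, res = split(w_wide)

    print(w_wide, recompose(w_low, res))  # residual recovers the lost precision
    ```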

    Compression of fully connected/recurrent layers of deep network(s) through enforcing spatial locality to weight matrices and effecting frequency compression

    Publication No.: GB2582233A

    Publication Date: 2020-09-16

    Application No.: GB202009750

    Application Date: 2018-11-30

    Applicant: IBM

    Abstract: A system, having a memory that stores computer executable components, and a processor that executes the computer executable components, reduces data size in connection with training a neural network by enforcing spatial locality in weight matrices and effecting frequency transformation and compression. A receiving component receives neural network data in the form of an initial weight matrix. A segmentation component segments the initial weight matrix into original sub-components, wherein respective original sub-components have spatial weights. A sampling component applies a generalized weight distribution to the respective original sub-components to generate respective normalized sub-components. A transform component applies a transform to the respective normalized sub-components. A cropping component crops high-frequency weights of the respective transformed normalized sub-components to yield a set of low-frequency normalized sub-components, generating a compressed representation of the original sub-components.
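
    The transform-and-crop core can be sketched on a single sub-component. The segmentation and sampling steps are omitted, the DCT is one plausible choice of frequency transform, and the block and crop sizes are invented. On weights that have been given spatial locality, the cropped low-frequency corner preserves most of the energy; on the random matrix used here it merely demonstrates the mechanics.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def compress_block(block, keep=4):
        """Frequency-transform a weight sub-block, then crop the
        high-frequency weights, keeping only a keep x keep corner."""
        coeffs = dctn(block, norm="ortho")  # transform component
        return coeffs[:keep, :keep]         # cropping component

    def decompress_block(low, shape):
        """Rebuild an approximate sub-block from low-frequency coefficients."""
        full = np.zeros(shape)
        k = low.shape[0]
        full[:k, :k] = low
        return idctn(full, norm="ortho")

    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 16))           # one sub-component of a weight matrix
    low = compress_block(W, keep=4)         # 16x fewer coefficients stored
    W_hat = decompress_block(low, W.shape)
    print(np.abs(W - W_hat).mean())         # reconstruction error
    ```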
