Apparatus, method, and computer program for improving performance of parallel computing system
    Invention patent (Granted)

    Publication number: JP2012168930A

    Publication date: 2012-09-06

    Application number: JP2012006116

    Application date: 2012-01-16

    CPC classification number: G06F12/0837

    Abstract: PROBLEM TO BE SOLVED: To provide an apparatus, method, and computer program for improving performance of a parallel computing system. SOLUTION: A first local cache controller associated with a first local cache of a first processor detects the occurrence of false sharing of a first cache line by a second processor running program code, and allows the false sharing of the first cache line by the second processor. The false sharing of the first cache line occurs when the first local cache controller updates a first portion of the first cache line in the first local cache and a second local cache controller subsequently updates a second portion of the first cache line in a second local cache.
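
    The mechanism lends itself to a small simulation. The sketch below is a hedged illustration only, not the patented design: the class, the shared write directory, and LINE_SIZE are invented for this example. It models two local cache controllers that flag a write as false sharing when another processor has written the same cache line but a disjoint byte range, so the concurrent update can be allowed rather than forcing an invalidation.

```python
# Minimal sketch of false-sharing detection between local cache controllers.
# All names and the directory layout are illustrative assumptions.

LINE_SIZE = 64  # bytes per cache line (a typical value, assumed here)

class LocalCacheController:
    def __init__(self, cpu_id, directory):
        self.cpu_id = cpu_id
        self.directory = directory  # shared: line index -> [(cpu, (lo, hi)), ...]

    def update(self, address, nbytes):
        """Record a write and report whether it falsely shares the line."""
        line = address // LINE_SIZE
        lo = address % LINE_SIZE
        new = (lo, lo + nbytes)
        writers = self.directory.setdefault(line, [])
        # False sharing: another CPU wrote the same line, but its byte range
        # is disjoint from ours, so there is no true data dependence and the
        # concurrent update can be allowed.
        falsely_shared = any(
            cpu != self.cpu_id and (new[1] <= rng[0] or rng[1] <= new[0])
            for cpu, rng in writers
        )
        writers.append((self.cpu_id, new))
        return falsely_shared

directory = {}
cpu0 = LocalCacheController(0, directory)
cpu1 = LocalCacheController(1, directory)
cpu0.update(0, 4)           # CPU 0 updates bytes 0-3 of line 0
print(cpu1.update(8, 4))    # CPU 1 updates bytes 8-11 of line 0 -> True
```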


    Low precision deep neural network enabled by compensation instructions

    Publication number: GB2590000A

    Publication date: 2021-06-16

    Application number: GB202100363

    Application date: 2019-06-13

    Applicant: IBM

    Abstract: A compensated deep neural network (compensated-DNN) is provided. A first vector having a set of components and a second vector having a set of corresponding components are received. A component of the first vector includes a first quantized value and a first compensation instruction, and a corresponding component of the second vector includes a second quantized value and a second compensation instruction. The first quantized value is multiplied with the second quantized value to compute a raw product value. The raw product value is compensated for a quantization error according to the first and second compensation instructions to produce a compensated product value. The compensated product value is added into an accumulated value for the dot product. The accumulated value is converted into an output vector of the dot product. The output vector includes an output quantized value and an output compensation instruction.
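
    The dot-product flow can be illustrated with a small numeric sketch. The encoding below is an assumption made for illustration, not the patented format: each component carries its quantized value plus a "compensation instruction" recording the sign and a coarse power-of-two magnitude of the quantization error, and each raw product is corrected to first order before accumulation.

```python
import math

def quantize(x, step=0.125):
    """Quantize x and emit a compensation instruction: the sign of the
    quantization error and a power-of-two exponent for its size.
    (This encoding is an illustrative assumption.)"""
    q = round(x / step) * step
    err = x - q
    if err == 0.0:
        return q, (0, 0)                      # nothing to compensate
    sign = 1 if err > 0 else -1
    exp = int(round(math.log2(abs(err))))     # coarse error magnitude
    return q, (sign, exp)

def comp(instr):
    """Reconstruct the approximate quantization error from an instruction."""
    sign, exp = instr
    return 0.0 if sign == 0 else sign * 2.0 ** exp

def compensated_dot(xs, ys):
    """Dot product over (quantized value, compensation instruction) pairs."""
    acc = 0.0
    for (qx, ix), (qy, iy) in zip(xs, ys):
        raw = qx * qy                         # raw product of quantized values
        # First-order correction: (qx+ex)(qy+ey) ~= qx*qy + qx*ey + qy*ex
        acc += raw + qx * comp(iy) + qy * comp(ix)
    # Emit the accumulated value as an output component in the same format.
    return quantize(acc)

a = [quantize(v) for v in (0.30, -1.17, 0.52)]
b = [quantize(v) for v in (0.91, 0.08, -0.44)]
print(compensated_dot(a, b))   # (output quantized value, output instruction)
```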

    Predicting out of order instruction level parallelism of threads in a multi-threaded processor

    Publication number: GB2492457A

    Publication date: 2013-01-02

    Application number: GB201210975

    Application date: 2012-06-21

    Applicant: IBM

    Abstract: The application discloses systems and methods for predicting out-of-order instruction-level parallelism (ILP) of threads being executed in a multi-threaded processor and prioritizing their scheduling. One aspect provides for tracking completion of thread instructions using a global completion table having a head segment and a tail segment. Prediction values for each instruction are stored in a prediction table and indexed via instruction identifiers associated with each instruction. The prediction value indicates whether an instruction is predicted to issue from the head or the tail segment, and threads with more instructions issuing from the tail segment are predicted to have a higher degree of out-of-order instruction-level parallelism. This ILP prediction is then used to schedule the instructions.
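
    A rough illustration of the head/tail idea follows; the segment split, table layout, and scheduling policy are assumptions for the sketch, not the claimed design. Each issued instruction sets a prediction bit according to which segment of the global completion table (GCT) it issued from, and threads are ranked by how many of their instructions are predicted to issue from the tail.

```python
from collections import defaultdict

GCT_SIZE = 32
HEAD_END = GCT_SIZE // 2      # slots 0..15 = head segment, 16..31 = tail

prediction_table = defaultdict(int)   # instruction id -> 1 if tail-issue predicted

def record_issue(instr_id, gct_slot):
    """On issue, remember which segment of the GCT the instruction left from."""
    prediction_table[instr_id] = 1 if gct_slot >= HEAD_END else 0

def predicted_ilp(instr_ids):
    """More predicted tail-segment issues -> more out-of-order ILP."""
    return sum(prediction_table[i] for i in instr_ids)

def schedule(threads):
    """Prioritize threads with higher predicted out-of-order ILP.
    `threads` maps a thread id to the ids of its in-flight instructions."""
    return sorted(threads, key=lambda tid: predicted_ilp(threads[tid]),
                  reverse=True)

record_issue("t0:i0", 3)      # issued from the head segment
record_issue("t1:i0", 20)     # issued from the tail segment
record_issue("t1:i1", 28)
print(schedule({"t0": ["t0:i0"], "t1": ["t1:i0", "t1:i1"]}))  # ['t1', 't0']
```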

    Low precision deep neural network enabled by compensation instructions

    Publication number: GB2590000B

    Publication date: 2022-12-07

    Application number: GB202100363

    Application date: 2019-06-13

    Applicant: IBM

    Abstract: A compensated deep neural network (compensated-DNN) is provided. A first vector having a set of components and a second vector having a set of corresponding components are received. A component of the first vector includes a first quantized value and a first compensation instruction, and a corresponding component of the second vector includes a second quantized value and a second compensation instruction. The first quantized value is multiplied with the second quantized value to compute a raw product value. The raw product value is compensated for a quantization error according to the first and second compensation instructions to produce a compensated product value. The compensated product value is added into an accumulated value for the dot product. The accumulated value is converted into an output vector of the dot product. The output vector includes an output quantized value and an output compensation instruction.

    Hybrid data-model parallelism for efficient deep learning

    Publication number: GB2604060A

    Publication date: 2022-08-24

    Application number: GB202206096

    Application date: 2020-09-29

    Applicant: IBM

    Abstract: Hybrid parallelism techniques are disclosed in which a mix of data and model parallelism is used to split the workload of a layer across an array of processors. When configuring the array, the bandwidth of the processors in one direction may be greater than the bandwidth in the other direction. Each layer is characterized according to whether it is more feature-heavy or weight-heavy. Depending on this characterization, the workload of a neural network (NN) layer can be assigned to the array using a hybrid parallelism technique rather than solely the data parallelism technique or solely the model parallelism technique. For example, if an NN layer is more weight-heavy than feature-heavy, data parallelism is used in the direction with the greater bandwidth (to minimize the negative impact of weight reduction) while model parallelism is used in the direction with the smaller bandwidth.
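
    A toy version of the per-layer decision might look like the sketch below; the byte-count heuristic and all names are illustrative assumptions. Weight-heavy layers place data parallelism on the higher-bandwidth axis so that weight-gradient reduction rides the fast links, and feature-heavy layers flip the assignment.

```python
# Hedged sketch: choose a hybrid data/model split per layer for a 2-D
# processor array whose `fast_axis` links have higher bandwidth.

def assign_parallelism(layer, fast_axis="x", slow_axis="y"):
    if layer["weight_bytes"] > layer["feature_bytes"]:
        # Weight-heavy: weight-gradient reduction dominates communication,
        # so run data parallelism over the fast axis.
        return {fast_axis: "data", slow_axis: "model"}
    else:
        # Feature-heavy: activation traffic dominates, so flip the mapping.
        return {fast_axis: "model", slow_axis: "data"}

layers = [
    {"name": "fc1",   "weight_bytes": 64_000_000, "feature_bytes": 2_000_000},
    {"name": "conv1", "weight_bytes": 500_000,    "feature_bytes": 40_000_000},
]
for layer in layers:
    print(layer["name"], assign_parallelism(layer))
```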

    System-aware selective quantization for performance optimized distributed deep learning

    Publication number: GB2600872A

    Publication date: 2022-05-11

    Application number: GB202201906

    Application date: 2020-07-17

    Applicant: IBM

    Abstract: A convolutional neural network includes a front layer, a back layer, and a plurality of other layers that are connected between the front layer and the back layer. One of the other layers is a transition layer. A first precision is assigned to activations of neurons from the front layer back to the transition layer and a second precision is assigned to activations of the neurons from the transition layer back to the back layer. A third precision is assigned to weights of inputs to neurons from the front layer back to the transition layer and a fourth precision is assigned to weights of inputs to the neurons from the transition layer back to the back layer. In some embodiments the layers forward of the transition layer have a different convolutional kernel than the layers rearward of the transition layer.
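
    A minimal sketch of the split follows; the bit widths and layer names are placeholders assumed for illustration. Layers from the front through the transition layer receive one activation/weight precision pair, and the layers behind it receive another.

```python
def assign_precisions(layers, transition,
                      act_front=8, act_back=4, wgt_front=8, wgt_back=2):
    """Assign one (activation, weight) precision pair to layers up to and
    including the transition layer, and another to the remaining layers.
    Bit widths here are placeholders, not values from the patent."""
    plan = {}
    for i, name in enumerate(layers):
        front = i <= transition
        plan[name] = {
            "activation_bits": act_front if front else act_back,
            "weight_bits": wgt_front if front else wgt_back,
        }
    return plan

net = ["conv1", "conv2", "conv3", "conv4", "fc"]
print(assign_precisions(net, transition=1))  # conv1-conv2 high, conv3-fc low
```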
