-
Publication Number: US11798120B2
Publication Date: 2023-10-24
Application Number: US17398295
Filing Date: 2021-08-10
Applicant: Intel Corporation
Inventor: Dhiraj D. Kalamkar , Karthikeyan Vaidyanathan , Srinivas Sridharan , Dipankar Das
Abstract: One embodiment provides for a method of transmitting data between multiple compute nodes of a distributed compute system, the method comprising creating a global view of communication operations to be performed between the multiple compute nodes of the distributed compute system, the global view created using information specific to a machine learning model associated with the distributed compute system; using the global view to determine a communication cost of the communication operations; and automatically determining a number of network endpoints for use in transmitting the data between the multiple compute nodes of the distributed compute system.
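The abstract describes deriving a global view of communication operations from model-specific information, costing those operations, and sizing the number of network endpoints. Below is a minimal Python sketch of that idea; the `CommOp` class, the alpha-beta cost model, the overhead constants, and every function name are illustrative assumptions rather than the patent's implementation.

```python
# Hypothetical sketch: build a "global view" of communication ops from
# model-specific layer sizes, estimate their cost, and pick an endpoint count.
from dataclasses import dataclass

@dataclass
class CommOp:
    kind: str           # e.g. "allreduce" for gradient exchange
    message_bytes: int  # payload derived from the model's layer sizes

def build_global_view(layer_param_bytes):
    """Derive the full set of communication ops from model-specific info."""
    return [CommOp("allreduce", nbytes) for nbytes in layer_param_bytes]

def estimate_cost(ops, num_nodes, latency_s=2e-6, bandwidth_bps=12.5e9):
    """Simple alpha-beta estimate: a latency term plus a bytes-on-the-wire term."""
    cost = 0.0
    for op in ops:
        # a ring allreduce moves roughly 2*(n-1)/n of the payload per node
        wire_bytes = 2 * (num_nodes - 1) / num_nodes * op.message_bytes
        cost += latency_s + wire_bytes / bandwidth_bps
    return cost

def choose_endpoint_count(ops, num_nodes, max_endpoints=8,
                          per_endpoint_overhead_s=5e-6):
    """Pick the endpoint count with the lowest estimated cost, assuming each
    op's payload splits evenly across endpoints that run in parallel while
    every extra endpoint adds a fixed setup overhead."""
    def cost_with(endpoints):
        split = [CommOp(op.kind, op.message_bytes // endpoints) for op in ops]
        return estimate_cost(split, num_nodes) + endpoints * per_endpoint_overhead_s
    return min(range(1, max_endpoints + 1), key=cost_with)

ops = build_global_view([4 << 20, 16 << 20, 1 << 20])  # bytes per layer
print(estimate_cost(ops, num_nodes=8))
print(choose_endpoint_count(ops, num_nodes=8))
```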
-
Publication Number: US11094029B2
Publication Date: 2021-08-17
Application Number: US15482953
Filing Date: 2017-04-10
Applicant: Intel Corporation
Inventor: Dhiraj D. Kalamkar , Karthikeyan Vaidyanathan , Srinivas Sridharan , Dipankar Das
Abstract: One embodiment provides for a method of transmitting data between multiple compute nodes of a distributed compute system, the method comprising creating a global view of communication operations to be performed between the multiple compute nodes of the distributed compute system, the global view created using information specific to a machine learning model associated with the distributed compute system; using the global view to determine a communication cost of the communication operations; and automatically determining a number of network endpoints for use in transmitting the data between the multiple compute nodes of the distributed compute system.
-
Publication Number: US20190205745A1
Publication Date: 2019-07-04
Application Number: US15859180
Filing Date: 2017-12-29
Applicant: Intel Corporation
Inventor: Srinivas Sridharan , Karthikeyan Vaidyanathan , Dipankar Das , Chandrasekaran Sakthivel , Mikhail E. Smorkalov
CPC classification number: G06F9/5061 , G06F9/5077
Abstract: Embodiments described herein provide a system to configure distributed training of a neural network, the system comprising memory to store a library to facilitate data transmission during distributed training of the neural network; a network interface to enable transmission and receipt of configuration data associated with a set of worker nodes, the worker nodes configured to perform distributed training of the neural network; and a processor to execute instructions provided by the library, the instructions to cause the processor to create one or more groups of the worker nodes, the one or more groups of worker nodes to be created based on a communication pattern for messages to be transmitted between the worker nodes during distributed training of the neural network.
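The abstract centers on creating groups of worker nodes from an expected communication pattern. The short Python sketch below illustrates one way such grouping could look; the pattern names ("ring", "hierarchical"), the group shapes, and the function name are assumptions for illustration, not the library interface the patent describes.

```python
# Hypothetical sketch of grouping worker nodes by an expected communication
# pattern before distributed training begins.

def create_worker_groups(world_size, pattern, nodes_per_group=4):
    """Return lists of worker ranks; each inner list is one communication group."""
    if pattern == "ring":
        # one flat group: every worker talks to its ring neighbours
        return [list(range(world_size))]
    if pattern == "hierarchical":
        # local groups for intra-group reduction, plus a leader group
        local_groups = [list(range(i, min(i + nodes_per_group, world_size)))
                        for i in range(0, world_size, nodes_per_group)]
        leaders = [group[0] for group in local_groups]
        return local_groups + [leaders]
    raise ValueError(f"unknown pattern: {pattern}")

print(create_worker_groups(8, "hierarchical"))
# [[0, 1, 2, 3], [4, 5, 6, 7], [0, 4]]
```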
-
Publication Number: US20230376762A1
Publication Date: 2023-11-23
Application Number: US18320385
Filing Date: 2023-05-19
Applicant: Intel Corporation
Inventor: Srinivas Sridharan , Karthikeyan Vaidyanathan , Dipankar Das , Chandrasekaran Sakthivel , Mikhail E. Smorkalov
CPC classification number: G06N3/08 , G06N3/088 , G06F9/5061 , G06F9/50 , G06F9/5077 , G06N3/084 , G06N3/044 , G06N3/045 , G06N3/04 , G06N3/063 , G06N3/048
Abstract: Embodiments described herein provide an apparatus comprising an interconnect switch configured to couple with a plurality of graphics processors via a plurality of point-to-point interconnects and one or more processors including a graphics processor coupled with the interconnect switch via a point-to-point interconnect of the plurality of point-to-point interconnects.
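As a rough illustration of the apparatus described above, the sketch below models several graphics processors attached to a single interconnect switch over dedicated point-to-point links, so any GPU-to-GPU transfer takes two hops through the switch. The names and the routing helper are hypothetical, not the patented design.

```python
# Hypothetical sketch of the topology in the abstract: each GPU is coupled
# to one interconnect switch by its own point-to-point link.

def build_switch_topology(num_gpus):
    """Return the point-to-point links as (endpoint_a, endpoint_b) pairs."""
    gpus = [f"gpu{i}" for i in range(num_gpus)]
    links = [(gpu, "switch0") for gpu in gpus]   # one dedicated link per GPU
    return gpus, links

def route(src, dst):
    """Any GPU-to-GPU transfer crosses the switch: two point-to-point hops."""
    return [src] if src == dst else [src, "switch0", dst]

gpus, links = build_switch_topology(4)
print(links)
print(route("gpu0", "gpu3"))  # ['gpu0', 'switch0', 'gpu3']
```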
-
Publication Number: US11270201B2
Publication Date: 2022-03-08
Application Number: US15859180
Filing Date: 2017-12-29
Applicant: Intel Corporation
Inventor: Srinivas Sridharan , Karthikeyan Vaidyanathan , Dipankar Das , Chandrasekaran Sakthivel , Mikhail E. Smorkalov
Abstract: Embodiments described herein provide a system to configure distributed training of a neural network, the system comprising memory to store a library to facilitate data transmission during distributed training of the neural network; a network interface to enable transmission and receipt of configuration data associated with a set of worker nodes, the worker nodes configured to perform distributed training of the neural network; and a processor to execute instructions provided by the library, the instructions to cause the processor to create one or more groups of the worker nodes, the one or more groups of worker nodes to be created based on a communication pattern for messages to be transmitted between the worker nodes during distributed training of the neural network.
-
Publication Number: US20180322387A1
Publication Date: 2018-11-08
Application Number: US15869510
Filing Date: 2018-01-12
Applicant: Intel Corporation
Inventor: Srinivas Sridharan , Karthikeyan Vaidyanathan , Dipankar Das
Abstract: One embodiment provides for a system to compute and distribute data for distributed training of a neural network, the system including first memory to store a first set of instructions including a machine learning framework; a fabric interface to enable transmission and receipt of data associated with a set of trainable machine learning parameters; a first set of general-purpose processor cores to execute the first set of instructions, the first set of instructions to provide a training workflow for computation of gradients for the trainable machine learning parameters and to communicate with a second set of instructions, the second set of instructions to facilitate transmission and receipt of the gradients via the fabric interface; and a graphics processor to perform compute operations associated with the training workflow to generate the gradients for the trainable machine learning parameters.
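The abstract splits responsibilities between framework instructions that drive the training workflow, a graphics processor that produces gradients, and communication instructions that move those gradients over a fabric interface. The sketch below mocks that split in Python; the function names and the averaging stand-in for the fabric exchange are assumptions for illustration only.

```python
# Hypothetical sketch of the split described in the abstract: gradient
# computation on the GPU side, gradient exchange on the fabric side.

def compute_gradients_on_gpu(params, batch):
    """Stand-in for the graphics processor's backward pass."""
    return [0.0 for _ in params]  # placeholder gradients

def fabric_allreduce(gradients, num_nodes):
    """Stand-in for the second set of instructions: average gradients
    across nodes via the fabric interface."""
    return [g / num_nodes for g in gradients]  # local mock of the reduction

def training_step(params, batch, num_nodes, lr=0.01):
    grads = compute_gradients_on_gpu(params, batch)   # framework instructions
    grads = fabric_allreduce(grads, num_nodes)        # communication instructions
    return [p - lr * g for p, g in zip(params, grads)]

print(training_step([0.5, -0.2], batch=None, num_nodes=4))
```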
-
Publication Number: US20220245454A1
Publication Date: 2022-08-04
Application Number: US17685462
Filing Date: 2022-03-03
Applicant: Intel Corporation
Inventor: Srinivas Sridharan , Karthikeyan Vaidyanathan , Dipankar Das , Chandrasekaran Sakthivel , Mikhail E. Smorkalov
Abstract: Embodiments described herein provide a system to configure distributed training of a neural network, the system comprising memory to store a library to facilitate data transmission during distributed training of the neural network; a network interface to enable transmission and receipt of configuration data associated with a set of worker nodes, the worker nodes configured to perform distributed training of the neural network; and a processor to execute instructions provided by the library. The instructions cause the processor to create one or more groups of the worker nodes, the one or more groups of worker nodes to be created based on a communication pattern for messages to be transmitted between the worker nodes during distributed training of the neural network. The processor can transparently adjust communication paths between worker nodes based on the communication pattern.
-
Publication Number: US11023803B2
Publication Date: 2021-06-01
Application Number: US15482925
Filing Date: 2017-04-10
Applicant: Intel Corporation
Inventor: Dhiraj D. Kalamkar , Karthikeyan Vaidyanathan , Srinivas Sridharan , Dipankar Das
Abstract: One embodiment provides for a non-transitory machine readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform operations comprising providing an interface to define a neural network using machine-learning domain specific terminology, wherein the interface enables selection of a neural network topology and abstracts low-level communication details of distributed training of the neural network.
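The abstract describes an interface that lets a user define a neural network in machine-learning terms while the low-level communication details of distributed training stay hidden. The sketch below shows what such an interface might look like; the `DistributedModelSpec` class, its methods, and the parallelism labels are hypothetical, not the patent's API.

```python
# Hypothetical sketch of a machine-learning-domain interface that hides
# the distributed-communication details from the user.

class DistributedModelSpec:
    def __init__(self, topology):
        self.topology = topology   # e.g. a named network topology chosen by the user
        self.layers = []

    def add_layer(self, name, parallelism="data"):
        # the user states *what* to parallelize; the library later decides *how*
        # (allreduce vs. allgather, endpoint counts, message scheduling)
        self.layers.append({"name": name, "parallelism": parallelism})
        return self

spec = (DistributedModelSpec("toy_cnn")
        .add_layer("conv1", parallelism="data")
        .add_layer("fc1", parallelism="model"))
print([layer["name"] for layer in spec.layers])
```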
-
Publication Number: US20210109888A1
Publication Date: 2021-04-15
Application Number: US16642483
Filing Date: 2017-09-30
Applicant: Intel Corporation
Inventor: Karthikeyan Vaidyanathan , Srinivas Sridharan , Dipankar Das
IPC: G06F15/163
Abstract: A technique includes performing a collective operation among multiple nodes of a parallel processing computer system using multiple parallel processing stages. The technique includes regulating an ordering of the parallel processing stages so that an initial stage of the plurality of parallel processing stages is associated with a higher node injection bandwidth than a subsequent stage of the plurality of parallel processing stages.
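The abstract's key point is ordering the stages of a multi-stage collective so that earlier stages enjoy higher per-node injection bandwidth. The sketch below applies that ordering rule to a made-up set of stages; the stage names and bandwidth figures are illustrative assumptions.

```python
# Hypothetical sketch: order collective stages so the stage with the highest
# per-node injection bandwidth runs first.

def order_stages(stages):
    """Sort stages by injection bandwidth, highest first."""
    return sorted(stages, key=lambda s: s["injection_gbps"], reverse=True)

stages = [
    {"name": "inter-switch exchange", "injection_gbps": 12.5},
    {"name": "intra-node exchange",   "injection_gbps": 50.0},
    {"name": "inter-rack exchange",   "injection_gbps": 6.25},
]
for stage in order_stages(stages):
    print(stage["name"])  # intra-node, then inter-switch, then inter-rack
```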
-
Publication Number: US12211117B2
Publication Date: 2025-01-28
Application Number: US17849968
Filing Date: 2022-06-27
Applicant: Intel Corporation
Inventor: Dipankar Das , Karthikeyan Vaidyanathan , Srinivas Sridharan
Abstract: One embodiment provides for a method of transmitting data between multiple compute nodes of a distributed compute system, the method comprising multi-dimensionally partitioning data of a feature map across multiple nodes for distributed training of a convolutional neural network; performing a parallel convolution operation on the multiple partitions to train weight data of the neural network; and exchanging data between nodes to enable computation of halo regions, the halo regions having dependencies on data processed by a different node.
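The abstract combines multi-dimensional partitioning of a feature map with exchange of halo regions between neighbouring nodes. The sketch below partitions a feature map across a 2x2 node grid and lists the neighbour transfers a 3x3 convolution would need; the grid shape, halo width, and function names are assumptions for illustration.

```python
# Hypothetical sketch: split a feature map across a 2x2 node grid and list
# the halo transfers each node needs from its neighbours before convolving.

def partition_2d(height, width, grid=(2, 2)):
    """Return per-node tiles as (row_start, row_end, col_start, col_end)."""
    rows, cols = grid
    tiles = {}
    for r in range(rows):
        for c in range(cols):
            tiles[(r, c)] = (r * height // rows, (r + 1) * height // rows,
                             c * width // cols, (c + 1) * width // cols)
    return tiles

def halo_exchanges(tiles, grid=(2, 2), halo=1):
    """List which neighbour each node must fetch halo data from."""
    rows, cols = grid
    needs = []
    for (r, c) in tiles:
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                needs.append(((r, c), (nr, nc), halo))  # (dst, src, halo width)
    return needs

tiles = partition_2d(8, 8)
print(len(halo_exchanges(tiles)))  # 8 neighbour transfers on a 2x2 grid
```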