1.
Publication No.: US10833995B2
Publication Date: 2020-11-10
Application No.: US16209865
Application Date: 2018-12-04
Inventor: Jianxin Liao , Qi Qi , Jing Wang , Jingyu Wang , Jiannan Bao
IPC: H04L12/801 , H04L29/08 , H04L29/06
Abstract: Embodiments of the present invention provide a congestion control method and apparatus based on a software-defined network (SDN), and an SDN controller. The method comprises: obtaining a packet_in message sent by a switch; determining the data packet included in the packet_in message; performing a first congestion control processing for the network where the SDN controller is located, based on the topological structure and link information of the network, when the data packet is a SYN (handshake) packet requesting to establish a TCP connection; performing a second congestion control processing for the network, based on the link information, when the data packet is a FIN (finish) packet responding to the disconnection of a TCP connection; and deleting the information of a TCP connection stored in a database and corresponding to the data packet when the data packet is a FIN packet requesting to disconnect a TCP connection. Compared with the prior art, the solutions according to the embodiments of the present invention enable the SDN controller to improve the fairness of bandwidth among data flows, reduce the TCP retransmissions and timeouts caused by highly bursty short traffic, and control the TCP incast congestion present in data centers.
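The dispatch logic described in the abstract (SYN packet → first congestion control, FIN-as-response → second congestion control, FIN-as-request → delete the stored connection record) can be illustrated with a toy controller. The sketch below is not the patented method: the fair-share computation, the LinkInfo structure, and all names are assumptions introduced for illustration only.

```python
from dataclasses import dataclass

# Standard TCP flag bits.
SYN, FIN, ACK = 0x02, 0x01, 0x10

@dataclass
class LinkInfo:
    capacity_mbps: float
    active_flows: int = 0

class ToyController:
    """Dispatches packet_in messages by the TCP flags of the enclosed packet."""

    def __init__(self, links):
        self.links = links      # (src_host, dst_host) -> LinkInfo
        self.connections = {}   # (src, sport, dst, dport) -> state

    def on_packet_in(self, flow, flags):
        src, _, dst, _ = flow
        if flags & SYN and not flags & ACK:
            self.first_congestion_control(flow, (src, dst))
        elif flags & FIN and flags & ACK:
            self.second_congestion_control((src, dst))
        elif flags & FIN:
            self.connections.pop(flow, None)  # drop stored connection record

    def first_congestion_control(self, flow, link_key):
        # A new connection: record it and recompute the per-flow fair share
        # on the link it traverses (the topology lookup is elided here).
        self.connections[flow] = "SYN_SEEN"
        link = self.links[link_key]
        link.active_flows += 1
        print(f"{flow}: fair share now "
              f"{link.capacity_mbps / link.active_flows:.1f} Mbps")

    def second_congestion_control(self, link_key):
        # A connection is closing: return its share to the remaining flows.
        link = self.links[link_key]
        link.active_flows = max(0, link.active_flows - 1)

ctrl = ToyController({("h1", "h2"): LinkInfo(1000.0)})
ctrl.on_packet_in(("h1", 5000, "h2", 80), SYN)
ctrl.on_packet_in(("h1", 5001, "h2", 80), SYN)
ctrl.on_packet_in(("h1", 5000, "h2", 80), FIN)        # request to disconnect
ctrl.on_packet_in(("h1", 5001, "h2", 80), FIN | ACK)  # response to disconnection
```

Per-flow fair-share accounting of this kind is one plausible way an SDN controller could damp incast: many simultaneous SYNs on a shared bottleneck immediately shrink each flow's share, rather than being discovered later through retransmission timeouts.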
2.
Publication No.: US11514309B2
Publication Date: 2022-11-29
Application No.: US16215033
Application Date: 2018-12-10
Inventor: Jianxin Liao , Jingyu Wang , Jing Wang , Qi Qi , Jie Xu
Abstract: Embodiments of the present invention provide a method and apparatus for accelerating the distributed training of a deep neural network. In the method, training is organized as a distributed, parallel process: the deep neural network to be trained is divided into multiple sub-networks, and the set of training samples is divided into multiple subsets of samples. The deep neural network is then trained with the multiple subsets of samples on a distributed cluster architecture under a preset scheduling method, with the multiple sub-networks trained simultaneously, thereby accomplishing the distributed training of the deep neural network. The distributed cluster architecture and the preset scheduling method may reduce, through data localization, the effect of network delay on the sub-networks under distributed training; adapt the training strategy in real time; and synchronize the sub-networks trained in parallel. As such, the time required for the distributed training of the deep neural network may be reduced and the training efficiency may be improved.
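A minimal sketch of the general pattern the abstract describes: training samples are divided into per-worker subsets, replicas are trained in parallel, and a synchronization step merges them. This stands in for the patent's cluster architecture and scheduling method, which are not specified here; the linear model, parameter averaging, and all names are illustrative assumptions.

```python
import numpy as np

def sgd_step(w, X, y, lr=0.1):
    """One least-squares gradient step on a worker's local shard."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def distributed_train(X, y, n_workers=4, rounds=50):
    # Divide the sample set into per-worker subsets (data localization).
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        # Each worker trains its replica on its own subset
        # (run sequentially here for clarity).
        local = [sgd_step(w.copy(), Xs, ys) for Xs, ys in shards]
        # Synchronization barrier: average the parallel replicas.
        w = np.mean(local, axis=0)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=400)
print(distributed_train(X, y))  # converges toward w_true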
3.
Publication No.: US11411865B2
Publication Date: 2022-08-09
Application No.: US16906867
Application Date: 2020-06-19
Inventor: Jing Wang , Jingyu Wang , Haifeng Sun , Qi Qi , Bo He , Jianxin Liao
IPC: H04L45/00 , H04L45/302
Abstract: A network resource scheduling method and apparatus, an electronic device, and a storage medium are disclosed. An embodiment of the method includes: upon receipt of a network data stream, determining the traffic type of the network data stream based on the number of data packets of the stream received within a specified period of time, the lengths of the data packets, and the reception times of the data packets; for each data packet comprised in the network data stream, determining a target transmission path for the data packet based on the node state parameters of nodes in the network cluster, the link state parameters of links in the network cluster, and the traffic type of the network data stream at the time the data packet is received; and transmitting the data packet via the target transmission path.
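The two stages in the abstract — typing a flow from its packet counts, lengths, and timing within a window, then choosing a path from link state and the traffic type — might look roughly like the sketch below. The elephant/mice split, the thresholds, and the path-scoring rules are assumptions for illustration, not the patented classification or routing logic.

```python
from dataclasses import dataclass

@dataclass
class PathState:
    latency_ms: float
    utilization: float  # fraction of link capacity in use, 0.0 - 1.0

def classify(packet_lengths, timestamps, window_s=1.0):
    """Heuristic traffic typing from packets seen within a time window."""
    in_window = [l for l, t in zip(packet_lengths, timestamps) if t <= window_s]
    total_bytes = sum(in_window)
    return "elephant" if total_bytes > 100_000 or len(in_window) > 100 else "mice"

def pick_path(paths, traffic_type):
    # Latency-sensitive mice flows take the fastest path; elephants take
    # the least-utilized path so they do not congest already-hot links.
    if traffic_type == "mice":
        return min(paths, key=lambda p: paths[p].latency_ms)
    return min(paths, key=lambda p: paths[p].utilization)

paths = {"A-B-D": PathState(2.0, 0.7), "A-C-D": PathState(5.0, 0.2)}
flow_type = classify([1500] * 200, [i * 0.004 for i in range(200)])
print(flow_type, "->", pick_path(paths, flow_type))  # elephant -> A-C-D
```

Note that the path decision is made per data packet in the abstract, so under this scheme a long-lived stream could migrate paths as link state changes mid-transfer.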
4.
Publication No.: US11886993B2
Publication Date: 2024-01-30
Application No.: US17015269
Application Date: 2020-09-09
Inventor: Qi Qi , Haifeng Sun , Jing Wang , Lingxin Zhang , Jingyu Wang , Jianxin Liao
Abstract: Disclosed are a method and apparatus for task scheduling based on deep reinforcement learning, and a device. The method comprises: obtaining multiple target subtasks to be scheduled; building target state data corresponding to the multiple target subtasks, wherein the target state data comprises a first set, a second set, a third set, and a fourth set; inputting the target state data into a pre-trained task scheduling model to obtain a scheduling result for each target subtask, wherein the scheduling result of each target subtask comprises the probability that the target subtask is scheduled to each target node; and, for each target subtask, determining the target node to which the target subtask is to be scheduled based on its scheduling result, and scheduling the target subtask to the determined node.
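The inference-time loop the abstract describes — state in, per-node probabilities out, greedy placement — can be sketched as below. A linear softmax stands in for the pre-trained deep model, and a flat feature vector stands in for the four state sets; the weights, dimensions, and all names are assumptions for illustration.

```python
import numpy as np

def policy(state_vec, W):
    """Stand-in for the pre-trained scheduling model: softmax over nodes."""
    logits = state_vec @ W
    e = np.exp(logits - logits.max())  # stable softmax
    return e / e.sum()

def schedule(subtask_states, W):
    assignments = []
    for s in subtask_states:
        probs = policy(s, W)                       # probability per target node
        assignments.append(int(np.argmax(probs)))  # greedy placement
    return assignments

rng = np.random.default_rng(1)
n_features, n_nodes = 8, 4
W = rng.normal(size=(n_features, n_nodes))  # assumed "pre-trained" weights
states = rng.normal(size=(5, n_features))   # one state vector per subtask
print(schedule(states, W))                  # node index for each of 5 subtasks
```

The argmax here is the simplest reading of "determining a target node based on the scheduling result"; sampling from the probabilities would be an equally valid interpretation.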