METHOD FOR MULTI-POLICY CONFLICT AVOIDANCE IN AUTONOMOUS NETWORK

    Publication Number: US20230412457A1

    Publication Date: 2023-12-21

    Application Number: US18115453

    Filing Date: 2023-02-28

    CPC classification number: H04L41/0894; H04L41/0886

    Abstract: A method for multi-policy conflict avoidance in an autonomous network comprises: collecting network state information; acquiring a set of multiple policies to be verified; constructing a policy ordering space tree containing all multi-policy execution sequences; performing a depth-first traversal of the policy ordering space tree, extracting a multi-policy execution sequence to be verified, constructing an initial simulation data plane, injecting each policy of the execution sequence into the simulation data plane one by one in order, and storing the simulation data plane after each policy is inserted; detecting whether a conflict exists in the simulation data plane generated after each policy is executed; inferring dependencies among the policies in a conflicting policy sequence; and pruning the policy ordering space tree so as to efficiently select and update a conflict-free multi-policy execution sequence.
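
    The abstract describes a depth-first traversal of a policy ordering space tree in which each candidate policy is injected into a simulated data plane and conflicting branches are pruned. The following is a minimal Python sketch of that idea rather than the patented implementation: the data-plane model, the conflict criterion, and all names are illustrative assumptions, and the dependency-inference step is omitted.

        # Minimal sketch (assumed names and structures): the simulation data
        # plane is reduced to a dict mapping a match key to an action, and a
        # conflict means two policies assign different actions to one key.
        def has_conflict(data_plane, policy):
            match, action = policy
            return match in data_plane and data_plane[match] != action

        def find_conflict_free_order(initial_plane, policies):
            """Depth-first search over the policy ordering space tree.

            Branches whose prefix already produced a conflict are pruned,
            i.e. never expanded further.
            """
            def dfs(plane, remaining, order):
                if not remaining:
                    return order                       # complete conflict-free sequence
                for i, policy in enumerate(remaining):
                    if has_conflict(plane, policy):    # prune this branch
                        continue
                    next_plane = dict(plane)
                    next_plane[policy[0]] = policy[1]  # inject policy into the simulated plane
                    found = dfs(next_plane, remaining[:i] + remaining[i + 1:], order + [policy])
                    if found is not None:
                        return found
                return None

            return dfs(dict(initial_plane), list(policies), [])

        if __name__ == "__main__":
            plane = {"10.0.0.0/24": "fwd:1"}
            policies = [("10.0.0.0/24", "fwd:1"), ("10.0.1.0/24", "drop")]
            print(find_conflict_free_order(plane, policies))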

    METHOD AND APPARATUS FOR TASK SCHEDULING BASED ON DEEP REINFORCEMENT LEARNING, AND DEVICE

    Publication Number: US20210081787A1

    Publication Date: 2021-03-18

    Application Number: US17015269

    Filing Date: 2020-09-09

    Abstract: Disclosed are a method and apparatus for task scheduling based on deep reinforcement learning, and a device. The method comprises: obtaining multiple target subtasks to be scheduled; building target state data corresponding to the multiple target subtasks, wherein the target state data comprises a first set, a second set, a third set, and a fourth set; inputting the target state data into a pre-trained task scheduling model to obtain a scheduling result for each target subtask, wherein the scheduling result of each target subtask comprises a probability that the target subtask is scheduled to each target node; and, for each target subtask, determining a target node to which the target subtask is to be scheduled based on the scheduling result of the target subtask, and scheduling the target subtask to the determined target node.
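
    As a rough illustration of the scheduling flow described above, the Python sketch below builds a state object with four (assumed) sets, queries a stand-in for the pre-trained model to obtain per-node scheduling probabilities, and greedily assigns each subtask to the most probable node. The model, the contents of the four sets, and the greedy selection are assumptions, not the patented design.

        # Illustrative sketch only: the contents of the four state sets, the
        # stand-in model, and greedy node selection are assumptions.
        import numpy as np

        class DummySchedulingModel:
            """Stand-in for the pre-trained task scheduling model."""

            def __init__(self, num_nodes, seed=0):
                self.num_nodes = num_nodes
                self.rng = np.random.default_rng(seed)

            def predict(self, state):
                # Return, for each subtask, a probability distribution over nodes.
                logits = self.rng.normal(size=(len(state["subtasks"]), self.num_nodes))
                exp = np.exp(logits - logits.max(axis=1, keepdims=True))
                return exp / exp.sum(axis=1, keepdims=True)

        def schedule(subtasks, nodes, model):
            # Target state data: four sets as named in the abstract (contents assumed).
            state = {
                "subtasks": subtasks,        # first set: subtasks to be scheduled
                "node_resources": nodes,     # second set: candidate node information
                "dependencies": [],          # third set: inter-subtask dependencies
                "placements": [],            # fourth set: current placements
            }
            probs = model.predict(state)
            # For each subtask, pick the node with the highest scheduling probability.
            return {task: nodes[int(np.argmax(p))] for task, p in zip(subtasks, probs)}

        if __name__ == "__main__":
            model = DummySchedulingModel(num_nodes=3)
            print(schedule(["t1", "t2"], ["node-a", "node-b", "node-c"], model))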

    METHOD AND APPARATUS FOR DATACENTER CONGESTION CONTROL BASED ON SOFTWARE DEFINED NETWORK

    Publication Number: US20190394129A1

    Publication Date: 2019-12-26

    Application Number: US16209865

    Filing Date: 2018-12-04

    Abstract: Embodiments of the present invention provide a congestion control method and apparatus based on a software defined network (SDN), and an SDN controller. The method comprises: obtaining a packet_in message sent by a switch; determining a data packet included in the packet_in message; performing a first congestion control processing for the network where the SDN controller is located, based on the topological structure and link information of the network, when the data packet is a handshake (SYN) packet requesting establishment of a TCP connection; performing a second congestion control processing for the network, based on the link information, when the data packet is a finish (FIN) packet responding to the disconnection of a TCP connection; and deleting the information of the corresponding TCP connection stored in a database when the data packet is a FIN packet requesting disconnection of a TCP connection. Compared with the prior art, the solutions according to the embodiments of the present invention enable the SDN controller to improve the fairness of bandwidth among data flows, reduce TCP retransmissions and timeouts caused by highly bursty short traffic, and control the TCP incast congestion present in the datacenter.
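
    The dispatch on packet type described above can be sketched as follows. This is a simplified, hypothetical Python outline: the packet representation, the handler names, the connection table, and the concrete congestion-control actions are assumptions and do not reflect the actual controller implementation.

        # Hedged sketch of the packet_in dispatch described in the abstract;
        # packet format, handler names, and actions are all assumptions.
        SYN, FIN_RESPONSE, FIN_REQUEST = "SYN", "FIN_RESP", "FIN_REQ"

        class CongestionController:
            def __init__(self, topology, link_info):
                self.topology = topology
                self.link_info = link_info
                self.connections = {}   # database of tracked TCP connections

            def on_packet_in(self, packet):
                kind = packet["type"]
                if kind == SYN:
                    # First congestion control processing: uses topology + link info.
                    self.first_control(packet)
                elif kind == FIN_RESPONSE:
                    # Second congestion control processing: uses link info only.
                    self.second_control(packet)
                elif kind == FIN_REQUEST:
                    # Delete the stored TCP connection state for this flow.
                    self.connections.pop(packet["flow_id"], None)

            def first_control(self, packet):
                # e.g. record the new connection and install path/rate rules (assumed).
                self.connections[packet["flow_id"]] = {"state": "establishing"}

            def second_control(self, packet):
                # e.g. release bandwidth reserved on the flow's links (assumed).
                self.link_info[packet["flow_id"]] = "released"

        if __name__ == "__main__":
            ctrl = CongestionController(topology={}, link_info={})
            ctrl.on_packet_in({"type": "SYN", "flow_id": "f1"})
            ctrl.on_packet_in({"type": "FIN_REQ", "flow_id": "f1"})
            print(ctrl.connections)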

    METHOD AND APPARATUS FOR ACCELERATING DISTRIBUTED TRAINING OF A DEEP NEURAL NETWORK

    Publication Number: US20190392307A1

    Publication Date: 2019-12-26

    Application Number: US16215033

    Filing Date: 2018-12-10

    Abstract: Embodiments of the present invention provide a method and apparatus for accelerating distributed training of a deep neural network. In the method, based on parallel training, the training of the deep neural network is organized as a distributed training process: the deep neural network to be trained is divided into multiple sub-networks, and the set of training samples is divided into multiple subsets of samples. The deep neural network is then trained with the multiple subsets of samples based on a distributed cluster architecture and a preset scheduling method, and the multiple sub-networks are trained simultaneously so as to accomplish the distributed training of the deep neural network. The use of the distributed cluster architecture and the preset scheduling method may reduce, through data localization, the effect of network delay on the sub-networks under distributed training; adapt the training strategy in real time; and synchronize the sub-networks trained in parallel. As such, the time required for the distributed training of the deep neural network may be reduced and the training efficiency of the deep neural network may be improved.
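
    The partition-and-synchronize scheme described above is sketched below in Python. It is a toy illustration under stated assumptions: a scalar stands in for a sub-network's parameters, a process pool stands in for the distributed cluster, and parameter averaging stands in for the synchronization step; the preset scheduling method and data-localization logic are not modeled.

        # Minimal sketch of the sample partitioning and synchronization idea;
        # the partitioning scheme, the single-scalar "sub-network", the worker
        # pool, and the parameter-averaging step are assumptions.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def split(items, num_parts):
            """Divide training samples into roughly equal subsets."""
            return [items[i::num_parts] for i in range(num_parts)]

        def train_subnetwork(args):
            weights, samples = args
            # Placeholder local update: one gradient-like step on this subset.
            grad = float(np.mean(samples)) if samples else 0.0
            return weights - 0.01 * grad

        def distributed_round(weights, samples, num_workers=4):
            subsets = split(samples, num_workers)
            with ProcessPoolExecutor(max_workers=num_workers) as pool:
                local = list(pool.map(train_subnetwork, [(weights, s) for s in subsets]))
            # Synchronize the sub-networks trained in parallel (parameter averaging).
            return float(np.mean(local))

        if __name__ == "__main__":
            samples = list(np.random.default_rng(0).normal(size=64))
            w = 0.5
            for _ in range(3):
                w = distributed_round(w, samples)
            print(w)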
