Abstract:
In one embodiment, a protocol state associated with a port of a network device is determined to have expired. A port group of which the port is a member is determined, the port group including ports that share one or more common characteristics. A policy is applied to the ports of the port group to determine whether one or more other ports in the port group also have a corresponding protocol state that has expired. In response to one or more other ports in the port group also having a corresponding protocol state that has expired, expiration of the protocol state is determined to be a false positive and no further action is taken based on expiration of the protocol state. When expiration of the protocol state is not determined to be a false positive, further action is taken based on expiration of the protocol state.
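A minimal sketch of the port-group false-positive check described above is shown below. The Port and PortGroup classes and the handle_expiration function are illustrative assumptions for this sketch, not the embodiment's actual implementation.

```python
# Sketch: treat an expired protocol state as a false positive when other
# ports in the same port group expired as well. Names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Port:
    name: str
    protocol_state_expired: bool = False


@dataclass
class PortGroup:
    # Ports grouped because they share one or more common characteristics
    # (e.g., same line card, same neighbor, same protocol session).
    ports: List[Port] = field(default_factory=list)


def handle_expiration(port: Port, group: PortGroup) -> str:
    """Decide whether an expired protocol state on `port` is a false positive."""
    others_expired = any(
        p is not port and p.protocol_state_expired for p in group.ports
    )
    if others_expired:
        # Several group members expired together, so the expiration is treated
        # as a false positive and no protocol-level action is taken.
        return "false positive: no action"
    # Only this port expired, so take the normal further action.
    return "take further action"


if __name__ == "__main__":
    g = PortGroup(ports=[Port("eth0", True), Port("eth1", True), Port("eth2")])
    print(handle_expiration(g.ports[0], g))  # false positive: no action
```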
Abstract:
Route changes are processed and filtered to notify a client of those routing updates that are of interest to the client. In one configuration, a set of network addresses is received from a client, indicating route updates of interest to the client, along with a set of types of routing changes that are of interest. One or more data structures are populated accordingly with this information. In response to receiving a route update, one or more lookup operations are performed on the data structures to identify whether the particular route is of interest to a particular client and/or whether any routes dependent on the particular route are of interest to a client. The client is notified of the changes of interest. In one embodiment, the type of change to a route is also matched against the set of types of routing changes that are of interest, and a client is notified only if the change to a route of interest also matches a type of routing change of interest.
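The following sketch illustrates the filtering idea: matching an incoming route update against each client's registered prefixes and change types. The RouteWatcher class, the ChangeType enum, and the use of prefix containment as a stand-in for route dependency are assumptions for illustration, not the configuration's prescribed data structures.

```python
# Sketch: notify only clients whose registered prefixes and change types
# match an incoming route update. API names are hypothetical.
from enum import Enum, auto
from ipaddress import ip_network


class ChangeType(Enum):
    ADD = auto()
    DELETE = auto()
    METRIC_CHANGE = auto()


class RouteWatcher:
    def __init__(self):
        # client -> (set of interesting prefixes, set of interesting change types)
        self.interests = {}

    def register(self, client, prefixes, change_types):
        self.interests[client] = ({ip_network(p) for p in prefixes},
                                  set(change_types))

    def on_route_update(self, prefix, change_type):
        """Return the clients that should be notified of this update."""
        updated = ip_network(prefix)
        notified = []
        for client, (nets, types) in self.interests.items():
            # Route of interest if it equals or is covered by a registered
            # prefix, and the kind of change also matches the client's interest.
            route_matches = any(updated.subnet_of(n) for n in nets)
            if route_matches and change_type in types:
                notified.append(client)
        return notified


if __name__ == "__main__":
    w = RouteWatcher()
    w.register("clientA", ["10.0.0.0/8"], [ChangeType.DELETE])
    print(w.on_route_update("10.1.0.0/16", ChangeType.DELETE))  # ['clientA']
    print(w.on_route_update("10.1.0.0/16", ChangeType.ADD))     # []
```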
Abstract:
A memory usage data structure (MUDS) is maintained for each process executing in the computer system, the MUDS having a bitmap field with a bit corresponding to each block of allocatable memory. A bit corresponding to a selected memory block is set to the value of “1” when the selected memory block is allocated to the selected process. The bit corresponding to the selected memory block is set to the value of “0” when the selected memory block is not allocated to the selected process. A master MUDS is generated by combining the MUDS maintained by each process, the master MUDS having bits set to a value of “0” for free memory blocks, and bits set to a value of “1” for memory blocks allocated to any process of the computer system. In response to the master MUDS, all memory blocks having a corresponding bit set to a value of “0” are returned to free memory. Each process may execute on a different processor in a multiprocessor computer system, for example on interface processors of a router. In a router, the memory usage data structure is referred to as the Buffer Usage Data Structure (BUDS). When the master BUDS is generated, any processor not submitting a processor BUDS does not have any bits in the master BUDS set to a value of “1”. Accordingly, any memory previously allocated to a processor that has crashed, or died, is returned to the global free queue.
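A minimal sketch of combining per-process bitmaps into a master bitmap and reclaiming unreferenced blocks follows. The block count, the list-of-ints bitmap representation, and the function names are illustrative assumptions.

```python
# Sketch: OR the per-process MUDS bitmaps into a master bitmap, then return
# every block whose master bit is 0 to the global free queue.
NUM_BLOCKS = 16


def make_muds():
    # One bit per allocatable block; 1 = allocated to this process, 0 = not.
    return [0] * NUM_BLOCKS


def combine(all_muds):
    # Master MUDS: a bit is 1 if any surviving process claims the block.
    master = [0] * NUM_BLOCKS
    for muds in all_muds:
        for i, bit in enumerate(muds):
            master[i] |= bit
    return master


def reclaim(master, free_queue):
    # A crashed processor never submits its MUDS, so blocks it held have a
    # master bit of 0 and fall back into the free queue automatically.
    for block, bit in enumerate(master):
        if bit == 0:
            free_queue.add(block)
    return free_queue


if __name__ == "__main__":
    p1 = make_muds()
    p1[0] = p1[1] = 1       # process 1 holds blocks 0 and 1
    p2 = make_muds()
    p2[5] = 1               # process 2 holds block 5
    # A third, crashed process submits nothing, so any block it held
    # (e.g., block 9) is reclaimed along with the other free blocks.
    master = combine([p1, p2])
    print(sorted(reclaim(master, set())))  # every block except 0, 1, 5
```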
Abstract:
In one embodiment, a plurality of leaf switches that include host-facing ports are configured as a cloud switch. An indication of connectivity between the leaf switches of the cloud switch and routing bridges (RBridges) external to the cloud switch may be added to link state packets (LSPs) sent over at least one logical shared media link of the cloud switch. A lookup table may be generated that specifies next-hop leaf switches, and the generated lookup table may be used to forward frames to one or more particular next-hop leaf switches. Further, traffic engineering parameters may be collected. Equal-cost multipath (ECMP) next-hop leaf switches and distribution trees that reach one or more destinations may be examined, and traffic may be distributed across them based on the traffic engineering parameters.
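The sketch below shows one way traffic could be spread across ECMP next-hop leaf switches using a collected traffic engineering parameter; here, link utilization is used as that parameter and candidates are weighted by remaining headroom. The weighting rule, the flow-hash placement, and all names are assumptions for illustration, not the embodiment's algorithm.

```python
# Sketch: pick a next-hop leaf switch for a flow, biased toward lightly
# loaded candidates, while keeping a given flow on a stable path.
import hashlib


def choose_nexthop(flow_key, nexthops, utilization):
    """flow_key: bytes identifying the flow (e.g., its 5-tuple)
    nexthops: list of equal-cost next-hop leaf switch names
    utilization: dict mapping next-hop -> current utilization in [0, 1)
    """
    # Weight each candidate by its remaining headroom so lightly loaded
    # leaf switches receive proportionally more flows.
    weights = [max(1e-6, 1.0 - utilization.get(n, 0.0)) for n in nexthops]
    total = sum(weights)

    # Hash the flow key to a stable point in [0, total) so packets of the
    # same flow always pick the same next hop (no reordering).
    h = int.from_bytes(hashlib.sha256(flow_key).digest()[:8], "big")
    point = (h / 2**64) * total

    cumulative = 0.0
    for nexthop, weight in zip(nexthops, weights):
        cumulative += weight
        if point < cumulative:
            return nexthop
    return nexthops[-1]


if __name__ == "__main__":
    leafs = ["leaf1", "leaf2", "leaf3"]
    util = {"leaf1": 0.9, "leaf2": 0.2, "leaf3": 0.4}
    print(choose_nexthop(b"10.0.0.1->10.0.0.2:443", leafs, util))
```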