Patent classifications
H04L47/326
Traffic overload protection of virtual network functions
Examples include a method of determining a first traffic overload protection policy for a first service provided by a first virtual network function in a network of virtual network functions in a computing system and determining a second traffic overload protection policy for a second service provided by a second virtual network function in the network of virtual network functions. The method includes applying the first traffic overload protection policy to the first virtual network function and the second traffic overload protection policy to the second virtual network function, wherein the first traffic overload protection policy and the second traffic overload protection policy are different.
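The claimed method can be sketched as assigning distinct, per-service protection policies to two VNFs. This is a minimal illustration; the policy model (a simple rate cap) and all names are assumptions, not from the patent.

```python
# Sketch: per-VNF traffic overload protection with two *different* policies.
# The rate-cap policy model and class names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OverloadPolicy:
    name: str
    max_requests_per_sec: int

    def admits(self, offered_rate: int) -> bool:
        # Admit traffic only while the offered rate stays under this policy's cap.
        return offered_rate < self.max_requests_per_sec


@dataclass
class VirtualNetworkFunction:
    service: str
    policy: Optional[OverloadPolicy] = None


def apply_policies(vnf_a, vnf_b, policy_a, policy_b):
    # The claim requires the two policies to differ.
    assert policy_a != policy_b, "policies must be different"
    vnf_a.policy = policy_a
    vnf_b.policy = policy_b
```

A stricter policy on one service then rejects load that the other still admits.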
Packet processing method and apparatus
A packet processing method includes receiving, by a forwarding apparatus, a first packet, where the first packet belongs to a first packet flow; determining, by the forwarding apparatus, at least two of the following four types of information: a duration for which the first packet flow has stayed in a first memory, usage of the first memory, whether the first packet flow is a victim of a congestion control mechanism, and a drop priority of the first packet; and determining, by the forwarding apparatus based on the at least two types of information, whether explicit congestion notification marking needs to be performed on the first packet.
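The decision step can be sketched as combining at least two of the four observations into an ECN-marking verdict. The thresholds and the majority-vote combining rule below are assumptions for illustration; the patent does not specify them.

```python
# Sketch: decide ECN marking from >= 2 of the 4 information types the
# abstract lists. Thresholds and the majority rule are illustrative.
def should_mark_ecn(queue_delay_us=None, buffer_usage=None,
                    is_victim=None, drop_priority=None):
    provided = [v is not None
                for v in (queue_delay_us, buffer_usage, is_victim, drop_priority)]
    if sum(provided) < 2:
        raise ValueError("need at least two of the four information types")
    votes = considered = 0
    if queue_delay_us is not None:
        considered += 1
        votes += queue_delay_us > 100      # flow has stayed long in the first memory
    if buffer_usage is not None:
        considered += 1
        votes += buffer_usage > 0.8        # first-memory usage is high
    if is_victim is not None:
        considered += 1
        votes += bool(is_victim)           # flow already throttled by congestion control
    if drop_priority is not None:
        considered += 1
        votes += drop_priority >= 2        # packet has a high drop priority
    return votes * 2 >= considered         # mark when a majority of signals agree
```

Calling it with fewer than two signals raises, mirroring the "at least two types" requirement.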
Systems and methods for handling data congestion for shared buffer switches with dynamic thresholding
Embodiments of the present invention include systems and methods for adjusting the RED configuration according to the available buffer space for a queue in a switch. In one or more embodiments, a method comprises the steps of: initializing the minimum and maximum RED thresholds associated with a queue; determining an available free space for the queue, wherein a data packet for the queue is discarded by dynamic thresholding when the length of the queue reaches the available free space; determining an allowable free space (AFS) for the queue, which is the available free space multiplied by an allowance factor (AF); and, when the length of the queue reaches the AFS, calculating the ratio of the minimum threshold to the maximum threshold, updating the maximum threshold to the AFS, and updating the minimum threshold to the ratio multiplied by the AFS.
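The threshold-update step translates directly into a few lines. The allowance factor and the initial threshold values in the usage below are illustrative assumptions; the update rule itself follows the abstract.

```python
# Sketch: re-parameterize RED when the queue length reaches the allowable
# free space (AFS = available free space * allowance factor AF). The ratio
# of min to max threshold is preserved across the update, per the abstract.
def update_red_thresholds(min_th, max_th, queue_len, available_free_space, af=0.9):
    afs = available_free_space * af       # allowable free space (AFS)
    if queue_len >= afs:
        ratio = min_th / max_th           # keep the min/max proportion
        max_th = afs                      # clamp the maximum threshold to the AFS
        min_th = ratio * afs              # rescale the minimum threshold likewise
    return min_th, max_th
```

Below the AFS the thresholds are left untouched, so RED behaves normally while buffer space is plentiful.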
Weighted random early detection improvements to absorb microbursts
A packet queueing system includes an ingress port configured to receive packets, and queueing logic communicatively coupled to one or more egress queues for transmission via an egress port. The queueing logic is configured to maintain an Acceptable Burst Size (ABS) token bucket, which is sized to enable absorption of microbursts, and to implement a congestion avoidance algorithm that either randomly drops packets or queues them, wherein the algorithm performs random drops only when the ABS token bucket is empty.
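The gating idea can be sketched as a token bucket checked before the WRED random-drop decision: while tokens remain, a burst is always queued; only once the bucket is empty does the congestion-avoidance drop apply. The bucket size and drop probability are illustrative assumptions.

```python
# Sketch: WRED random drops gated behind an Acceptable Burst Size (ABS)
# token bucket, so microbursts are absorbed rather than dropped.
# Parameter values are illustrative assumptions.
import random


class AbsGatedWred:
    def __init__(self, abs_tokens, drop_probability, rng=None):
        self.tokens = abs_tokens          # ABS token bucket fill, in bytes
        self.p_drop = drop_probability    # WRED drop probability when active
        self.rng = rng or random.Random()

    def on_packet(self, size):
        if self.tokens >= size:
            self.tokens -= size           # burst absorbed: always queue
            return "queue"
        # Bucket empty: fall back to the congestion-avoidance random drop.
        return "drop" if self.rng.random() < self.p_drop else "queue"
```

A real implementation would also refill the bucket at a configured rate; refill is omitted here to keep the gating logic visible.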
Packet-content based WRED protection
A network element includes multiple ports and logic. The multiple ports are configured to serve as ingress ports and egress ports for receiving and transmitting packets from and to a network. The logic is configured to queue the packets received from the ingress ports, run a packet-dropping process that randomly drops one or more of the queued packets to avoid congestion, while detecting and excluding from the packet-dropping process, at least probabilistically, packets belonging to a predefined packet type, and forward the queued packets, which were not dropped, to the egress ports.
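The exclusion step can be sketched as a content classifier consulted before the random-drop lottery. The protected type (TCP SYNs, say) and the probabilistic exclusion rate are illustrative assumptions; the abstract only requires that protected packets be excluded at least probabilistically.

```python
# Sketch: exclude packets of a predefined type from WRED random drops,
# at least probabilistically. The packet model and the choice of protected
# type are illustrative assumptions.
import random


def wred_candidate(packet, protected_type, exclusion_prob=1.0, rng=None):
    """Return True if the packet may be considered for random dropping."""
    rng = rng or random.Random()
    if packet.get("type") == protected_type and rng.random() < exclusion_prob:
        return False                      # protected: excluded from WRED
    return True                           # ordinary traffic: eligible for WRED
```

With `exclusion_prob < 1.0` the protection becomes probabilistic, matching the "at least probabilistically" wording.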
Buffer assignment balancing in a network device
Techniques for improved handling of queues of data units are described, such as queues of buffered data units of differing types and/or sources within a switch or other network device. When the size of a queue surpasses the state entry threshold for a certain state, the queue is said to be in the certain state. While in the certain state, data units assigned to the queue may be handled differently in some respect, such as being marked or being dropped without further processing. The queue remains in this certain state until its size falls below the state release threshold for the state. The state release threshold is adjusted over time in, for example, a random or pseudo-random manner. Among other aspects, in some embodiments, this adjustment of the state release threshold addresses fairness issues that may arise with respect to the treatment of different types or sources of data units.
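The entry/release hysteresis with a randomized release threshold can be sketched as follows. The threshold values and the uniform re-draw are illustrative assumptions; the abstract requires only that the release threshold be adjusted over time, for example randomly.

```python
# Sketch: a queue enters a state above the entry threshold and leaves it
# only below a release threshold that is re-randomized after each release.
# Threshold ranges are illustrative assumptions.
import random


class QueueState:
    def __init__(self, entry_threshold, release_lo, release_hi, rng=None):
        self.entry = entry_threshold
        self.release_lo, self.release_hi = release_lo, release_hi
        self.rng = rng or random.Random()
        self.release = self._draw_release()
        self.in_state = False

    def _draw_release(self):
        # Re-drawing the release threshold spreads state-release times across
        # queues, addressing the fairness issue the abstract mentions.
        return self.rng.uniform(self.release_lo, self.release_hi)

    def update(self, queue_size):
        if not self.in_state and queue_size > self.entry:
            self.in_state = True          # size surpassed the entry threshold
        elif self.in_state and queue_size < self.release:
            self.in_state = False         # size fell below the release threshold
            self.release = self._draw_release()
        return self.in_state
```

While `in_state` is True, the device would mark or drop data units assigned to the queue, as the abstract describes.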
Congestion control measures in multi-host network adapter
A network adapter includes a host interface, a network interface, a memory and packet processing circuitry. The memory holds a shared buffer and multiple queues allocated to multiple host processors. The packet processing circuitry is configured to receive from the network interface data packets destined to the host processors, to store payloads of at least some of the data packets in the shared buffer, to distribute headers of at least some of the data packets to the queues, to serve the data packets to the host processors by applying scheduling among the queues, to detect congestion in the data packets destined to a given host processor among the host processors, and, in response to the detected congestion, to mitigate the congestion in the data packets destined to the given host processor, while retaining uninterrupted processing of the data packets destined to the other host processors.
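The per-host isolation can be sketched as serving all host queues while applying a mitigation (ECN marking here, one possible choice) only to the congested host's traffic. The queue-depth congestion test and all names are illustrative assumptions.

```python
# Sketch: mitigate congestion for one host's queue (by ECN-marking its
# packets) while other hosts' queues are served untouched. The depth-based
# congestion test is an illustrative assumption.
def serve_packets(queues, congestion_threshold):
    """queues: host -> list of packet dicts; returns the served packets."""
    served = []
    for host, q in queues.items():
        congested = len(q) > congestion_threshold
        for pkt in q:
            if congested:
                pkt = dict(pkt, ecn_marked=True)  # mitigate only this host
            served.append(pkt)                    # all hosts keep being served
    return served
```

Every queue is drained regardless of which host is congested, reflecting the "uninterrupted processing" requirement.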
Network processor with external memory protection
Systems and methods for protecting external memory resources to prevent bandwidth collapse in a network processor. One embodiment is a network processor including an input port configured to receive packets from a source device, on-chip memory configured to store packets in queues, and external memory configured to provide a backing store to the on-chip memory. The network processor also includes a processor configured, in response to determining that the source device is unresponsive to a congestion notification, to reduce a size of one or more queues to prevent packets transferring from the on-chip memory to the external memory.
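The fallback can be sketched as shrinking an unresponsive source's on-chip queue limit so excess packets are dropped at ingress rather than spilled to the slow external backing store. The shrink factor and the notification flag are illustrative assumptions.

```python
# Sketch: when a source ignores a congestion notification, shrink its
# on-chip queue limit to avoid overflowing into external memory and
# collapsing memory bandwidth. The shrink factor is an illustrative
# assumption.
class SourceQueue:
    def __init__(self, limit):
        self.limit = limit                # on-chip queue size limit, in packets
        self.depth = 0
        self.notified = False             # congestion notification was sent

    def enqueue(self):
        if self.depth >= self.limit:
            return False                  # drop instead of using the backing store
        self.depth += 1
        return True


def handle_unresponsive(queue, shrink=0.5):
    # Called when the source keeps sending above rate after notification.
    if queue.notified:
        queue.limit = max(1, int(queue.limit * shrink))
```

Shrinking the queue converts would-be external-memory traffic into early drops, which is the protection the abstract describes.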