Patent classifications
H04L47/326
TOKEN BUCKET WITH ACTIVE QUEUE MANAGEMENT
Systems and methods are provided for a new type of quality of service (QoS) primitive at a network device that has better performance than traditional QoS primitives. The QoS primitive may comprise a token bucket with active queue management (TBAQM). In particular, the TBAQM may receive a data packet that is processed by the token bucket; adjust tokens associated with the token bucket, where tokens are added at a configured rate and subtracted in association with processing the data packet; and determine the number of tokens associated with the token bucket, wherein when the token bucket has zero tokens, a first action is initiated with the data packet, and when the token bucket has more than zero tokens, a marking probability is determined based on the number of tokens and a second action is initiated based on the marking probability.
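The behavior described in the abstract can be sketched as follows. This is an illustrative reading, not the patented implementation: the class name, the linear marking-probability formula, and the drop/mark/forward action names are all assumptions.

```python
# Hypothetical sketch of a token bucket with active queue management (TBAQM).
# Tokens refill at a configured rate and are consumed per packet; at zero
# tokens a first action (here: drop) applies, otherwise a second action
# (here: probabilistic ECN-style marking) applies based on the token count.
import random
import time

class TokenBucketAQM:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second (configured rate)
        self.capacity = capacity    # maximum number of tokens in the bucket
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def marking_probability(self):
        # Assumed formula: probability grows as the bucket drains.
        return 1.0 - self.tokens / self.capacity

    def process(self, packet_cost=1.0):
        """Return 'drop', 'mark', or 'forward' for one packet."""
        self._refill()
        if self.tokens <= 0:
            return "drop"              # first action: bucket has zero tokens
        p = self.marking_probability() # second action: based on token count
        self.tokens -= packet_cost     # subtract tokens for this packet
        if random.random() < p:
            return "mark"
        return "forward"
```

A full bucket yields a marking probability of zero, so light traffic passes unmarked; as the bucket empties, marking becomes increasingly likely before hard drops begin.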
NETWORK PROCESSOR WITH EXTERNAL MEMORY PROTECTION
Systems and methods for protecting external memory resources to prevent bandwidth collapse in a network processor. One embodiment is a network processor including an input port configured to receive packets from a source device, on-chip memory configured to store packets in queues, and external memory configured to provide a backing store to the on-chip memory. The network processor also includes a processor configured, in response to determining that the source device is unresponsive to a congestion notification, to reduce a size of one or more queues to prevent packets from transferring from the on-chip memory to the external memory.
NETWORK PROCESSOR WITH EXTERNAL MEMORY PROTECTION
Systems and methods for protecting external memory resources to prevent bandwidth collapse in a network processor. One embodiment is a network processor including an input port configured to receive packets from a source device, on-chip memory configured to store packets in queues, an external memory interface configured to couple the on-chip memory with an external memory providing a backing store to the on-chip memory, and a bandwidth monitor configured to measure a bandwidth utilization of the external memory. The network processor also includes a processor configured to apply the bandwidth utilization of the external memory to a congestion notification profile, to generate one or more congestion notifications based on the bandwidth utilization applied to the congestion notification profile, and to send the one or more congestion notifications to the source device to request a decreased packet rate, thereby decreasing the bandwidth utilization of the external memory.
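The two embodiments above can be combined into a small sketch: a measured external-memory bandwidth utilization is applied to a notification profile, and if the source ignores the resulting notifications, queue sizes are cut so packets stay on-chip. The profile shape, threshold values, and shrink factor are illustrative assumptions.

```python
# Hedged sketch of external-memory protection in a network processor.

def notification_level(utilization, profile=((0.5, 0), (0.8, 1), (1.0, 2))):
    """Map external-memory bandwidth utilization in [0, 1] to a congestion
    level via a (threshold, level) profile. Profile values are assumptions."""
    level = 0
    for threshold, lvl in profile:
        if utilization >= threshold:
            level = lvl
    return level

def protect_external_memory(queues, utilization, source_responsive,
                            shrink_factor=0.5):
    """Return (congestion_level, new_queue_limits). queues maps queue id to
    its current size limit in packets."""
    level = notification_level(utilization)
    if level > 0 and not source_responsive:
        # Source ignored the congestion notifications: shrink queues so
        # excess packets are dropped on-chip instead of spilling to the
        # external backing store and collapsing its bandwidth.
        queues = {q: max(1, int(limit * shrink_factor))
                  for q, limit in queues.items()}
    return level, queues
```

A responsive source is expected to slow down on its own; only an unresponsive one triggers the queue-size reduction of the first embodiment.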
Determining network device statistics associated with fast counters and slow counters
A network device may receive one or more packets, and may determine a flow control parameter, a rate limiting parameter, and a statistical sampling parameter associated with a slow counter. The network device may determine whether the flow control parameter satisfies a first threshold, whether the rate limiting parameter satisfies a second threshold, and whether the statistical sampling parameter satisfies a third threshold. The network device may identify a counter event associated with one of the one or more packets, and may selectively assign the counter event to a fast counter when at least one of the first threshold, the second threshold, or the third threshold is satisfied, or to the slow counter when none of the first threshold, the second threshold, and the third threshold is satisfied.
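The assignment rule above reduces to a simple any-of-three test, sketched below. The parameter names and threshold values are assumptions for illustration.

```python
# Illustrative sketch of fast/slow counter assignment: a counter event goes to
# the fast counter if any of the three parameters meets its threshold,
# otherwise to the slow counter.

def assign_counter(flow_control, rate_limiting, sampling,
                   thresholds=(0.8, 0.8, 0.8)):
    """Return 'fast' or 'slow' for one counter event."""
    t1, t2, t3 = thresholds
    if flow_control >= t1 or rate_limiting >= t2 or sampling >= t3:
        return "fast"   # at least one threshold satisfied
    return "slow"       # none of the thresholds satisfied
```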
Signalling congestion
Congestion is signalled in respect of a network element operable to forward data items in a telecommunications network, and in respect of a processing element operable to process requests for service. In either case, the element is operable to perform its processing function at up to a processing rate which is subject to variation, and has a queue for items awaiting processing with an associated counter that maintains a count from which a queue metric is derivable. A method comprises: updating the count at a rate dependent on the processing rate; further updating the count in response to receipt of items awaiting processing; signalling a measure of congestion in respect of the element in dependence on the queue metric; and, if the processing rate has changed, altering the rate at which the count is updated and adjusting the counter so as to cause a change in the queue metric.
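The counter scheme above can be sketched as follows. All names are illustrative, and the choice of count-divided-by-rate as the queue metric (approximating queuing delay) is an assumption; the abstract only requires that a metric be derivable from the count.

```python
# Minimal sketch of a congestion counter: the count drains at the element's
# processing rate, grows on arrivals, and is rescaled when the rate changes
# so that the derived queue metric stays consistent.

class CongestionCounter:
    def __init__(self, processing_rate):
        self.rate = processing_rate   # items per second
        self.count = 0.0
        self.t = 0.0                  # last update time, seconds

    def advance(self, now):
        # Update the count downward at a rate dependent on the processing rate.
        self.count = max(0.0, self.count - (now - self.t) * self.rate)
        self.t = now

    def arrival(self, now, items=1):
        # Further update the count on receipt of items awaiting processing.
        self.advance(now)
        self.count += items

    def queue_metric(self, now):
        self.advance(now)
        # Count divided by rate approximates queuing delay in seconds.
        return self.count / self.rate

    def change_rate(self, now, new_rate):
        # On a processing-rate change, alter the drain rate and adjust the
        # counter so the delay metric is continuous across the change.
        metric = self.queue_metric(now)
        self.rate = new_rate
        self.count = metric * new_rate
```

Signalling congestion then amounts to comparing `queue_metric` against a delay target, in the spirit of delay-based active queue management.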
PROBABILISTIC SERVICE LEVEL AGREEMENTS (SLA)
Regulating transmission of data packets between a first network and a second network over a datalink. Embodiments include determining a first plurality of token bucket rate (TBR) parameters, each TBR parameter corresponding to one of a first plurality of packet drop precedence (DP) levels and one of a first plurality of timescales (TS). The determination of the first plurality of TBR parameters is based on a peak rate requirement, the data link capacity, and a nominal speed requirement associated with the data link. Embodiments also include determining a second plurality of TBR parameters based on the first plurality of TBR parameters and a guaranteed rate requirement, the second plurality comprising one more DP level than the first plurality. Embodiments also include regulating data packets sent between the first network and the second network via the data link based on the second plurality of TBR parameters.
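The two-stage parameter derivation above can be sketched as a table keyed by (DP level, timescale). This is a heavily hedged illustration: the abstract names the inputs (peak rate, link capacity, nominal speed, guaranteed rate) but not the formula, so the interpolation below is an assumption.

```python
# Hedged sketch: derive per-(drop precedence, timescale) token bucket rates,
# then extend the table with one extra DP level carrying the guaranteed rate.

def first_tbr_parameters(peak_rate, link_capacity, nominal_speed,
                         dp_levels=(0, 1), timescales=(0.1, 1.0)):
    """First plurality of TBR parameters, one per (DP, TS) pair."""
    rates = {}
    for dp in dp_levels:
        for ts in timescales:
            # Assumed rule: the shortest timescale is governed by the peak
            # rate, longer timescales by the nominal speed; higher (more
            # droppable) DP levels get proportionally lower rates, and all
            # rates are capped by the link capacity.
            base = peak_rate if ts == min(timescales) else nominal_speed
            rates[(dp, ts)] = min(link_capacity, base / (dp + 1))
    return rates

def second_tbr_parameters(first, guaranteed_rate):
    """Second plurality: the first table plus one further DP level whose
    rate is pinned to the guaranteed rate requirement."""
    extra_dp = max(dp for dp, _ in first) + 1
    second = dict(first)
    for (_, ts) in first:
        second[(extra_dp, ts)] = guaranteed_rate
    return second
```

Regulation would then meter each packet against the bucket matching its DP level on every configured timescale.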
Packet Processing Method and Apparatus
A packet processing method includes receiving, by a forwarding apparatus, a first packet, where the first packet belongs to a first packet flow; determining, by the forwarding apparatus, at least two of the following four types of information: a duration of staying in a first memory by the first packet flow, usage of the first memory, whether the first packet flow is a victim of a congestion control mechanism, and a drop priority of the first packet; and determining, by the forwarding apparatus based on the at least two types of information, whether explicit congestion notification marking needs to be performed on the first packet.
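A decision function over "at least two of four" signals might look like the sketch below. The thresholds, the treatment of the victim flag, and the majority-vote combining rule are all assumptions; the abstract does not specify how the signals are combined.

```python
# Illustrative sketch: decide ECN marking from at least two of four signals
# (dwell time in memory, memory usage, congestion-control victim flag,
# packet drop priority). Unused signals are passed as None.

def should_ecn_mark(dwell_time=None, memory_usage=None,
                    is_victim=None, drop_priority=None,
                    dwell_threshold=0.005, usage_threshold=0.8,
                    priority_threshold=2):
    signals = []
    if dwell_time is not None:
        signals.append(dwell_time > dwell_threshold)      # long stay in memory
    if memory_usage is not None:
        signals.append(memory_usage > usage_threshold)    # memory under pressure
    if is_victim is not None:
        signals.append(not is_victim)   # assumed: spare victim flows, mark others
    if drop_priority is not None:
        signals.append(drop_priority >= priority_threshold)
    if len(signals) < 2:
        raise ValueError("at least two of the four information types required")
    # Assumed rule: mark when a majority of the consulted signals indicate
    # congestion.
    return sum(signals) * 2 > len(signals)
```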
ADAPTIVE BUFFERING IN A DISTRIBUTED SYSTEM WITH LATENCY / ADAPTIVE TAIL DROP
A network device includes a switching system for directing packets between ingress ports and egress ports of the network device. The network device also includes a switching system manager that identifies a state change of a virtual output queue of the switching system and, in response to the identification, performs an action set based on the state change to modify a latency of the virtual output queue to meet a predetermined latency.
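One plausible reading of the action set is a controller that nudges the virtual output queue's latency back toward the predetermined target; the action names, tolerance band, and proportional rule below are illustrative assumptions.

```python
# Illustrative sketch: choose an action set to steer a virtual output queue's
# latency toward a predetermined target, within a tolerance band.

def latency_actions(current_latency, target_latency, tolerance=0.1):
    """Return the actions to apply after a VOQ state change."""
    if current_latency > target_latency * (1 + tolerance):
        # Too slow: shed queued packets and drain faster (adaptive tail drop).
        return ["reduce_queue_depth", "increase_drain_rate"]
    if current_latency < target_latency * (1 - tolerance):
        # Latency budget underused: allow deeper buffering for throughput.
        return ["increase_queue_depth"]
    return []   # within tolerance, no action needed
```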
Usage of QUIC spin bit in wireless networks
Various aspects include methods for QUIC packet processing. Various embodiments may include a processor of a computing device determining a round trip time (RTT) for a QUIC flow based at least in part on a spin bit value of a QUIC packet of the QUIC flow, determining a bandwidth-delay (BW-delay) product for the QUIC flow based at least in part on the determined RTT for the QUIC flow, and controlling processing of QUIC packets for the QUIC flow based at least in part on the determined BW-delay product.
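The spin bit flips once per round trip, so the gap between consecutive flips approximates the RTT; multiplying by throughput gives the bandwidth-delay product used to control processing. The sketch below illustrates that calculation; the function names and the averaging over flip gaps are assumptions.

```python
# Sketch: estimate RTT from QUIC spin-bit transitions in observed packets,
# then derive the bandwidth-delay (BW-delay) product for the flow.

def rtt_from_spin_bit(samples):
    """samples: list of (timestamp_seconds, spin_bit) in arrival order.
    The spin bit toggles once per round trip, so the time between
    consecutive toggles approximates the RTT. Returns None if fewer than
    two toggles were observed."""
    flips = [t for (t, bit), (_, prev) in zip(samples[1:], samples)
             if bit != prev]
    if len(flips) < 2:
        return None
    gaps = [b - a for a, b in zip(flips, flips[1:])]
    return sum(gaps) / len(gaps)   # average gap between toggles

def bandwidth_delay_product(throughput_bps, rtt_s):
    """BW-delay product in bits: how much data can be in flight."""
    return throughput_bps * rtt_s
```

A network node could then, for example, size per-flow buffering to roughly one bandwidth-delay product, since deeper queues only add latency beyond that point.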