H04L47/52

Service Level Adjustment Method and Apparatus, Device, and Storage Medium
20220377016 · 2022-11-24

A service level adjustment method includes a first network device obtaining at least one piece of queue status information at a target service level of the first network device. When any piece of the queue status information exceeds the first threshold upper limit corresponding to that piece, the first network device adjusts a parameter of the target service level based on a maximum delay associated with the target service level.
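
The abstract leaves the adjusted parameter unspecified. The sketch below assumes it is a per-level drain rate, and that "based on the maximum delay" means resizing that rate so the current backlog can drain within the delay bound; all names (QueueStatus, DrainRate, and so on) are illustrative rather than from the patent.

    package main

    import "fmt"

    // QueueStatus is one piece of queue status information at the target
    // service level (hypothetical fields; the abstract does not enumerate them).
    type QueueStatus struct {
        Value    float64
        UpperLim float64 // the first threshold upper limit for this item
    }

    // ServiceLevel carries the tunable parameter and the maximum delay
    // associated with the target service level.
    type ServiceLevel struct {
        MaxDelay   float64 // seconds; delay bound tied to the level
        DrainRate  float64 // bytes/s; the parameter being adjusted (assumed)
        QueueBytes float64 // current backlog at this level
    }

    // AdjustIfNeeded raises the drain rate when any status item exceeds its
    // threshold, so the backlog can drain within MaxDelay.
    func AdjustIfNeeded(lvl *ServiceLevel, statuses []QueueStatus) bool {
        for _, s := range statuses {
            if s.Value > s.UpperLim {
                lvl.DrainRate = lvl.QueueBytes / lvl.MaxDelay
                return true
            }
        }
        return false
    }

    func main() {
        lvl := ServiceLevel{MaxDelay: 0.010, DrainRate: 1e6, QueueBytes: 5e4}
        if AdjustIfNeeded(&lvl, []QueueStatus{{Value: 5e4, UpperLim: 4e4}}) {
            fmt.Printf("drain rate raised to %.0f B/s\n", lvl.DrainRate)
        }
    }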

Systems and methods for providing lockless bimodal queues for selective packet capture

In a network system, an application receiving packets can consume one or more packets in two or more stages, where the second and later stages can selectively consume some but not all of the packets consumed by the preceding stage. Packets are transferred between two consecutive stages, called the producer and the consumer, via fixed-size storage. Both the producer and the consumer can access the storage without locking it, and, to facilitate selective consumption of the packets by the consumer, the consumer can transition between awake and sleep modes, where packets are consumed in the awake mode only. The producer may also switch between awake and sleep modes. Lockless access is made possible by having both the producer and the consumer operate the storage according to the consumer's mode, which is communicated via a shared memory location.
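
A minimal single-producer/single-consumer sketch of the idea: head and tail each have exactly one writer, so no lock is needed, and the consumer's mode sits in a shared atomic standing in for the shared memory location. The overwrite-oldest policy in sleep mode is an assumption; the abstract only says packets are consumed in awake mode.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    const (
        awake  int32 = iota // consumer drains packets
        asleep              // consumer ignores the ring for now
    )

    // BimodalQueue is a single-producer/single-consumer ring over fixed-size
    // storage; the mode atomic plays the shared memory location.
    type BimodalQueue struct {
        buf        []int // packets stand in as ints
        head, tail atomic.Uint64
        mode       atomic.Int32
    }

    // Produce appends a packet. When the ring is full, the assumed policy is
    // to drop the oldest entry if the consumer sleeps, else drop the new one.
    func (q *BimodalQueue) Produce(pkt int) {
        t := q.tail.Load()
        if t-q.head.Load() == uint64(len(q.buf)) { // ring full
            if q.mode.Load() != asleep {
                return // awake consumer will drain; drop the new packet
            }
            q.head.Add(1) // sleeping consumer is not reading: overwrite oldest
        }
        q.buf[t%uint64(len(q.buf))] = pkt
        q.tail.Store(t + 1)
    }

    // Consume pops one packet, but only in awake mode (selective consumption).
    func (q *BimodalQueue) Consume() (int, bool) {
        if q.mode.Load() != awake {
            return 0, false
        }
        h := q.head.Load()
        if h == q.tail.Load() {
            return 0, false
        }
        pkt := q.buf[h%uint64(len(q.buf))]
        q.head.Store(h + 1)
        return pkt, true
    }

    func main() {
        q := &BimodalQueue{buf: make([]int, 8)}
        q.mode.Store(awake)
        q.Produce(42)
        if pkt, ok := q.Consume(); ok {
            fmt.Println("consumed", pkt)
        }
    }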

Early credit return for credit-based flow control
11588745 · 2023-02-21

A device allocates buffer space for storing data received from another device, and that sending device holds a credit balance corresponding to the amount of buffer space. The sending device reduces its number of credits by the cost of a packet and sends the packet. To ensure that the buffer does not overflow, the sending device spends a credit for each entry in the buffer that could be consumed by the sent data packet. When received data is added to the buffer without consuming a new entry, a response packet that returns a credit is sent to the sending device before the data is read from the buffer. Thus, the sending device can continue sending data without waiting for the buffer to be read, enabling the communication between the two devices to make more efficient use of the buffer.
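
A toy model of the early return: the sender spends one credit per potential buffer entry, and the receiver hands the credit straight back whenever arriving data coalesces into an existing entry instead of consuming a new one. The same-key coalescing rule is an illustrative assumption.

    package main

    import "fmt"

    // Sender spends one credit per buffer entry a packet could consume and
    // stalls at zero.
    type Sender struct{ credits int }

    func (s *Sender) TrySend(cost int) bool {
        if s.credits < cost {
            return false
        }
        s.credits -= cost
        return true
    }

    // Receiver models the fixed buffer. If arriving data is coalesced into an
    // existing entry (no new slot consumed), the credit comes back before the
    // buffer is read: the early return of the title.
    type Receiver struct{ entries map[string][]byte }

    // Accept stores data and reports how many credits to return immediately.
    func (r *Receiver) Accept(key string, data []byte) (earlyCredits int) {
        if _, exists := r.entries[key]; exists {
            r.entries[key] = append(r.entries[key], data...)
            return 1 // no new entry consumed: return the credit right away
        }
        r.entries[key] = data
        return 0 // this credit returns later, when the entry is read
    }

    func main() {
        s := &Sender{credits: 2}
        r := &Receiver{entries: map[string][]byte{}}
        for _, key := range []string{"flowA", "flowA", "flowB"} {
            if s.TrySend(1) {
                s.credits += r.Accept(key, []byte("x")) // early credit return
            }
        }
        // Prints 0; without the early return the third send would have stalled.
        fmt.Println("credits left:", s.credits)
    }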

Reorder resilient transport

Devices and techniques for reorder resilient transport are described herein. A device may store data packets in sequential positions of a flow queue in the order in which the data packets were received. The device may retrieve a first data packet from a first sequential position and a second data packet from a second sequential position that is next in sequence to the first sequential position in the flow queue. The device may store the first data packet and the second data packet in a buffer and refrain from providing them to upper layer circuitry if the packet order information for the two packets indicates that they were received out of order. Other embodiments are also described.
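
The holding behavior can be sketched as a small reorder buffer that releases packets to the upper layer only in contiguous sequence runs; the names and the uint32 sequence space are assumptions.

    package main

    import "fmt"

    // ReorderBuffer holds packets back from upper layer circuitry until the
    // sequence is contiguous, rather than surfacing them out of order.
    type ReorderBuffer struct {
        next uint32            // next sequence number the upper layer expects
        held map[uint32][]byte // out-of-order packets parked in the buffer
    }

    // Receive returns the maximal in-order run that may be released upward;
    // anything received out of order stays buffered.
    func (b *ReorderBuffer) Receive(seq uint32, pkt []byte) [][]byte {
        b.held[seq] = pkt
        var release [][]byte
        for {
            p, ok := b.held[b.next]
            if !ok {
                return release
            }
            release = append(release, p)
            delete(b.held, b.next)
            b.next++
        }
    }

    func main() {
        b := &ReorderBuffer{held: map[uint32][]byte{}}
        fmt.Println(len(b.Receive(1, []byte("second")))) // 0: held, gap at seq 0
        fmt.Println(len(b.Receive(0, []byte("first"))))  // 2: gap filled, both released
    }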

METHOD AND SYSTEM FOR FACILITATING LOSSY DROPPING AND ECN MARKING
20230046350 · 2023-02-16

Methods and systems are provided for performing lossy dropping and ECN marking in a flow-based network. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform per-flow packet dropping and ECN marking.
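
With per-flow input queues, marking and dropping reduce to threshold checks on each flow's own depth. A sketch under that reading, with illustrative thresholds:

    package main

    import "fmt"

    // FlowState is the per-flow state a switch can keep once acknowledgements
    // flow back along the data path; thresholds are illustrative.
    type FlowState struct{ depth, markAt, dropAt int }

    type Verdict int

    const (
        Forward Verdict = iota
        MarkECN // set the ECN bit, keep forwarding
        Drop    // lossy drop
    )

    // Enqueue applies marking and dropping from the flow's own input-queue
    // depth instead of a shared queue's.
    func (f *FlowState) Enqueue() Verdict {
        switch {
        case f.depth >= f.dropAt:
            return Drop // packet is not enqueued
        case f.depth >= f.markAt:
            f.depth++
            return MarkECN
        default:
            f.depth++
            return Forward
        }
    }

    func main() {
        f := &FlowState{markAt: 2, dropAt: 4}
        for i := 0; i < 5; i++ {
            fmt.Println(f.Enqueue()) // 0 0 1 1 2: Forward x2, MarkECN x2, Drop
        }
    }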

COMMUNICATION EQUIPMENT, COMMUNICATION METHODS AND PROGRAMS

An object is to provide a communication apparatus, a communication method, and a program capable of avoiding both an increase in network load when input traffic remains large and a communication delay when input traffic is very small. A communication apparatus according to the present invention prepares three token buckets and can transfer, discard, or hold a packet in accordance with the amount of tokens in each token bucket. This enables the communication apparatus to perform delay-guarantee shaping without exceeding a set maximum bandwidth when large traffic is received. Further, when the maximum bandwidth is exceeded, the communication apparatus can select whether to discard a packet to prioritize the delay guarantee or to hold the packet to prioritize avoiding packet loss. Furthermore, the communication apparatus can transmit a packet immediately, without increasing communication delay, when input traffic is very small.
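
The abstract does not assign roles to the three buckets, so the sketch below assumes one plausible split: a guaranteed bucket for immediate transfer, a maximum-bandwidth bucket, and a hold budget that decides between holding and discarding once the maximum is exceeded.

    package main

    import "fmt"

    // Bucket is a plain token bucket; refill logic is omitted for brevity.
    type Bucket struct{ tokens int }

    func (b *Bucket) Take(n int) bool {
        if b.tokens < n {
            return false
        }
        b.tokens -= n
        return true
    }

    type Action int

    const (
        Transfer Action = iota
        Hold
        Discard
    )

    // Shaper mirrors the three-bucket idea; the roles assigned here are
    // assumptions, since the abstract only says the action follows the token
    // amounts in each bucket.
    type Shaper struct {
        guaranteed Bucket // enough tokens: transfer immediately (low delay)
        maxBW      Bucket // caps the set maximum bandwidth
        holdBudget Bucket // beyond max bandwidth: hold versus discard
    }

    func (s *Shaper) Classify(size int) Action {
        if s.guaranteed.Take(size) {
            return Transfer // very small input traffic leaves with no delay
        }
        if s.maxBW.Take(size) {
            return Transfer // still within the configured maximum bandwidth
        }
        if s.holdBudget.Take(size) {
            return Hold // prioritize no packet loss over the delay guarantee
        }
        return Discard // prioritize the delay guarantee
    }

    func main() {
        s := &Shaper{
            guaranteed: Bucket{tokens: 100},
            maxBW:      Bucket{tokens: 50},
            holdBudget: Bucket{tokens: 50},
        }
        // 0 2 0 1: Transfer, Discard, Transfer, Hold.
        fmt.Println(s.Classify(80), s.Classify(60), s.Classify(40), s.Classify(40))
    }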

Packet Buffer Spill-Over in Network Devices

A packet processor of a network device receives packets ingressing from a plurality of network links via a plurality of network ports of the network device. The packet processor buffers the packets in an internal packet memory in a plurality of queues, including a first queue. In response to the packet processor detecting congestion in the internal packet memory, the packet processor selectively forwards a group of multiple packets in the first queue from the internal packet memory to a first port, among one or more ports coupled to one or more external memories, to transfer the group of multiple packets to a first external memory that is coupled to the first port, so that the first queue is stored across the internal packet memory and the first external memory.
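
One way to picture the spill-over is a FIFO whose tail migrates to external memory under congestion: once a group of packets has been spilled, later arrivals follow it so dequeue order is preserved. The threshold and group size below are illustrative.

    package main

    import "fmt"

    // SpillQueue keeps one logical FIFO stored across internal packet memory
    // and an external memory behind a dedicated port (modeled as a slice).
    type SpillQueue struct {
        internal []int
        external []int // stands in for the memory behind the external port
    }

    // congested stands in for the packet processor's congestion check on the
    // internal packet memory; the threshold is illustrative.
    func (q *SpillQueue) congested() bool { return len(q.internal) >= 4 }

    // Enqueue appends to the queue, spilling a group of packets from the tail
    // to external memory when congestion is detected. Once the tail lives
    // externally, later arrivals follow it so FIFO order survives.
    func (q *SpillQueue) Enqueue(pkt int) {
        if len(q.external) > 0 {
            q.external = append(q.external, pkt)
            return
        }
        q.internal = append(q.internal, pkt)
        if q.congested() {
            const group = 2 // transfer multi-packet groups, as in the abstract
            n := len(q.internal)
            q.external = append(q.external, q.internal[n-group:]...)
            q.internal = q.internal[:n-group]
        }
    }

    // Dequeue drains the internal segment (the older packets) first, then the
    // spilled tail.
    func (q *SpillQueue) Dequeue() (int, bool) {
        if len(q.internal) > 0 {
            pkt := q.internal[0]
            q.internal = q.internal[1:]
            return pkt, true
        }
        if len(q.external) > 0 {
            pkt := q.external[0]
            q.external = q.external[1:]
            return pkt, true
        }
        return 0, false
    }

    func main() {
        q := &SpillQueue{}
        for i := 1; i <= 6; i++ {
            q.Enqueue(i)
        }
        for pkt, ok := q.Dequeue(); ok; pkt, ok = q.Dequeue() {
            fmt.Print(pkt, " ") // 1 2 3 4 5 6: order survives the spill
        }
    }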

SYSTEM AND METHOD FOR PRIORITIZING NETWORK TRAFFIC IN A DISTRIBUTED ENVIRONMENT
20230040411 · 2023-02-09

A system and method for prioritizing network traffic in a distributed environment. The system includes: a plurality of logic modules configured to receive policy data from a network device; a control processor associated with each logic module, each control processor configured to determine data associated with a traffic flow and coordinate traffic actions over the plurality of logic modules; a packet processor associated with each control processor and configured to determine a traffic action based on the traffic flow and received policy data; and at least one shaper object configured to enforce the determined traffic action. The method includes: receiving policy data from a network device; determining data associated with a traffic flow at logic modules to coordinate traffic actions of the logic modules; determining a traffic action based on the traffic flow and received policy data; and enforcing the traffic action across at least one shaper object.
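
A compressed sketch of the enforcement path: policy data maps traffic classes to limits, the packet-processor step looks up the flow's class, and a shaper object admits or holds the packet. The policy schema and the pass-through default are assumptions.

    package main

    import "fmt"

    // Policy is the policy data received from a network device: traffic
    // class -> byte budget per interval.
    type Policy map[string]int

    // Shaper plays the "shaper object" enforcing the determined action.
    type Shaper struct {
        policy Policy
        used   map[string]int // bytes already admitted this interval, per class
    }

    // Enforce is the packet-processor step: look up the flow's class in the
    // received policy and admit or hold the packet accordingly.
    func (s *Shaper) Enforce(class string, size int) bool {
        limit, ok := s.policy[class]
        if !ok {
            return true // no policy for this class: pass through (assumed default)
        }
        if s.used[class]+size > limit {
            return false // over the class's share: the shaper holds the packet
        }
        s.used[class] += size
        return true
    }

    func main() {
        s := &Shaper{policy: Policy{"bulk": 1000, "voice": 200}, used: map[string]int{}}
        fmt.Println(s.Enforce("voice", 150), s.Enforce("voice", 100)) // true false
    }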

SYSTEMS AND METHODS FOR QUEUE CONTROL BASED ON CLIENT-SPECIFIC PROTOCOLS
20230036796 · 2023-02-02

The present disclosure generally relates to controlling access to resources by selectively processing requests stored in a task queue to prioritize certain requests over others, thereby preventing automated scripts from accessing the resources. More specifically, the present disclosure relates to a normalization and prioritization system for controlling access to resources by queuing resource requests based on a client-defined normalization process that uses one or more data sources.
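
The normalization step can be pictured as a scoring function over one or more data sources feeding a priority queue, so low-scoring (script-like) requests wait behind high-scoring ones. The scoring rule and signals below are invented for illustration, using Go's container/heap.

    package main

    import (
        "container/heap"
        "fmt"
    )

    // Request is a queued resource request; Score comes from the
    // client-defined normalization step.
    type Request struct {
        ID    string
        Score float64 // higher scores are served sooner
    }

    // normalize stands in for the client-defined process combining one or
    // more data sources into a single priority; the weights are invented.
    func normalize(humanSignal, reputation float64) float64 {
        return 0.7*humanSignal + 0.3*reputation
    }

    // taskQueue is a max-heap over Score via container/heap.
    type taskQueue []Request

    func (q taskQueue) Len() int           { return len(q) }
    func (q taskQueue) Less(i, j int) bool { return q[i].Score > q[j].Score }
    func (q taskQueue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }
    func (q *taskQueue) Push(x any)        { *q = append(*q, x.(Request)) }
    func (q *taskQueue) Pop() any {
        old := *q
        r := old[len(old)-1]
        *q = old[:len(old)-1]
        return r
    }

    func main() {
        q := &taskQueue{}
        heap.Push(q, Request{ID: "browser-user", Score: normalize(0.9, 0.8)})
        heap.Push(q, Request{ID: "headless-script", Score: normalize(0.1, 0.2)})
        fmt.Println(heap.Pop(q).(Request).ID) // browser-user is served first
    }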