H04L47/6255

Shared memory mesh for switching

Examples are described herein that relate to a mesh in a switch fabric. The mesh can include one or more buses that permit operations (e.g., reads, writes, or responses) to continue in the same direction, drop off to a memory, drop off a bus to permit another operation to use the bus, or receive operations that are changing direction. A latency estimate can be determined at least for operations that drop off a bus to permit another operation to use the bus, or that are received and channeled because they are changing direction. The operation with the highest latency estimate (e.g., time spent traversing the mesh) can be permitted to use the bus, even causing another operation that is not changing direction to drop off the bus and re-enter later.
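To make the arbitration rule concrete, here is a minimal Python sketch (not from the abstract; the `Operation` class, its fields, and the tie-breaking are illustrative assumptions): contending operations carry a latency estimate, and the one with the highest estimate wins the bus while the others, including a straight-through operation, drop off and re-enter later.

```python
from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    latency_estimate: int   # e.g., time already spent traversing the mesh
    turning: bool = False   # True if the operation is changing direction

def arbitrate(contenders):
    """Grant the bus to the contender with the highest latency estimate.

    Every other contender drops off the bus and must re-enter later,
    even an operation that was continuing straight through.
    """
    winner = max(contenders, key=lambda op: op.latency_estimate)
    dropped = [op for op in contenders if op is not winner]
    return winner, dropped

# A turning write with a larger latency estimate displaces a straight read.
straight = Operation("read-A", latency_estimate=3)
turning = Operation("write-B", latency_estimate=7, turning=True)
winner, dropped = arbitrate([straight, turning])
print(winner.name, "uses the bus;", [op.name for op in dropped], "re-enter later")
```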

DEVICE AND METHOD FOR QUEUES RELEASE AND OPTIMIZATION BASED ON RUN-TIME ADAPTIVE AND DYNAMIC INTERNAL PRIORITY VALUE STRATEGY

The present disclosure relates to controlling queue release in a network. In particular, the disclosure proposes a controller configured to obtain a state of each of a plurality of queues of a network node and to determine, based on those states, whether the utilization of one or more queues exceeds one or more thresholds. If one or more thresholds are exceeded, the controller generates one or more new priority entries for one or more queues of the plurality of queues and provides them to the one or more queues of the network node. Further, the disclosure proposes a network node configured to provide a state of each of a plurality of queues to the controller and to obtain, from the controller, one or more new priority entries for one or more of those queues.
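A minimal sketch of that control loop, under assumed names and values (`QueueState`, the 0.8 utilization threshold, and the priority value are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class QueueState:
    queue_id: int
    depth: int       # packets currently enqueued
    capacity: int    # maximum queue depth

    @property
    def utilization(self) -> float:
        return self.depth / self.capacity

def compute_new_priorities(states, threshold=0.8, new_priority=1):
    """Generate new internal priority entries for queues whose
    utilization exceeds the threshold; other queues are untouched."""
    return {s.queue_id: new_priority
            for s in states if s.utilization > threshold}

states = [QueueState(0, 90, 100), QueueState(1, 20, 100)]
print(compute_new_priorities(states))  # {0: 1} -- queue 0 is 90% full
```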

DEVICE AND METHOD FOR QUEUES RELEASE AND OPTIMIZATION BASED ON RUN-TIME ADAPTIVE AND DYNAMIC GATE CONTROL LIST STRATEGY

A controller is configured to: obtain a state of each of a plurality of queues of a network node; determine, based on the states of the queues, whether the utilization of one or more queues exceeds one or more thresholds; if one or more thresholds are exceeded, generate one or more new entries for a gate control list of the network node that controls the plurality of queues; and provide the one or more new entries to the network node. Further, a network node is configured to provide a state of each of a plurality of queues to a controller and to obtain, from the controller, one or more new entries for the gate control list that controls the plurality of queues.
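Under the (assumed) IEEE 802.1Qbv convention that a gate control list is a sequence of (gate bitmask, interval) entries, the generation step might look like the following sketch; the threshold and interval values are illustrative:

```python
def generate_gcl_entries(utilizations, threshold=0.8, interval_ns=50_000):
    """Given per-queue utilizations (queue_id -> fraction of capacity),
    emit gate control list entries that open, for an extra interval,
    the gate of each queue whose utilization exceeds the threshold."""
    entries = []
    for queue_id, util in utilizations.items():
        if util > threshold:
            entries.append((1 << queue_id, interval_ns))  # one-hot gate mask
    return entries

# Queue 2 is 90% full and gets an extra 50 us window; queue 5 does not.
print(generate_gcl_entries({2: 0.9, 5: 0.1}))  # [(4, 50000)]
```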

QUEUE MANAGEMENT IN A FORWARDER
20230208778 · 2023-06-29

A queue management method, system, and recording medium include Selective Acknowledgment (SACK) examination, which examines SACK blocks at the forwarder to selectively drop packets in the forward flow queue based on the reverse flow queue, and MultiPath Transmission Control Protocol (MPTCP) examination, which examines multipath headers to recognize MPTCP flows and examines the reverse flow queue to determine whether redundant data has already been sent, so that the dropping removes the redundant data.
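One way to picture the SACK-driven dropping (a sketch under assumed data shapes, not the patented method): treat the reverse-flow SACK blocks as byte ranges the receiver already holds, and drop any queued forward-flow segment fully covered by one of them.

```python
def drop_sacked_segments(forward_queue, sack_blocks):
    """Drop queued forward-flow segments whose byte range is fully
    covered by a SACK block observed on the reverse flow; that data
    already reached the receiver, so forwarding it is redundant.

    forward_queue: list of (seq_start, seq_end) byte ranges
    sack_blocks:   list of (left_edge, right_edge) from reverse-flow ACKs
    """
    def covered(seg):
        return any(left <= seg[0] and seg[1] <= right
                   for left, right in sack_blocks)
    return [seg for seg in forward_queue if not covered(seg)]

queue = [(1000, 1500), (1500, 2000), (2000, 2500)]
sacks = [(1500, 2500)]
print(drop_sacked_segments(queue, sacks))  # only (1000, 1500) survives
```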

Allocation of processors for processing packets

Examples described herein identify a flow that is considered heavy, i.e., high in transmit or receive rate. A filter rule can be assigned to the flow so that packets of the heavy flow are allocated to a particular queue and core for processing. Some queues and cores can be dedicated to processing received or transmitted packets of heavy flows, while others can be dedicated to processing received or transmitted packets of non-heavy flows. An application acceleration layer can be used to migrate an application to the core that is to process received or transmitted packets of a heavy flow.
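A toy version of the classification-and-steering step (the rate threshold, queue pools, and hashing are all assumptions for illustration):

```python
def classify_and_assign(flow_rates, heavy_threshold_bps,
                        heavy_queues, normal_queues):
    """Assign each flow to a queue (and its paired core): flows at or
    above the rate threshold hash onto queues dedicated to heavy
    traffic, all others onto the remaining queues.

    flow_rates: dict mapping a flow tuple to its observed rate in bit/s
    """
    assignment = {}
    for flow, rate in flow_rates.items():
        pool = heavy_queues if rate >= heavy_threshold_bps else normal_queues
        assignment[flow] = pool[hash(flow) % len(pool)]
    return assignment

flows = {("10.0.0.1", "10.0.0.2", 443): 9e9,   # 9 Gb/s -> heavy
         ("10.0.0.3", "10.0.0.4", 80): 2e6}    # 2 Mb/s -> non-heavy
print(classify_and_assign(flows, 1e9, heavy_queues=[0, 1],
                          normal_queues=[2, 3]))
```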

TOKEN BUCKET WITH ACTIVE QUEUE MANAGEMENT
20230198910 · 2023-06-22

Systems and methods are provided for a new type of quality of service (QoS) primitive at a network device that performs better than traditional QoS primitives. The QoS primitive may comprise a token bucket with active queue management (TBAQM). In particular, the TBAQM may receive a data packet that is processed by the token bucket; adjust the tokens associated with the token bucket, where tokens are added at a configured rate and subtracted in association with processing the data packet; and act on the number of tokens in the bucket: when the token bucket has zero tokens, a first action is initiated with the data packet, and when the token bucket has more than zero tokens, a marking probability is determined based on the number of tokens and a second action is initiated based on that probability.
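The abstract does not give the marking curve, so the sketch below assumes a linear one (probability rises as the bucket drains) and treats "drop" and "mark" as the first and second actions:

```python
import random

class TBAQM:
    """Token bucket with active queue management (illustrative only)."""

    def __init__(self, rate, burst):
        self.rate = rate        # tokens added per second (configured rate)
        self.burst = burst      # maximum token count
        self.tokens = burst
        self.last = 0.0

    def on_packet(self, now, size):
        # Add tokens at the configured rate, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens <= 0:
            return "drop"                 # first action: bucket is empty
        self.tokens -= size               # subtract for this packet
        # Assumed linear curve: marking grows as the bucket drains.
        p_mark = 1.0 - max(self.tokens, 0) / self.burst
        return "mark" if random.random() < p_mark else "forward"

tb = TBAQM(rate=1_000_000, burst=10_000)  # 1 MB/s rate, 10 kB burst
print(tb.on_packet(now=0.001, size=1500))
```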

Flowlet scheduler for multicore network processors
11683119 · 2023-06-20

Systems and methods of using a packet order work (POW) scheduler to assign packets to a set of scheduler queues that supply packets to parallel processing units. A processing unit and its associated scheduler queue are dedicated to a specific flow until a queue-reallocation event, which may occur when a new flow arrives and the associated scheduler queue has been idle for at least a certain interval (as indicated by its age counter) or is the least recently used. In that case, the scheduler queue and the associated processing unit may be reallocated to the new flow and disassociated from the previous flow. As a result, dynamic packet workload balancing can advantageously be achieved across the multiple processing paths.
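A much-simplified sketch of the pinning-and-reallocation behavior (hypothetical: the idle-timeout and least-recently-used criteria are collapsed into picking the LRU queue, whose timestamp doubles as the age counter):

```python
class FlowletScheduler:
    """Pin each flow to a scheduler queue; reallocate on new flows."""

    def __init__(self, num_queues):
        self.flow_of = {}                        # queue -> pinned flow
        self.queue_of = {}                       # flow -> queue
        self.last_used = {q: 0.0 for q in range(num_queues)}

    def assign(self, flow, now):
        q = self.queue_of.get(flow)
        if q is None:                            # new flow: reallocation event
            q = min(self.last_used, key=self.last_used.get)   # LRU queue
            old = self.flow_of.pop(q, None)
            if old is not None:
                del self.queue_of[old]           # disassociate previous flow
            self.flow_of[q] = flow
            self.queue_of[flow] = q
        self.last_used[q] = now                  # serves as the age counter
        return q

s = FlowletScheduler(num_queues=2)
print(s.assign("flow-A", now=1.0),               # queue 0
      s.assign("flow-B", now=2.0),               # queue 1
      s.assign("flow-C", now=3.0))               # reclaims queue 0 from flow-A
```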

Improving the architecture of middleboxes or service routers to better consolidate diverse functions
09838308 · 2017-12-05

An apparatus comprising at least one receiver configured to receive a traffic flow and to receive, from a controller, information comprising a set of functions and an order for the set; and a processor coupled to the at least one receiver and configured to assign the traffic flow to one or more resources, determine a processing schedule for the traffic flow, and process the traffic flow with the set of functions, following the order of the set, using the one or more resources, and according to the processing schedule.
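The ordered-function part of the claim is essentially function composition; here is a sketch with hypothetical middlebox functions (names and packet layout invented for illustration):

```python
def process_flow(packets, functions, order):
    """Apply the controller-specified set of functions to each packet
    of the traffic flow, following the controller-specified order."""
    for packet in packets:
        for name in order:
            packet = functions[name](packet)
        yield packet

# Hypothetical functions; a None packet means an earlier stage dropped it.
functions = {
    "firewall": lambda p: p if p["port"] != 23 else None,
    "nat":      lambda p: p and {**p, "src": "192.0.2.1"},
    "monitor":  lambda p: p and {**p, "seen": True},
}
order = ["firewall", "nat", "monitor"]          # order supplied by controller
pkts = [{"src": "10.0.0.5", "port": 443}, {"src": "10.0.0.6", "port": 23}]
print(list(process_flow(pkts, functions, order)))
```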

Shared traffic manager

A traffic manager is shared among two or more egress blocks of a network device, allowing traffic management resources to be pooled across those blocks. Schedulers within the traffic manager may generate and queue read instructions for reading buffered portions of data units that are ready to be sent to the egress blocks. The traffic manager may select a read instruction for a given buffer bank from the read instruction queues based on a scoring mechanism or other selection logic. To avoid sending too much data to an egress block during a given time slot, a data unit portion that has been read from the buffer may be temporarily stored in a shallow read data cache. Alternatively, a single, non-bank-specific controller may determine all of the read instructions and write operations to execute in a given time slot.
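The per-bank selection step might be sketched as follows, assuming one read instruction queue per scheduler and an externally supplied scoring function (e.g., instruction age; all names are illustrative):

```python
def select_read_instruction(queues, score):
    """For one buffer bank and one time slot, look at the head of each
    scheduler's read instruction queue and pop the one the scoring
    mechanism ranks highest; return None if every queue is empty."""
    heads = [(i, q[0]) for i, q in enumerate(queues) if q]
    if not heads:
        return None
    best_i, _ = max(heads, key=lambda h: score(h[1]))
    return queues[best_i].pop(0)

# Hypothetical instructions scored by age, so older reads win the slot.
queues = [[{"unit": "A", "age": 3}], [{"unit": "B", "age": 7}], []]
print(select_read_instruction(queues, score=lambda ins: ins["age"]))
```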

DESTINATION SELECTION FOR AN OFFLINE CHARGING SYSTEM TO AVOID REVERSION
20170339544 · 2017-11-23

Systems, methods, and software that distribute accounting requests to a plurality of Charging Data Functions (CDFs). One embodiment comprises a distributor unit that connects to the CDFs, which register their queues with the distributor unit. When receiving an accounting request (e.g., interim or stop) for a session, the distributor unit extracts from the request an identifier indicating the destination queue previously selected for the session. When the destination queue is not accepting new or ongoing sessions, the distributor unit identifies a prioritized list for the session, locates the position of the destination queue in that list, and searches the list for an alternate queue that has a lower priority than the destination queue and whose status indicates that it is accepting new sessions. The distributor unit then selects the alternate queue as the alternate destination queue for the session.
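The no-reversion search can be pictured as a walk down the prioritized list starting just below the original destination (a sketch with assumed data shapes, not the claimed implementation):

```python
def select_destination(prioritized, accepting, destination):
    """Return the session's destination queue if it still accepts
    sessions; otherwise return the first lower-priority queue in the
    prioritized list that does. Higher-priority queues are never
    considered, which avoids reverting the session.

    prioritized: queue names ordered from highest to lowest priority
    accepting:   queue name -> True if accepting new sessions
    """
    if accepting.get(destination):
        return destination
    pos = prioritized.index(destination)
    for queue in prioritized[pos + 1:]:   # lower priority only
        if accepting.get(queue):
            return queue
    return None                           # no eligible alternate queue

prioritized = ["cdf1", "cdf2", "cdf3"]
accepting = {"cdf1": False, "cdf2": False, "cdf3": True}
print(select_destination(prioritized, accepting, "cdf1"))  # cdf3
```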