H04L47/627

PROGRAMMING HIERARCHICAL SCHEDULERS FOR PORTS OF NETWORK DEVICES BASED ON HUMAN-READABLE CONFIGURATIONS

Embodiments of the present disclosure include techniques for programming hierarchical schedulers for ports of network devices. A configuration for configuring a hierarchy of a plurality of scheduling nodes of a packet scheduler is received. The packet scheduler is configured to schedule packets for egress out of a port of the network device. The configuration is specified in a human-readable format. Based on the configuration, the packet scheduler of the port is programmed. A plurality of packets are received at a plurality of physical queues communicatively coupled to the packet scheduler. The packet scheduler is used to select a packet in the plurality of packets from a physical queue in the plurality of physical queues. The selected packet is forwarded out the port of the network device.
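The claimed flow can be sketched as follows: a human-readable configuration is parsed into scheduler state, and the scheduler then selects packets from physical queues for egress. The config layout, the highest-weight-first selection policy, and all names below are illustrative assumptions, not taken from the disclosure, which covers hierarchies of scheduling nodes generally.

```python
# Sketch: program a packet scheduler from a human-readable configuration,
# then use it to select a packet from the physical queues for egress.
# Config format and selection policy are assumptions for illustration.
from collections import deque

def build_scheduler(config):
    """Turn a human-readable config into (queue_id, weight) leaves."""
    return [(leaf["queue"], leaf["weight"]) for leaf in config["root"]["leaves"]]

def select_packet(leaves, queues):
    """Pick from the non-empty queue with the highest weight
    (a simple stand-in for hierarchical scheduling)."""
    best = None
    for qid, weight in leaves:
        if queues[qid] and (best is None or weight > best[1]):
            best = (qid, weight)
    if best is None:
        return None
    return queues[best[0]].popleft()

config = {"root": {"leaves": [{"queue": 0, "weight": 5},
                              {"queue": 1, "weight": 1}]}}
leaves = build_scheduler(config)
queues = {0: deque(["pktA"]), 1: deque(["pktB"])}
first = select_packet(leaves, queues)   # queue 0 wins: higher weight
```

A selected packet would then be forwarded out the port; in practice the hierarchy would have intermediate scheduling nodes rather than a single level.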

PATH DETERMINATION METHOD, NETWORK ELEMENT, AND COMPUTER-READABLE STORAGE MEDIUM
20230379266 · 2023-11-23

A path determination method, a network element, and a computer-readable storage medium are disclosed. The path determination method may include: acquiring link bandwidth data of all physical links within a network slice, where the link bandwidth data corresponds to the network slice, and the network slice is created by an Interior Gateway Protocol (IGP) flex algorithm (S100); acquiring a required bandwidth of a transmission channel within the network slice (S200); and determining, according to the link bandwidth data and the required bandwidth, a path for the transmission channel from among all the physical links (S300).
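Steps S100–S300 can be sketched minimally: gather per-link bandwidth for the slice, take the channel's required bandwidth, and choose a path using only links that can carry it. The BFS shortest-hop search and the data shapes below are assumptions; the method itself only requires selecting a path from among the physical links.

```python
# Sketch: prune links below the required bandwidth (S100/S200), then find a
# path over the remaining links (S300). BFS shortest-hop is an assumed policy.
from collections import deque

def determine_path(links, src, dst, required_bw):
    """links: {(a, b): available_bandwidth}. Returns a hop list or None."""
    adj = {}
    for (a, b), bw in links.items():
        if bw >= required_bw:           # drop links that cannot carry the channel
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    seen, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:                 # rebuild the path from parent pointers
            path = []
            while node is not None:
                path.append(node)
                node = seen[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen[nxt] = node
                frontier.append(nxt)
    return None

links = {("A", "B"): 10, ("B", "C"): 2, ("A", "C"): 5}
path = determine_path(links, "A", "C", 5)   # the 2-unit B-C link is pruned
```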

REGULATING ENQUEUEING AND DEQUEUING BORDER GATEWAY PROTOCOL (BGP) UPDATE MESSAGES

A network device, associated with peer network devices, may receive policy information for a protocol; and compute a first update message based on information regarding a route associated with the policy information. The network device may determine that an upper utilization threshold for one or more of peer queues, associated with the peer network devices, is not satisfied; and write the first update message to the peer queues based on determining that the upper utilization threshold is not satisfied. The network device may compute a second update message based on the information regarding the route; determine that the upper utilization threshold for one or more of the peer queues is satisfied; and pause writing the second update message to the peer queues based on the upper utilization threshold being satisfied. The network device may permit the peer network devices to obtain data from corresponding ones of the peer queues.
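The enqueue regulation can be illustrated with a small gate: an update is written to every peer queue only when no queue is at or above the upper utilization threshold; otherwise writing pauses until peers drain their queues. The queue capacity and the 80% threshold below are illustrative values, not from the claims.

```python
# Sketch: pause/resume writes of BGP update messages to peer queues based on
# an upper utilization threshold. Capacity and threshold are assumed values.
UPPER_THRESHOLD = 0.8
CAPACITY = 10

def threshold_satisfied(peer_queues):
    """True when one or more peer queues is at/above the upper threshold."""
    return any(len(q) / CAPACITY >= UPPER_THRESHOLD for q in peer_queues.values())

def write_update(peer_queues, update):
    """Write to all peer queues, or pause (return False) if threshold is hit."""
    if threshold_satisfied(peer_queues):
        return False                    # pause until peers consume their queues
    for q in peer_queues.values():
        q.append(update)
    return True

queues = {"peer1": [], "peer2": ["u"] * 8}   # peer2 is at 80% utilization
paused = not write_update(queues, "update-2")
```

Once the peers obtain data from their queues and utilization falls below the threshold, writing the second update message can resume.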

Intent-based networking using network change validation
11539592 · 2022-12-27

Various example embodiments for supporting intent-based networking within a communication network are presented herein. These embodiments may be configured to support a management system that manages the network based on use of change management, where the change management is based on validation of changes in the network before the changes are permanently effected in the network (e.g., based on application of the changes in the network for validation of the changes, based on application of the changes in a network that mirrors the real network before applying the changes to the real network, or the like).
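The validate-before-commit idea can be sketched as applying a change to a mirror of the network state, running validation checks, and only then making the change permanent. The state model and the single validator below are assumptions for illustration.

```python
# Sketch: validate a change on a mirror of the network before permanently
# effecting it. State model and validators are illustrative assumptions.
import copy

def apply_change_with_validation(network, change, validators):
    """Apply `change` to a deep-copied mirror; commit to the real state
    only if every validator passes on the mirror."""
    mirror = copy.deepcopy(network)
    change(mirror)
    if all(v(mirror) for v in validators):
        change(network)                 # permanently effect the change
        return True
    return False                        # real network left untouched

net = {"links": {("A", "B")}}
add_link = lambda n: n["links"].add(("B", "C"))
has_links = lambda n: len(n["links"]) > 0
ok = apply_change_with_validation(net, add_link, [has_links])
```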

Packet Processing Method and Apparatus, Communications Device, and Switching Circuit
20220329544 · 2022-10-13

A packet processing method includes: a first device receives a packet from a second device; the first device determines a first queue buffer used to store the packet, and determines a first upper limit value of the first queue buffer based on an available value of a first port buffer and an available value of a global buffer, where the global buffer includes at least one port buffer, the first port buffer is one of the at least one port buffer, the first port buffer includes at least one queue buffer, and the first queue buffer is one of the at least one queue buffer. The first device processes the packet based on the first upper limit value of the first queue buffer, an occupation value of the first queue buffer, and a size of the packet.
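The admission decision can be sketched as follows: the queue's upper limit is derived from the available port buffer and a share of the available global buffer, and the packet is processed (accepted) only if it fits under that limit. The half-share combining rule and all values below are assumed for illustration; the claims leave the combining function open.

```python
# Sketch: derive the queue buffer's first upper limit value from port and
# global buffer availability, then accept or drop the packet. The 1/2 global
# share is an assumed policy, not specified by the abstract.
def queue_upper_limit(port_available, global_available):
    """First upper limit value of the queue buffer (assumed combining rule)."""
    return port_available + global_available // 2

def process_packet(port_available, global_available, queue_occupied, pkt_size):
    """Accept (True) or drop (False) based on limit, occupancy, packet size."""
    limit = queue_upper_limit(port_available, global_available)
    return queue_occupied + pkt_size <= limit

admit = process_packet(port_available=1000, global_available=4000,
                       queue_occupied=2500, pkt_size=400)   # limit = 3000
```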

On-demand packet queuing in a network device

Examples herein relate to allocation of an intermediate queue to a flow or traffic class (or other allocation) of packets prior to transmission to a network. Various types of intermediate queues are available for selection. An intermediate queue can be shallow and have an associated throughput that attempts to meet or exceed latency guarantees for a packet flow or traffic class. Another intermediate queue is larger in size and expandable and can be used for packets that are sensitive to egress port incast such as latency sensitive packets. Yet another intermediate queue is expandable but provides no guarantee on maximum end-to-end latency and can be used for packets where dropping is to be avoided. Intermediate queues can be deallocated after a flow or traffic class ends and related memory space can be used for another intermediate queue.
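An illustrative mapping from a flow's requirements to the three intermediate queue types described above, plus the deallocation at flow end, might look like the following. The class names, flags, and selection rules are assumptions; the point is the allocate/select/deallocate lifecycle.

```python
# Sketch: allocate one of three intermediate queue types per flow, then free
# it when the flow ends so the memory can back another queue. Names and
# selection rules are illustrative assumptions.
queues = {}   # flow_id -> queue descriptor

def allocate_intermediate_queue(flow_id, latency_sensitive, incast_sensitive):
    if latency_sensitive and not incast_sensitive:
        kind = "shallow"                # throughput sized to latency guarantee
    elif incast_sensitive:
        kind = "expandable-incast"      # larger, expandable, incast-tolerant
    else:
        kind = "expandable-lossless"    # no latency bound, drops avoided
    queues[flow_id] = {"kind": kind}
    return kind

def deallocate(flow_id):
    """Free the queue when the flow ends; memory can back another queue."""
    queues.pop(flow_id, None)

kind = allocate_intermediate_queue("flow-1", latency_sensitive=True,
                                   incast_sensitive=False)
```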

Per service microburst monitoring systems and methods for ethernet
11388075 · 2022-07-12

Systems and methods in a node in an Ethernet network include, responsive to enabling burst monitoring between the node and a peer node in the Ethernet network, obtaining rate and burst size information from the peer node; configuring a counter at a traffic disaggregation point based on the rate and the burst size information, wherein the counter is based on a dual token bucket that is used to count out-of-profile frames in excess of a Committed Information Rate (CIR); and detecting a burst based on the out-of-profile frames during a monitored time interval.
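The counter at the traffic disaggregation point can be sketched with a token bucket filled at the CIR and bounded by the burst size: frames that find insufficient tokens are counted as out-of-profile. Only the CIR bucket of the dual token bucket is shown, and the rates and sizes are illustrative rather than values obtained from the peer node.

```python
# Sketch: one bucket of the dual token bucket, counting frames in excess of
# the Committed Information Rate (CIR). Rates/sizes are assumed values.
class CirBucket:
    def __init__(self, cir_bytes_per_s, burst_size):
        self.cir = cir_bytes_per_s
        self.depth = burst_size
        self.tokens = burst_size
        self.out_of_profile = 0

    def frame(self, size, elapsed_s):
        """Account one frame arriving `elapsed_s` after the previous one."""
        self.tokens = min(self.depth, self.tokens + self.cir * elapsed_s)
        if size <= self.tokens:
            self.tokens -= size        # in-profile: consume tokens
        else:
            self.out_of_profile += 1   # excess of CIR: count the frame

bucket = CirBucket(cir_bytes_per_s=1000, burst_size=1500)
for _ in range(3):
    bucket.frame(size=1000, elapsed_s=0.1)   # only 100 bytes refill per gap
```

A burst would then be detected when the out-of-profile count crosses a limit within the monitored time interval.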

Packet order recovery in a programmable edge switch in a data center network
11451494 · 2022-09-20

Systems and methods include receiving, at an edge switch, incoming packets associated with flows in a data center network where the flows are forwarded on a per-packet basis; maintaining a state of each of the flows and of received incoming packets; and dequeuing the received incoming packets based on one or more packet dequeue conditions and the state. The edge switch can be one of a Top of Rack switch and a Network Interface Card (NIC) communicatively coupled to a corresponding server. The received incoming packets can utilize a transport protocol including any of Transmission Control Protocol (TCP), Xpress Transport Protocol (XTP), and Stream Control Transmission Protocol (SCTP).
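Per-flow order recovery can be sketched as tracking the next expected sequence number for each flow, holding out-of-order arrivals, and dequeuing contiguously once a gap fills. "Gap filled" stands in here for one assumed packet dequeue condition; real implementations would also dequeue on timeouts or buffer pressure.

```python
# Sketch: hold out-of-order packets per flow and release them in sequence
# once the gap fills (an assumed dequeue condition).
held = {}      # flow_id -> {seq: packet}
expected = {}  # flow_id -> next sequence number to release

def receive(flow, seq, pkt):
    """Buffer the packet, then release everything now in order."""
    held.setdefault(flow, {})[seq] = pkt
    expected.setdefault(flow, 0)
    released = []
    while expected[flow] in held[flow]:
        released.append(held[flow].pop(expected[flow]))
        expected[flow] += 1
    return released

out = []
out += receive("f1", 1, "p1")   # gap: seq 0 missing, nothing released
out += receive("f1", 0, "p0")   # gap filled: both released, in order
```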

METHOD AND SYSTEM FOR TRAFFIC SCHEDULING
20220094633 · 2022-03-24

A method and system for traffic scheduling are provided, wherein the method includes: preconfiguring policy routing in a router of a target node server; counting current access traffic on each of a plurality of ports, and generating a traffic scheduling instruction based on the counted access traffic and the policy routing; and sending the traffic scheduling instruction to the target node server.
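The instruction-generation step can be sketched by comparing the counted per-port traffic against limits taken from the preconfigured policy routing and emitting an instruction for the target node server. The field names and the redirect action below are assumptions for illustration.

```python
# Sketch: generate a traffic scheduling instruction from per-port traffic
# counts and policy-routing limits. Field names are illustrative assumptions.
def make_instruction(port_traffic, policy_limits):
    """Redirect any port whose counted traffic exceeds its policy limit."""
    overloaded = [p for p, t in port_traffic.items()
                  if t > policy_limits.get(p, float("inf"))]
    if not overloaded:
        return None                     # nothing to schedule
    return {"action": "redirect", "ports": sorted(overloaded)}

instruction = make_instruction({"eth0": 950, "eth1": 200},
                               {"eth0": 800, "eth1": 800})
```

The resulting instruction would then be sent to the target node server for enforcement.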