H04L47/623

Scheduling solution configuration method and apparatus, computer readable storage medium thereof, and computer device

A scheduling scheme configuration method includes performing state verification on a plurality of operation dimensions involved in generating a scheduling scheme, and, in response to one or more of the operation dimensions being abnormal, removing the one or more abnormal operation dimensions to generate a new scheduling scheme.
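The abstract's core loop (verify each operation dimension, drop the abnormal ones, regenerate the scheme) can be sketched as follows; the dimension structure, `generate_scheme`, and the health check are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch: verify each operation dimension and rebuild the
# scheduling scheme from only the dimensions that pass the check.
def generate_scheme(dimensions):
    """Stand-in for the scheme generator: here, just joins dimension names."""
    return "+".join(d["name"] for d in dimensions)

def configure_scheme(dimensions, is_normal):
    # State verification: keep only dimensions reported as normal.
    healthy = [d for d in dimensions if is_normal(d)]
    # Generate a new scheduling scheme from the remaining dimensions.
    return generate_scheme(healthy)

dims = [{"name": "cpu", "ok": True},
        {"name": "mem", "ok": False},
        {"name": "net", "ok": True}]
scheme = configure_scheme(dims, lambda d: d["ok"])
```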

Method and module of priority division and queue scheduling for communication services in smart substation

A method for dividing communication services in a smart substation into different priorities, the method including: determining the priority of a message to be sent according to its service type and the priority defined for that type, where the communication services include trip messages, state change messages, sampled value messages, device status messages, time synchronization messages, and file transfer messages, with corresponding priorities of 7, 6, 5, 4, 3, and 1, respectively; and filling the user priority field of the IEEE 802.1Q tag in the message header with the binary value corresponding to that priority.
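The priority-to-tag step can be illustrated by packing the priority into the 3-bit PCP (user priority) field of the 16-bit 802.1Q tag control information; the service-type names below are shorthand assumptions for the message types listed above:

```python
# Priority per service type, as listed in the abstract.
PRIORITY = {
    "trip": 7, "state_change": 6, "sampled_value": 5,
    "device_status": 4, "time_sync": 3, "file_transfer": 1,
}

def vlan_tci(service_type, vlan_id=0, dei=0):
    """Pack the user priority (PCP, upper 3 bits) into the 16-bit
    802.1Q tag control information field."""
    pcp = PRIORITY[service_type]
    return (pcp << 13) | (dei << 12) | (vlan_id & 0x0FFF)

tci = vlan_tci("trip", vlan_id=100)
```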

MEMORY-EFFICIENT TECHNIQUE FOR WEIGHTED ROUND-ROBIN LOAD BALANCING
20220360532 · 2022-11-10

A memory-efficient technique for performing weighted round-robin load balancing in a distributed computing system is described. In one example of the present disclosure, a system can determine an offset to apply to a list of node identifiers based on a counter value. The system can select a subset of node identifiers from the list of node identifiers based on the offset. The system can then select a node identifier from the subset of node identifiers based on the counter value and a length of the subset of node identifiers. The system can transmit data to a node that corresponds to the node identifier and increment the counter value. The system can repeat this process any number of times to distribute data among a group of nodes in the distributed computing system.
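One plausible reading of those steps, assuming a node list sorted by weight descending and an offset derived from the counter modulo the maximum weight (the patent does not fully specify these details), is:

```python
# Hypothetical sketch of counter-driven weighted selection without
# materializing a weight-expanded node list.
def pick_node(nodes_desc, counter):
    """nodes_desc: list of (node_id, weight), sorted by weight descending."""
    max_w = nodes_desc[0][1]
    offset = counter % max_w                            # offset from counter
    subset = [n for n, w in nodes_desc if w > offset]   # subset via offset
    return subset[counter % len(subset)]                # pick via counter & length

# Heavier nodes appear in more subsets, so they are picked more often.
counts = {"a": 0, "b": 0}
for c in range(600):
    counts[pick_node([("a", 3), ("b", 1)], c)] += 1
```

The subset is never empty: at the largest offset, `max_w - 1`, every node of maximum weight still satisfies `w > offset`.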

DETERMINING RATE DIFFERENTIAL WEIGHTED FAIR OUTPUT QUEUE SCHEDULING FOR A NETWORK DEVICE

A network device may receive packets and may calculate, during a time interval, an arrival rate and a departure rate, of the packets, at one of multiple virtual output queues. The network device may calculate a current oversubscription factor based on the arrival rate and the departure rate, and may calculate a target oversubscription factor based on an average of previous oversubscription factors associated with the multiple virtual output queues. The network device may determine whether a difference exists between the target oversubscription factor and the current oversubscription factor and may calculate, when the difference exists, a scale factor based on the current oversubscription factor and the target oversubscription factor. The network device may calculate new scheduling weights based on prior scheduling weights and the scale factor, and may process packets received by the multiple virtual output queues based on the new scheduling weights.
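A minimal sketch of the weight update per queue follows; the direction of the scale factor (raising the weight of queues whose oversubscription exceeds the target, so they drain faster) is an assumption, as the abstract only says the scale factor is based on the two oversubscription factors:

```python
# Hypothetical per-queue weight update for one measurement interval.
def update_weight(arrival_rate, departure_rate, prev_factors, prior_weight):
    current = arrival_rate / departure_rate            # current oversubscription
    target = sum(prev_factors) / len(prev_factors)     # average of prior factors
    if current == target:
        return prior_weight, current                   # no difference: keep weight
    scale = current / target                           # assumed direction
    return prior_weight * scale, current

new_weight, factor = update_weight(200.0, 100.0, [1.0, 1.0], 10.0)
```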

QUEUE SCHEDULING METHOD, APPARATUS, AND SYSTEM
20230155954 · 2023-05-18

A queue scheduling method, apparatus, and system are provided, to flexibly manage a queue, meet an actual transmission requirement, and reduce resources. The queue scheduling method implemented by a processing apparatus includes: generating an HQoS scheduling tree including a plurality of leaf nodes, each of which identifies a queue on a traffic management (TM) hardware entity that includes a plurality of queues; obtaining traffic characteristics of the plurality of queues based on the plurality of leaf nodes; determining a scheduling parameter of at least one of the queues based on the traffic characteristics of the data flows transmitted by the queues; and sending, to a scheduling apparatus, a scheduling message that corresponds to the at least one queue in the TM hardware entity and includes the scheduling parameter used to schedule that queue.
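The leaf-to-queue mapping and the traffic-driven parameter can be sketched as follows; the `Leaf` class, the rate statistics, and the choice of a rate-proportional weight as the scheduling parameter are illustrative assumptions:

```python
# Hypothetical sketch: leaf nodes of an HQoS tree identify TM queues, and
# observed per-queue traffic drives a scheduling parameter (here, a weight).
class Leaf:
    def __init__(self, queue_id):
        self.queue_id = queue_id

def schedule_params(leaves, traffic_stats):
    """traffic_stats: queue_id -> observed rate; weight proportional to rate."""
    total = sum(traffic_stats[l.queue_id] for l in leaves) or 1
    return {l.queue_id: traffic_stats[l.queue_id] / total for l in leaves}

leaves = [Leaf(1), Leaf(2)]
params = schedule_params(leaves, {1: 300, 2: 100})
```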

Fair Arbitration Between Multiple Sources Targeting a Destination
20230144797 · 2023-05-11

A hardware module comprises at least a first ingress buffer and a second ingress buffer, where the second ingress buffer holds data packets from a plurality of source components. To ensure fairness between one or more sources providing data to the first ingress buffer and the plurality of sources providing data to the second ingress buffer, processing circuitry examines source identifiers in packets held in the second ingress buffer and selects between the buffers so as to arbitrate between the sources. In some embodiments, the examination of the source identifiers provides statistics for a weighted round robin between the ingress buffers. In other embodiments, the source identifier of whichever packet is currently at the head of the second ingress buffer is used to perform a simple round robin between the sources.
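The weighted-round-robin variant can be sketched as follows; giving the second buffer a weight equal to the number of distinct source identifiers currently queued in it is an illustrative reading of "statistics for a weighted round robin", not the patent's circuitry:

```python
from collections import deque

def next_buffer(buf1, buf2, credit):
    """Weighted round robin between two ingress buffers of (source_id, payload)
    packets. buf2's weight is the number of distinct sources queued in it, so
    each source behind buf2 gets the same share as the source behind buf1."""
    if not buf2:
        return buf1, 0
    if not buf1:
        return buf2, credit
    weight2 = len({src for src, _ in buf2})
    if credit < weight2:
        return buf2, credit + 1   # still buf2's turn under its current weight
    return buf1, 0                # buf1's turn; reset the credit

buf1 = deque([(0, "a"), (0, "b")])
buf2 = deque([(1, "x"), (2, "y"), (1, "z")])
served, credit = [], 0
while buf1 or buf2:
    buf, credit = next_buffer(buf1, buf2, credit)
    served.append(buf.popleft()[1])
```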

LOAD BALANCING FOR A TEAM OF NETWORK INTERFACE CONTROLLERS
20170346885 · 2017-11-30

An example method is provided for a host to perform load balancing for multiple network interface controllers (NICs) configured as a team. The method may comprise the host detecting egress packets from a virtualized computing instance supported by the host for transmission to a destination via the team. The method may also comprise the host selecting one of the multiple NICs from the team based on load balancing weights associated with the respective multiple NICs. Each load balancing weight may be assigned based on a network speed supported by the associated NIC, and different load balancing weights are indicative of different network speeds among the multiple NICs in the team. The method may further comprise the host sending, via the selected one of the multiple NICs, the egress packets to the destination.
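Speed-proportional NIC selection can be sketched as a weighted random pick; the function name and the roulette-wheel approach are illustrative assumptions (the abstract does not say how the weights are consumed):

```python
import random

def pick_nic(nics, rng=random.random):
    """Select a NIC with probability proportional to its link speed.
    nics: list of (name, speed_gbps); speed serves as the load-balancing weight."""
    total = sum(speed for _, speed in nics)
    r = rng() * total
    for name, speed in nics:
        r -= speed
        if r < 0:
            return name
    return nics[-1][0]   # guard against floating-point edge cases

team = [("nic0", 10), ("nic1", 40)]   # a 10G and a 40G NIC in one team
```

With these weights, roughly four in five flows land on the 40G NIC.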

METHOD FOR TRAFFIC SHAPING OF DATA FRAMES IN NETWORK AND DEVICE AND COMPUTER PROGRAM PRODUCT THEREFOR
20170331748 · 2017-11-16

The present invention relates to packet-switched networks, such as Ethernet, and more particularly to a method for traffic shaping of data frames to be transmitted in such a telecommunication network, where the frames are distinguished between express frames, which need to be sent within predetermined time windows, and normal frames, intended to be sent outside those time windows. More particularly, for a current normal frame, the method comprises the steps of: determining whether the normal frame can be fragmented; if so, determining whether the remaining time before the next time window opens is enough to transmit one or several fragments of the normal frame; and if so, transmitting those fragments.
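The decision for a current normal frame can be sketched as a byte-budget check; the minimum fragment size, overhead figure, and function name are illustrative assumptions, not values from the method or the standard:

```python
def schedule_normal_frame(frame_bytes, fragmentable, time_to_window_us,
                          rate_bytes_per_us, min_fragment=64, overhead=8):
    """Return how many payload bytes of a normal frame may be sent before
    the next express time window opens (0 means wait)."""
    budget = int(time_to_window_us * rate_bytes_per_us)   # bytes sendable in time
    if frame_bytes + overhead <= budget:
        return frame_bytes            # whole frame fits before the window
    if not fragmentable:
        return 0                      # cannot fragment: hold until after window
    sendable = budget - overhead      # room left for a leading fragment
    return sendable if sendable >= min_fragment else 0
```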

Systems and methods for predictive scheduling and rate limiting
11431646 · 2022-08-30

Systems and methods are disclosed for enhancing network performance by using modified traffic control (e.g., rate limiting and/or scheduling) techniques to control a rate of packet (e.g., data packet) traffic to a queue scheduled by a Quality of Service (QoS) engine for reading and transmission. In particular, the QoS engine schedules packets using estimated packet sizes before an actual packet size is known by a direct memory access (DMA) engine coupled to the QoS engine. The QoS engine subsequently compensates for discrepancies between the estimated packet sizes and actual packet sizes (e.g., when the DMA engine has received an actual packet size of the scheduled packet). Using these modified traffic control techniques that leverage estimating packet sizes may reduce and/or eliminate latency introduced due to determining actual packet sizes.
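The estimate-then-compensate idea can be sketched as a running byte debt that is settled once the DMA engine reports the actual size; the class name and default estimate are illustrative assumptions:

```python
class PredictiveScheduler:
    """Charge an estimated packet size at scheduling time, then fold the
    estimate/actual discrepancy into the next charge."""
    def __init__(self, estimate=512):
        self.estimate = estimate
        self.debt = 0          # bytes over- or under-charged so far

    def charge(self):
        """Bytes to charge the queue now: the estimate plus any carried debt."""
        charged = self.estimate + self.debt
        self.debt = 0
        return charged

    def reconcile(self, actual):
        """Record the discrepancy once the DMA engine reports the actual size."""
        self.debt += actual - self.estimate

s = PredictiveScheduler(estimate=512)
first = s.charge()      # scheduled before the actual size is known
s.reconcile(700)        # DMA engine later reports 700 actual bytes
second = s.charge()     # next charge absorbs the 188-byte shortfall
```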