Patent classifications
H04L47/623
PROGRAMMABLE TRAFFIC MANAGEMENT ENGINE
Examples herein describe a programmable traffic management (PTM) engine that includes both programmable and non-programmable hardware components. The non-programmable hardware components generate features that can then be used to perform different traffic management algorithms. Depending on which traffic management algorithm the PTM engine is configured to perform, the PTM engine may use a subset (or all) of the features. The programmable hardware components in the PTM engine can be programmed (e.g., customized) by the user to perform a selected algorithm using some or all of the features provided by the non-programmable hardware components.
Weighted load balancing using scaled parallel hashing
A method for weighted data traffic routing can include receiving a data packet at a data switch, where the data switch includes a plurality of egress ports. The method can also include, for each of the egress ports, generating an independent hash value based on one or more fields of the data packet and generating a weighted hash value by scaling the hash value using a scaling factor. The scaling factor can be based on at least two traffic routing weights of a plurality of respective traffic routing weights associated with the plurality of egress ports. The method can further include selecting an egress port of the plurality of egress ports based on the weighted hash value for each of the egress ports and transmitting the data packet using the selected egress port.
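A minimal Python sketch of the per-port scaled hashing described above. The function name and the specific scaling formula (a weighted-rendezvous-style scaling) are assumptions for illustration; the abstract only states that each port gets an independent hash that is scaled by a factor derived from the routing weights.

```python
import hashlib
import math

def weighted_port_select(flow_key, weights):
    """Scaled parallel hashing sketch (hypothetical helper): compute an
    independent hash per egress port, scale it by that port's routing
    weight, and pick the port with the highest weighted hash value."""
    best_port, best_score = None, float("-inf")
    for port, weight in enumerate(weights):
        # independent per-port hash: salt the flow key with the port index
        digest = hashlib.sha256(f"{flow_key}|{port}".encode()).digest()
        # map the 64-bit hash prefix into the open interval (0, 1)
        h = (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 1)
        # weighted-rendezvous-style scaling (an assumption; the patent
        # only says the scaling factor is based on the weights)
        score = -weight / math.log(h)
        if score > best_score:
            best_port, best_score = port, score
    return best_port
```

Because the hash depends only on packet fields, all packets of a flow map to the same egress port, while across flows the ports are chosen roughly in proportion to their weights.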
Load balancing among network links using an efficient forwarding scheme
A network element includes multiple output ports and circuitry. The multiple output ports are configured to transmit packets over multiple respective network links of a communication network. The circuitry is configured to receive from the communication network, via one or more input ports of the network element, packets destined for transmission via the multiple output ports; to monitor multiple data-counts, where each data-count corresponds to a respective output port and indicates a respective data volume of the packets forwarded for transmission via that output port; to select for a given packet, based on the data-counts, an output port among the multiple output ports; and to forward the given packet for transmission via the selected output port.
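The data-count scheme above can be sketched in a few lines of Python. One natural policy, assumed here for illustration, is to forward each packet via the port with the smallest accumulated byte count and then charge the packet's length to that port:

```python
def forward(packet_len, data_counts):
    """Least-loaded forwarding sketch: pick the output port with the
    smallest accumulated data-count, then add the packet's length to
    that port's counter."""
    port = min(range(len(data_counts)), key=lambda p: data_counts[p])
    data_counts[port] += packet_len
    return port
```

Over time this keeps the per-port byte counts balanced even when packet sizes vary, which a simple round-robin over packets would not.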
Waterfall granting
Waterfall granting may be provided. First, a plurality of grants may be received for a service flow. Then a first plurality of packets may be placed in a first queue associated with the service flow in response to determining that the first plurality of packets corresponding to the service flow are associated with a first quality of service level. Next, a second plurality of packets may be placed in a second queue associated with the service flow in response to determining that the second plurality of packets corresponding to the service flow are associated with a second quality of service level. The first plurality of packets in the first queue may then be serviced from the plurality of grants; only after all of the first plurality of packets in the first queue are serviced are any of the second plurality of packets in the second queue serviced with the remaining ones of the plurality of grants.
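The waterfall order above can be sketched as a short Python function. For simplicity the sketch assumes one grant services one packet; the function name and that simplification are illustrative, not from the abstract.

```python
from collections import deque

def waterfall_grant(grants, high_q, low_q):
    """Waterfall granting sketch: spend grants on the first
    (higher-QoS) queue until it is empty; only the remaining grants
    service the second (lower-QoS) queue."""
    serviced = []
    while grants > 0 and high_q:
        serviced.append(high_q.popleft())
        grants -= 1
    while grants > 0 and low_q:
        serviced.append(low_q.popleft())
        grants -= 1
    return serviced
```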
Congestion Control Processing Method, Packet Forwarding Apparatus, and Packet Receiving Apparatus
A congestion control processing method uses a two-level scheduling manner of a forwarding device and a destination device, where a network device of a data center network performs coarse-grained bandwidth allocation based on the weights of flows destined for different destination devices. The network device allocates each flow a bandwidth that does not cause congestion and notifies the destination device. The destination device performs fine-grained division, determines a maximum sending rate for each flow, and notifies a packet sending device of that maximum sending rate.
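The two levels can be sketched as two small Python functions. The function names and the equal-split fine-grained policy are assumptions; the abstract only says the destination performs a finer division of the coarse allocation it is notified of.

```python
def coarse_allocation(link_bw, dest_weights):
    """Forwarding-device level sketch: split the link bandwidth among
    destination devices in proportion to their weights (coarse-grained)."""
    total = sum(dest_weights.values())
    return {dest: link_bw * w / total for dest, w in dest_weights.items()}

def fine_division(dest_bw, flow_count):
    """Destination-device level sketch: divide the notified allocation
    equally among that destination's flows to obtain each flow's
    maximum sending rate (one simple policy)."""
    return dest_bw / flow_count
```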
System and Method for Latency Critical Quality of Service Using Continuous Bandwidth Control
A system and method are provided for a bandwidth manager for packetized data that arbitrates access from multiple high-bandwidth ingress channels (sources) to one lower-bandwidth egress channel (sink). The system calculates which source to grant access to the sink on a word-by-word basis and intentionally corrupts/cuts packets if a source ever loses priority while sending. Each source is associated with a ranking that is recalculated every data word. When a source buffer sends enough words to have its absolute rank value increase above that of another source buffer waiting to send, the system “cuts” the current packet by forcing the sending buffer to stop mid-packet and selects a new, lower-ranked source buffer to send. When there are multiple requesting source buffers with the same rank, the system employs a weighted priority randomized scheduler for buffer selection.
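A toy Python sketch of the word-by-word ranking. The rank formula (words sent divided by a per-source weight) is an assumption chosen so that rank grows as a source sends; the abstract does not give the actual formula, and the mid-packet cut is implicit here in that the winner can change between consecutive words.

```python
def arbitrate_words(sources, weights, total_words):
    """Word-by-word arbitration sketch: each source's rank grows with
    the words it has sent, normalized by its weight; the lowest-ranked
    source wins each word, so a sender is 'cut' mid-packet whenever
    another source's rank falls below its own."""
    sent = {s: 0 for s in sources}
    schedule = []
    for _ in range(total_words):
        # hypothetical rank: words sent / weight (not the patent's formula)
        winner = min(sources, key=lambda s: sent[s] / weights[s])
        schedule.append(winner)
        sent[winner] += 1
    return schedule
```

With weights 2:1, source "a" gets roughly twice as many word slots as "b", interleaved rather than back-to-back.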
SCHEDULING SOLUTION CONFIGURATION METHOD AND APPARATUS, COMPUTER READABLE STORAGE MEDIUM THEREOF, AND COMPUTER DEVICE
A scheduling scheme configuration method includes performing state verification on a plurality of operation dimensions involved in generating a scheduling scheme, and, in response to one or more of the operation dimensions being abnormal, removing the one or more abnormal operation dimensions to generate a new scheduling scheme.
Relay device
A relay device accumulates received frames that are not determined to be a specific frame in a queue and transfers the frames accumulated in the queue one by one according to a predetermined rule. The relay device transfers a received frame that is determined to be a specific frame with priority over the frames accumulated in the queue, without accumulating the specific frame in the queue.
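The bypass behavior can be sketched in Python. The class name, the predicate-based frame classification, and the string frame format are illustrative assumptions.

```python
from collections import deque

class Relay:
    """Relay sketch: non-specific frames are queued and transferred
    FIFO; a specific frame bypasses the queue and is transferred
    immediately with priority."""

    def __init__(self, is_specific):
        self.queue = deque()
        self.is_specific = is_specific  # hypothetical classifier predicate
        self.sent = []

    def receive(self, frame):
        if self.is_specific(frame):
            self.sent.append(frame)  # transferred immediately, never queued
        else:
            self.queue.append(frame)

    def transfer_one(self):
        """Transfer one accumulated frame per the predetermined (FIFO) rule."""
        if self.queue:
            self.sent.append(self.queue.popleft())
```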
ARBITER WITH RANDOM TIE BREAKING
Candidates for selection in a weighted arbitration system are assigned priority weights and random weights. The winning candidate is determined using a tree of selectors such as comparators. At each stage of the tree, the candidate having the greatest priority weight is selected to pass to the next stage. If multiple candidates are tied for the greatest priority weight, the candidate having the greatest random weight is selected to pass to the next stage.
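The selector tree above can be sketched as a tournament in Python: each candidate is a (name, priority weight, random weight) tuple, and each comparator stage keeps the pairwise winner, using the random weight only to break priority ties. The function name and tuple encoding are assumptions.

```python
def tree_arbitrate(candidates):
    """Comparator-tree sketch: candidates are (name, priority_weight,
    random_weight) tuples; pairs are compared by priority weight, with
    the random weight breaking ties, until one candidate remains."""
    stage = list(candidates)
    while len(stage) > 1:
        next_stage = []
        for i in range(0, len(stage) - 1, 2):
            a, b = stage[i], stage[i + 1]
            # compare priority first, then random weight on a tie
            next_stage.append(a if (a[1], a[2]) >= (b[1], b[2]) else b)
        if len(stage) % 2:  # odd candidate gets a bye to the next stage
            next_stage.append(stage[-1])
        stage = next_stage
    return stage[0][0]
```

In hardware the random weights would typically be drawn fresh per arbitration round so that ties are broken fairly over time.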
Configurable packet arbitration with minimum progress guarantees
Systems, apparatuses, and methods for implementing a configurable packet arbiter with minimum progress guarantees are described. An arbiter includes at least control logic, a plurality of counters, and a tunables matrix. The tunables matrix stores values for a plurality of configurable parameters for the various transaction sources of the arbiter. These parameter values determine the settings that the arbiter uses for performing arbitration. One of the parameters is a minimum progress guarantee value that specifies how many times each source should be picked per interval. The minimum progress guarantee helps to reduce arbitration-related jitter. Also, the arbiter includes a grant counter for each source. After the minimum progress guarantees are satisfied, the arbiter selects the source with the lowest grant counter among the sources with packets eligible for arbitration. Then, the arbiter increments the grant counter of the winning source by a grant increment amount specific to the source.
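The selection step described above can be sketched in Python: sources that have not yet met their minimum-progress guarantee for the interval are picked first, then the lowest grant counter decides, and the winner's counter is incremented by its per-source grant amount. The function name and dict-based state are assumptions standing in for the tunables matrix and hardware counters.

```python
def pick_source(eligible, min_grants, picks_this_interval,
                grant_counters, grant_increments):
    """Arbitration-step sketch: satisfy minimum-progress guarantees
    first; otherwise pick the eligible source with the lowest grant
    counter, then charge it its per-source grant increment."""
    behind = [s for s in eligible if picks_this_interval[s] < min_grants[s]]
    pool = behind if behind else eligible
    winner = min(pool, key=lambda s: grant_counters[s])
    picks_this_interval[winner] += 1
    grant_counters[winner] += grant_increments[winner]
    return winner
```

Per the abstract, `picks_this_interval` would be reset at each interval boundary; a larger `grant_increments` value makes a source win less often once guarantees are met.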