H04L47/623

BANDWIDTH CONTROL INSIDE A SHARED NETWORK INTERFACE CARD
20230300081 · 2023-09-21

A smart network interface card (smartNIC) may receive first traffic for a first process configured with a first bandwidth limit. The smartNIC may receive second traffic for a second process configured with a second bandwidth limit, the second bandwidth limit corresponding to a larger value between a second transmit limit and a second receive limit associated with the second process. The smartNIC may queue the received traffic associated with the first process and the second process in a scheduler, the scheduler having a first set of queues configured to store traffic from the first process, and a second set of queues configured to store traffic from the second process. The smartNIC may forward queued traffic from the first set of queues or the second set of queues, a maximum amount of forwarded first process traffic corresponding to the first bandwidth limit minus an amount of forwarded second process traffic.
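The accounting described in this abstract can be sketched in a few lines. This is a hypothetical per-interval model, not the patent's implementation; the function names (`second_process_limit`, `forward_interval`) and the byte-count representation of queues are illustrative assumptions.

```python
def second_process_limit(tx_limit: int, rx_limit: int) -> int:
    """The second bandwidth limit is the larger of the second process's
    transmit limit and receive limit, per the abstract."""
    return max(tx_limit, rx_limit)


def forward_interval(first_queue, second_queue, first_limit, tx_limit, rx_limit):
    """Forward queued bytes for one scheduling interval.

    Second-process traffic is forwarded up to its own limit; first-process
    traffic may then use at most first_limit minus the amount of
    second-process traffic actually forwarded.
    """
    second_cap = second_process_limit(tx_limit, rx_limit)
    sent_second = min(sum(second_queue), second_cap)
    first_cap = max(first_limit - sent_second, 0)
    sent_first = min(sum(first_queue), first_cap)
    return sent_first, sent_second
```

With a 1000-byte first limit and 300 bytes of second-process traffic forwarded, the first process is capped at 700 bytes for the interval.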

Devices and methods for managing network traffic for a distributed cache

A programmable switch includes ports, and circuitry to receive cache messages for a distributed cache from client devices. The cache messages are queued for sending to memory devices from the ports. Queue occupancy information is generated and sent to a controller that determines, based at least in part on the queue occupancy information, at least one of a cache message transmission rate for a client device, and one or more weights for the queues used by the programmable switch. In another aspect, the programmable switch extracts cache request information from a cache message. The cache request information indicates a cache usage and is sent to the controller, which determines, based at least in part on the extracted cache request information, at least one of a cache message transmission rate for a client device, and one or more weights for queues used in determining an order for sending cache messages.
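The controller's decision step can be illustrated with a toy heuristic. The 80% occupancy threshold and the halve-rate/boost-weight policy below are assumptions for the sketch; the patent only states that the controller derives a transmission rate and queue weights from occupancy information.

```python
def adjust(occupancy: int, capacity: int, base_rate: int, weight: int):
    """Controller sketch: given a queue's occupancy report, return a
    (client transmission rate, queue weight) pair.

    If the queue is more than 80% full, throttle the client to half its
    base rate and raise the queue's scheduling weight by one.
    """
    if occupancy / capacity > 0.8:
        return base_rate // 2, weight + 1
    return base_rate, weight
```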

METHOD AND MODULE OF PRIORITY DIVISION AND QUEUE SCHEDULING FOR COMMUNICATION SERVICES IN SMART SUBSTATION
20210367895 · 2021-11-25

A method for dividing communication services in a smart substation into different priorities, the method including: determining the priority of a message to be sent according to its service type and the priority definition for that type; the communication services include trip messages, state change messages, sampled value messages, device status messages, time synchronization messages, and file transfer messages, with the corresponding priorities defined as 7, 6, 5, 4, 3, and 1, respectively; and filling the user priority field of the IEEE 802.1Q tag in the message header with the binary value corresponding to its priority.
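The service-to-priority mapping above is concrete enough to state directly in code. The dictionary keys are illustrative short names for the six service types; the 3-bit encoding targets the user-priority (PCP) field of the IEEE 802.1Q tag.

```python
# Priority per service type, as defined in the abstract.
SERVICE_PRIORITY = {
    "trip": 7,
    "state_change": 6,
    "sampled_value": 5,
    "device_status": 4,
    "time_sync": 3,
    "file_transfer": 1,
}


def pcp_bits(service: str) -> str:
    """Return the 3-bit binary value for the IEEE 802.1Q user priority
    (PCP) field of the given service's messages."""
    return format(SERVICE_PRIORITY[service], "03b")
```

A trip message thus carries `111` in the PCP field, while a file transfer carries `001`.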

PROGRAMMING HIERARCHICAL SCHEDULERS FOR PORTS OF NETWORK DEVICES BASED ON HUMAN-READABLE CONFIGURATIONS

Embodiments of the present disclosure include techniques for programming hierarchical schedulers for ports of network devices. A configuration for configuring a hierarchy of a plurality of scheduling nodes of a packet scheduler is received. The packet scheduler is configured to schedule packets for egress out of a port of the network device. The configuration is specified in a human-readable format. Based on the configuration, the packet scheduler of the port is programmed. A plurality of packets are received at a plurality of physical queues communicatively coupled to the packet scheduler. The packet scheduler is used to select a packet in the plurality of packets from a physical queue in the plurality of physical queues. The selected packet is forwarded out the port of the network device.
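The "human-readable configuration to scheduler hierarchy" step might look like the following sketch, using JSON as a stand-in for whatever human-readable format an embodiment uses. The config layout, the `Node` class, and the `weight` field are assumptions for illustration.

```python
import json

# Hypothetical human-readable scheduler configuration for one port.
CONFIG = """
{
  "port": "eth0",
  "root": {"name": "root", "children": [
    {"name": "voice", "weight": 4, "children": []},
    {"name": "data",  "weight": 1, "children": []}
  ]}
}
"""


class Node:
    """One scheduling node in the hierarchy."""
    def __init__(self, name: str, weight: int = 1):
        self.name, self.weight, self.children = name, weight, []


def build(spec: dict) -> Node:
    """Recursively program the scheduling-node hierarchy from the config."""
    node = Node(spec["name"], spec.get("weight", 1))
    for child in spec["children"]:
        node.children.append(build(child))
    return node


scheduler = build(json.loads(CONFIG)["root"])
```

Physical queues would then attach to the leaf nodes (`voice`, `data`), and the scheduler would walk the tree to pick the next packet to egress.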

Congestion control processing method, packet forwarding apparatus, and packet receiving apparatus
11805071 · 2023-10-31

A congestion control processing method uses a two-level scheduling manner of a forwarding device and a destination device, where the network device of a data center network performs coarse-grained bandwidth allocation based on weights of flows destined for different destination devices. The network device allocates each flow a bandwidth that does not cause congestion, and notifies the destination device. The destination device performs fine-grained division, determines a maximum sending rate for each flow, and notifies a packet sending device of the maximum sending rate.
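The two-level split can be sketched as two functions, one per scheduling level. The weight-proportional formula at the coarse level and the equal per-flow division at the fine level are plausible assumptions; the abstract does not specify either formula.

```python
def coarse_allocate(link_bw: int, dest_weights: dict) -> dict:
    """Forwarding device: split the link bandwidth among destination
    devices in proportion to their weights (coarse-grained level)."""
    total = sum(dest_weights.values())
    return {dst: link_bw * w // total for dst, w in dest_weights.items()}


def fine_divide(dest_bw: int, flows: list) -> dict:
    """Destination device: divide its allocated share among its active
    flows, yielding the maximum sending rate to report to each sender
    (fine-grained level). Equal division is an assumption here."""
    return {flow: dest_bw // len(flows) for flow in flows}
```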

Binding application to namespace (NS) to set to submission queue (SQ) and assigning performance service level agreement (SLA) and passing it to a storage device

A host interface layer in a storage device is described. The host interface layer may include an arbitrator to select a first submission queue (SQ) from a set including at least the first SQ and a second SQ. The first SQ may be associated with a first Quality of Service (QoS) level, and the second SQ may be associated with a second QoS level. A command fetcher may retrieve an input/output (I/O) request from the first SQ. A command parser may place the I/O request in a first command queue from a set including at least the first command queue and a second command queue. The arbitrator may be configured to select the first SQ based at least in part on a first weight associated with the first SQ and a second weight associated with the second SQ. The first weight may be based at least in part on a first total storage capacity of at least one first namespace (NS) associated with the first QoS level, and the second weight may be based at least in part on a second total storage capacity of at least one second NS associated with the second QoS level.
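The capacity-derived weighting and the arbitration step can be sketched as follows. The weighted round-robin expansion in `pick_sq` is one simple way to honor the weights, chosen for clarity; the abstract does not fix a particular arbitration algorithm, and the QoS-level names are illustrative.

```python
def sq_weights(ns_capacities_by_qos: dict) -> dict:
    """Weight for each submission queue's QoS level = total storage
    capacity of the namespaces associated with that level."""
    return {qos: sum(caps) for qos, caps in ns_capacities_by_qos.items()}


def pick_sq(weights: dict, counter: int) -> str:
    """Weighted round-robin sketch: each QoS level appears in the
    rotation as many times as its weight."""
    expanded = [q for q, w in sorted(weights.items()) for _ in range(w)]
    return expanded[counter % len(expanded)]
```

With namespaces of capacity 2 and 1 at a "gold" level and one capacity-1 namespace at "silver", gold is selected for three of every four arbitration slots.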

DETERMINING RATE DIFFERENTIAL WEIGHTED FAIR OUTPUT QUEUE SCHEDULING FOR A NETWORK DEVICE
20220264364 · 2022-08-18 ·

A network device may receive packets and may calculate, during a time interval, an arrival rate and a departure rate, of the packets, at one of multiple virtual output queues. The network device may calculate a current oversubscription factor based on the arrival rate and the departure rate, and may calculate a target oversubscription factor based on an average of previous oversubscription factors associated with the multiple virtual output queues. The network device may determine whether a difference exists between the target oversubscription factor and the current oversubscription factor and may calculate, when the difference exists, a scale factor based on the current oversubscription factor and the target oversubscription factor. The network device may calculate new scheduling weights based on prior scheduling weights and the scale factor, and may process packets received by the multiple virtual output queues based on the new scheduling weights.
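The weight-update loop described above can be condensed into a short sketch. The abstract states that a scale factor is derived from the current and target oversubscription factors; the specific choice below (scale = current/target, so queues oversubscribed beyond the average gain weight) is an assumption for illustration.

```python
def update_weights(arrival_rates, departure_rates, prior_weights):
    """One update of per-virtual-output-queue scheduling weights.

    current[i]  = arrival rate / departure rate for queue i
    target      = average of the current factors across all queues
    new weight  = prior weight * (current / target)   # assumed scale factor
    """
    current = [a / d for a, d in zip(arrival_rates, departure_rates)]
    target = sum(current) / len(current)
    return [w * (c / target) for w, c in zip(prior_weights, current)]
```

A queue receiving twice what it can drain gains weight relative to a queue that is keeping up, pulling both back toward the target factor.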

NOF-based read control method, apparatus, and system

A NOF-based read control method, apparatus, and system belong to the field of networked storage. The method includes: receiving, by a NOF engine by using a communication link, a read request sent by a host; sending at least one read command to an NVMe hard disk based on the read request; and when congestion occurs on the communication link, generating a congestion flag corresponding to the communication link, and sending the congestion flag to the NVMe hard disk, where the congestion flag is used to instruct the NVMe hard disk to suspend processing of the read command corresponding to the communication link.

System and method for latency critical quality of service using continuous bandwidth control

A system and method are provided for a bandwidth manager for packetized data, designed to arbitrate access from multiple high-bandwidth ingress channels (sources) to one lower-bandwidth egress channel (sink). The system decides, on a word-by-word basis, which source is granted access to the sink, and intentionally corrupts/cuts packets if a source loses priority while sending. Each source is associated with a ranking that is recalculated every data word. When a source buffer sends enough words for its absolute rank value to rise above that of another source buffer waiting to send, the system "cuts" the current packet by forcing the sending buffer to stop mid-packet and selects a new, lower-ranked source buffer to send. When multiple requesting source buffers share the same rank, the system employs a weighted-priority randomized scheduler for buffer selection.
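The per-word arbitration decision can be sketched as a single selection function. This is a simplified model: the rank values, the lower-rank-wins convention, and the uniform `random.choice` tie-break (standing in for the patent's weighted-priority randomized scheduler) are all assumptions.

```python
import random


def arbitrate_word(source_ranks: dict, current: str):
    """Pick the source for the next data word.

    source_ranks maps source name -> current rank (lower rank wins).
    current is the source sending the in-flight packet, or None.
    Returns (winner, cut): cut is True when the in-flight packet must be
    stopped mid-packet because its sender lost priority.
    """
    best = min(source_ranks.values())
    candidates = [s for s, r in source_ranks.items() if r == best]
    if current in candidates:
        return current, False  # sender still holds top rank; keep sending
    # Tie-break among equally ranked buffers; the current packet is cut.
    winner = random.choice(candidates)
    return winner, current is not None
```

If source `a` is mid-packet with rank 1 it keeps the sink; once enough words raise its rank above a waiting source `b`, the packet is cut and `b` takes over.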