H04L47/50

Dynamically managed data traffic workflows

Dynamic management of data traffic workflows is performed. An event to perform a data traffic workflow at a remote performance location may be received. Computing resources to perform the data traffic workflow may be identified. Operations to perform the data traffic workflow may be dynamically directed by the identified computing resources to adaptively balance performance of the operations with operations for other data traffic workflows in order to meet respective performance requirements of the data traffic workflows.
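As a loose illustration of the adaptive balancing described above, the sketch below divides a fixed operation budget across concurrent workflows in proportion to each workflow's performance requirement. The names (`Workflow`, `rebalance`) and the proportional-share policy are assumptions for illustration, not taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    required_ops_per_sec: float  # the workflow's performance requirement

def rebalance(workflows, total_capacity_ops_per_sec):
    """Divide a fixed operation budget across workflows in proportion
    to their stated performance requirements (hypothetical policy)."""
    total_required = sum(w.required_ops_per_sec for w in workflows)
    return {
        w.name: total_capacity_ops_per_sec * w.required_ops_per_sec / total_required
        for w in workflows
    }

# Two workflows compete for 1000 ops/sec of identified computing resources.
shares = rebalance(
    [Workflow("replication", 300.0), Workflow("backup", 100.0)],
    total_capacity_ops_per_sec=1000.0,
)
```

Re-running `rebalance` whenever a workflow arrives or departs gives the "dynamically directed" behaviour: shares shrink or grow so every workflow keeps a slice sized to its requirement.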

Single queue link aggregation
11516117 · 2022-11-29


A method for transmitting a packet on a logical port comprising two or more physical ports comprises receiving a packet of a class of service; storing the packet in a memory; maintaining a lookup table relating a plurality of identifiers to at least one physical port; storing a pointer to the stored packet in the memory in a single pointer list for the class of service along with a selected one of the identifiers; and copying the stored packet to one or more physical ports corresponding to the selected identifier for transmission on at least one of the physical ports. In one implementation, a plurality of the physical ports are grouped into a logical port, and the received packet is processed to determine its logical port and its class of service.
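The single-queue scheme above can be sketched as follows: a packet is stored once, a single pointer list per class of service records (pointer, identifier) pairs, and at dequeue time the identifier resolves through the lookup table to one or more physical ports of the logical port. Port names and the identifier value are illustrative assumptions.

```python
from collections import deque

memory = {}                          # packet storage, keyed by pointer
lookup = {7: ["phys0", "phys1"]}     # lookup table: identifier -> physical ports
queues = {"gold": deque()}           # a single pointer list per class of service

def enqueue(ptr, packet, cos, identifier):
    memory[ptr] = packet                   # store the packet once in memory
    queues[cos].append((ptr, identifier))  # pointer plus selected identifier

def dequeue(cos):
    ptr, identifier = queues[cos].popleft()
    packet = memory.pop(ptr)
    # copy the stored packet to each physical port for the identifier
    return {port: packet for port in lookup[identifier]}

enqueue(0x10, b"payload", "gold", 7)
out = dequeue("gold")
```

The design point is that only one queue per class of service is needed regardless of how many physical ports make up the logical port; the identifier carried alongside the pointer defers the port choice to transmission time.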

Routing and control protocol for high-performance interconnect fabrics
11516143 · 2022-11-29

Operating a computer network uses a routing and control protocol, the computer network having an interconnect fabric including routing and control distribution devices and fabric interface devices, each of the routing and control distribution devices and each of the fabric interface devices having a state machine with an input processing unit having parallel input buffers, an output processing unit having parallel output buffers, and an arbiter; operating the state machine based on a set of instructions and a table located at the state machine; transferring data from the input processing unit to the output processing unit; choosing the highest-priority currently flit-occupied parallel input buffer located in the input processing unit for data transmission on the highest-priority currently flit-occupied channel; and interrupting that channel when one of the parallel input buffers is detected to contain a superseding, even higher-priority flit.
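A rough sketch of the arbiter behaviour described above: pick the highest-priority input buffer that currently holds a flit, and preempt the active channel if an even higher-priority flit appears. The buffer layout (priority, flit) and priority ordering are assumptions for illustration.

```python
def select_buffer(buffers):
    """buffers: list of (priority, flit_or_None); lower number = higher priority.
    Return the index of the highest-priority currently flit-occupied buffer,
    or None when no buffer holds a flit."""
    occupied = [(prio, i) for i, (prio, flit) in enumerate(buffers) if flit is not None]
    if not occupied:
        return None
    return min(occupied)[1]

buffers = [(2, "flitA"), (0, None), (1, "flitB")]
active = select_buffer(buffers)      # buffer 2 (priority 1) is granted the channel

# A superseding, higher-priority flit arrives in buffer 1 (priority 0):
buffers[1] = (0, "urgent")
preempting = select_buffer(buffers)  # re-arbitration interrupts in favour of buffer 1
```

Re-running the selection whenever buffer occupancy changes implements the interrupting step: the grant always tracks the highest-priority occupied buffer.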

Interspersed message batching in a database system

A message batching configuration may be determined for transmitting a message to recipients. The message batching configuration may include two or more message batches, a respective recipient count for each message batch, a respective time delay between each message batch, and a performance metric for evaluating the message. The message is transmitted in accordance with the message batching configuration. The transmission of subsequent message batches is halted when it is determined that the performance metric fails to meet a designated threshold.
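A hedged sketch of interspersed batching: the message goes out in batches of configured sizes, the performance metric is evaluated after each batch, and later batches are halted once the metric falls below the threshold. `send_batch` and `measure_metric` are hypothetical stand-ins, not APIs from the abstract.

```python
def run_batched_send(recipients, batch_sizes, measure_metric, send_batch,
                     metric_threshold):
    """Send `recipients` in batches; halt when the metric underperforms."""
    sent = 0
    for size in batch_sizes:
        batch = recipients[sent:sent + size]
        if not batch:
            break
        send_batch(batch)           # a time delay between batches would go here
        sent += len(batch)
        # evaluate the designated performance metric after each batch
        if measure_metric() < metric_threshold:
            break                   # halt transmission of subsequent batches
    return sent

log = []
metrics = iter([0.9, 0.4])          # simulated: the second batch underperforms
sent = run_batched_send(
    recipients=list(range(100)),
    batch_sizes=[10, 20, 70],
    measure_metric=lambda: next(metrics),
    send_batch=log.append,
    metric_threshold=0.5,
)
```

Here the third batch of 70 recipients is never sent, which is the point of interspersing: a poorly performing message stops early instead of reaching the full audience.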

Service Level Adjustment Method and Apparatus, Device, and Storage Medium
20220377016 · 2022-11-24

A service level adjustment method includes obtaining, by a first network device, at least one piece of queue status information at a target service level of the first network device, and, when any piece of the queue status information exceeds the first threshold upper limit corresponding to that piece of queue status information, adjusting, by the first network device, a parameter of the target service level based on a maximum delay associated with the target service level.
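A simplified sketch of that threshold check: when any monitored queue statistic at the target service level exceeds its upper limit, a parameter of that level is adjusted, bounded by the level's associated maximum delay. The statistic names and the doubling policy are illustrative assumptions.

```python
def adjust_service_level(queue_stats, upper_limits, max_delay, current_param):
    """queue_stats / upper_limits: dicts keyed by statistic name.
    Return the (possibly adjusted) service-level parameter."""
    for name, value in queue_stats.items():
        if value > upper_limits[name]:
            # Widen the parameter, capped by the maximum delay
            # associated with the target service level.
            return min(current_param * 2, max_delay)
    return current_param  # nothing exceeded its threshold upper limit

param = adjust_service_level(
    queue_stats={"depth": 120, "latency_us": 40},
    upper_limits={"depth": 100, "latency_us": 50},
    max_delay=300,
    current_param=80,
)
```

Because queue depth (120) exceeds its upper limit (100), the parameter doubles to 160, still inside the 300 maximum-delay budget.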

Traffic shaping and end-to-end prioritization
11595300 · 2023-02-28

A method is disclosed, comprising: receiving a first and a second Internet Protocol (IP) packet at a mesh network node; tagging the first and the second IP packet at the mesh network node based on a type of traffic by adding an IP options header to each of the first and the second IP packet; forwarding the first and the second IP packet toward a mesh gateway node; filtering the first and the second IP packet at the mesh gateway node based on the added IP options header by assigning each of the first and the second IP packet to one of a plurality of message queues, each of the plurality of message queues having a limited forwarding throughput; and forwarding the first and the second IP packet from the mesh gateway node toward a mobile operator core network, thereby providing packet flow filtering based on IP header and traffic type.
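The end-to-end flow above can be sketched in two parts: a mesh node that tags packets by traffic type, and a gateway that filters each tag into a message queue with a limited forwarding throughput. The tag values, traffic types, and per-queue rates are assumptions for illustration.

```python
from collections import deque

TAG_BY_TYPE = {"voice": 0x01, "bulk": 0x02}  # value carried in the added options header
QUEUE_RATE = {0x01: 100, 0x02: 10}           # packets/sec budget per message queue

def tag_packet(packet, traffic_type):
    """Mesh node: attach an options-header tag based on traffic type."""
    return {"options_tag": TAG_BY_TYPE[traffic_type], "payload": packet}

class GatewayQueues:
    """Mesh gateway: filter tagged packets into rate-limited message queues."""
    def __init__(self):
        self.queues = {tag: deque() for tag in QUEUE_RATE}

    def classify(self, tagged_packet):
        # assign the packet to a queue based on the added options header
        self.queues[tagged_packet["options_tag"]].append(tagged_packet)

    def forward(self, tag, elapsed_sec):
        # each queue has a limited forwarding throughput toward the core
        budget = int(QUEUE_RATE[tag] * elapsed_sec)
        q = self.queues[tag]
        return [q.popleft() for _ in range(min(budget, len(q)))]

gw = GatewayQueues()
for i in range(5):
    gw.classify(tag_packet(f"pkt{i}", "bulk"))
forwarded = gw.forward(TAG_BY_TYPE["bulk"], elapsed_sec=0.2)
```

With a 10 packets/sec budget, only two of the five queued bulk packets are forwarded in a 0.2 s window; the voice queue's higher rate would drain it faster, giving the end-to-end prioritization.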

Network-based coordination of loss/delay mode for congestion control of latency-sensitive flows

A controller of a network, including routers to forward flows of packets originated at senders to receivers along distinct network paths each including multiple links, such that the flows merge at a common link that imposes a traffic bottleneck on the flows, receives from one or more of the routers router reports that each indicate an aggregate packet loss that represents an aggregate of packet losses experienced by each of the flows at the common link. The controller sends to the senders aggregate loss reports each including the aggregate packet loss so that the senders have common packet loss information for the common link on which to base decisions as to whether to switch from delay-based to loss-based congestion control modes when implementing dual-mode congestion control of the flows. In lieu of the controller, another example employs in-band router messages populated with packet losses by the routers the messages traverse.
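A minimal sketch of the controller's role: sum the per-flow packet losses reported for the common bottleneck link, then distribute the same aggregate figure to every sender, which uses it to decide whether to switch from delay-based to loss-based congestion control. Flow names and the threshold rule are illustrative assumptions.

```python
def aggregate_loss(router_reports):
    """router_reports: per-flow packet-loss counts observed at the common link."""
    return sum(router_reports.values())

def choose_mode(aggregate, loss_threshold):
    """Because every sender receives the same aggregate for the common
    link, their mode-switch decisions stay consistent."""
    return "loss-based" if aggregate > loss_threshold else "delay-based"

agg = aggregate_loss({"flow-a": 12, "flow-b": 30, "flow-c": 0})
mode = choose_mode(agg, loss_threshold=25)
```

Without the shared aggregate, flow-c (which saw no loss of its own) would stay in delay-based mode while the others switched, and the delay-based flow would be starved at the bottleneck; the common figure keeps all senders coordinated.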