H04L49/501

Method and System for Balancing Storage Data Traffic in Converged Networks
20180006874 · 2018-01-04 ·

Methods for balancing storage data traffic in a system in which at least one computing device (server) coupled to a converged network accesses at least one storage device coupled (by at least one adapter) to the network, systems configured to perform such methods, and devices configured to implement such methods or for use in such systems. Typically, the system includes servers and adapters; server agents implemented on the servers and adapter agents implemented on the adapters are configured to detect imbalances in storage data traffic in the network and to redirect that traffic to reduce the imbalances, thereby improving overall network performance for both data communications and storage traffic. Typically, each agent operates autonomously (except that an adapter agent may respond to a request or notification from a server agent), and no central computer or manager directs the agents' operation.
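The autonomous rebalancing behavior described above can be sketched as a simple agent-side routine. This is an illustrative model only; the function name, load representation, and threshold are assumptions, not taken from the patent.

```python
# Hypothetical sketch of an autonomous agent rebalancing storage traffic
# across network paths (e.g., adapters). The agent runs locally, with no
# central manager: it observes per-path load and shifts traffic when the
# spread between the busiest and least busy path is too large.

def rebalance(path_loads, threshold=0.2):
    """Shift load from the busiest path to the least busy one when the
    spread exceeds `threshold` (expressed as a fraction of total load)."""
    total = sum(path_loads.values())
    if total == 0:
        return path_loads
    busiest = max(path_loads, key=path_loads.get)
    idlest = min(path_loads, key=path_loads.get)
    spread = (path_loads[busiest] - path_loads[idlest]) / total
    if spread > threshold:
        # Redirect half the difference, equalizing the two paths.
        shift = (path_loads[busiest] - path_loads[idlest]) / 2
        path_loads[busiest] -= shift
        path_loads[idlest] += shift
    return path_loads
```

Because each agent only inspects its own measurements, many such agents can run concurrently without coordination, matching the "no central computer or manager" property of the abstract.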

NETWORK LOAD BALANCER, REQUEST MESSAGE DISTRIBUTION METHOD, PROGRAM PRODUCT AND SYSTEM
20220407916 · 2022-12-22 ·

The present disclosure provides a network load balancer, a request message distribution method, a program product, and a system, relating to cloud computing technology. The network load balancer includes a network port and N intermediate chips, where N is a positive integer greater than or equal to 1. The N intermediate chips are connected in sequence, and the network port is connected to the first intermediate chip. The network port is configured to receive a request message and forward it to the first intermediate chip. Each intermediate chip is configured to forward the request message to the next intermediate chip in the chain if no connection information matching the request message is found, and to transmit the request message to a background server according to the connection information if a match is found.
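The chained-chip lookup can be modeled as a walk over per-chip connection tables. A minimal sketch, assuming each chip's connection information is a mapping from a request key to a backend server (the data model is an illustration, not the patent's):

```python
# Model of the chip chain: each element represents one intermediate
# chip's connection table. A lookup miss forwards the request to the
# next chip in sequence; a hit transmits it to the matched backend.

def dispatch(chips, request_key):
    """Walk the chip chain in order; return the background server of the
    first chip whose connection table matches, or None on a full miss."""
    for table in chips:                # chips are connected in sequence
        if request_key in table:       # matching connection information
            return table[request_key]  # transmit to background server
    return None                        # no chip held a matching entry
```

In hardware the forwarding happens chip-to-chip rather than in a loop, but the lookup semantics are the same: the first match wins and later chips never see the request.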

Time-division multiplexing scheduler and scheduling device
20230057059 · 2023-02-23 ·

A time-division multiplexing (TDM) scheduler determines a service order for serving N packet transmission requesters. The TDM scheduler includes: N current count value generators, which serve the N packet transmission requesters respectively and generate N current count values according to parameters of the N packet transmission requesters, the previous scheduling result of an earliest due date (EDD) scheduler, and a predetermined counting rule; and the EDD scheduler, which generates a current scheduling result determining the service order according to the N current count values and a predetermined urgency decision rule. An extremum of the N current count values corresponds to one of the N packet transmission requesters, and the EDD scheduler selects that requester to be served preferentially.
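The count-then-select loop can be sketched in a few lines. The counting rule below (advance a requester's count by the inverse of its weight after service, i.e., a virtual-finish-time update) is an assumed stand-in for the patent's "predetermined counting rule"; the EDD step is the extremum selection described above.

```python
# Sketch of the TDM scheduling loop: per-requester counters are updated
# by a counting rule after each scheduling decision, and the EDD step
# serves the requester with the extremal (smallest, i.e., most urgent)
# current count value. The specific counting rule here is an assumption.

def edd_schedule(counts, weights, rounds):
    """Return the service order produced over `rounds` decisions."""
    order = []
    for _ in range(rounds):
        served = min(counts, key=counts.get)     # extremum = most urgent
        order.append(served)
        counts[served] += 1.0 / weights[served]  # predetermined rule
    return order
```

With weights 2:1, requester "a" is served twice as often in the long run, illustrating how the counting rule encodes each requester's parameters into the service order.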

Selectively shedding processing loads associated with updates to a routing table in a fifth generation (5G) or other next generation network

The technologies described herein are generally directed toward shedding processing loads associated with route updates. According to an embodiment, a system can comprise a processor and a memory enabling operations that include receiving, at a first routing device from a second routing device, a route update for a routing table of the first routing device, wherein the route update is associated with a route. The operations can further comprise evaluating a value of the route update, resulting in an evaluated value of the route update, and updating an entry of the routing table based on the route update and the evaluated value.
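One way the evaluated value can drive load shedding is as a gate on whether an update is applied while the device is overloaded. This is a hedged sketch; the scoring threshold, the overload flag, and the idea of skipping (rather than deferring) low-value updates are assumptions layered on the abstract.

```python
# Illustrative value-based shedding of route updates: each update gets
# an evaluated value, and under overload only updates above a shedding
# threshold are applied to the routing table. Names are hypothetical.

def apply_update(table, route, next_hop, value,
                 shed_below=0.5, overloaded=False):
    """Apply the route update to `table` unless the device is overloaded
    and the update's evaluated value falls below the shedding threshold.
    Returns True if the table entry was updated, False if shed."""
    if overloaded and value < shed_below:
        return False           # shed: skip low-value update under load
    table[route] = next_hop    # update the routing-table entry
    return True
```

A real implementation would likely queue shed updates for later reconciliation rather than drop them outright, but the gate above captures the core value/load trade-off.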

Data processing network with flow compaction for streaming data transfer

An improved protocol is provided for data transfer between a request node and a home node of a data processing network that includes a number of devices coupled via an interconnect fabric; the protocol minimizes the number of response messages transported through the fabric. When congestion is detected in the interconnect fabric, the home node sends a combined response to a write request from a request node. The response is delayed until a data buffer is available at the home node and the home node has completed an associated coherence action. When the request node receives a combined response, the data to be written and the acknowledgment are coalesced into a single data message.
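The saving can be illustrated by counting messages per write transaction with and without compaction. The message names below are assumptions chosen for readability, not taken from the patent, and the uncompacted flow is a simplified four-message baseline.

```python
# Illustrative message-count comparison for flow compaction: under
# congestion the home node combines its buffer grant and completion into
# one response, and the request node coalesces write data with its
# acknowledgment into a single data message. Message names are assumed.

def write_transaction(congested):
    """Return the list of fabric messages for one write request."""
    msgs = ["WriteReq"]
    if congested:
        msgs += ["CombinedResp",   # buffer grant + coherence completion
                 "DataWithAck"]    # write data coalesced with the ack
    else:
        msgs += ["BufferGrant", "Completion", "Data", "Ack"]
    return msgs
```

Three messages instead of five per write is exactly the kind of reduction that matters when the fabric is already congested.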

UPLINK FAILURE REBALANCING

Embodiments herein facilitate adjusting data traffic load balancing on information handling systems affected when a networking information handling system has the status of one or more of its uplinks change from operable to inoperable, or from inoperable to operable. In one or more embodiments, an agent operating on or in conjunction with a networking information handling system (e.g., a top-of-rack switch, or TOR) detects a change in one of its links. The agent sends a message regarding the status change to the information handling systems (e.g., hosts) that are communicatively coupled to the TOR. Based upon the TOR's message, a host may adjust its traffic load balancing to compensate for the status change. Embodiments therefore help utilize network pathways efficiently.
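The host-side reaction can be sketched as applying the TOR agent's status message to the set of uplinks a host will balance traffic across. The message format and set-based representation are illustrative assumptions.

```python
# Sketch of host-side uplink rebalancing: a TOR-side agent sends status
# messages, and each host drops or restores that TOR in the set of
# uplinks it load-balances over. Field names are hypothetical.

def host_adjust(active_tors, message):
    """Apply a TOR status message to a host's set of usable uplink TORs."""
    tor, status = message["tor"], message["status"]
    if status == "inoperable":
        active_tors.discard(tor)   # stop sending traffic via this TOR
    elif status == "operable":
        active_tors.add(tor)       # resume using the recovered uplink
    return active_tors
```

Pushing the notification from the TOR to the hosts lets hosts react to a failed uplink immediately, instead of discovering it through dropped traffic.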

REVIEW AND RETRY FOR MINIMUM SPEED PORT CHANNEL
20220321268 · 2022-10-06 ·

A review and retry mechanism ensures that a port channel can be configured to provide and maintain a minimum data speed. A timer-based review sequence examines the constituent interfaces of a port channel to determine whether a minimum speed requirement is met. If the minimum speed cannot be met, the member interfaces are un-programmed and removed from the port channel, rendering the port channel functionally inactive and thereby preventing network traffic loss. A timer-based retry sequence then attempts to program the constituent interfaces, and the minimum speed requirement is checked again in the next review cycle. If the requirement is met, the review and retry mechanism halts and the port channel remains active; otherwise, the interfaces are un-programmed and the process repeats.
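The review and retry passes form a small state machine over the port channel's member interfaces. A minimal sketch, assuming a dict-based data model (the field names are illustrative, not from the patent):

```python
# Sketch of the review/retry cycle: `review` sums the speeds of the
# programmed member interfaces against the minimum and deactivates the
# port channel on a shortfall; `retry` re-programs the members so the
# next review cycle can re-check the requirement.

def review(port_channel, min_speed):
    """Deactivate the channel if programmed member speed < min_speed;
    return whether the port channel is active after the review."""
    total = sum(m["speed"] for m in port_channel["members"] if m["programmed"])
    if total < min_speed:
        for m in port_channel["members"]:
            m["programmed"] = False      # un-program member interfaces
        port_channel["active"] = False   # functionally inactive: no loss
    else:
        port_channel["active"] = True
    return port_channel["active"]

def retry(port_channel):
    """Re-program all members; the next review re-checks the speed."""
    for m in port_channel["members"]:
        m["programmed"] = True
```

Deactivating the whole channel rather than running it below the minimum is the key choice: traffic fails over cleanly instead of being squeezed through an undersized channel.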

Dynamically reconfiguring data plane of forwarding element to account for power consumption
11689424 · 2023-06-27 ·

Some embodiments of the invention provide a network forwarding element, implemented as an integrated circuit (IC), that can be dynamically reconfigured to adjust its data message processing to stay within a desired operating temperature or power consumption range. In some embodiments, the network forwarding element includes (1) a data-plane forwarding circuit (“data plane”) to process data tuples associated with data messages received by the IC, and (2) a control-plane circuit (“control plane”) for configuring the data-plane forwarding circuit. The data plane includes several data processing stages to process the data tuples, as well as an idle-signal injecting circuit that receives configuration data generated by the control plane based on the IC's temperature. Based on the received configuration data, the idle-signal injecting circuit generates idle control signals for the data processing stages. Each stage that receives an idle control signal enters an idle state during which most of that stage's components perform no operations, reducing the power consumed and heat generated by the stage while it is idle.
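One plausible control-plane policy is to translate a temperature reading into a duty cycle of idle signals. The linear ramp and the specific temperature thresholds below are assumptions for the sketch, not the patent's configuration rule.

```python
# Hedged sketch of temperature-driven idle injection: the control plane
# maps the IC temperature to a fraction of cycles that should receive
# idle control signals, and each stage skips work on those cycles.
# Thresholds and the linear mapping are illustrative assumptions.

def idle_duty_cycle(temp_c, target_c=85.0, max_c=105.0):
    """Fraction of cycles to idle: 0 at or below the target temperature,
    rising linearly to 1.0 at the maximum temperature."""
    if temp_c <= target_c:
        return 0.0
    return min(1.0, (temp_c - target_c) / (max_c - target_c))

def idle_cycles(n_cycles, duty):
    """Number of cycles, out of n_cycles, that get an idle signal."""
    return round(n_cycles * duty)
```

Because the duty cycle is recomputed from fresh temperature readings, the data plane throttles smoothly instead of toggling between full speed and a hard thermal shutdown.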

Traffic distribution method and apparatus in hybrid access network

A traffic distribution method and apparatus in a hybrid access network. The method includes: transmitting, by a hybrid access aggregation point (HAAP), probe traffic over a second tunnel after determining that a first tunnel carrying user traffic is congested; obtaining, by the HAAP, a status of the first tunnel and a status of the second tunnel; determining, by the HAAP according to the two statuses, whether an offloading condition is met; and transmitting, by the HAAP, the user traffic over both the first tunnel and the second tunnel after determining that the offloading condition is met.
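The HAAP decision can be sketched as a two-step check: probe the second tunnel only once the first is congested, and offload only when the probe results satisfy the condition. The status fields, thresholds, and the particular offloading condition are assumptions for illustration.

```python
# Illustrative HAAP tunnel selection: tunnel 1 carries user traffic
# alone while uncongested; once it is congested, probe traffic on
# tunnel 2 determines whether the offloading condition is met and user
# traffic is split across both tunnels. Field names are hypothetical.

def select_tunnels(t1_status, t2_status,
                   congestion_threshold=0.9, min_probe_success=0.95):
    """Return the list of tunnels that should carry the user traffic."""
    if t1_status["load"] < congestion_threshold:
        return ["tunnel1"]             # first tunnel alone suffices
    if t2_status["probe_success"] >= min_probe_success:
        return ["tunnel1", "tunnel2"]  # offloading condition met
    return ["tunnel1"]                 # probes failed: do not offload
```

Probing before offloading matters: blindly splitting traffic onto an unhealthy second tunnel (e.g., a lossy LTE path bonded with DSL) would make congestion worse, not better.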

Predictable virtualized NIC

A method for controlling congestion in a datacenter network or server is described. The server includes a processor configured to host a plurality of virtual machines and an ingress engine configured to maintain a plurality of per-virtual-machine queues for storing received packets. The processor is configured to execute a CPU-fair fair queuing process to control its processing of the packets, and to selectively trigger temporary packet-per-second transmission limits, on top of a substantially continuously enforced bit-per-second transmission limit, upon detection of a per-virtual-machine queue overload.
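The two-tier limiting can be sketched as an admission check per packet: the bit-per-second cap is always enforced, while the packet-per-second cap activates only once a VM's queue depth signals overload. The data model, thresholds, and field names are illustrative assumptions.

```python
# Sketch of the overload response: a continuously enforced bps cap plus
# a temporary pps cap triggered when a per-VM ingress queue overflows a
# depth limit. Counters are assumed to be reset each second elsewhere.

def admit(vm, pkt_bits, qdepth_limit=1024):
    """Admit one packet for `vm` if it stays within the active limits."""
    if vm["queue_depth"] > qdepth_limit:
        vm["pps_limit_active"] = True      # temporary pps cap kicks in
    if vm["pps_limit_active"] and vm["pkts_this_sec"] >= vm["pps_limit"]:
        return False                       # over the temporary pps cap
    if vm["bits_this_sec"] + pkt_bits > vm["bps_limit"]:
        return False                       # over the always-on bps cap
    vm["pkts_this_sec"] += 1
    vm["bits_this_sec"] += pkt_bits
    return True
```

The pps limit targets the small-packet case that a bps limit misses: a flood of tiny packets can overload per-packet processing while staying far under the byte budget.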