Patent classifications
H04L49/50
Adapting forwarding database learning rate based on fill level of forwarding table
A packet processor of a network device repeatedly determines a fill level of a forwarding table that is populated with associations between network addresses and network interfaces of, or coupled to, the network device. The packet processor adjusts, based on the fill level of the forwarding table, a maximum rate according to which the packet processor is permitted to send messages to a central processing unit (CPU) coupled to the packet processor, the messages indicating network addresses that are to be stored in the forwarding table by the CPU. The packet processor of the network device receives packets via network links coupled to the network device; identifies new network addresses of the packets that are not in the forwarding table; and sends messages to the CPU at a rate that does not exceed the maximum rate, the messages indicating the new network addresses are to be added to the forwarding table.
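The fill-level-dependent rate cap described above can be sketched as a simple mapping from table occupancy to a message-rate limit. This is a minimal Python illustration; the threshold values and the linear taper are assumptions, since the abstract does not specify how the rate is adjusted:

```python
def max_cpu_message_rate(fill_level, base_rate=1000, floor_rate=10):
    """Return the cap on address-learning messages per second that the packet
    processor may send to the CPU, given the forwarding-table fill level in [0, 1].

    Thresholds (50% and 95%) and rates are illustrative, not from the patent.
    """
    if fill_level < 0.5:
        return base_rate                     # plenty of free entries: learn at full speed
    if fill_level >= 0.95:
        return floor_rate                    # table nearly full: learn almost nothing
    # Linearly taper from base_rate down to floor_rate as the table fills.
    span = (0.95 - fill_level) / (0.95 - 0.5)
    return int(floor_rate + span * (base_rate - floor_rate))
```

The packet processor would re-evaluate this cap each time it re-reads the fill level, so the learning rate falls smoothly as the table approaches capacity instead of collapsing all at once.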
Congestion control method and related device
Embodiments of this application disclose a congestion control method and a related device. A Transmission Control Protocol offload engine (TOE) sends a congestion control notification to a central processing unit (CPU), where the congestion control notification instructs the CPU to obtain a target parameter, and the target parameter is used by the CPU to generate a congestion control calculation result. The TOE obtains the congestion control calculation result returned by the CPU, where the congestion control calculation result includes a congestion control window value. The TOE sends a packet based on the congestion control window value. In this application, the TOE and the CPU implement congestion control together. When a new congestion control algorithm emerges, the new congestion control algorithm may be applied without changing a structure of the TOE. Therefore, in this application, an upgrade period of the congestion control algorithm can be shortened, and flexibility can be improved.
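The TOE/CPU split can be sketched as a fixed TOE-side engine with a pluggable CPU-side algorithm. The class and function names below are illustrative, and the Reno-like update rule is just one example of an algorithm the CPU could run:

```python
class Toe:
    """Minimal sketch of the TOE side. The TOE never computes congestion
    control itself; it notifies the CPU-side algorithm and adopts the
    returned congestion window value."""

    def __init__(self, cpu_algorithm):
        self.cpu_algorithm = cpu_algorithm  # pluggable: swapping it needs no TOE change
        self.cwnd = 1                       # congestion window, in packets

    def on_ack(self, acked, lost):
        # Notify the CPU with the target parameters; adopt its calculation result.
        params = {"cwnd": self.cwnd, "acked": acked, "lost": lost}
        self.cwnd = self.cpu_algorithm(params)

    def can_send(self, in_flight):
        # The TOE sends packets based on the congestion window value.
        return in_flight < self.cwnd


def reno_like(params):
    """One possible CPU-side algorithm: halve on loss, grow by one otherwise."""
    if params["lost"]:
        return max(1, params["cwnd"] // 2)
    return params["cwnd"] + 1
```

Deploying a new algorithm amounts to passing a different function to `Toe`, which mirrors the claimed benefit: the TOE's structure is untouched when the congestion control algorithm is upgraded.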
Control of a computing system to perform network fabric benchmark measurements
In one embodiment, a method selects a percentage of a plurality of hosts that are coupled together via a network fabric and calculates a number of workloads needed for the percentage of hosts based on a benchmark test to run. A plurality of data compute nodes are configured on one or more host pairs in the percentage of the plurality of hosts to send and receive the number of workloads through the network fabric to perform the benchmark test. A set of measurements is received for sending and receiving the workloads through the network fabric using the plurality of data compute nodes. The method increases the percentage of the plurality of hosts until the set of measurements fails a criterion or the percentage of the plurality of hosts is all of the plurality of hosts.
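The ramp-up loop in this method can be sketched in a few lines. `run_benchmark`, `passes`, and the step size are placeholders for whatever benchmark harness and pass/fail criterion are in use; only the control flow comes from the abstract:

```python
def ramp_benchmark(hosts, run_benchmark, passes, start_pct=10, step_pct=10):
    """Increase the percentage of hosts under test until the measurements
    fail the criterion or all hosts are included. Returns the final
    percentage and the final set of measurements.

    run_benchmark(subset) -> measurements; passes(measurements) -> bool.
    """
    pct = start_pct
    while True:
        n = max(2, len(hosts) * pct // 100)   # at least one host pair is needed
        measurements = run_benchmark(hosts[:n])
        if not passes(measurements) or n == len(hosts):
            return pct, measurements
        pct = min(100, pct + step_pct)
```

The returned percentage marks the scale at which the fabric stopped meeting the criterion, which is the quantity the benchmark is designed to find.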
Time-division multiplexing scheduler and scheduling device
A time-division multiplexing (TDM) scheduler determines a service order for serving N packet transmission requesters. The TDM scheduler includes: N current count value generators configured to serve the N packet transmission requesters respectively, and generate N current count values according to parameters of the N packet transmission requesters, a previous scheduling result generated by an earliest due date (EDD) scheduler, and a predetermined counting rule; and the EDD scheduler configured to generate a current scheduling result for determining the service order according to the N current count values and a predetermined urgency decision rule, wherein an extremum of the N current count values relates to one of the N packet transmission requesters, and the EDD scheduler selects this requester as the one to be served preferentially.
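One way to read this is that each requester's counter counts down toward its deadline, and the EDD stage serves the requester whose count is the extremum (smallest, i.e. most urgent). The sketch below assumes a simple counting rule in which each requester has a fixed period, the served requester's counter resets to its period, and all others decrement; the abstract leaves the actual counting and urgency rules unspecified:

```python
def edd_schedule(slots, periods):
    """Return a TDM service order over `slots` time slots for requesters whose
    deadline periods are given in `periods`. The requester with the smallest
    current count value (the extremum) is served first in each slot.

    The counting rule (reset-to-period, decrement-others) is an assumption."""
    counts = list(periods)            # N current count value generators
    order = []
    for _ in range(slots):
        served = min(range(len(counts)), key=lambda i: counts[i])
        order.append(served)
        counts = [periods[i] if i == served else counts[i] - 1
                  for i in range(len(counts))]
    return order
```

With periods (2, 3), requester 0 is served roughly every second slot and requester 1 every third, with requester 0 absorbing the slack, which is the qualitative behavior a deadline-driven TDM scheduler should show.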
Locally-managed PoE switch and management system
A locally managed Power over Ethernet (PoE) switch and a management system are provided. The PoE switch includes a casing, a Liquid Crystal Display (LCD) screen, a monitoring Micro Control Unit (MCU) module, a power system module, a display module, a PoE system module, a switch system module, a key group arranged on the casing, and a key module. The key module transmits information to the MCU module through the display module, and the MCU module, which is connected through a bus to the display module, the PoE system module, the switch system module, and the key module, performs the corresponding operation according to that information. With this technical solution, the working states of the PoE system module and the switch system module are visually displayed on the screen, and key input is processed accordingly, with the results shown on the screen.
Switch fabric packet flow reordering
An ingress fabric endpoint coupled to a switch fabric within a network device reorders packet flows based on congestion status. In one example, the ingress fabric endpoint receives packet flows for switching across the switch fabric. The ingress fabric endpoint assigns each packet for each packet flow to a fast path or a slow path for packet switching. The ingress fabric endpoint processes, to generate a stream of cells for switching across the switch fabric, packets from the fast path and the slow path to maintain a first-in-first-out ordering of the packets within each packet flow. The ingress fabric endpoint switches a packet of a first packet flow after switching a packet of a second packet flow despite receiving the packet of the first packet flow before the packet of the second packet flow.
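The ordering property claimed above, that packets may be reordered across flows but never within a flow, can be illustrated with a toy model. Here the path assignment is reduced to a per-flow choice and the fast path simply drains first; the real mechanism (per-packet path assignment with congestion-based reordering) is more involved, so treat this only as a sketch of the invariant:

```python
def reorder(packets, slow_flows):
    """Toy model of fast/slow-path switching at an ingress fabric endpoint.

    packets: arrival-ordered list of (flow_id, seq) tuples.
    slow_flows: set of flow ids assigned to the slow path (an assumption;
    the patent assigns paths per packet based on congestion status).

    Fast-path packets are switched first, so cross-flow order can change,
    but the first-in-first-out order within each flow is preserved."""
    fast = [p for p in packets if p[0] not in slow_flows]
    slow = [p for p in packets if p[0] in slow_flows]
    return fast + slow
```

In the test below, flow A's first packet arrives before flow B's but is switched after both of B's packets, matching the abstract's example, while each flow's own sequence stays in order.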
Graded throttling for network-on-chip traffic
Graded throttling for network-on-chip traffic, including: calculating, by an agent of a network-on-chip, a number of outstanding transactions issued by the agent; determining that the number of outstanding transactions meets a threshold; and implementing, by the agent, in response to the number of outstanding transactions meeting the threshold, a traffic throttling policy.
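"Graded" throttling implies multiple thresholds rather than a single on/off cutoff. A minimal sketch, with threshold values chosen purely for illustration:

```python
def throttle_level(outstanding, thresholds=(8, 16, 32)):
    """Return a graded throttling level for an agent on a network-on-chip,
    based on its count of outstanding transactions.

    Level 0 means no throttling; each threshold crossed raises the level,
    and a higher level would inject more delay between new requests.
    The threshold values are illustrative, not from the patent."""
    level = 0
    for t in thresholds:
        if outstanding >= t:
            level += 1
    return level
```

An agent would recompute its outstanding-transaction count as requests are issued and retired, and apply the throttling policy (for example, a per-level inter-request delay) whenever the level is nonzero.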
Methods and systems for fast upgrade or reboot for network device
Embodiments of the present disclosure are directed to a protocol state transition and/or resource state transition tracker configured to monitor, e.g., via filters, for certain protocol state transitions/changes or host hardware resource transitions/changes when a host processor in the control plane that performs such monitoring functions is unavailable or overloaded. The filters, in some embodiments, are pre-computed/computed by the host processor and transmitted to the protocol state transition and/or resource state transition tracker. The protocol state transition and/or resource state transition tracker may be used to implement a fast upgrade operation as well as load sharing and/or load balancing operations with control plane associated components.
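The tracker-and-filters interaction can be sketched as follows. The filter format (protocol name plus target state) and the replay step are assumptions; the disclosure only says that the host precomputes filters and that the tracker monitors transitions while the host is unavailable:

```python
class TransitionTracker:
    """Sketch of a state-transition tracker that stands in for the host
    processor during an upgrade or overload. Filter format is illustrative."""

    def __init__(self, filters):
        # Filters are pre-computed by the host processor and handed down.
        self.filters = filters
        self.log = []

    def observe(self, event):
        """Record a transition only if it matches a host-supplied filter."""
        for f in self.filters:
            if (f["protocol"] == event["protocol"]
                    and f["to_state"] == event["to_state"]):
                self.log.append(event)
                break

    def replay(self):
        """Hand the missed transitions back to the host once it returns."""
        missed, self.log = self.log, []
        return missed
```

During a fast upgrade, the host would install its filters before going down, let the tracker absorb matching transitions, and drain them via `replay()` after restart, so no monitored state change is lost.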