H04L49/254

Filter with engineered damping for load-balanced fine-grained adaptive routing in high-performance system interconnect
11637778 · 2023-04-25 ·

A switch is provided for routing packets in an interconnection network. The switch includes a plurality of egress ports to transmit packets. The switch also includes one or more ingress ports to receive packets. The switch also includes a port and bandwidth capacity circuit configured to obtain (i) port capacity for a plurality of egress ports of the switch, and (ii) bandwidth capacity for transmitting packets to a destination. The switch also includes a network capacity circuit configured to compute network capacity, for transmitting packets to the destination, via the plurality of egress ports, based on a function of the port capacity and the bandwidth capacity. The switch also includes a routing circuit configured to route one or more packets received via one or more ingress ports of the switch, to the destination, via the plurality of egress ports, based on the network capacity.
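A minimal Python sketch of the capacity computation and routing decision described above. The abstract only says network capacity is "a function of" port capacity and bandwidth capacity; `min()` is one plausible choice (a path is limited by its tightest constraint), and routing to the highest-capacity egress port is an assumed policy.

```python
def network_capacity(port_capacity, bandwidth_capacity):
    """Combine per-port capacity with destination bandwidth capacity.

    min() is an assumption: the patent leaves the combining function
    open, and a path cannot exceed its tightest constraint.
    """
    return {port: min(cap, bandwidth_capacity)
            for port, cap in port_capacity.items()}

def select_egress(port_capacity, bandwidth_capacity):
    """Route toward the egress port with the greatest computed capacity."""
    capacity = network_capacity(port_capacity, bandwidth_capacity)
    return max(capacity, key=capacity.get)
```

For example, with egress capacities `{"e0": 100, "e1": 400, "e2": 250}` and a destination bandwidth capacity of 300, port `e1` is capped at 300 but still wins the selection.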

RESOURCE FAIRNESS ENFORCEMENT IN SHARED IO INTERFACES
20230121317 · 2023-04-20 ·

Described are platforms, systems, and methods for resource fairness enforcement. In one aspect, a programmable input output (IO) device comprises a memory unit, the memory unit having instructions stored thereon which, when executed by the programmable IO device, cause the programmable IO device to perform operations comprising: receiving an input from a logical interface (LIF); determining, by at least one meter, a metric regarding at least one resource used during a processing of the input through a programmable pipeline; and regulating additional input received from the LIF based on the metric and a threshold for the at least one resource.
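The meter-and-regulate loop above can be sketched as follows. The class name, the accumulate-and-threshold policy, and the single-resource scope are all assumptions; the abstract does not fix a specific regulation scheme.

```python
class LifMeter:
    """Per-LIF meter (illustrative): accumulates a usage metric for one
    resource as inputs pass through the pipeline, then gates further
    input from that LIF once a configured threshold is crossed."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.used = 0

    def record(self, amount):
        # Called after an input is processed through the pipeline.
        self.used += amount

    def admit(self):
        # Regulate: accept more input from this LIF only while the
        # metered usage stays below the threshold.
        return self.used < self.threshold
```

Usage: after `record(40)` a meter with threshold 100 still admits input; after a further `record(70)` it does not.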

Off-Chip Memory Backed Reliable Transport Connection Cache Hardware Architecture

An application specific integrated circuit (ASIC) is provided for reliable transport of packets. A network interface card containing the ASIC may include a reliable transport accelerator (RTA). The RTA may include a cache lookup database. The RTA may be configured to determine, from a received data packet, a connection identifier and query the cache lookup database for a cache entry corresponding to a connection context having the connection identifier. In response to the query, the RTA may receive a cache hit or a cache miss.
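The hit/miss lookup flow can be sketched as below, with a plain dict standing in for the off-chip (DRAM) backing store; the fill-on-miss behavior is an assumption consistent with a connection-context cache.

```python
class ConnectionCache:
    """On-chip cache of connection contexts, backed by off-chip memory
    (here a dict stands in for the DRAM backing store)."""

    def __init__(self, backing_store):
        self.cache = {}
        self.backing = backing_store

    def lookup(self, conn_id):
        """Query the cache for the context keyed by a connection ID."""
        if conn_id in self.cache:
            return "hit", self.cache[conn_id]
        # Miss: fetch the context from off-chip memory and install it.
        ctx = self.backing.get(conn_id)
        if ctx is not None:
            self.cache[conn_id] = ctx
        return "miss", ctx
```

A second lookup of the same connection identifier then hits in the on-chip cache without touching the backing store.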

METHODS AND SYSTEMS FOR NETWORK FLOW TRACING WITHIN A PACKET PROCESSING PIPELINE

Network appliances can use packet processing pipeline circuits to implement network rules for processing network packet flows by configuring the pipeline's processing stages to execute specific policies for specific network packets in accordance with the network rules. Trace reports that indicate network rules implemented at specific processing stages can be more informative than those indicating policies implemented by the processing stages. A method implemented by a network appliance can store network rules for processing network flows by the processing stages of a packet processing pipeline circuit. The method can produce a trace report in response to receiving a trace directive for one of the network flows wherein one of the processing stages has applied a network rule to a network packet in one of the network flows. The trace report can indicate the network rule in association with the processing stage and the network flow.
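A sketch of the stage-to-rule association described above. Stage names, rule names, and the match-function representation are hypothetical; the point is that each stage records the network rule it implements so the trace report can name the rule, not just the low-level policy.

```python
class TracedPipeline:
    """Illustrative pipeline where each stage keeps the network rule
    it implements, enabling rule-level trace reports."""

    def __init__(self, stage_rules):
        # stage_rules: list of (stage_name, match_fn, rule_name)
        self.stage_rules = stage_rules
        self.traced_flows = set()

    def trace(self, flow_id):
        # Trace directive for one network flow.
        self.traced_flows.add(flow_id)

    def process(self, flow_id, packet):
        report = []
        for stage, match, rule in self.stage_rules:
            if match(packet) and flow_id in self.traced_flows:
                # Report the rule in association with stage and flow.
                report.append({"stage": stage, "rule": rule,
                               "flow": flow_id})
        return report
```

Packets in a traced flow produce report entries; packets in untraced flows pass through silently.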

Line side multiplexers with protection switching
11470038 · 2022-10-11 ·

The present invention is directed to data communication systems and techniques thereof. In a specific embodiment, the present invention provides a network connector that includes an interface for connecting to a host. The interface includes a circuit for utilizing two data paths for the host. The circuit is configured to transform the host address to different addresses based on the data path being used. There are other embodiments as well.
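The per-path address transformation might look like the sketch below. The offset scheme and path names are assumptions; the abstract only requires that the same host address map to different addresses depending on which of the two data paths is in use.

```python
def transform_address(host_addr, path):
    """Map the single host address to a distinct address per data path.

    The fixed-offset scheme is an assumption made for illustration;
    any injective per-path mapping would satisfy the description.
    """
    offsets = {"working": 0x00, "protect": 0x80}
    return host_addr + offsets[path]
```

With this mapping, host address `0x10` appears as `0x10` on the working path and `0x90` on the protection path, so a protection switch changes the address deterministically.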

Method for test traffic generation and inspection, and associated switch input or output port and switch

Disclosed is a method for generating test traffic at test-sending switches of a network of calculation nodes, and for inspecting this test traffic at test-receiving switches of the network. The method includes generating and sending test traffic at least at a selected test-sending input or output port of a selected test-sending switch, addressed to at least one selected test-receiving input or output port of a selected test-receiving switch. The test traffic is generated and sent by a traffic generation component configured as an additional input of the selected test-sending port, and is inspected by a traffic inspection component configured to filter the output of the selected test-receiving port.
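The two components can be sketched as follows: the generator is modeled as an extra input feeding the sending port's queue, and the inspector as a filter over the receiving port's output. Frame representation and the `is_test` predicate are assumptions for illustration.

```python
def inject_test_traffic(port_queue, frames):
    """Traffic generation component modeled as an additional input of
    the selected sending port: test frames join the port's queue
    alongside normal traffic."""
    for frame in frames:
        port_queue.append(frame)

def inspect_output(port_output, is_test):
    """Traffic inspection component filtering the receiving port's
    output: test frames are diverted for checking, normal frames
    pass through unchanged."""
    test, normal = [], []
    for frame in port_output:
        (test if is_test(frame) else normal).append(frame)
    return test, normal
```

Injected test frames thus share the real data path end to end, and the inspector separates them out only at the selected receiving port.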

Fair Arbitration Between Multiple Sources Targeting a Destination
20230144797 · 2023-05-11 ·

A hardware module comprises at least a first ingress buffer and a second ingress buffer, where the second ingress buffer holds data packets from a plurality of source components. To ensure fairness between one or more sources providing data to the first ingress buffer and the plurality of sources providing data to the second ingress buffer, processing circuitry examines source identifiers in packets held in the second ingress buffer and selects between the buffers so as to arbitrate between the sources. In some embodiments, the examination of the source identifiers provides statistics for a weighted round robin between the ingress buffers. In other embodiments, the source identifier of whichever packet is currently at the head of the second ingress buffer is used to perform a simple round robin between the sources.
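A sketch of the weighted variant described above: the second buffer's weight is derived by examining source IDs of its queued packets, so each underlying source gets roughly the same share as the first buffer's source. The grant-counting formulation is a simplification of hardware arbitration.

```python
def grant_split(buf2_packets, slots):
    """Weighted round robin between two ingress buffers.

    Buffer 2 aggregates several sources, so its weight is its count
    of distinct source IDs (the 'statistics' variant); buffer 1's
    weight is 1. Packets are (source_id, payload) pairs. Returns the
    number of arbitration grants each buffer receives over `slots`.
    """
    w1 = 1
    w2 = len({src for src, _ in buf2_packets}) or 1
    grants2 = slots * w2 // (w1 + w2)
    return slots - grants2, grants2
```

With three distinct sources queued in buffer 2 and 8 arbitration slots, buffer 2 receives 6 grants and buffer 1 receives 2, i.e. two grants per source.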

DEVICE AND METHOD FOR QUEUES RELEASE AND OPTIMIZATION BASED ON RUN-TIME ADAPTIVE AND DYNAMIC INTERNAL PRIORITY VALUE STRATEGY

The present disclosure relates to controlling queue release in a network. In particular, the disclosure proposes a controller configured to obtain a state of each of a plurality of queues of a network node and determine, based on the states of the queues, whether the utilization of one or more queues exceeds one or more thresholds. If one or more thresholds are exceeded, the controller is configured to generate one or more new priority entries for one or more queues of the plurality of queues and provide the one or more new priority entries to the one or more queues of the network node. Further, the disclosure proposes a network node configured to provide a state of each of a plurality of queues to a controller, and to obtain one or more new priority entries for one or more queues of the plurality of queues from the controller.
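The controller's threshold check and priority generation can be sketched as below. The utilization-to-priority mapping (a 0..7 scale, boosted with utilization) and the default threshold are illustrative assumptions; the abstract specifies only that new entries are generated when thresholds are exceeded.

```python
def new_priority_entries(queue_states, thresholds, default_threshold=0.8):
    """Generate new internal priority entries for over-utilized queues.

    queue_states maps queue id -> utilization in [0, 1]; thresholds
    may override the default per queue. The priority scale of 0..7 is
    an assumption made for illustration.
    """
    entries = {}
    for q, util in queue_states.items():
        if util > thresholds.get(q, default_threshold):
            # Boost priority with utilization, capped at 7.
            entries[q] = min(7, int(util * 8))
    return entries
```

Only queues over their threshold receive a new entry; the rest keep their current priority.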

DEVICE AND METHOD FOR QUEUES RELEASE AND OPTIMIZATION BASED ON RUN-TIME ADAPTIVE AND DYNAMIC GATE CONTROL LIST STRATEGY

A controller is configured to: obtain a state of each of a plurality of queues of a network node; determine, based on the states of the queues, whether the utilization of one or more queues exceeds one or more thresholds; generate one or more new entries for a gate control list of the network node that controls the plurality of queues, if one or more thresholds are exceeded; and provide the one or more new entries to the network node. Further, a network node is configured to provide a state of each of a plurality of queues to a controller, and obtain one or more new entries for a gate control list of the network node that controls the plurality of queues, from the controller.
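The gate-control-list variant can be sketched in the same spirit. Entries are modeled as `(gate_mask, duration_us)` pairs, with bit i of the mask opening queue i, which follows the usual 802.1Qbv-style GCL layout; the append-an-extra-slot policy and slot duration are assumptions.

```python
def extend_gcl(gcl, queue_states, threshold, extra_slot_us=50):
    """Append one gate-control entry opening only over-utilized queues.

    gcl is a list of (gate_mask, duration_us) entries; bit i of the
    mask opens queue i. queue_states maps queue index -> utilization.
    The extra-slot policy is illustrative.
    """
    mask = 0
    for q, util in queue_states.items():
        if util > threshold:
            mask |= 1 << q
    if mask:
        return gcl + [(mask, extra_slot_us)]
    return gcl
```

If queue 3 exceeds the threshold, the controller provides the node a list extended with an entry whose mask is `0b1000`, giving that queue dedicated transmission time.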

Techniques for Virtual Ethernet Switching of a Multi-Node Fabric
20170373991 · 2017-12-28 ·

Examples include techniques for virtual Ethernet switching of a multi-node fabric. In some examples, first Ethernet links coupled with a group of Ethernet gateways are link aggregated. The group of Ethernet gateways couple with respective individual physical switch ports of a fabric switch of a multi-node fabric to form a default logical gateway to provide an uplink between a virtual Ethernet switch and an Ethernet network external to the multi-node fabric. Also, one or more individual Ethernet gateways coupled with respective individual physical switch ports of the fabric switch may be arranged to provide one or more respective downlinks between the virtual Ethernet switch and one or more Ethernet nodes external to the multi-node fabric via respective second Ethernet links coupled with the one or more individual Ethernet gateways.
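Spreading uplink traffic over the link-aggregated gateways might be sketched as a flow-consistent hash over the group members. CRC32 of a flow key is an assumed hash function; any hash that keeps one flow on one member would serve the same purpose.

```python
import zlib

def select_uplink_gateway(flow_key, gateways):
    """Pick a member of the link-aggregated gateway group for a flow.

    The CRC32 hash is an assumption for illustration; the requirement
    is only that a given flow consistently maps to one member of the
    default logical gateway.
    """
    return gateways[zlib.crc32(flow_key.encode()) % len(gateways)]
```

Because the hash is deterministic, repeated packets of one flow always exit through the same physical gateway, avoiding reordering across the aggregated first Ethernet links.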