
Distributed artificial intelligence extension modules for network switches
11516149 · 2022-11-29

Distributed machine learning systems and other distributed computing systems are improved by compute logic embedded in extension modules coupled directly to network switches. The compute logic performs collective actions, such as reduction operations, on gradients or other compute data processed by the nodes of the system. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the extension modules may take over some or all of the processing of the distributed system during the collective phase. An inline version of the module sits between a switch and the network. Data units carrying compute data are intercepted and processed using the compute logic, while other data units pass through the module transparently to or from the switch. Multiple modules may be connected to the switch, each coupled to a different group of nodes, and sharing intermediate results. A sidecar version is also described.
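The collective reduction the extension module performs can be illustrated with a toy sketch. This is not the patent's implementation; `reduce_gradients` and its signature are hypothetical names for an element-wise reduction over per-node gradient vectors, the kind of operation (summation, averaging) the abstract describes:

```python
from typing import List

def reduce_gradients(node_gradients: List[List[float]], op: str = "sum") -> List[float]:
    """Element-wise reduction across gradient vectors from multiple nodes.

    Hypothetical sketch: a real extension module would do this on data
    units intercepted in-line, not on Python lists.
    """
    result = [0.0] * len(node_gradients[0])
    for grad in node_gradients:
        for i, g in enumerate(grad):
            result[i] += g           # summation is the base reduction
    if op == "average":
        n = len(node_gradients)
        result = [v / n for v in result]
    return result
```

With multiple modules each serving a group of nodes, each module would compute such a partial result and share it with its peers before the final reduction.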

Quality of service in virtual service networks

A switch in a slice-based network can be used to enforce quality of service (“QoS”). Agents can run in the switches, such as in the core of each switch. The switches can sort ingress packets into slice-specific ingress queues in a slice-based pool. The slices can have different QoS prioritizations. A switch-wide policing algorithm can move the slice-specific packets to egress interfaces. Then, one or more user-defined egress policing algorithms can prioritize which packets are sent out into the network first based on slice classifications.
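The slice-specific queueing and priority-based egress policing can be sketched as a toy model. The class name, the numeric-priority convention, and strict-priority dequeueing are illustrative assumptions; the abstract leaves the egress policing algorithms user-defined:

```python
from collections import deque

class SlicePolicer:
    """Toy egress policer: packets are sorted into slice-specific queues
    and dequeued strictly by slice priority (lower number = higher)."""

    def __init__(self, priorities):
        self.priorities = priorities                    # slice_id -> priority
        self.queues = {s: deque() for s in priorities}  # slice-specific pool

    def enqueue(self, slice_id, packet):
        self.queues[slice_id].append(packet)

    def dequeue(self):
        # Serve the highest-priority slice that has traffic waiting.
        for s in sorted(self.queues, key=lambda s: self.priorities[s]):
            if self.queues[s]:
                return self.queues[s].popleft()
        return None
```

A user-defined policer could replace the strict-priority loop with, say, weighted round-robin across slices.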

Traffic management in a network switching system with remote physical ports
11588757 · 2023-02-21

In a switching system that comprises a central switching device and at least one port extender device, the central switching device includes at least one port configured to interface with the port extender device, and the port extender device includes a plurality of front ports for interfacing with one or more networks. The central switching device includes a processor that processes packets received from the at least one port extender device, and a plurality of egress queues for storing processed packets that are to be forwarded to the at least one port extender device for transmission via ones of the front ports. The central switching device also includes a flow control processor configured to, responsively to flow control messages received from the at least one port extender device, control transmission of packets to the at least one port extender device to prevent overflow of egress queues of the port extender device.
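The flow control processor's behavior can be sketched as a simple pause/resume gate. The class name and the XOFF/XON message vocabulary are assumptions borrowed from conventional flow control; the abstract does not specify the message format:

```python
class FlowControlGate:
    """Toy flow control processor: the central device stops transmitting
    toward a port extender front port while that port is paused."""

    def __init__(self):
        self.paused = set()  # front ports currently flow-controlled

    def on_flow_control(self, port, msg):
        # Hypothetical message types: XOFF pauses a port, XON resumes it.
        if msg == "XOFF":
            self.paused.add(port)
        elif msg == "XON":
            self.paused.discard(port)

    def may_transmit(self, port):
        return port not in self.paused
```

Holding packets in the central device's egress queues while a port is paused is what prevents the (smaller) port extender egress queues from overflowing.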

METHOD AND SYSTEM FOR FACILITATING LOSSY DROPPING AND ECN MARKING
20230046350 · 2023-02-16

Methods and systems are provided for performing lossy dropping and ECN marking in a flow-based network. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform per-flow packet dropping and ECN marking.
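Because each switch holds per-flow state, the drop/mark decision reduces to a depth check on that flow's input queue. The function, the state dictionary, and the threshold values below are illustrative assumptions, not the patent's mechanism:

```python
def handle_packet(flow_state, flow_id, ecn_threshold=2, drop_threshold=3):
    """Per-flow admission: drop above the drop threshold, ECN-mark above
    the marking threshold, otherwise forward. flow_state maps a flow id
    to its current queue depth (hypothetical representation)."""
    depth = flow_state.get(flow_id, 0)
    if depth >= drop_threshold:
        return "drop"                        # lossy dropping, per flow
    flow_state[flow_id] = depth + 1
    return "mark" if depth >= ecn_threshold else "forward"

def dequeue(flow_state, flow_id):
    """Acknowledged/transmitted packets shrink the flow's depth."""
    if flow_state.get(flow_id, 0) > 0:
        flow_state[flow_id] -= 1
```

In the described system the state would be set up dynamically on a flow's first packet and refreshed by acknowledgements returning along the same data path.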

METHOD AND APPARATUS FOR SCHEDULING PACKETS FOR TRANSMISSION

A network device transfers packets from a packet memory to one or more network interfaces for transmission by the one or more network interfaces. The transferring of packets includes transferring the packets via one or more respective transmit data paths that correspond to one or more respective network interfaces. The network device measures one or more respective amounts of time required to transmit respective packet data within the one or more respective transmit data paths. The network device uses the one or more respective measured amounts of time to determine when to start transfer of packets from the packet memory to the one or more network interfaces via the one or more respective transmit data paths.
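The scheduling idea is that a packet should leave packet memory early enough to arrive at the interface at its intended transmit time, using a measured data-path delay. The estimator below (an exponentially weighted moving average) and both function names are hypothetical; the abstract does not say how the measurements are combined:

```python
class PathDelayEstimator:
    """Smooths repeated measurements of a transmit data path's delay
    (toy EWMA; the filtering scheme is an assumption)."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.estimate = None

    def update(self, sample):
        if self.estimate is None:
            self.estimate = float(sample)
        else:
            self.estimate = (1 - self.alpha) * self.estimate + self.alpha * sample
        return self.estimate

def transfer_start_time(target_tx_time, path_delay):
    """Start the memory-to-interface transfer early by the measured delay."""
    return target_tx_time - path_delay
```

Each of the one or more transmit data paths would keep its own estimate, since different interfaces can have different delays.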

Processing packets in an electronic device

A network traffic manager receives, from an ingress port in a group of ingress ports, a cell of a packet destined for an egress port. Upon determining that a number of cells of the packet stored in a buffer queue meets a threshold value, the manager checks whether the group of ingress ports has been assigned a token for the queue. Upon determining that the group of ingress ports has been assigned the token, the manager determines that other cells of the packet are stored in the buffer, and accordingly stores the received cell in the buffer, and stores linking information for the received cell in a receive context for the packet. When all cells of the packet have been received, the manager copies linking information for the packet cells to the buffer queue or a copy generator queue, and releases the token from the group of ingress ports.
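The token-gated cell assembly can be approximated with a toy model for a single buffer queue. The class, the threshold semantics, and the accept/reject outcomes are simplifying assumptions; the real manager also maintains per-packet receive contexts and a copy generator queue:

```python
class CellAssembler:
    """Toy per-queue token scheme: once the buffered-cell threshold is
    met, only the ingress-port group holding the queue's token may
    continue buffering cells."""

    def __init__(self, threshold=1):
        self.threshold = threshold
        self.cells = []           # buffered cells (stands in for the buffer
                                  # plus per-cell linking information)
        self.token_holder = None  # ingress group currently assigned the token

    def receive(self, group, cell):
        if len(self.cells) >= self.threshold:
            if self.token_holder is None:
                self.token_holder = group     # assign the token to this group
            if group != self.token_holder:
                return "reject"               # other groups must wait
        self.cells.append(cell)
        return "stored"

    def complete(self):
        """All cells received: copy linking info to the queue, release token."""
        linked = list(self.cells)
        self.cells.clear()
        self.token_holder = None
        return linked
```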

Telemetry and Buffer-Capacity Based Circuits for Load-Balanced Fine-Grained Adaptive Routing in High-Performance System Interconnect
20230131022 · 2023-04-27

A switch is provided for routing packets in an interconnection network. The switch includes egress ports to transmit packets, and ingress ports to receive packets. The switch also includes a buffer capacity circuit configured to obtain local buffer capacity for buffers configured to buffer packets transmitted via the switch. The switch also includes a telemetry circuit configured to receive telemetry flow control units from next switches coupled to the switch. Each telemetry flow control unit corresponds to buffer capacity at a respective next switch. The switch also includes a network capacity circuit configured to compute network capacity for transmitting packets to a destination based on the telemetry flow control units and the local buffer capacity. The switch also includes a routing circuit configured to receive packets via the ingress ports, and route the packets to the destination, via the egress ports, with bandwidth proportional to the network capacity.
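The capacity computation and the proportional-bandwidth routing decision can be sketched numerically. Combining the two signals with `min` and splitting bandwidth by normalized shares are assumptions; the abstract only says capacity is computed from the telemetry units and the local buffer capacity:

```python
def network_capacity(local_buffer, telemetry):
    """Per-next-hop capacity toward the destination: limited by both the
    remote capacity reported in telemetry flow control units and the
    local buffer headroom (combining rule is an assumption)."""
    return {hop: min(local_buffer, cap) for hop, cap in telemetry.items()}

def bandwidth_shares(capacities):
    """Route traffic to each next hop with bandwidth proportional to
    its computed network capacity."""
    total = sum(capacities.values())
    return {hop: cap / total for hop, cap in capacities.items()}
```

A routing circuit would then spread packets across egress ports according to these shares, steering load away from congested next switches.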

Method and system for effective use of internal and external memory for packet buffering within a network device

A mechanism is provided to maximize utilization of internal memory for packet queuing in network devices, while making effective use of both internal and external memory to achieve high performance, high buffering scalability, and minimal power utilization. Embodiments initially store packet data received by the network device in queues backed by internal memory. If internal memory utilization crosses a predetermined threshold, a background task performs memory reclamation by determining which queued packets should be targeted for transfer to external memory. The selected queued packets are transferred to external memory and the internal memory is freed. Once internal memory consumption drops below a threshold, the reclamation task stops.
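The watermark-driven reclamation loop can be sketched in a few lines. The high/low watermark names and the oldest-first eviction policy are assumptions; the abstract leaves the selection of packets to target unspecified:

```python
def reclaim(internal, external, high_wm, low_wm):
    """Background reclamation task (toy model): when internal occupancy
    exceeds the high watermark, move queued packets to external memory
    until occupancy falls to the low watermark, then stop."""
    if len(internal) <= high_wm:
        return                              # internal memory still healthy
    while len(internal) > low_wm:
        external.append(internal.pop(0))    # evict oldest-first (assumption)
```

Using two watermarks instead of one gives the task hysteresis, so it does not thrash when occupancy hovers near a single threshold.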

Filter with engineered damping for load-balanced fine-grained adaptive routing in high-performance system interconnect
11637778 · 2023-04-25

A switch is provided for routing packets in an interconnection network. The switch includes a plurality of egress ports to transmit packets. The switch also includes one or more ingress ports to receive packets. The switch also includes a port and bandwidth capacity circuit configured to obtain (i) port capacity for a plurality of egress ports of the switch, and (ii) bandwidth capacity for transmitting packets to a destination. The switch also includes a network capacity circuit configured to compute network capacity, for transmitting packets to the destination, via the plurality of egress ports, based on a function of the port capacity and the bandwidth capacity. The switch also includes a routing circuit configured to route one or more packets received via one or more ingress ports of the switch, to the destination, via the plurality of egress ports, based on the network capacity.
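A toy version of the damped capacity signal: combine port capacity and bandwidth capacity, then pass the result through a low-pass filter so routing does not oscillate. The `min` combining rule, the first-order filter, and the damping constant are all illustrative assumptions:

```python
class DampedCapacity:
    """Filter with engineered damping (toy model): a first-order low-pass
    over the combined capacity signal used for routing decisions."""

    def __init__(self, damping=0.5):
        self.damping = damping  # higher damping = slower response
        self.value = 0.0

    def update(self, port_capacity, bandwidth_capacity):
        # Combining function is an assumption; the patent only says the
        # network capacity is a function of both inputs.
        raw = min(port_capacity, bandwidth_capacity)
        self.value = self.damping * self.value + (1 - self.damping) * raw
        return self.value
```

The filtered value converges toward the raw capacity rather than jumping to it, which damps the feedback loop between measured capacity and routed load.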

LIVE SOCKET REDIRECTION
20230164235 · 2023-05-25

Networking methods and systems include determining a first state of a connection on a first network based on connection buffers at a host. A first system call relating to the connection is identified. A next state of the connection that would result from the first system call is determined. The first system call is executed responsive to a determination that the next state does not move the connection farther from a safe transition state.
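The admission rule for system calls can be modeled as a distance check over a small state machine. The state names, the distance values, and the transition table below are entirely hypothetical; only the rule itself (execute only if the next state is no farther from a safe state) comes from the abstract:

```python
# Hypothetical distance of each connection state from the nearest safe
# transition state (0 = safe to redirect the socket).
DISTANCE = {"SAFE": 0, "DRAINING": 1, "MID_TRANSFER": 2}

# Hypothetical transition table: (current state, system call) -> next state.
NEXT_STATE = {
    ("SAFE", "send"): "MID_TRANSFER",
    ("MID_TRANSFER", "flush"): "DRAINING",
    ("DRAINING", "flush"): "SAFE",
}

def may_execute(state, syscall):
    """Execute the call only if the resulting state does not move the
    connection farther from a safe transition state."""
    nxt = NEXT_STATE.get((state, syscall), state)
    return DISTANCE[nxt] <= DISTANCE[state]
```

Deferring calls that would move away from a safe state lets the host drain the connection buffers and reach a point where the live socket can be redirected.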