H04L49/1546

Identifying congestion in a network

Some embodiments of the invention provide a method for reporting congestion in a network that includes several forwarding elements. In a data plane circuit of one of the forwarding elements, the method detects that a queue in the switching circuit of the data plane circuit is congested while a particular data message is stored in the queue as it is being processed through the data plane circuit. In the data plane circuit, the method then generates a report regarding the detected queue congestion and sends this report to a data collector external to the forwarding element. To send the report, the data plane circuit in some embodiments duplicates the particular data message, stores in the duplicate data message information regarding the detected queue congestion, and sends the duplicate data message to the external data collector.
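
The duplicate-and-annotate step can be modeled in software (a minimal Python sketch; `DataMessage`, the depth/threshold check, and the list standing in for the external collector are illustrative assumptions, not the patent's actual data-plane implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataMessage:
    payload: bytes
    metadata: dict

def report_congestion(msg, queue_id, depth, threshold, collector):
    """If the queue holding `msg` is deeper than `threshold`, duplicate the
    message, store the congestion details in the duplicate, and send the
    duplicate to the external collector. The original message is untouched."""
    if depth <= threshold:
        return None
    dup = DataMessage(
        payload=msg.payload,
        metadata={**msg.metadata,
                  "congested_queue": queue_id,
                  "queue_depth": depth},
    )
    collector.append(dup)  # stands in for transmission to the data collector
    return dup
```

Because only the duplicate carries the congestion annotation, the original message continues through the pipeline unmodified.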

Deep fusing of Clos star networks to form a global contiguous web
20220116339 · 2022-04-14 ·

Access nodes of a large-scale network are arranged into a number of groups. The groups are arranged into a number of bands. Each distributor of a pool of distributors interconnects each access node of a selected group to at least one channel from each group of a selected band. A discipline of allocating the selected group and the selected band to a distributor ensures that each access node has: a number, approximately equal to half the number of groups, of parallel single-hop paths to each other access node of the same group; a number, approximately equal to half the number of bands, of parallel single-hop paths to each access node of a different group within the same band; and one single-hop path to each access node of a different band. To eliminate the need for cross connectors, geographically-spread distributors are arranged into geographically-spread constellations of collocated distributors.
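
The stated path-count guarantee can be captured as a small function (a hypothetical Python sketch of the invariant only, not of the allocation discipline that produces it):

```python
def single_hop_paths(num_groups, num_bands, same_group, same_band):
    """Approximate number of parallel single-hop paths between two distinct
    access nodes, per the allocation discipline's stated guarantee."""
    if same_group:
        return num_groups // 2  # ~half the number of groups
    if same_band:
        return num_bands // 2   # ~half the number of bands
    return 1                    # different band: exactly one path
```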

PACKET PROCESSING WITH HARDWARE OFFLOAD UNITS
20220103488 · 2022-03-31 ·

Some embodiments of the invention provide a method for configuring multiple hardware offload units of a host computer to perform operations on packets associated with machines (e.g., virtual machines or containers) executing on the host computer and to pass the packets between each other efficiently. For instance, in some embodiments, the method configures a program executing on the host computer to identify a first hardware offload unit that has to perform a first operation on a packet associated with a particular machine and to provide the packet to the first hardware offload unit. The packet in some embodiments is a packet that the particular machine has sent to a destination machine on the network, or is a packet received from a source machine through a network and destined to the particular machine.
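
The host-side dispatch can be sketched as follows (a Python stand-in; `OffloadUnit`, the operation names, and byte-string packets are hypothetical placeholders for real offload hardware such as smartNICs):

```python
class OffloadUnit:
    """Software stand-in for one hardware offload unit."""
    def __init__(self, name, op):
        self.name = name
        self.op = op

    def process(self, packet):
        return self.op(packet)

def run_offload_chain(packet, units, needed_ops):
    """Host program: identify the unit responsible for each required
    operation and pass the packet from one unit to the next."""
    by_name = {u.name: u for u in units}
    for op_name in needed_ops:
        packet = by_name[op_name].process(packet)
    return packet
```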

Efficient troubleshooting in SDN network

A method is implemented by a switch in a Software Defined Networking (SDN) network to trace packets belonging to a flow. The method includes setting a value in a first field and a second field associated with the packet to indicate that tracing is enabled for the packet, where the second field is a field that is not used for packet matching, determining, at a second flow table, whether tracing is enabled for the packet based on the value in the first field, transmitting a trace message for the packet to a trace collector in response to a determination that tracing is enabled for the packet, setting a value in the first field to indicate that tracing is disabled for the packet, resubmitting the packet to the second flow table, and copying the value in the second field to the first field before directing the packet to another flow table.
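
The two-field arrangement prevents an infinite trace loop on resubmission. A rough Python model (the dict representation and field names are hypothetical, and recursion stands in for the switch's resubmit action):

```python
def second_flow_table(pkt, trace_collector):
    """Model of the second flow table: emit one trace message per visit,
    clear the matchable flag, resubmit, then restore the flag from the
    shadow field (never matched on) before the next table."""
    if pkt["trace_flag"]:                        # first field: matchable
        trace_collector.append(("table2", dict(pkt)))
        pkt["trace_flag"] = 0                    # disable before resubmitting
        second_flow_table(pkt, trace_collector)  # resubmit to the same table
        return pkt
    # Normal (non-traced) processing happens here; before directing the
    # packet onward, copy the shadow field back so later tables trace too.
    pkt["trace_flag"] = pkt["trace_shadow"]      # second field: not matchable
    return pkt
```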

METHOD AND SYSTEM FOR CLASSIFYING DATA PACKET FIELDS ON FPGA
20210168062 · 2021-06-03 ·

A method and system for classifying data packet fields are disclosed. They associate a final tag with each of the fields in a data packet in relation to a set of classifying rules, and involve building a decision tree using a recursive algorithm to apply the set of classifying rules to the data packet fields, mapping each node of the built decision tree to a respective processing element of an FPGA, each processing element comprising a processor and a memory, pipelining all mapped processing elements, and processing the data packet fields through the pipelined and mapped processing elements.
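
The build-then-walk structure can be illustrated in software (a simplified cut-based sketch in Python; the split heuristic, field names, and range-rule encoding are assumptions, not the patent's algorithm, and each tree node would map onto one FPGA processing element in a pipeline):

```python
def build_tree(rules, fields=("src", "dst"), depth=0):
    """Recursively cut the rule space: each internal node splits one field
    at a median rule boundary; each node corresponds to one pipeline stage."""
    if len(rules) <= 1 or depth == len(fields):
        return {"leaf": rules}
    f = fields[depth]
    pivot = sorted(r[f][0] for r, _ in rules)[len(rules) // 2]
    lo_rules = [x for x in rules if x[0][f][0] < pivot]
    hi_rules = [x for x in rules if x[0][f][0] >= pivot]
    return {"field": f, "pivot": pivot,
            "lo": build_tree(lo_rules, fields, depth + 1),
            "hi": build_tree(hi_rules, fields, depth + 1)}

def classify(tree, pkt):
    """Walk the tree (each hop = one processing element) to the final tag."""
    while "leaf" not in tree:
        tree = tree["lo"] if pkt[tree["field"]] < tree["pivot"] else tree["hi"]
    for ranges, tag in tree["leaf"]:
        if all(lo <= pkt[f] <= hi for f, (lo, hi) in ranges.items()):
            return tag
    return "default"
```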

Methods and systems for network security universal control point

The present disclosure relates to handling of packet flows between a pair of network security zones in a communications network. A packet that is sent from one of the network security zones toward the other of the network security zones is directed to a packet processing service chain, based on a packet handling classification of a packet flow of which the packet is a part. The service chain has multiple identical service chain instances to perform a service on packets, and the packet is directed to one of the service chain instances within the service chain. A packet that is processed by any of the service chain instances is transmitted to the other network security zone.
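
One common way to direct a flow to a single instance among identical ones is consistent per-flow hashing; a minimal Python sketch (hashing the 5-tuple is an assumed selection policy, not necessarily the disclosure's):

```python
import hashlib

def pick_instance(five_tuple, num_instances):
    """Pin a flow to one of the identical service-chain instances by hashing
    its 5-tuple, so every packet of the flow traverses the same instance."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_instances
```

Deterministic selection keeps stateful services (e.g. a firewall tracking connection state) correct, since all packets of one flow see the same instance.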

QUEUE SCHEDULER CONTROL VIA PACKET DATA

Some embodiments provide a method for a hardware forwarding element that includes multiple queues. The method receives a packet at a multi-stage processing pipeline of the hardware forwarding element. The method determines, at one of the stages of the processing pipeline, to modify a setting of a particular one of the queues. The method stores an identifier for the particular queue and instructions to modify the queue setting with data passed through the processing pipeline for the packet. The stored information is subsequently used by the hardware forwarding element to modify the queue setting.
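
The decide-now, apply-later pattern can be sketched in Python (the queue-id formula, the rate-limit instruction, and the dict-based metadata are hypothetical stand-ins for the pipeline's packet metadata):

```python
def match_action_stage(pkt, meta, slow_ports):
    """One pipeline stage: decide that a queue's setting should change and
    record the queue id plus the instruction in the packet's metadata."""
    if pkt["egress_port"] in slow_ports:
        meta["queue_id"] = pkt["egress_port"] * 8 + pkt["priority"]
        meta["instruction"] = {"max_rate_mbps": 100}
    return meta

def apply_deferred_update(meta, queue_settings):
    """After the pipeline: the forwarding element applies the stored
    instruction to the identified queue."""
    if "queue_id" in meta:
        queue_settings[meta["queue_id"]] = meta["instruction"]
    return queue_settings
```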

DYNAMIC HARDWARE FORWARDING PIPELINE COMPRESSION
20210218674 · 2021-07-15 ·

A controller device for a network provides data associated with the pipeline capabilities of a programmable switch, and the programmable switch receives this data. The pipeline capabilities include a plurality of flow tables and the allowable table transitions for each of the flow tables. The programmable switch determines that a first flow table and a second flow table are mutually independent based on the allowable table transitions for each of the flow tables. The programmable switch then configures a pipeline for data flow in the programmable switch, the pipeline comprising a plurality of pipeline stages, with a particular pipeline stage comprising both the first flow table and the second flow table.
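
The independence test and stage packing can be sketched as follows (a Python sketch under the assumption that two tables are mutually independent when neither can reach the other through the allowable transitions; the greedy packing is illustrative, not the patent's exact method):

```python
def reachable(transitions):
    """Transitive closure of the allowable table transitions."""
    reach = {t: set(nxt) for t, nxt in transitions.items()}
    changed = True
    while changed:
        changed = False
        for t in reach:
            extra = set().union(*(reach.get(n, set()) for n in reach[t])) - reach[t]
            if extra:
                reach[t] |= extra
                changed = True
    return reach

def compress(transitions):
    """Greedily pack mutually independent flow tables (neither reaches the
    other) into shared pipeline stages."""
    reach = reachable(transitions)
    stages = []
    for t in transitions:
        for stage in stages:
            if all(t not in reach[s] and s not in reach[t] for s in stage):
                stage.append(t)
                break
        else:
            stages.append([t])
    return stages
```

With tables A and B each transitioning only to C, A and B are mutually independent and share one stage, compressing a three-table pipeline into two stages.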