H04L49/1546

MANAGED SWITCH ARCHITECTURES: SOFTWARE MANAGED SWITCHES, HARDWARE MANAGED SWITCHES, AND HETEROGENEOUS MANAGED SWITCHES
20200396130 · 2020-12-17

Some embodiments provide a system that includes a set of network controllers for receiving definitions of first and second logical switching elements. The system includes several managed switching elements. The set of network controllers configures the several managed switching elements to implement the defined first and second logical switching elements. The system includes several network hosts that are each (1) communicatively coupled to one of the several managed switching elements and (2) associated with one of the first and second logical switching elements. Network data communicated between network hosts associated with the first logical switching element is isolated from network data communicated between network hosts associated with the second logical switching element.
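
A minimal sketch, in Python, of the isolation property described above: hosts are attached to logical switching elements and forwarding is allowed only within one of them. The class and method names (LogicalSwitchController, allow_forwarding) are illustrative assumptions, not taken from the patent.

    # Hypothetical sketch: controllers assign hosts to logical switching
    # elements; managed switching elements forward only between hosts that
    # belong to the same logical switching element.
    class LogicalSwitchController:
        def __init__(self):
            self.host_to_logical = {}              # host id -> logical switch id

        def attach_host(self, host, logical_switch):
            self.host_to_logical[host] = logical_switch

        def allow_forwarding(self, src_host, dst_host):
            # Traffic is permitted only inside one logical switching element,
            # which isolates the first logical switch from the second.
            src_ls = self.host_to_logical.get(src_host)
            dst_ls = self.host_to_logical.get(dst_host)
            return src_ls is not None and src_ls == dst_ls

    ctrl = LogicalSwitchController()
    ctrl.attach_host("h1", "ls1")
    ctrl.attach_host("h2", "ls1")
    ctrl.attach_host("h3", "ls2")
    assert ctrl.allow_forwarding("h1", "h2")       # same logical switch
    assert not ctrl.allow_forwarding("h1", "h3")   # isolated logical switches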

High-performance data repartitioning for cloud-scale clusters

Techniques herein partition data using data repartitioning that is store-and-forward, content-based, and phasic. In embodiments, one or more computers map network elements (NEs) to grid points (GPs) in a multidimensional hyperrectangle. Each NE contains data items (DIs). For each particular dimension (PD) of the hyperrectangle, the computers perform, for each particular NE (PNE), various activities, including: determining a linear subset (LS) of NEs that are mapped to GPs in the hyperrectangle at the same position as the GP of the PNE along all dimensions of the hyperrectangle except the PD, and data repartitioning that includes, for each DI of the PNE, the following activities. The PNE determines a bit sequence based on the DI. The PNE selects, based on the PD, a bit subset of the bit sequence. The PNE selects, based on the bit subset, a receiving NE of the LS. The PNE sends the DI to the receiving NE.
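
A small illustrative sketch (not the patented implementation) of the phased, dimension-by-dimension repartitioning: network elements sit at grid points of a 2 x 2 hyperrectangle, and in each phase every element forwards each data item to the element of its linear subset selected by part of the item's hash. The DIMS constant, the use of SHA-256, and the choice of one hash byte per dimension are assumptions for illustration.

    # Store-and-forward, content-based, phasic repartitioning over a small grid.
    import hashlib

    DIMS = (2, 2)                                  # hyperrectangle extents (assumed)

    def bit_subset(item, dim):
        # Content-based selection: a subset of the item's hash, chosen per dimension.
        digest = hashlib.sha256(repr(item).encode()).digest()
        return digest[dim] % DIMS[dim]             # assumed: one hash byte per dimension

    def repartition(placement):
        # placement: dict mapping grid point (tuple) -> list of data items
        for dim in range(len(DIMS)):               # one store-and-forward phase per dimension
            nxt = {gp: [] for gp in placement}
            for gp, items in placement.items():
                for item in items:
                    target = list(gp)
                    target[dim] = bit_subset(item, dim)   # receiver within the linear subset
                    nxt[tuple(target)].append(item)
            placement = nxt
        return placement

    start = {(x, y): [] for x in range(2) for y in range(2)}
    start[(0, 0)] = list(range(8))                 # all items begin on one element
    print(repartition(start))                      # items spread across the grid points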

PROTOCOL INDEPENDENT PROGRAMMABLE SWITCH (PIPS) FOR SOFTWARE DEFINED DATA CENTER NETWORKS

A software-defined network (SDN) system, device and method comprise one or more input ports, a programmable parser, a plurality of programmable lookup and decision engines (LDEs), programmable lookup memories, programmable counters, a programmable rewrite block and one or more output ports. The programmability of the parser, LDEs, lookup memories, counters and rewrite block enables a user to customize each microchip within the system to particular packet environments, data analysis needs, packet processing functions, and other functions as desired. Further, the same microchip can be dynamically reprogrammed for other purposes and/or optimizations.
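
A hedged sketch of the protocol-independent idea: if the parser, the lookup-and-decision engines, and the rewrite block are treated as user-supplied programs, the same pipeline object can be re-targeted to a different packet environment without changing its structure. The class ProgrammablePipeline, the fake 2-byte header, and the port-selection rule are all invented for illustration.

    # Hypothetical model of a protocol-independent pipeline: parser, LDEs, and
    # rewrite block are plain callables, so the same "chip" can be reprogrammed.
    class ProgrammablePipeline:
        def __init__(self, parser, ldes, rewrite):
            self.parser, self.ldes, self.rewrite = parser, ldes, rewrite

        def process(self, raw_packet):
            fields = self.parser(raw_packet)       # programmable parse
            for lde in self.ldes:                  # each LDE may read/modify fields
                fields = lde(fields)
            return self.rewrite(fields)            # programmable rewrite/deparse

    # One possible program: parse a fake 2-byte header, pick an output port, re-emit.
    pipeline = ProgrammablePipeline(
        parser=lambda pkt: {"proto": pkt[0], "payload": pkt[1:]},
        ldes=[lambda f: {**f, "port": 1 if f["proto"] == 0x06 else 2}],
        rewrite=lambda f: (f["port"], bytes([f["proto"]]) + f["payload"]),
    )
    print(pipeline.process(b"\x06hello"))          # -> (1, b'\x06hello')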

Queue scheduler control via packet data

Some embodiments provide a method for a hardware forwarding element that includes multiple queues. The method receives a packet at a multi-stage processing pipeline of the hardware forwarding element. The method determines, at one of the stages of the processing pipeline, to modify a setting of a particular one of the queues. The method stores, with data passed through the processing pipeline for the packet, an identifier for the particular queue and instructions to modify the queue setting. The stored information is subsequently used by the hardware forwarding element to modify the queue setting.
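
A minimal sketch of the mechanism: a pipeline stage records a queue identifier and an instruction in the metadata that travels with the packet, and the forwarding element applies the change later. The field names ("queue_update", "rate_limit") and the threshold-free trigger are assumptions, not taken from the patent.

    # Carrying a queue-modification instruction in per-packet metadata.
    queues = {3: {"rate_limit": 100}}              # queue id -> settings

    def stage_detect(pkt, meta):
        # A pipeline stage decides queue 3 should be slowed down and records
        # the instruction alongside the packet instead of acting immediately.
        meta["queue_update"] = {"queue_id": 3, "setting": "rate_limit", "value": 50}
        return pkt, meta

    def apply_deferred_update(meta):
        # Later, the forwarding element uses the stored identifier and
        # instruction to modify the queue setting.
        upd = meta.get("queue_update")
        if upd:
            queues[upd["queue_id"]][upd["setting"]] = upd["value"]

    pkt, meta = stage_detect(b"payload", {})
    apply_deferred_update(meta)
    print(queues)                                  # {3: {'rate_limit': 50}}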

NIC with programmable pipeline

A network interface controller is connected to a host and to a packet communications network. The network interface controller includes electrical circuitry configured as a packet processing pipeline with a plurality of stages. It is determined in the network interface controller that at least a portion of the stages of the pipeline are acceleration-defined stages. Packets are processed in the pipeline by transmitting data to an accelerator from the acceleration-defined stages, performing respective acceleration tasks on the transmitted data in the accelerator, and returning processed data from the accelerator to receiving stages of the pipeline.
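
A hedged sketch of the flow: certain stages are marked as acceleration-defined, hand their data to an accelerator, and the result re-enters the pipeline at the next (receiving) stage. The stage names and the XOR stand-in for the acceleration task are invented for illustration.

    # Hypothetical pipeline with one acceleration-defined stage.
    def accelerator(data):
        return bytes(b ^ 0xFF for b in data)       # stand-in acceleration task

    # (stage name, inline task, is acceleration-defined)
    STAGES = [
        ("parse",   lambda d: d, False),
        ("crypto",  None,        True),            # acceleration-defined stage
        ("deliver", lambda d: d, False),
    ]

    def run_pipeline(data):
        for name, task, accelerated in STAGES:
            if accelerated:
                # Transmit the data to the accelerator and return the processed
                # data to the receiving (next) stage of the pipeline.
                data = accelerator(data)
            else:
                data = task(data)
        return data

    print(run_pipeline(b"\x00\x0f"))               # -> b'\xff\xf0'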

Streaming editor circuit for implementing a packet deparsing process

Apparatus and associated methods relating to data packet deparsing include an editing circuit configured to perform one or more predetermined editing operations on headers of an incoming data packet step by step without extracting all headers from the incoming data packet. In an illustrative example, an editor circuit may include an updating circuit configured to receive the data packet and update a header in the data packet. The editor circuit may also include a removal circuit configured to remove a header from the data packet. The editor circuit may also include an insertion circuit configured to insert one or more consecutive headers into the data packet. A state machine may be configured to enable or disable the updating circuit, the removal circuit, and/or the insertion circuit based on the predetermined editing operations. By using the editing circuit, packet deparsing may be performed with fewer hardware resources and lower latency.
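
An illustrative software sketch of the three editing operations applied step by step under a pre-programmed edit script, without first pulling all headers out of the packet. The operation-record format, offsets, and header bytes are assumptions made for the example.

    # Update, remove, and insert operations driven by a simple edit script.
    def update_header(pkt, offset, new_bytes):
        return pkt[:offset] + new_bytes + pkt[offset + len(new_bytes):]

    def remove_header(pkt, offset, length):
        return pkt[:offset] + pkt[offset + length:]

    def insert_headers(pkt, offset, new_bytes):
        return pkt[:offset] + new_bytes + pkt[offset:]

    def deparse(pkt, operations):
        # operations plays the role of the predetermined editing operations;
        # each entry enables one of the three circuits for a single step.
        for op in operations:
            if op["kind"] == "update":
                pkt = update_header(pkt, op["offset"], op["bytes"])
            elif op["kind"] == "remove":
                pkt = remove_header(pkt, op["offset"], op["length"])
            elif op["kind"] == "insert":
                pkt = insert_headers(pkt, op["offset"], op["bytes"])
        return pkt

    packet = b"AABBCCpayload"
    script = [
        {"kind": "update", "offset": 0, "bytes": b"XX"},   # rewrite first header
        {"kind": "remove", "offset": 2, "length": 2},      # drop second header
        {"kind": "insert", "offset": 4, "bytes": b"YY"},   # add a new header
    ]
    print(deparse(packet, script))                 # -> b'XXCCYYpayload'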

Packet processing architecture and method therefor
10826982 · 2020-11-03

A packet processing architecture includes a plurality of packet processing stages, wherein at least one of the packet processing stages includes multiple next processing stage modules that are operably coupled to respective further processing stages, wherein the multiple next processing stage modules are dynamically configurable.
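
A small hedged sketch of what "dynamically configurable next processing stage modules" could look like in software: a stage holds a table of next-stage modules that can be re-pointed at different downstream stages while traffic is flowing. All class and stage names here are invented.

    # A stage whose next-stage modules can be reconfigured at run time.
    class Stage:
        def __init__(self, name, classify):
            self.name = name
            self.classify = classify               # picks which next-stage module to use
            self.next_modules = {}                 # module key -> downstream Stage or None

        def set_next(self, key, stage):            # dynamic reconfiguration point
            self.next_modules[key] = stage

        def process(self, pkt):
            nxt = self.next_modules.get(self.classify(pkt))
            return nxt.process(pkt) if nxt else (self.name, pkt)

    l2 = Stage("l2_forward", lambda p: None)
    l3 = Stage("l3_route",   lambda p: None)
    ingress = Stage("ingress", lambda p: "routed" if p.get("ip") else "bridged")
    ingress.set_next("bridged", l2)
    ingress.set_next("routed",  l3)
    print(ingress.process({"ip": "10.0.0.1"}))     # handled by l3_route
    ingress.set_next("routed", l2)                 # reconfigure on the fly
    print(ingress.process({"ip": "10.0.0.1"}))     # now handled by l2_forward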

Openflow match and action pipeline structure

An embodiment of the invention includes a packet processing pipeline. The packet processing pipeline includes match and action stages. Each match and action stage incurs a match delay when match processing occurs, and each match and action stage incurs an action delay when action processing occurs. A transport delay occurs between successive match and action stages when data is transferred from a first match and action stage to a second match and action stage.
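
A simple latency model of the delays described above: each stage contributes its match delay and action delay only when the corresponding processing occurs, and a transport delay is added between successive stages. The specific cycle counts are invented for the example.

    # Total pipeline latency from match, action, and transport delays.
    MATCH_DELAY, ACTION_DELAY, TRANSPORT_DELAY = 4, 3, 1   # clock cycles (assumed)

    def pipeline_latency(stages):
        # stages: list of (does_match, does_action) flags, one per stage
        total = 0
        for i, (does_match, does_action) in enumerate(stages):
            if does_match:
                total += MATCH_DELAY
            if does_action:
                total += ACTION_DELAY
            if i < len(stages) - 1:                # transport to the next stage
                total += TRANSPORT_DELAY
        return total

    # Three stages: the middle one only matches, the others match and act.
    print(pipeline_latency([(True, True), (True, False), (True, True)]))  # -> 20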

IDENTIFYING CONGESTION IN A NETWORK

Some embodiments of the invention provide a method for reporting congestion in a network that includes several forwarding elements. In a data plane circuit of one of the forwarding elements, the method detects that a queue in the switching circuit of the data plane circuit is congested while a particular data message is stored in the queue as it is being processed through the data plane circuit. In the data plane circuit, the method then generates a report regarding the detected queue congestion and sends this report to a data collector external to the forwarding element. To send the report, the data plane circuit in some embodiments duplicates the particular data message, stores in the duplicate data message information regarding the detected queue congestion, and sends the duplicate data message to the external data collector.
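
A hedged sketch of the reporting path: when a queue exceeds an assumed depth threshold, the message being processed is duplicated, congestion information is stored in the duplicate, and the duplicate is sent to an external collector while the original continues on. The threshold value, message fields, and the list standing in for the collector are assumptions.

    # Duplicate the in-flight message and attach the congestion report to the copy.
    CONGESTION_THRESHOLD = 100                     # assumed queue-depth threshold

    collector = []                                 # stand-in for the external data collector

    def process_message(msg, queue_id, queue_depth):
        if queue_depth > CONGESTION_THRESHOLD:
            report = dict(msg)                     # duplicate the particular data message
            report["congestion"] = {"queue": queue_id, "depth": queue_depth}
            collector.append(report)               # send the report to the data collector
        return msg                                 # original message continues unchanged

    process_message({"flow": "a->b", "len": 1500}, queue_id=7, queue_depth=250)
    print(collector)   # one duplicated message carrying the congestion report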

Traffic manager resource sharing

A traffic manager is shared amongst two or more egress blocks of a network device, thereby allowing traffic management resources to be shared between the egress blocks. Among other aspects, this may reduce power demands and allow a larger amount of buffer memory to be available to a given egress block that may be experiencing high traffic loads. Optionally, the shared traffic manager may be leveraged to reduce the resources required to handle data units on ingress. Rather than buffer the entire unit in the ingress buffers, an arbiter may be configured to buffer only the control portion of the data unit. The payload of the data unit, by contrast, is forwarded directly to the shared traffic manager, where it is placed in the egress buffers. Because the payload is not being buffered in the ingress buffers, the ingress buffer memory may be greatly reduced.
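
A minimal sketch of the ingress optimization described above: the arbiter keeps only the control portion of a data unit in the ingress buffer and forwards the payload straight to the shared traffic manager's egress buffers, where control and payload are rejoined at dequeue time. The dictionaries and function names are illustrative assumptions.

    # Split buffering: control stays at ingress, payload goes to the shared TM.
    ingress_buffer = {}                            # data unit id -> control portion
    shared_tm_egress_buffer = {}                   # data unit id -> payload

    def arbiter_receive(unit_id, control, payload):
        ingress_buffer[unit_id] = control          # small: headers / metadata only
        shared_tm_egress_buffer[unit_id] = payload # large: buffered once, in the shared TM

    def dequeue(unit_id):
        # On egress the control portion is rejoined with the payload already
        # sitting in the shared traffic manager's egress buffers.
        return ingress_buffer.pop(unit_id), shared_tm_egress_buffer.pop(unit_id)

    arbiter_receive(1, control={"dst_port": 4}, payload=b"x" * 1500)
    ctrl, payload = dequeue(1)
    print(ctrl, len(payload))                      # {'dst_port': 4} 1500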