H04L45/74591

Flexible processing of network packets

Aspects of this disclosure describe techniques for parsing network packets, processing network packets, and modifying network packets before forwarding the modified network packets over a network. The present disclosure describes a system that, in some examples, parses a network packet, generates data describing or specifying attributes of the packet, identifies operations to be performed when processing the packet, performs the identified operations, generates data describing or specifying how to modify and/or forward the packet, modifies the packet, and outputs the modified packet to another device or system, such as a switch.
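
As a rough illustration of the parse/identify/process/modify/forward flow summarized above, the following sketch (Python, with a simplified Ethernet/IPv4 parse and hypothetical operation names) shows a packet being parsed into an attribute descriptor, matched to processing operations, and rewritten before forwarding; it is illustrative only, not the disclosed implementation.

```python
import struct

def parse(packet: bytes) -> dict:
    """Parse a minimal Ethernet/IPv4 header into a descriptor of packet attributes."""
    desc = {"ethertype": struct.unpack("!H", packet[12:14])[0]}
    if desc["ethertype"] == 0x0800:                      # IPv4
        desc.update({"ttl": packet[22], "proto": packet[23],
                     "src_ip": packet[26:30], "dst_ip": packet[30:34]})
    return desc

def identify_operations(desc: dict) -> list:
    """Choose the operations to perform for this packet (hypothetical policy)."""
    ops = ["decrement_ttl"] if desc.get("ttl", 0) > 1 else []
    return ops + ["forward"]

def process(packet: bytes, ops: list) -> bytes:
    """Apply the identified operations, producing the modified packet to output."""
    out = bytearray(packet)
    if "decrement_ttl" in ops:
        out[22] -= 1          # simplified: the IPv4 checksum update is omitted here
    return bytes(out)

# modified = process(pkt, identify_operations(parse(pkt)))  # then output to a switch
```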

Router methods and apparatus for managing memory for network overlay routes with fallback route support prioritization

A method and a router device for managing memory for network overlay routes with fallback route support prioritization may be provided. A network overlay route may be obtained at a router as a candidate network overlay route for storage in a memory. The memory may store a plurality of network overlay routes for forwarding user plane traffic in a network. An assessment for storage of the candidate network overlay route may be performed based on a priority level indicator of the candidate network overlay route. The priority level indicator may be indicative of a fallback route support level of the candidate network overlay route in the router. Based on the assessment, the router either adds the candidate network overlay route to the memory or refrains from adding it.
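
A minimal sketch of the assessment step, assuming a fixed-capacity route memory and a convention that a higher priority level indicator means a higher fallback route support level; evicting a lower-priority stored route when the memory is full is an added assumption, not something stated in the abstract.

```python
from dataclasses import dataclass

@dataclass
class OverlayRoute:
    prefix: str
    next_hop: str
    priority: int   # fallback route support level (assumed: higher = better supported)

class RouteMemory:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.routes: list[OverlayRoute] = []

    def assess_and_store(self, candidate: OverlayRoute) -> bool:
        """Assess the candidate by its priority level indicator, then either add it to
        the memory or refrain from adding it."""
        if len(self.routes) < self.capacity:
            self.routes.append(candidate)
            return True
        lowest = min(self.routes, key=lambda r: r.priority)
        if candidate.priority > lowest.priority:      # candidate outranks a stored route
            self.routes.remove(lowest)
            self.routes.append(candidate)
            return True
        return False                                  # refrain from adding
```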

Techniques for reducing the overhead of providing responses in a computing network

An endpoint in a network may make posted or non-posted write requests to another endpoint in the network. For a non-posted write request, the target endpoint provides a response to the requesting endpoint indicating that the write request has been serviced. For a posted write request, the target endpoint does not provide such an acknowledgment. Hence, posted write requests have lower overhead, but they suffer from potential synchronization and resiliency issues. While non-posted write requests do not have those issues, they increase load on the network because the target endpoint must acknowledge each write request. Introduced herein is a network operation technique that uses non-posted transactions while keeping the load overhead of the network at a manageable level. The introduced technique reduces the load overhead of non-posted write requests by collapsing the responses, reducing their number.
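
One way to picture the response collapsing is a target endpoint that services non-posted writes individually but acknowledges them cumulatively; the sequence-number scheme and window size below are assumptions for illustration, not the specific mechanism introduced in the disclosure.

```python
class TargetEndpoint:
    """Collapses per-write acknowledgments into fewer, cumulative responses."""

    def __init__(self, send_response, collapse_window: int = 8):
        self.send_response = send_response      # callable that emits a response packet
        self.collapse_window = collapse_window
        self.pending: list[int] = []            # serviced but unacknowledged sequence numbers

    def service_write(self, seq: int) -> None:
        self.pending.append(seq)                # the write itself is serviced here
        if len(self.pending) >= self.collapse_window:
            self.flush()

    def flush(self) -> None:
        """Send one response covering all pending writes instead of one per request."""
        if self.pending:
            self.send_response({"ack_up_to": max(self.pending), "count": len(self.pending)})
            self.pending.clear()
```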

SELECTIVELY CONNECTABLE CONTENT-ADDRESSABLE MEMORY
20210266260 · 2021-08-26

A switching system includes a content-addressable memory (CAM) and several processing nodes. The CAM can be selectively connected to any one or more of the processing nodes during operation of the switching system, without having to power down or otherwise reboot the switching system. The CAM is selectively connected to a processing node in that electrical paths between the CAM and the processing nodes can be established, torn down, and re-established during operation of the switching system. The switching system can include a connection matrix to selectively establish electrical paths between the CAM and the processing nodes.
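
The selective connection can be pictured as a small connection-matrix model in which paths between the CAM and processing nodes are established and torn down at run time, with no reboot of the switching system; the class and method names below are hypothetical.

```python
class ConnectionMatrix:
    """Tracks which processing nodes currently have an electrical path to the shared CAM."""

    def __init__(self, num_nodes: int):
        self.connected = [False] * num_nodes

    def establish(self, node: int) -> None:
        self.connected[node] = True             # path set up while the system keeps running

    def tear_down(self, node: int) -> None:
        self.connected[node] = False            # path removed, can be re-established later

    def cam_lookup(self, node: int, cam: dict, key):
        if not self.connected[node]:
            raise RuntimeError(f"node {node} is not connected to the CAM")
        return cam.get(key)
```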

CONTENT ADDRESSABLE MEMORY WITH SUB-FIELD MINIMUM AND MAXIMUM CLAMPING
20210181973 · 2021-06-17

A processing system includes a content addressable memory (CAM) in an input/output path to selectively modify register writes on a per-pipeline basis. The CAM compares an address of a register write to an address field of each entry of the CAM. If a match is found, the CAM modifies the register write data as defined by a function for the matching entry of the CAM. In some embodiments, each entry of the CAM includes a data mask defining subfields of the register write data, wherein each subfield includes subfield data including one or more bits.
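
A compact sketch of the per-entry behavior, combining the address match with the sub-field minimum/maximum clamping named in the title; the entry layout (shift, width, lo, hi) is an assumed encoding of the data mask and clamp bounds.

```python
def apply_cam_to_register_write(addr: int, data: int, cam_entries: list) -> int:
    """If the write address matches a CAM entry, clamp each masked subfield of the
    write data to that entry's [lo, hi] range before the write proceeds."""
    for entry in cam_entries:
        if addr == entry["addr"]:
            for shift, width, lo, hi in entry["subfields"]:
                mask = (1 << width) - 1
                value = (data >> shift) & mask
                value = min(max(value, lo), hi)                  # min/max clamping
                data = (data & ~(mask << shift)) | (value << shift)
    return data

# example: clamp bits [7:0] of writes to register 0x40 into the range 4..200
# cam = [{"addr": 0x40, "subfields": [(0, 8, 4, 200)]}]
```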

METHOD AND SYSTEM FOR PROPAGATING NETWORK TRAFFIC FLOWS BETWEEN END POINTS BASED ON SERVICE AND PRIORITY POLICIES

A method and system for propagating network traffic flows between end points based on service and priority policies. Specifically, the method and system disclosed herein entail configuring network elements with network-disseminated traffic management policies. Each traffic management policy guides the handling of a network traffic flow between origination and termination end points (i.e., source and destination hosts), which may be defined through data link layer, network layer, and/or transport layer header information, as well as group assignment information, associated with the source and destination hosts.
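
A sketch of how a network element might key its disseminated policies on the L2/L3/L4 header fields and group assignments mentioned above; the exact-match dictionary lookup stands in for whatever matching the elements actually perform.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src_mac: str          # data link layer
    dst_mac: str
    src_ip: str           # network layer
    dst_ip: str
    src_port: int         # transport layer
    dst_port: int
    src_group: str        # group assignment of the source host
    dst_group: str        # group assignment of the destination host

# policy table disseminated to the network element: key -> service / priority handling
policies: dict[FlowKey, dict] = {}

def handle_flow(key: FlowKey) -> dict:
    """Return the traffic management policy guiding this origination/termination pair,
    falling back to a default when no disseminated policy matches."""
    return policies.get(key, {"service": "default", "priority": "best-effort"})
```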

Address translation for external network appliance

Systems, methods, and computer-readable media relate to providing a network management service. A system is configured to request first network information from a first component of a network using a public IP address for the first component, wherein the first network information includes a private IP address for a second component in the network, and to translate, based on mapping information from a private IP address space to a public IP address space, the private IP address for the second component to a public IP address for the second component. The system is further configured to request second network information from the second component using the public IP address and to provide a network management service for the network based on the second network information.
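
A minimal sketch of the two-step exchange, assuming fetch() stands in for the management service's information request and nat_map for the private-to-public mapping information; all names are hypothetical.

```python
def translate(private_ip: str, nat_map: dict) -> str:
    """Map a private address to its public counterpart using the mapping information."""
    return nat_map[private_ip]

def gather_network_info(fetch, first_public_ip: str, nat_map: dict):
    """Request info from the first component, translate the second component's private
    address, then request info from the second component using its public address."""
    first_info = fetch(first_public_ip)                 # contains a private IP for component 2
    second_public_ip = translate(first_info["second_component_private_ip"], nat_map)
    second_info = fetch(second_public_ip)
    return first_info, second_info                      # basis for the management service
```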

Overlay network hardware service chaining
11025539 · 2021-06-01

Presented herein are techniques to support service chains in a network, such as a spine-leaf switch fabric network, that also uses overlay networking technology. More specifically, in accordance with the techniques presented herein, a linecard at an ingress network node for an overlay network is configured to receive a packet. Using information obtained from the packet, a hardware lookup is performed at the linecard to identify a service chain with which the packet is associated. An entry corresponding to the identified service chain is identified within a memory location of the linecard, where the entry includes overlay network information for forwarding packets along the identified service chain via an overlay network. Using the overlay network information included in the identified entry, the packet is encapsulated with an overlay packet header for use in forwarding the packet via the overlay network.
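
The linecard behavior can be sketched as two lookups followed by encapsulation: a flow-keyed lookup that identifies the service chain, and a per-chain entry holding overlay forwarding information; the VXLAN-style header fields below are an assumption for illustration.

```python
def encapsulate_for_service_chain(packet: bytes, flow_key, chain_table: dict,
                                  overlay_table: dict) -> bytes:
    """Identify the service chain for this packet, fetch the overlay entry for that chain,
    and prepend a simplified overlay header used to forward via the overlay network."""
    chain_id = chain_table[flow_key]              # stand-in for the hardware lookup
    entry = overlay_table[chain_id]               # overlay network information for the chain
    header = entry["vni"].to_bytes(3, "big") + bytes(entry["next_hop_ip"])
    return header + packet                        # encapsulated packet

# example tables:
# chain_table = {("10.0.0.1", "10.0.0.2", 80): 7}
# overlay_table = {7: {"vni": 5001, "next_hop_ip": [192, 0, 2, 10]}}
```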

FLOW MONITORING IN NETWORK DEVICES
20210160184 · 2021-05-27

Flow state information that is stored in a first memory among a plurality of memories for maintaining flow state information at a network device is updated based on packets ingressing the network device. The memories are arranged in a hierarchical arrangement in which memories at progressively higher levels of hierarchy are configured to maintain flow state information corresponding to progressively larger sets of flows processed by the network device. When it is determined that a fullness level of the first memory exceeds a first threshold, flow state information associated with at least one flow, among a first set of flows for which flow state information is currently being maintained in the first memory, is transferred from the first memory to a second memory, the second memory being at a higher hierarchical level than the first memory. A new flow is instantiated in space freed up in the first memory.
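
A toy model of the hierarchical memories: each level is a dictionary of flow state, lower levels hold fewer flows, and crossing a fullness threshold migrates a flow upward to free space for a newly instantiated flow. The victim selection here is arbitrary; the device's actual policy is not specified in the abstract.

```python
def update_flow_state(flow_id, pkt_bytes: int, memories: list, thresholds: list) -> None:
    """memories[0] is the lowest hierarchy level (fewest flows); memories[1] is the next
    level up. Update state at level 0 and migrate a flow upward when level 0 is too full."""
    level0 = memories[0]
    level0[flow_id] = level0.get(flow_id, 0) + pkt_bytes
    if len(level0) > thresholds[0]:                       # fullness level exceeds threshold
        victim, state = next(iter(level0.items()))        # arbitrary flow to transfer
        del level0[victim]                                # frees space for a new flow
        memories[1][victim] = memories[1].get(victim, 0) + state
```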

Packet processing match and action unit with configurable memory allocation

A packet processing block is described. The block comprises an input for receiving data in a packet header vector, the vector comprising data values representing information for a packet. The block also comprises circuitry for performing packet match operations in response to at least a portion of the packet header vector and data stored in a match table, and circuitry for performing one or more actions in response to a match detected by the circuitry for performing packet match operations and according to information stored in an action table. Each of the match table and the action table comprises one or more memories selected from a pool of unit memories, wherein each memory in the pool of unit memories is configurable to operate as either a match memory or an action memory.
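
The configurable allocation can be pictured as a pool of identical unit memories, each of which is assigned at configuration time to back either the match table or the action table; the class below is a hypothetical illustration of that pooling, not the claimed circuitry.

```python
class UnitMemoryPool:
    """A pool of unit memories, each configurable as a match memory or an action memory."""

    def __init__(self, count: int):
        self.role = ["unassigned"] * count

    def configure(self, unit: int, role: str) -> None:
        if role not in ("match", "action"):
            raise ValueError("a unit memory backs either the match table or the action table")
        self.role[unit] = role

pool = UnitMemoryPool(8)
pool.configure(0, "match")    # unit 0 now serves as part of the match table
pool.configure(1, "action")   # unit 1 now serves as part of the action table
```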