H04L45/74591

SCALABLE NETWORK PROCESSING SEGMENTATION
20210392167 · 2021-12-16

A method for processing network communications, the method including receiving a network packet at a network device and performing at least one lookup for the packet in one or more first lookup tables, where the one or more first lookup tables are programmed to include at least one of an exact match or longest prefix match (LPM) table entry. The method includes obtaining a security source segment and a security destination segment based upon the result of the at least one lookup for the packet in the one or more first lookup tables. The method further includes performing a lookup in a second lookup table based upon the security source segment and security destination segment, where the second lookup table is programmed in a content addressable memory, and, based upon the result of the lookup in the second lookup table, processing a forwarding decision for the packet according to the security source segment and security destination segment.
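
The two-stage flow described above can be sketched in Python. The table contents, segment names, and the allow/deny policy below are illustrative assumptions, not taken from the patent; plain dicts stand in for the hardware tables and the CAM.

```python
import ipaddress

# Stage 1: exact-match and LPM tables mapping addresses to security segments.
EXACT_TABLE = {"10.0.0.5": "seg-db"}
LPM_TABLE = [("10.0.0.0/8", "seg-internal"), ("0.0.0.0/0", "seg-any")]

def segment_for(addr):
    """Return the security segment for an address (exact match first, then LPM)."""
    if addr in EXACT_TABLE:
        return EXACT_TABLE[addr]
    ip = ipaddress.ip_address(addr)
    best = None
    for prefix, seg in LPM_TABLE:
        net = ipaddress.ip_network(prefix)
        if ip in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, seg)
    return best[1] if best else None

# Stage 2: a (source segment, destination segment) table standing in for the CAM.
POLICY_CAM = {("seg-internal", "seg-db"): "forward", ("seg-any", "seg-db"): "drop"}

def forward_decision(src, dst):
    """Resolve both segments, then make the forwarding decision from the CAM."""
    key = (segment_for(src), segment_for(dst))
    return POLICY_CAM.get(key, "drop")
```

The longest-prefix scan is linear here for clarity; a real device resolves it in hardware in one lookup.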

Layer 2 channel selection
11196671 · 2021-12-07

In an example, there is disclosed a network switch or other computing apparatus comprising: an ingress interface; a plurality of egress interfaces; and one or more logic elements, including at least a content addressable memory (CAM), comprising a channel selection engine operable to provide persistent channel selection by: receiving a packet on the ingress interface; inspecting a layer 2 (L2) attribute of the packet; looking up the L2 attribute in the CAM; and assigning the packet to an egress interface communicatively coupled to a network service.
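
A minimal sketch of CAM-backed persistent channel selection, keyed here on the source MAC as the L2 attribute. The interface names and the hash-based first-assignment policy are assumptions for illustration; a Python dict stands in for the CAM.

```python
EGRESS_INTERFACES = ["eth1", "eth2", "eth3"]
cam = {}  # L2 attribute -> egress interface (the persistent channel assignment)

def select_channel(src_mac):
    """Return the egress interface for a packet, assigning one on first sight."""
    if src_mac not in cam:
        # Hypothetical assignment policy: hash the MAC onto an interface so
        # the same host is always steered to the same network service.
        cam[src_mac] = EGRESS_INTERFACES[hash(src_mac) % len(EGRESS_INTERFACES)]
    return cam[src_mac]
```

The point of the CAM entry is persistence: once a host's L2 attribute is installed, every subsequent packet from it takes the same channel.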

SYSTEM AND METHOD FOR LOW LATENCY NETWORK SWITCHING
20220191144 · 2022-06-16

A network switch and associated method of operation for establishing a low latency transmission path through the switch which bypasses the packet queue and scheduler of the switch fabric. The network switch transmits each of a plurality of data packets to the identified destination egress port over the low latency transmission path if the data packet is identified to be transmitted over the low latency transmission path from the ingress port to the destination egress port, and transmits the data packet to the destination egress port through the packet queue and scheduler if the data packet is not identified to be transmitted over the low latency transmission path.
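
The bypass decision can be sketched as follows. How a flow is identified for the low latency path is not specified above, so the flow table and packet fields here are invented for illustration.

```python
from collections import deque

scheduler_queue = deque()                     # the normal queued path
LOW_LATENCY_FLOWS = {("10.0.0.1", 5000)}      # hypothetical flows marked for bypass

def switch(packet, transmit):
    """Send `packet` (a dict) over the bypass path or via the queue/scheduler."""
    flow = (packet["src"], packet["dport"])
    if flow in LOW_LATENCY_FLOWS:
        transmit(packet)                      # bypass: skip queue and scheduler
        return "bypass"
    scheduler_queue.append(packet)            # normal path: wait for the scheduler
    return "queued"
```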

Systems and methods for intelligent routing and content placement in information centric networks
11363116 · 2022-06-14

A content caching system enables an NDN network to place content closer to each end user and to provide an explicit path for the target end users to that content, for better performance, just in advance of users' anticipated requests. The apparatus includes NDN routers and an SDN controller employing a content commander, at least one content placement agent, and at least one content analysis agent.

Exact match and ternary content addressable memory (TCAM) hybrid lookup for network device
11362948 · 2022-06-14

In a network device, a hash calculator generates a lookup hash value from data fields associated with a packet received by the network device. A compressed lookup key generator generates a compressed lookup key for the packet using the lookup hash value. A content addressable memory (CAM) stores compressed patterns corresponding to compressed lookup keys, uses the compressed lookup key received from the compressed lookup key generator to determine if the received compressed lookup key matches any stored compressed patterns, and outputs an index corresponding to a stored compressed pattern that matches the compressed lookup key. A memory stores uncompressed patterns corresponding to the compressed patterns stored in the CAM, and retrieves an uncompressed pattern using the index output by the CAM. A comparator generates a signal that indicates whether the uncompressed pattern retrieved from the memory matches the data fields associated with the packet.
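
The hash/compress/verify pipeline above can be sketched like this. The field choice, key width, and use of SHA-256 are assumptions; the essential structure is that the small CAM matches a compressed key and a final comparison against the stored uncompressed pattern guards against hash collisions.

```python
import hashlib

def compressed_key(fields, bits=16):
    """Derive a short lookup key from the packet's data fields via a hash."""
    digest = hashlib.sha256(repr(fields).encode()).digest()
    return int.from_bytes(digest[:2], "big") % (1 << bits)

uncompressed_memory = []   # index -> full (uncompressed) pattern
cam = {}                   # compressed pattern -> index (stands in for the CAM)

def install(fields):
    """Install a pattern: compressed key in the CAM, full fields in memory."""
    cam[compressed_key(fields)] = len(uncompressed_memory)
    uncompressed_memory.append(fields)

def lookup(fields):
    """Return the matching index, or None (including on a hash collision)."""
    idx = cam.get(compressed_key(fields))
    if idx is None:
        return None
    # Comparator stage: verify the full fields against the uncompressed pattern.
    return idx if uncompressed_memory[idx] == fields else None
```

Storing only compressed patterns keeps the expensive CAM narrow; the cheap RAM holds the wide patterns needed for the final exactness check.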

IDENTIFYING COMPONENTS FOR REMOVAL IN A NETWORK CONFIGURATION

Systems, methods, and computer-readable media for analyzing memory usage in a network node. A network assurance appliance may be configured to determine a hit count for a concrete level rule implemented on a node and identify one or more components of a logical model, wherein each of the one or more components is associated with the concrete level rule. The network assurance appliance may attribute the hit count for the concrete level rule to each of the components of the logical model, determine a number of hardware level entries associated with each of the one or more components, and generate a report comprising the one or more components of the logical model, the hit count attributed to each of the one or more components of the logical model, and the number of hardware level entries associated with the one or more components of the logical model.
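
The attribution step can be sketched as a simple aggregation. The rule names, component names, and entry counts below are invented for illustration; the structure is what the abstract describes: each concrete rule's hit count is credited to every logical component behind it, alongside that component's hardware entry count.

```python
def build_report(concrete_rules, rule_to_components, hw_entries):
    """Return one report row per logical component: attributed hits + HW entries.

    concrete_rules:     {rule_name: hit_count}
    rule_to_components: {rule_name: [logical components behind the rule]}
    hw_entries:         {component: number of hardware level entries}
    """
    report = {}
    for rule, hits in concrete_rules.items():
        for comp in rule_to_components.get(rule, []):
            row = report.setdefault(comp, {"hits": 0, "hw_entries": 0})
            row["hits"] += hits                       # attribute the hit count
            row["hw_entries"] = hw_entries.get(comp, 0)
    return report
```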

FLOWLET SCHEDULER FOR MULTICORE NETWORK PROCESSORS
20230275689 · 2023-08-31

Systems and methods of using a packet order work scheduler (POWS) to assign packets to a set of scheduler queues for supplying packets to parallel processing units. A processing unit and the associated scheduler queue are dedicated to a specific flow until a queue-reallocation event, which may correspond to the associated scheduler queue being idle for at least a certain interval as indicated by its age counter, or the queue being the least recently used, when a new flow arrives. In this case, the scheduler queue and the associated processing unit may be reallocated to the new flow and disassociated from the previous flow. As a result, dynamic packet workload balancing can be advantageously achieved across the multiple processing paths.
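
The reallocation policy can be sketched as below. The queue count, the idle threshold, and the tick-based age counters are illustrative assumptions; the mechanism shown is the one described: a flow keeps its queue until it goes idle past the threshold or becomes the least recently used when a new flow needs a queue.

```python
IDLE_THRESHOLD = 100  # ticks a queue may sit idle before it can be reclaimed

class FlowletScheduler:
    def __init__(self, num_queues):
        self.flow_of = [None] * num_queues   # queue -> pinned flow
        self.age = [0] * num_queues          # ticks since the queue was last used

    def assign(self, flow):
        """Return the queue index serving `flow`, reallocating one if needed."""
        for q, f in enumerate(self.flow_of):
            if f == flow:
                self.age[q] = 0              # flow keeps its dedicated queue
                return q
        # New flow: prefer a free queue, then an idle one, else evict the LRU.
        free = [q for q, f in enumerate(self.flow_of) if f is None]
        idle = [q for q in range(len(self.age)) if self.age[q] >= IDLE_THRESHOLD]
        if free:
            q = free[0]
        elif idle:
            q = idle[0]
        else:
            q = max(range(len(self.age)), key=lambda i: self.age[i])
        self.flow_of[q] = flow               # queue-reallocation event
        self.age[q] = 0
        return q

    def tick(self):
        """Advance every queue's age counter by one."""
        self.age = [a + 1 for a in self.age]
```

Because a queue stays pinned between reallocation events, packets of one flow never race each other across processing units, which is what preserves per-flow packet order.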

Management of unreachable openflow rules

Methods and systems are provided. A method includes managing, by a software defined network (SDN) controller, OpenFlow rules stored on at least one OpenFlow network device having a ternary content addressable memory (TCAM). The OpenFlow rules include unreachable OpenFlow rules and reachable OpenFlow rules. The managing step includes querying at least one OpenFlow rule from among the unreachable OpenFlow rules and the reachable OpenFlow rules on the at least one OpenFlow network device. The managing step further includes determining whether any of the OpenFlow rules are reachable or unreachable from indicia used to mark the OpenFlow rules as reachable or unreachable. The managing step also includes causing a removal of the unreachable OpenFlow rules from the OpenFlow network device.
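
A heavily simplified sketch of marking and removing unreachable rules: here a rule is unreachable when a preceding (higher- or equal-priority) rule constrains a subset of its fields, with wildcards modeled as absent keys. Real OpenFlow matching (masked fields, table pipelines) is much richer, so treat this purely as an illustration of the marking/removal flow.

```python
def covers(high, low_match):
    """True if every field constraint of `high` is satisfied by `low_match`."""
    return all(low_match.get(k) == v for k, v in high["match"].items())

def mark_unreachable(rules):
    """Annotate each rule dict with a reachable=True/False indicium."""
    ordered = sorted(rules, key=lambda r: -r["priority"])
    for i, rule in enumerate(ordered):
        rule["reachable"] = not any(covers(h, rule["match"]) for h in ordered[:i])
    return ordered

def remove_unreachable(rules):
    """Drop the rules marked unreachable, as the controller would on the device."""
    return [r for r in mark_unreachable(rules) if r["reachable"]]
```

Pruning such rules matters because TCAM capacity is small and every installed rule, reachable or not, consumes an entry.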

TCAM-based load balancing on a switch

In an example, a network switch is configured to operate natively as a load balancer. The switch receives incoming traffic on a first interface communicatively coupled to a first network, and assigns the traffic to one of a plurality of traffic buckets. This may include looking up the destination IP of an incoming packet in a fast memory such as a ternary content-addressable memory (TCAM) to determine whether the packet is directed to a virtual IP (VIP) address that is to be load balanced. If so, part of the source or destination IP address may be used as a search tag in the TCAM to assign the incoming packet to a traffic bucket or IP address of a service node.
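
The two TCAM steps can be sketched as follows. The VIP, the bucket count, the service-node addresses, and the choice of low-order source bits as the search tag are all assumptions for illustration.

```python
VIPS = {"203.0.113.10"}                      # virtual IPs subject to load balancing
SERVICE_NODES = ["10.1.0.1", "10.1.0.2", "10.1.0.3", "10.1.0.4"]

def load_balance(src_ip, dst_ip):
    """Return the rewritten destination, or the original if it is not a VIP."""
    if dst_ip not in VIPS:
        return dst_ip                        # not load balanced: forward as-is
    # Use the low-order byte of the source IP as the TCAM search tag; masking
    # to 2 bits yields 4 traffic buckets, one per service node.
    bucket = int(src_ip.rsplit(".", 1)[1]) & 0b11
    return SERVICE_NODES[bucket]
```

Bucketing on source-address bits keeps the mapping stateless and flow-stable: the same client always lands on the same service node without a per-connection table.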

Interconnect address based QoS regulation

In various implementations, systems and methods are provided for an integrated circuit including a completer device, a requester device, and an interconnect fabric. The requester device is configured to generate transactions to the completer device, where each transaction includes a request packet that includes an attribute associated with the completer device, and the interconnect fabric is coupled to the requester device and the completer device. The integrated circuit can also include a QoS regulator configured to identify, based on a first attribute associated with the completer device, a first QoS value establishing a first priority level for a first request packet generated by the requester device, and to modify the first request packet to include the first QoS value.
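
The regulator's behavior reduces to a small lookup-and-stamp step, sketched below. The attribute names and the attribute-to-priority mapping are invented for illustration.

```python
# Hypothetical mapping from a completer-side attribute to a QoS priority level.
QOS_BY_ATTRIBUTE = {"dram": 2, "sram": 1, "mmio": 0}

def regulate(request_packet):
    """Identify the QoS value from the packet's completer attribute and
    modify the request packet to carry it."""
    attr = request_packet["completer_attr"]
    request_packet["qos"] = QOS_BY_ATTRIBUTE.get(attr, 0)  # default: lowest
    return request_packet
```

Keying QoS on the completer attribute rather than the requester lets the fabric prioritize by destination resource, which is the point of the scheme described above.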