
PREEMPTIVE CACHING OF CONTENT IN A CONTENT-CENTRIC NETWORK
20200314008 · 2020-10-01

Preemptive caching within a content/name/information-centric networking environment is contemplated. Preemptive caching may be performed within content/name/information-centric networking environments of the type having a branching structure or other architecture sufficient to facilitate routing of data, content, etc., such that one or more nodes other than the node soliciting a content object also receive the content object.
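
A minimal sketch of the branching idea described above: as a content object travels back toward the requesting node, each node on the delivery path caches it, and sibling branch nodes receive and cache it preemptively. The tree layout and all names (`Node`, `deliver`) are illustrative, not taken from the patent.

```python
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.cache = {}                      # local content store

def deliver(path, name, obj):
    """Send obj along path to the requester; cache at each hop and
    preemptively at each hop's other children (the branch nodes)."""
    for node in path:
        node.cache[name] = obj               # on-path caching
        for child in node.children:          # preemptive branch caching
            child.cache.setdefault(name, obj)

# usage: b never requested "video" but receives it preemptively
a, b = Node("a"), Node("b")
root = Node("root", children=[a, b])
deliver([root, a], "video", b"payload")
```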

FLEXIBLE SCHEME FOR ADDING RULES TO A NIC PIPELINE

Flexible schemes for adding rules to a NIC pipeline and associated apparatus. Multiple match-action tables are implemented in host memory of a platform defining actions to be taken for matching packet flows. A packet processing pipeline and an exact match (EM) cache are implemented on a network interface, such as a NIC, installed in the platform. A portion of the match-action entries in the host memory match-action tables are cached in the EM cache. Received packets are processed to generate a key that is used as a lookup for the EM cache. If a match is found, the action is taken. For a miss, the key is forwarded to the host software and the match-action tables are searched. For a match, the action is taken, and the entry is added to the EM cache. If no match is found, a new match-action entry is added to a match-action table. Aging-out mechanisms are used for the match-action tables and the EM cache. A multi-hash scheme is used that supports a very large number of match-action entries.
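
The lookup flow above can be sketched as follows, modeling the NIC-side EM cache and the host-side match-action tables as dictionaries. All names (`em_cache`, `host_tables`, `flow_key`, `DEFAULT_ACTION`) are illustrative assumptions, not the patent's actual structures.

```python
em_cache = {}        # exact-match cache on the NIC (small, fast)
host_tables = {}     # match-action tables in host memory (large)

DEFAULT_ACTION = "send_to_host"

def flow_key(pkt):
    """Generate an exact-match key from packet header fields."""
    return (pkt["src"], pkt["dst"], pkt["proto"])

def process_packet(pkt):
    key = flow_key(pkt)
    action = em_cache.get(key)
    if action is not None:                 # EM cache hit: take the action
        return action
    action = host_tables.get(key)          # miss: search host tables
    if action is not None:
        em_cache[key] = action             # promote entry into the EM cache
        return action
    host_tables[key] = DEFAULT_ACTION      # no match: install a new entry
    return DEFAULT_ACTION
```

A real implementation would also run the aging-out mechanisms mentioned above to evict stale entries from both structures.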

Expanded Host Domains In PCIe Systems
20200301863 · 2020-09-24

Computing architectures, platforms, and systems are provided herein. In one example, a system is provided. The system includes a communication arrangement for peripheral component interconnect express (PCIe) traffic transferred over a communication fabric. The communication arrangement establishes an expanded address that provides a quantity of port identifiers to a host greater than indicated by a quantity of bits in a port field of the PCIe traffic, where the expanded address employs one or more bits of the PCIe traffic other than the port field. The communication arrangement detects a transfer among the PCIe traffic issued by the host having the expanded address corresponding to a destination. Based on the expanded address, the communication arrangement identifies routing information to route the transfer over the communication fabric to the destination.
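The expanded-address idea can be illustrated with simple bit packing: a port identifier wider than the native port field is split between the port field and otherwise-unused bits of the traffic. The field widths here (3-bit port field, 2 borrowed bits, giving 32 ports instead of 8) are invented for illustration and do not reflect actual PCIe field sizes.

```python
PORT_BITS = 3       # native port field width (illustrative)
EXTRA_BITS = 2      # bits borrowed from elsewhere in the PCIe traffic

def encode(port_id):
    """Split an expanded port identifier across two fields."""
    assert port_id < (1 << (PORT_BITS + EXTRA_BITS))
    port_field = port_id & ((1 << PORT_BITS) - 1)   # low bits: port field
    extra_field = port_id >> PORT_BITS              # high bits: borrowed
    return port_field, extra_field

def decode(port_field, extra_field):
    """Recover the expanded port identifier for routing the transfer."""
    return (extra_field << PORT_BITS) | port_field
```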

Systems and methods of event-based content provisioning
10783445 · 2020-09-22

Systems and methods for automated content selection and/or distribution are disclosed herein. The system can include a packet selection system including a recommendation engine. The recommendation engine can select a next data packet that can include content for delivery to a user device. The system can include a presentation system including a presenter module. The presenter module can receive an indication of the selected next data packet and send the content for delivery to the user device via an electrical communication. The system can include a response system including a response processor. The response processor can receive a response from the user device, and the response processor can determine whether the received response is a desired response. The system can include a summary model system including a model engine and a messaging bus.
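
The component flow described above can be wired together as a simple pipeline. The selection and "desired response" logic here are stand-in placeholders; all function names are illustrative assumptions.

```python
def recommendation_engine(packets):
    """Select the next data packet (placeholder: take the first)."""
    return packets[0]

def presenter(packet, user_device):
    """Deliver the packet's content to the user device."""
    user_device.append(packet["content"])

def response_processor(response, desired):
    """Determine whether the received response is the desired one."""
    return response == desired

# usage: select, present, then evaluate the user's response
device = []
pkt = recommendation_engine([{"content": "item_1", "desired": "B"}])
presenter(pkt, device)
ok = response_processor("B", pkt["desired"])
```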

Multi-field classifier

A system and method for classifying packets according to packet header field values. Each of a set of subkey tables is searched for a respective packet header field value; each such search results in a value for a subkey. The subkeys are combined to form a decision key. A decision table is then searched for the decision key. The search of the decision table results in an action code and a reason code, one or both of which may be used to determine how to further process the packet.
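
A sketch of the two-stage lookup: each header field indexes its own subkey table, the resulting subkeys are combined into a decision key, and the decision table maps that key to an (action code, reason code) pair. Table contents and field names are invented for illustration.

```python
# One subkey table per header field (contents illustrative)
subkey_tables = {
    "src_ip":   {"10.0.0.1": 0x1, "10.0.0.2": 0x2},
    "dst_port": {80: 0xA, 443: 0xB},
}

# Decision table: combined subkeys -> (action code, reason code)
decision_table = {
    (0x1, 0xA): ("FORWARD", "web_traffic"),
    (0x2, 0xB): ("DROP", "blocked_host"),
}

def classify(pkt):
    # Search each subkey table for the respective header field value
    key = tuple(subkey_tables[f].get(pkt[f], 0) for f in subkey_tables)
    # Decision-table search yields the action and reason codes
    return decision_table.get(key, ("DEFAULT", "no_match"))
```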

Synthetic supernet compression
10778581 · 2020-09-15

One embodiment of the present invention sets forth a technique for compressing a forwarding table. The technique includes selecting, from a listing of network prefixes, a plurality of network prefixes that are within a range of a subnet. The technique further includes sorting the plurality of network prefixes to generate one or more subgroups of network prefixes and selecting a first subgroup of network prefixes included in the one or more subgroups of network prefixes. The technique further includes generating a synthetic supernet based on the first subgroup of network prefixes.
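
A minimal sketch of the last step, assuming the subgroup is a sorted run of prefixes sharing a forwarding action: the helper finds the smallest single supernet covering the whole subgroup. The function name and approach are illustrative, not the patented technique verbatim.

```python
import ipaddress

def synthetic_supernet(prefixes):
    """Generate one synthetic supernet covering a subgroup of prefixes."""
    nets = sorted(ipaddress.ip_network(p) for p in prefixes)
    lo = int(nets[0].network_address)
    hi = int(nets[-1].broadcast_address)
    # Shrink the prefix length until one network covers the whole range
    plen = nets[0].prefixlen
    while plen > 0:
        net = ipaddress.ip_network((lo, plen), strict=False)
        if int(net.broadcast_address) >= hi:
            return net
        plen -= 1
    return ipaddress.ip_network("0.0.0.0/0")
```

Replacing the subgroup's entries with the single supernet is what compresses the forwarding table.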

Variable TCAM actions
10778612 · 2020-09-15

Described herein are various embodiments of a network element comprising a network port to receive a unit of network data and a data plane coupled to the network port. In one embodiment the data plane includes a ternary content addressable memory (TCAM) module to compare a first set of bits in the unit of network data with a second set of bits in a key associated with a TCAM rule. The second set of bits includes a first subset of bits and a second subset of bits, and the TCAM module includes first logic to compare one or more bits in the first set of bits against the second set of bits, and second logic to select an action or a result using bits from the second subset of bits, from the unit of network data, or from metadata associated with the unit of network data. Other embodiments are also described.
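
The split-key idea can be sketched as follows: one subset of a rule's key bits is used for matching against the network data, while the other subset selects the action. The masks, widths, and action table are invented for illustration.

```python
MATCH_MASK = 0xFF00    # first subset: bits compared against packet data
SELECT_MASK = 0x00FF   # second subset: bits that select the action

actions = {0x01: "forward", 0x02: "drop", 0x03: "mirror"}

def tcam_lookup(data, rule_keys):
    """Match data against each rule key; on a match, use the key's
    second subset of bits to select the action."""
    for key in rule_keys:
        if (data & MATCH_MASK) == (key & MATCH_MASK):   # first logic
            return actions[key & SELECT_MASK]           # second logic
    return None
```

Embedding the action selector in the key itself is what makes the action "variable" per rule without a separate result memory.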

DYNAMIC ALLOCATION OF MEMORY FOR PACKET PROCESSING INSTRUCTION TABLES IN A NETWORK DEVICE
20200287832 · 2020-09-10

A method for operating a network device, having data storage with selectably modifiable capacity for storing instructional data for a packet processing operation, includes detecting a need for additional storage for the instructional data, allocating an additional memory block without interrupting operation of the network device, and associating with the additional memory block an additional address hashing function different from each of the at least one respective previous address hashing function associated with any previously-allocated memory block. Each respective previous address hashing function transforms a look-up key into a respective addressable location in a previously-allocated memory block, and the additional address hashing function transforms the look-up key into an addressable location in the additional memory block. When a block is deallocated, each unit of instructional data is reprocessed through the hashing function of a different block to which the unit of the instructional data will be moved.
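
The per-block hashing scheme can be sketched with each allocated block carrying its own hash function (here, distinct salts) and deallocation re-hashing displaced entries into surviving blocks. The class, block size, and salting scheme are illustrative assumptions.

```python
BLOCK_SIZE = 8

def make_hash(salt):
    """Each memory block gets its own address hashing function."""
    return lambda key: hash((salt, key)) % BLOCK_SIZE

class InstructionStore:
    def __init__(self):
        self.blocks = []                   # list of (hash_fn, slots)

    def allocate_block(self):
        # Allocation adds a new (function, block) pair; existing
        # blocks and their hash functions remain valid throughout.
        salt = len(self.blocks)
        self.blocks.append((make_hash(salt), [None] * BLOCK_SIZE))

    def insert(self, key, value):
        for fn, slots in self.blocks:
            idx = fn(key)                  # block-specific location
            if slots[idx] is None:
                slots[idx] = (key, value)
                return True
        return False                       # all candidate slots taken

    def lookup(self, key):
        for fn, slots in self.blocks:
            entry = slots[fn(key)]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None

    def deallocate_last(self):
        fn, slots = self.blocks.pop()
        for entry in slots:                # re-hash displaced entries
            if entry is not None:          # through a surviving block's
                self.insert(*entry)        # hash function
```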

Systems and methods for intelligent routing and content placement in information centric networks
10771590 · 2020-09-08

A content caching system enables an NDN network to place content closer to each end user, just in advance of the user's anticipated request, and to provide an explicit path to that content for the target end user for better performance. The apparatus includes NDN routers and an SDN controller employing a content commander, at least one content placement agent, and at least one content analysis agent.

Combining prefix lengths into a hash table

Examples herein disclose identifying a smaller prefix length and a greater prefix length from a routing table of various prefix lengths. The smaller prefix length is converted into the greater prefix length, and the converted prefix length and the greater prefix length are combined into a hash table.
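
A sketch of the conversion step, assuming it means expanding each shorter prefix into the set of longer prefixes it covers (e.g. one /23 becomes two /24s), so that every entry in the resulting hash table shares one prefix length. The function name and table layout are illustrative.

```python
import ipaddress

def combine(routing_table, target_len):
    """Expand shorter prefixes to target_len and combine all entries
    into one hash table keyed on equal-length prefixes."""
    hash_table = {}
    for prefix, nexthop in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if net.prefixlen == target_len:
            hash_table[net] = nexthop
        else:
            # Convert the smaller prefix length into the greater one
            for sub in net.subnets(new_prefix=target_len):
                hash_table.setdefault(sub, nexthop)
    return hash_table
```

With all keys at one length, a single exact-match hash lookup replaces a longest-prefix-match search.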