Patent classifications: H04L45/74591
Packet classification using fingerprint hash table
A key describes a data packet, and a fingerprint hash function is applied to the key to generate a fixed-length fingerprint of the key. An index value is determined from a portion of the fingerprint. A hash table may be populated by storing in a memory, at a memory location associated with the index value, the remainder of the fingerprint (the portion not used to determine the index value), to indicate that data packets consistent with the key are to be handled in accordance with packet handling metadata. During packet processing, if the memory location associated with an index value stores the remainder of a fingerprint, the data packet is handled according to the packet handling metadata associated with that fingerprint.
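As a rough illustration of the scheme, here is a minimal Python sketch; the fingerprint width, the 8-bit index portion, and the use of truncated SHA-256 are assumptions made for the example, not details from the patent:

```python
import hashlib

FP_BITS = 32            # fingerprint width in bits (assumed)
INDEX_BITS = 8          # low-order bits used as the table index (assumed)
TABLE_SIZE = 1 << INDEX_BITS

table = [None] * TABLE_SIZE          # slot -> (remainder, metadata) or None

def fingerprint(key: bytes) -> int:
    """Fixed-length fingerprint of the key (truncated SHA-256, for illustration)."""
    return int.from_bytes(hashlib.sha256(key).digest()[:FP_BITS // 8], "big")

def insert(key: bytes, metadata) -> None:
    fp = fingerprint(key)
    index = fp & (TABLE_SIZE - 1)    # the portion used to determine the index
    remainder = fp >> INDEX_BITS     # the rest of the fingerprint, stored in the slot
    table[index] = (remainder, metadata)

def lookup(key: bytes):
    fp = fingerprint(key)
    entry = table[fp & (TABLE_SIZE - 1)]
    if entry is not None and entry[0] == fp >> INDEX_BITS:
        return entry[1]              # handle the packet per this metadata
    return None

insert(b"10.0.0.1->10.0.0.2:443", {"action": "forward", "port": 3})
print(lookup(b"10.0.0.1->10.0.0.2:443"))   # {'action': 'forward', 'port': 3}
```

Storing only the remainder rather than the whole key keeps each slot narrow; the index bits are implied by the slot's position.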
Content addressable memory (CAM) based hardware architecture for datacenter networking
A communication protocol system is provided for reliable transport of packets. A content addressable memory hardware architecture including a reorder engine and a retransmission engine may be utilized for the reliable transport of the packets. In this regard, the reorder engine includes a content addressable memory (CAM) and one or more processors in communication with the CAM. The one or more processors are configured to receive a first set of data packets, access the CAM to process the first set of data packets, and save data information of the first set of data packets in the CAM.
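A toy sketch of the reorder side, with a Python dict standing in for the CAM and a simple sequence-number scheme assumed for illustration:

```python
class ReorderEngine:
    """Toy reorder engine; a dict stands in for the CAM, keyed by sequence number."""

    def __init__(self):
        self.cam = {}        # sequence number -> saved data information
        self.next_seq = 0    # next packet expected in order

    def receive(self, seq: int, data: bytes) -> list:
        self.cam[seq] = data                 # save the packet's data info in the "CAM"
        delivered = []
        while self.next_seq in self.cam:     # release any run that is now in order
            delivered.append(self.cam.pop(self.next_seq))
            self.next_seq += 1
        return delivered

engine = ReorderEngine()
print(engine.receive(1, b"B"))   # [] -- packet 0 is still missing
print(engine.receive(0, b"A"))   # [b'A', b'B'] -- the gap is filled
```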
Hybrid wildcard match table
Embodiments of the present invention are directed to a hybrid wildcard matching solution that combines static random access memories (SRAMs) and ternary content addressable memories (TCAMs). In particular, the solution uses a plurality of SRAM pools for lookup and a spillover TCAM pool for unresolved hash conflicts.
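The division of labor might be sketched as follows; the pool count, slot count, and the rule that only fully specified entries are SRAM candidates are assumptions made for the example:

```python
POOLS = 4
SLOTS = 256
FULL_MASK = 0xFFFFFFFF

sram_pools = [[None] * SLOTS for _ in range(POOLS)]   # hashed exact-match pools
tcam_pool = []    # (value, mask, action), searched in insertion order

def insert(value: int, mask: int, action) -> None:
    if mask == FULL_MASK:                      # exact entries are SRAM candidates
        for p in range(POOLS):
            slot = hash((p, value)) % SLOTS
            if sram_pools[p][slot] is None:
                sram_pools[p][slot] = (value, action)
                return
    tcam_pool.append((value, mask, action))    # wildcards and unresolved conflicts spill over

def lookup(key: int):
    for p in range(POOLS):                     # probe one slot per SRAM pool
        entry = sram_pools[p][hash((p, key)) % SLOTS]
        if entry is not None and entry[0] == key:
            return entry[1]
    for value, mask, action in tcam_pool:      # fall back to the wildcard TCAM
        if key & mask == value & mask:
            return action
    return None

insert(0x0A000001, FULL_MASK, "to-port-1")     # exact rule lands in an SRAM pool
insert(0x0A000000, 0xFFFFFF00, "to-port-2")    # wildcard rule goes to the TCAM
print(lookup(0x0A000001), lookup(0x0A0000FF))  # to-port-1 to-port-2
```

The point of the split is cost: cheap SRAM absorbs the bulk of the entries, and the small, expensive TCAM handles only what hashing cannot resolve.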
Chained lookups and counting in a network switch
A network switch uses a search engine to generate chained table lookup requests. After the search engine executes a first lookup, next-pass logic in the search engine uses the first lookup result and information in the master key to generate a second lookup key, as well as the other parts of a second lookup request. A next-pass crossbar routes the second lookup request to a target memory, and the search logic executes the second lookup. The first lookup request may originate from a processing engine coupled to the search engine. The first and second lookup results, if any, can then be returned to the processing engine for further processing or decision making. The chain of lookups can be configured in software by specifying various operational parameters of the processing engines and the next-pass logic, including a key construction mode for the second lookup.
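A minimal sketch of the chaining idea, with hypothetical field names and a hypothetical "result_plus_vlan" key construction mode:

```python
def chained_lookup(master_key: dict, table1: dict, table2: dict, key_mode: str):
    """Pass 1, then build the pass-2 key from the result plus master-key fields."""
    result1 = table1.get(master_key["dst_mac"])
    if result1 is None:
        return None, None
    if key_mode == "result_plus_vlan":         # one hypothetical key construction mode
        key2 = (result1, master_key["vlan"])
    else:                                      # default mode: the result alone
        key2 = result1
    result2 = table2.get(key2)                 # second lookup in the target memory
    return result1, result2                    # both results return to the engine

mac_table = {"aa:bb": "group7"}
next_table = {("group7", 10): "egress-port-3"}
print(chained_lookup({"dst_mac": "aa:bb", "vlan": 10}, mac_table, next_table,
                     "result_plus_vlan"))      # ('group7', 'egress-port-3')
```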
Dynamic allocation of memory for packet processing instruction tables in a network device
A method for operating a network device having data storage with selectably modifiable capacity for storing instructional data for a packet processing operation includes detecting a need for additional storage for the instructional data, allocating an additional memory block without interrupting operation of the network device, and associating with the additional memory block an additional address hashing function, different from each previous address hashing function associated with any previously allocated memory block. Each previous address hashing function transforms a look-up key into an addressable location in a previously allocated memory block, and the additional address hashing function transforms the look-up key into an addressable location in the additional memory block. When a block is deallocated, each unit of instructional data is reprocessed through the hashing function of the block to which that unit will be moved.
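One way to picture the mechanism in Python; the per-block salt standing in for the "different hashing function" and the grow-on-overflow policy are assumptions of this sketch:

```python
import hashlib

class HashedBlocks:
    """Grow-on-demand instruction storage; each block has its own address hash."""

    def __init__(self, block_size: int = 1024):
        self.block_size = block_size
        self.blocks = []                  # list of (salt, slots)
        self._next_salt = 0
        self.allocate()

    def _addr(self, salt: bytes, key: bytes) -> int:
        digest = hashlib.blake2b(key, salt=salt).digest()
        return int.from_bytes(digest[:4], "big") % self.block_size

    def allocate(self) -> None:
        """Add a block whose hash differs from every previous one (distinct salt)."""
        salt = self._next_salt.to_bytes(8, "big")
        self._next_salt += 1
        self.blocks.append((salt, [None] * self.block_size))

    def insert(self, key: bytes, instruction) -> None:
        for salt, slots in self.blocks:
            i = self._addr(salt, key)
            if slots[i] is None or slots[i][0] == key:
                slots[i] = (key, instruction)
                return
        self.allocate()                   # storage exhausted: grow without stopping
        self.insert(key, instruction)

    def deallocate(self, index: int) -> None:
        """Rehash each surviving entry through the hash of the block it moves to."""
        _, slots = self.blocks.pop(index)
        for entry in slots:
            if entry is not None:
                self.insert(*entry)

store = HashedBlocks(block_size=4)
store.insert(b"flow-1", "drop")
store.insert(b"flow-2", "forward")
store.deallocate(0)    # entries are rehashed into the remaining (or new) blocks
```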
Systems for building data structures with highly scalable algorithms for a distributed LPM implementation
Described are programmable IO devices configured to perform operations. These operations comprise: determining a set of range-based elements for a network; sorting the set of range-based elements according to a global order among the range-based elements; generating an interval table from the sorted range-based elements; generating an interval binary search tree from the interval table; propagating data stored in subtrees of interior stages of the interval binary search tree to subtrees of the last stage, such that the interior stages do not hold data; converting the interval binary search tree to a Pensando Tree; compressing multiple levels of the Pensando Tree into cache lines; and assembling the cache lines in the memory unit such that each stage can compute the address of the next cache line to be fetched by the next stage.
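The interval-table step can be illustrated with a flat table and a plain binary search; this sketch stops well short of the staged Pensando Tree and cache-line packing the patent describes:

```python
import bisect

def covers(base: int, length: int, addr: int) -> bool:
    """True if the 32-bit prefix base/length contains addr."""
    return (addr >> (32 - length)) == (base >> (32 - length))

def build_interval_table(prefixes: dict):
    """prefixes: {(base, length): next_hop}. Returns sorted interval start
    addresses and, per interval, the next hop of the longest covering prefix."""
    bounds = {0}
    for base, length in prefixes:
        bounds.add(base)
        bounds.add(base + (1 << (32 - length)))
    starts = sorted(b for b in bounds if b < 1 << 32)
    hops = []
    for s in starts:     # the set of covering prefixes is constant per interval
        best = max((pl for pl in prefixes if covers(*pl, s)),
                   key=lambda pl: pl[1], default=None)
        hops.append(prefixes[best] if best else None)
    return starts, hops

def lpm(starts, hops, addr: int):
    """Longest-prefix match reduces to one binary search per lookup."""
    return hops[bisect.bisect_right(starts, addr) - 1]

starts, hops = build_interval_table({
    (0x0A000000, 8):  "hop-A",     # 10.0.0.0/8
    (0x0A0A0000, 16): "hop-B",     # 10.10.0.0/16
})
print(lpm(starts, hops, 0x0A0A0001))  # hop-B (the more specific prefix wins)
print(lpm(starts, hops, 0x0A000001))  # hop-A
```

Sorting the range endpoints into a global order is what lets overlapping prefixes collapse into disjoint intervals that a single search can resolve.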
Systems and methods for extending internal endpoints of a network device
An integrated circuit (IC) device includes a network device. The network device includes first and second network ports, each configured to connect to a network, and an internal endpoint port configured to connect to a first endpoint having a first processing unit and a second endpoint having a second processing unit. A lookup circuit is configured to provide a first forwarding decision for a first frame to be forwarded to the first endpoint. An endpoint extension circuit is configured to determine a first memory channel based on the first forwarding decision and forward the first frame to the first endpoint using the determined memory channel.
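A schematic sketch of the decision-to-channel step; the table contents and the print stand-in for the memory-channel write are illustrative only:

```python
# Hypothetical setup: each internal endpoint is reached over its own memory
# channel, and the forwarding decision names the endpoint.
CHANNEL_FOR_ENDPOINT = {"endpoint0": 0, "endpoint1": 1}
FORWARDING_TABLE = {"aa:bb:cc:dd:ee:01": "endpoint0",
                    "aa:bb:cc:dd:ee:02": "endpoint1"}

def write_to_memory_channel(channel: int, frame: bytes) -> None:
    """Stand-in for the hardware write onto the selected memory channel."""
    print(f"{len(frame)}-byte frame queued on memory channel {channel}")

def forward(dst_mac: str, frame: bytes) -> None:
    decision = FORWARDING_TABLE[dst_mac]          # the lookup circuit's decision
    channel = CHANNEL_FOR_ENDPOINT[decision]      # the endpoint extension circuit
    write_to_memory_channel(channel, frame)

forward("aa:bb:cc:dd:ee:01", b"\x00" * 64)   # -> memory channel 0
```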
Delivering content over a network
A method of delivering content in one or more packets over a network is described. A content request packet comprising a request for content based on a first IPv6 address is received, the first IPv6 address identifying the content. The first IPv6 address is mapped to a second IPv6 address, the second IPv6 address being associated with the content at a physical location. The content requested in the content request packet is then received from the physical location associated with the second IPv6 address for delivery to a user. A further method includes routing a packet for requesting the content from a client to a content server storing an instance of the content, based on an IPv6 address of the content being requested by the client. A communication session is then set up between the client and the content server, and the requested content is transmitted from the content server.
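The first-to-second address mapping might look like this in Python, using documentation-range addresses as placeholders:

```python
import ipaddress

# Hypothetical mapping; both addresses are from the 2001:db8::/32 documentation
# range. The first address names the content; the second names a server that
# holds a copy of it.
CONTENT_LOCATIONS = {
    ipaddress.IPv6Address("2001:db8:c0de::1"):
        ipaddress.IPv6Address("2001:db8:cafe::10"),
}

def resolve(content_addr: str) -> str:
    """Map a content-identifying IPv6 address to the content's physical location."""
    located = CONTENT_LOCATIONS.get(ipaddress.IPv6Address(content_addr))
    if located is None:
        raise KeyError(f"no server currently holds {content_addr}")
    return str(located)

print(resolve("2001:db8:c0de::1"))   # 2001:db8:cafe::10
```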
System and method for monitoring logical network traffic flows using a ternary content addressable memory in a high performance computing environment
System and method for monitoring logical network traffic flows using a ternary content addressable memory (TCAM). An exemplary embodiment can provide a network port that is associated with a TCAM. The TCAM can be configured with a plurality of entries, wherein each TCAM entry contains a value. Further, each TCAM entry can be associated with at least one network counter. A predefined set of values can be retrieved from at least one header field of a data packet processed by the network port. Each value in the predefined set of values can be aggregated into a search value, and the search value can be compared to the value contained in each TCAM entry. When a match is found between the search value and the value contained in a TCAM entry, each network counter associated with the matching TCAM entry can be incremented.
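A software model of the counting path; the choice of header fields, the 48-bit packing, and first-match-wins priority are assumptions of the sketch:

```python
# Each entry: a (value, mask) pair plus its own counter. Header-field values
# are concatenated into a single 48-bit search value (32-bit source IP,
# 16-bit destination port); the first matching entry's counter is incremented.
tcam = [
    {"value": 0x0A00_0001_0050, "mask": 0xFFFF_FFFF_FFFF, "count": 0},  # 10.0.0.1:80
    {"value": 0x0A00_0000_0000, "mask": 0xFFFF_FF00_0000, "count": 0},  # 10.0.0.0/24, any port
]

def count_flow(src_ip: int, dst_port: int) -> None:
    search_value = (src_ip << 16) | dst_port      # aggregate the header fields
    for entry in tcam:                            # entries in priority order
        if search_value & entry["mask"] == entry["value"] & entry["mask"]:
            entry["count"] += 1                   # bump the matching entry's counter
            break

count_flow(0x0A000001, 80)     # the exact entry matches
count_flow(0x0A000002, 443)    # falls through to the /24 entry
print([e["count"] for e in tcam])   # [1, 1]
```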
Multi-stage prefix matching enhancements
Approaches, techniques, and mechanisms are disclosed for maintaining efficient representations of prefix tables for utilization by network switches and other devices. In an embodiment, the performance of a network device is greatly enhanced using a working representation of a prefix table that includes multiple stages of prefix entries. Higher-stage prefixes are stored in slotted pools. Mapping logic, such as a hash function, determines the slots in which a given higher-stage prefix may be stored. When trying to find a longest-matching higher-stage prefix for an input key, only the slots that map to that input key need be read. Higher-stage prefixes may further point to arrays of lower-stage prefixes. Hence, once a longest-matching higher-stage prefix is found for an input key, the longest prefix match in the table may be found simply by comparing the input key to lower-stage prefixes in the array that the longest-matching higher-stage prefix points to.
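A minimal model of the two-stage structure; the 16-bit split, the slot count, and the restriction to prefixes of at least 16 bits are simplifications for the example:

```python
NUM_SLOTS = 64
HI_BITS = 16     # assumed split between the higher and lower stages

# slot -> [(higher-stage prefix, array of lower-stage prefixes), ...]
pools = [[] for _ in range(NUM_SLOTS)]

def _slot(hi: int) -> int:
    return hash(hi) % NUM_SLOTS          # the mapping logic (a hash function)

def insert(prefix: int, length: int, value) -> None:
    assert length >= HI_BITS             # sketch ignores shorter prefixes
    hi = prefix >> (32 - HI_BITS)
    bucket = pools[_slot(hi)]
    for hi_prefix, lowers in bucket:
        if hi_prefix == hi:
            lowers.append((prefix, length, value))
            return
    bucket.append((hi, [(prefix, length, value)]))

def lookup(key: int):
    hi = key >> (32 - HI_BITS)
    for hi_prefix, lowers in pools[_slot(hi)]:   # read only slots this key maps to
        if hi_prefix == hi:
            best = None
            for prefix, length, value in lowers: # scan the pointed-to array
                if key >> (32 - length) == prefix >> (32 - length):
                    if best is None or length > best[0]:
                        best = (length, value)
            return best[1] if best else None
    return None

insert(0x0A0A0000, 16, "hop-X")
insert(0x0A0A8000, 17, "hop-Y")
print(lookup(0x0A0A8001))   # hop-Y (longest match within the lower-stage array)
```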