H04L12/745

Apparatus and method of managing content name in information-centric networking

The present invention relates to a method and a device for fast forwarding of flow traffic in information-centric networking (ICN). According to the present invention, a flow switching method for a network node of an information-centric network (ICN) includes: identifying a data name contained in a received interest packet and identifying a flow name from the data name, thereby determining whether a flow entry corresponding to the identified flow name is present within a flow table (FT); identifying forwarding information base (FIB) entry information matched to the flow name from the corresponding flow entry and identifying an FIB entry corresponding to the FIB entry information; and transmitting the interest packet on the basis of interface information included in the identified FIB entry.
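
A minimal sketch of the lookup path described above, assuming an NDN-style hierarchical data name. The two-component flow-name rule and all dictionary field names are illustrative assumptions, not taken from the patent.

```python
# Illustrative flow switch: extract a flow name from the Interest's data name,
# look it up in the flow table (FT), then follow the referenced FIB entry to
# find the outgoing interface. All names here are invented for the sketch.

def flow_name_from_data_name(data_name: str, flow_components: int = 2) -> str:
    """Derive a flow name as a prefix of the hierarchical data name."""
    return "/".join(data_name.strip("/").split("/")[:flow_components])

def forward_interest(data_name, flow_table, fib):
    flow_name = flow_name_from_data_name(data_name)
    entry = flow_table.get(flow_name)        # flow entry lookup in the FT
    if entry is None:
        return None                          # would fall back to a normal FIB longest-prefix lookup
    fib_entry = fib[entry["fib_index"]]      # FIB entry referenced by the flow entry
    return fib_entry["interface"]            # interface information used to send the Interest

flow_table = {"video/movie1": {"fib_index": 0}}
fib = [{"prefix": "video", "interface": "eth1"}]
print(forward_interest("/video/movie1/seg3", flow_table, fib))  # eth1
```

The point of the flow entry is that subsequent Interests in the same flow skip the longest-prefix FIB search entirely; the FT hit resolves directly to a cached FIB entry.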

Method and apparatus for longest prefix match search

A network device includes a memory configured to store a plurality of entries in respective locations in the memory, the plurality of entries corresponding to a trie data structure for performing a longest prefix match search. The network device also includes: a memory access engine configured to retrieve from a location in the memory, in a single memory lookup operation, i) longest prefix match information for a node corresponding to a network address in a header of a packet, and ii) pointer information that indicates a child node in the trie data structure. The network device also includes: a child node address calculator configured to use i) the longest prefix match information, and ii) the pointer information, to calculate a memory address of another location in the memory corresponding to the child node.
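
One common way to realize "LPM info plus pointer info in a single memory read" is a Poptrie-style node: each record packs the node's match information, a base pointer, and a child bitmap, and the child address is the base plus a popcount. This layout is an assumption for illustration; the patent's exact record format is not specified in the abstract.

```python
# Illustrative compressed-trie node (not the patent's exact memory layout):
# one record holds i) LPM info and ii) pointer info (base pointer + child
# bitmap). The child node's address is computed as
#   base_ptr + popcount(child bits below the stride index).

def child_address(base_ptr: int, child_bitmap: int, stride_index: int) -> int:
    """Compute a child node's memory address from the parent's pointer info."""
    below = child_bitmap & ((1 << stride_index) - 1)
    return base_ptr + bin(below).count("1")

# memory: address -> (lpm_info, base_ptr, child_bitmap)
memory = {
    0:  ("default",         10, 0b1010),  # children at stride indices 1 and 3
    10: ("10.0.0.0/8",       0, 0),
    11: ("192.168.0.0/16",   0, 0),
}

lpm, base, bitmap = memory[0]          # single lookup fetches both i) and ii)
addr = child_address(base, bitmap, 3)  # descend via stride index 3
print(memory[addr][0])  # 192.168.0.0/16
```

Because both pieces of information arrive in one read, the walk down the trie costs one memory access per level instead of two.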

SELECTIVE ROUTE DOWNLOAD TRAFFIC SAMPLING
20210344597 · 2021-11-04

A network device includes a forwarding information base (FIB). The FIB includes a first number of entries and a default entry. The network device includes a routing information base that includes a second number of entries. The network device includes a FIB entry optimizer that ranks a first portion of the second number of entries based on access information of the first number of entries; ranks a second portion of the second number of entries based on access information of the default entry; and updates at least one entry of the FIB based on the ranks of the first portion of the second number of entries and the ranks of the second portion of the second number of entries. The first number of entries is less than the second number of entries.
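
A hedged sketch of the ranking idea: installed routes are ranked by their own FIB hit counters, routes currently covered by the default entry are ranked by sampled traffic that hit the default, and the FIB keeps the top-ranked routes up to its capacity. The dictionaries and the max-merge tie-break are illustrative assumptions.

```python
# Illustrative FIB entry optimizer: the RIB holds more routes than the FIB
# can store, so we rank i) installed routes by FIB-entry access counts and
# ii) default-covered routes by default-entry access samples, then keep the
# hottest routes. Counter shapes are invented for the sketch.

def update_fib(fib_hits, default_hits, fib_capacity):
    """fib_hits / default_hits map prefix -> access count. Returns the set
    of prefixes that should occupy the capacity-limited FIB."""
    ranked = dict(fib_hits)
    for prefix, count in default_hits.items():
        ranked[prefix] = max(ranked.get(prefix, 0), count)
    top = sorted(ranked, key=ranked.get, reverse=True)[:fib_capacity]
    return set(top)

fib_hits = {"10.0.0.0/8": 100, "172.16.0.0/12": 2}
default_hits = {"203.0.113.0/24": 50}
print(update_fib(fib_hits, default_hits, 2))
```

A cold installed route (172.16.0.0/12 above) is evicted in favor of a hot route that was previously reaching the device only via the default entry.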

Prefix-based fat flows

A network device includes one or more processors configured to use a fat flow rule that specifies at least one of a mask to be applied to source Internet protocol (IP) addresses or to destination IP addresses, or that source ports or destination ports are to be ignored. The one or more processors may further be configured to receive packets having different source or destination IP addresses and/or different source or destination ports, and nevertheless assign the packets to the same fat flow according to the fat flow rule, e.g., by masking the source or destination IP addresses and/or ignoring the source or destination ports of the packets. In this manner, the network device may aggregate two or more different flows into a single fat flow.
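
A minimal sketch of a fat-flow key under the rule described above, assuming IPv4: a prefix mask is applied to the source address and the ports are optionally zeroed, so packets from distinct 5-tuples collapse into one aggregated flow key. The /24 mask and tuple layout are illustrative.

```python
# Illustrative fat-flow rule: mask the source IP and ignore ports, so
# different flows map to the same aggregated (fat) flow key.
import ipaddress

def fat_flow_key(src, dst, sport, dport, src_mask=24, ignore_ports=True):
    src_net = ipaddress.ip_network(f"{src}/{src_mask}", strict=False)
    key_sport = 0 if ignore_ports else sport
    key_dport = 0 if ignore_ports else dport
    return (str(src_net.network_address), dst, key_sport, key_dport)

k1 = fat_flow_key("10.1.2.3",  "192.0.2.1", 1111, 80)
k2 = fat_flow_key("10.1.2.99", "192.0.2.1", 2222, 80)
print(k1 == k2)  # True: two distinct flows share one fat flow
```

The device then keeps one flow-table entry for the fat flow instead of one per 5-tuple, which is the aggregation the abstract describes.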

Multicast traffic in virtual private networks

In one embodiment, a method is provided. The method includes determining that a network device should use an underlay multicast group associated with an overlay multicast group for multicast traffic. The underlay multicast group carries multicast traffic for the overlay multicast group. The overlay multicast group is associated with a virtual private network. The method also includes determining an underlay multicast group address for the underlay multicast group. The overlay multicast group is associated with an overlay multicast group address. A first portion of the underlay multicast group address is a function of the overlay multicast group address. The method further includes forwarding one or more multicast packets to one or more multicast receivers via the underlay multicast group using the underlay multicast group address.
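
A hedged sketch of the address derivation: part of the underlay multicast group address is computed as a function of the overlay group address. The 232.0.0.0/8 base and the "copy the low 24 bits" mapping are illustrative assumptions, not the patent's mandated scheme.

```python
# Illustrative underlay-address derivation: fix the first portion (high byte)
# from an underlay base address and take the remainder as a function of the
# overlay multicast group address. Base and bit split are assumptions.
import ipaddress

def underlay_group(overlay_addr: str, base: str = "232.0.0.0") -> str:
    overlay = int(ipaddress.IPv4Address(overlay_addr))
    base_int = int(ipaddress.IPv4Address(base))
    return str(ipaddress.IPv4Address((base_int & 0xFF000000) | (overlay & 0x00FFFFFF)))

print(underlay_group("239.1.2.3"))  # 232.1.2.3
```

Deriving the underlay address deterministically from the overlay address lets every PE compute the same underlay group without extra signaling.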

Populating capacity-limited forwarding tables in routers to maintain loop-free routing
20210336877 · 2021-10-28

A router includes a plurality of ports interconnected to one or more Customer Edge (CE) nodes and one or more Provider Edge (PE) nodes; and memory storing a forwarding table of routes, wherein the routes in the forwarding table are installed automatically based on static or Interior Gateway Protocol (IGP)-learned default routes, connected routes, Border Gateway Protocol (BGP) routes learned from peers, and routes in an Internet routing table, and wherein a number of the routes installed in the forwarding table is less than a number of routes in the Internet routing table. The number of routes in the Internet routing table exceeds a capacity of the memory, and the routes installed in the forwarding table ensure a loop-free topology. The routes installed in the forwarding table can include all of the BGP routes learned from peers plus longer prefix matches from the routes in the Internet routing table.
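
A sketch of the installation rule in the last sentence: install the BGP routes learned from peers unconditionally, then fill remaining capacity with Internet-table routes that are longer-prefix matches of already-installed routes, so any uncovered destination still follows the default and forwarding stays loop-free. Ordering candidates by prefix length is an assumption.

```python
# Illustrative loop-free FIB population: peer-learned routes always go in;
# leftover capacity is spent on Internet-table routes that are more-specific
# (longer) matches of something already installed.
import ipaddress

def build_fib(peer_routes, internet_routes, capacity):
    fib = list(peer_routes)                       # all BGP routes learned from peers
    covered = [ipaddress.ip_network(p) for p in fib]
    extras = sorted(
        (p for p in internet_routes if p not in fib),
        key=lambda p: ipaddress.ip_network(p).prefixlen, reverse=True)
    for p in extras:
        if len(fib) >= capacity:
            break
        net = ipaddress.ip_network(p)
        if any(net.subnet_of(c) for c in covered):  # longer match of an installed route
            fib.append(p)
    return fib

fib = build_fib(["10.0.0.0/8"], ["10.1.0.0/16", "172.16.0.0/12"], 2)
print(fib)  # ['10.0.0.0/8', '10.1.0.0/16']
```

172.16.0.0/12 is skipped in the example: it is not a more-specific of any installed route, so installing it is unnecessary for loop freedom and it stays behind the default.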

SYSTEMS FOR PROVIDING AN LPM IMPLEMENTATION FOR A PROGRAMMABLE DATA PLANE THROUGH A DISTRIBUTED ALGORITHM

Described are programmable IO devices comprising an MPU and a memory unit. The MPU comprises at least one ALU. The memory unit has instructions stored thereon which, when executed by the respective programmable IO device, cause the programmable IO device to perform operations. These operations comprise: receiving, from an inbound interface, a packet comprising packet data for at least one range-based element; determining, via the MPU, a lookup result by performing a modified binary search on an interval binary search tree with the packet data to determine an LPM, wherein the interval binary search tree maps the at least one range-based element to an associated data element; and classifying the packet based on the lookup result.
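
An illustrative reduction of LPM to interval search (not the device's exact distributed algorithm): each prefix becomes an address interval, the intervals are projected onto disjoint elementary intervals each annotated with its longest covering prefix, and a lookup is then a single binary search over interval start points.

```python
# Illustrative interval-search LPM: nested prefixes are flattened into
# disjoint intervals so that a binary search (bisect) over start points
# yields the longest prefix match directly.
import bisect
import ipaddress

def build_interval_table(prefixes):
    """Return (starts, matches): disjoint interval start points and, for
    each, the longest prefix covering that interval (or None)."""
    nets = sorted((ipaddress.ip_network(p) for p in prefixes),
                  key=lambda n: n.prefixlen)                # shortest first
    starts = sorted({int(n.network_address) for n in nets} |
                    {int(n.broadcast_address) + 1 for n in nets})
    matches = []
    for s in starts:
        match = None
        for n in nets:          # longer prefixes overwrite shorter ones
            if int(n.network_address) <= s <= int(n.broadcast_address):
                match = str(n)
        matches.append(match)
    return starts, matches

def lpm(starts, matches, addr):
    key = int(ipaddress.ip_address(addr))
    i = bisect.bisect_right(starts, key) - 1
    return matches[i] if i >= 0 else None

starts, matches = build_interval_table(["10.0.0.0/8", "10.1.0.0/16"])
print(lpm(starts, matches, "10.1.2.3"))  # 10.1.0.0/16
print(lpm(starts, matches, "10.5.0.1"))  # 10.0.0.0/8
```

The same binary search also classifies arbitrary range-based elements (e.g. port ranges), which is why the abstract speaks of range-based elements rather than prefixes alone.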

Delivering content over a network

A method of delivering content in one or more packets over a network is described. A content request packet comprising a request for content based on a first IPv6 address is received, the first IPv6 address identifying the content. The first IPv6 address is mapped to a second IPv6 address, the second IPv6 address being associated with the content at a physical location. The content requested in the content request packet is then received from the physical location associated with the second IPv6 address for delivery to a user. A further method includes routing a packet for requesting the content from a client to a content server storing an instance of the content, based on an IPv6 address of content being requested by the client. A communication session is then set up between the client and the content server; and the requested content is transmitted from the content server.
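
A minimal sketch of the two-step mapping, with invented documentation-prefix addresses: the first IPv6 address names the content itself, and a mapping table resolves it to a second IPv6 address that names an instance of that content at a physical location.

```python
# Illustrative content-to-location mapping: both keys and values are IPv6
# addresses; the table and the 2001:db8::/32 addresses are invented.
content_to_location = {
    "2001:db8:c0de::1": "2001:db8:cafe:1::1",  # content X served from server 1
}

def resolve_content(content_addr: str) -> str:
    """Map a content-identifying address to the location-bound address."""
    return content_to_location[content_addr]

print(resolve_content("2001:db8:c0de::1"))  # 2001:db8:cafe:1::1
```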

Hierarchical geographic naming associated with a recursively subdivided geographic grid referencing

Embodiments described herein provide a system for facilitating hierarchical geographic naming. During operation, the system receives a service request comprising location information associated with a requesting device and determines a hierarchical name corresponding to the location information. The hierarchical name can include a plurality of name segments. A respective name segment of the plurality of name segments can correspond to a recursively subdivided grid of geographic grid referencing. The system then performs a recursive search using the hierarchical name for a service requested by the service request.
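
A hedged sketch of deriving such a name, using a quadtree subdivision of latitude/longitude as the recursively subdivided grid (the abstract does not fix a particular grid-referencing scheme, so the quadrant labels and depth are assumptions):

```python
# Illustrative hierarchical name: each segment identifies the quadrant of a
# recursively halved lat/lon grid containing the device's location.

def hierarchical_name(lat, lon, levels=4):
    segments = []
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    for _ in range(levels):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        quad = ("n" if lat >= lat_mid else "s") + ("e" if lon >= lon_mid else "w")
        segments.append(quad)
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return "/" + "/".join(segments)

print(hierarchical_name(48.85, 2.35))  # /ne/nw/sw/sw
```

Because each name segment corresponds to one subdivision level, a recursive service search can proceed prefix by prefix, widening the geographic scope one level at a time when no nearby match exists.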

Multi-stage prefix matching enhancements
11140078 · 2021-10-05

Approaches, techniques, and mechanisms are disclosed for maintaining efficient representations of prefix tables for utilization by network switches and other devices. In an embodiment, the performance of a network device is greatly enhanced using a working representation of a prefix table that includes multiple stages of prefix entries. Higher-stage prefixes are stored in slotted pools. Mapping logic, such as a hash function, determines the slots in which a given higher-stage prefix may be stored. When trying to find a longest-matching higher-stage prefix for an input key, only the slots that map to that input key need be read. Higher-stage prefixes may further point to arrays of lower-stage prefixes. Hence, once a longest-matching higher-stage prefix is found for an input key, the longest prefix match in the table may be found simply by comparing the input key to lower-stage prefixes in the array that the longest-matching higher-stage prefix points to.
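
A sketch of the multi-stage structure, with the mapping logic realized as a plain hash (the slot count, stage boundary at /8, and record shapes are all assumptions): a lookup hashes the key's higher-stage prefix to a slot pool, finds the longest-matching higher-stage prefix there, then scans the lower-stage array that prefix points to.

```python
# Illustrative two-stage prefix table: higher-stage prefixes sit in
# hash-selected slots; each points to an array of lower-stage prefixes.
import ipaddress

def higher_stage_slot(key_prefix: str, num_slots: int) -> int:
    return hash(key_prefix) % num_slots      # mapping logic (here, a hash)

def lookup(addr, pools, num_slots, stage_len=8):
    key = ipaddress.ip_address(addr)
    stage_key = str(ipaddress.ip_network(f"{addr}/{stage_len}", strict=False))
    slot = pools.get(higher_stage_slot(stage_key, num_slots), {})
    entry = slot.get(stage_key)              # longest-matching higher-stage prefix
    if entry is None:
        return None
    best = entry["prefix"]
    for p in entry["lower"]:                 # lower-stage array the entry points to
        net = ipaddress.ip_network(p)
        if key in net and net.prefixlen > ipaddress.ip_network(best).prefixlen:
            best = p
    return best

NUM_SLOTS = 4
stage_key = "10.0.0.0/8"
pools = {higher_stage_slot(stage_key, NUM_SLOTS): {
    stage_key: {"prefix": "10.0.0.0/8",
                "lower": ["10.1.0.0/16", "10.1.2.0/24"]}}}
print(lookup("10.1.2.3", pools, NUM_SLOTS))  # 10.1.2.0/24
```

As the abstract notes, only the slots an input key hashes to are ever read, and the lower-stage comparison is confined to one small array, which is what keeps the working representation efficient.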