Patent classifications
H04L45/742
ADAPTING FORWARDING DATABASE LEARNING RATE BASED ON FILL LEVEL OF FORWARDING TABLE
A packet processor of a network device repeatedly determines a fill level of a forwarding table that is populated with associations between network addresses and network interfaces of, or coupled to, the network device. The packet processor adjusts, based on the fill level of the forwarding table, a maximum rate according to which the packet processor is permitted to send messages to a central processing unit (CPU) coupled to the packet processor, the messages indicating network addresses that are to be stored in the forwarding table by the CPU. The packet processor of the network device receives packets via network links coupled to the network device; identifies new network addresses of the packets that are not in the forwarding table; and sends messages to the CPU at a rate that does not exceed the maximum rate, the messages indicating the new network addresses are to be added to the forwarding table.
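The adaptive learning-rate mechanism above can be sketched as follows. This is a minimal illustration with invented names and a linear scaling rule, not the patent's actual implementation; the abstract specifies only that the maximum CPU-message rate is adjusted based on the fill level.

```python
# Hypothetical sketch: scale the maximum rate of learning messages sent to
# the CPU down as the forwarding table fills. Class and parameter names
# are illustrative assumptions.

class LearningRateLimiter:
    def __init__(self, capacity, base_rate):
        self.capacity = capacity    # total forwarding-table slots
        self.base_rate = base_rate  # max messages/sec when the table is empty
        self.table = {}             # network address -> network interface

    def max_rate(self):
        """Reduce the permitted CPU-message rate as the fill level rises."""
        fill = len(self.table) / self.capacity
        return self.base_rate * (1.0 - fill)

    def should_notify_cpu(self, sent_this_second):
        """Allow a message for a new address only under the current cap."""
        return sent_this_second < self.max_rate()
```

With a half-full table, this sketch halves the permitted message rate, so a flood of unknown addresses cannot overwhelm the CPU when the table is already near capacity.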
SYSTEM AND METHOD FOR DIRECT STORAGE ACCESS IN A CONTENT-CENTRIC NETWORK
One embodiment of the present invention provides a system for caching content data to a storage device attached to a node in a content-centric network (CCN). During operation, the system receives a content packet; forwards the content packet to an incoming port of an interest in the content packet; caches a copy of the content packet in the attached storage device; assembles a query corresponding to the content packet, which includes at least a network header and an address associated with the storage blocks at which the cached copy is located; and stores the query in a cache table, thereby facilitating subsequent direct access to the storage blocks using the assembled query to retrieve the copy of the content packet.
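The cache-table idea can be illustrated with a short sketch. All names here are assumptions for illustration; the patent's query would carry an actual network header and physical storage-block addresses.

```python
# Illustrative sketch: alongside each content packet cached to attached
# storage, keep a pre-assembled query (network header + storage-block
# address) so later interests can access the blocks directly.

class CacheTable:
    def __init__(self):
        self.entries = {}  # content name -> (network_header, block_address)

    def cache(self, name, header, block_address):
        # The packet body is written to the attached storage device (not
        # shown); record where it landed plus a ready-made network header.
        self.entries[name] = (header, block_address)

    def lookup(self, name):
        # Direct access: return the assembled query for the storage blocks,
        # or None if the content was never cached.
        return self.entries.get(name)
```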
INFORMATION PROCESSING DEVICE, CONTROL METHOD, AND STORAGE MEDIUM
An information processing device includes a field-programmable gate array configured to store route information for flow control and to forward packets according to the route information; one or more memories configured to store a flow cache that includes at least a part of the route information; and one or more processors coupled to the one or more memories, the one or more processors being configured to divide the route information into a plurality of division areas, and to acquire hit information extracted from each of the entries in a first division area of the plurality of division areas in order to delete some of the entries of the flow cache stored in the one or more memories, the first division area including a number of flows greater than a threshold value.
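A rough sketch of the eviction step, with invented names and a least-hit victim policy that the abstract does not specify: route entries are split into division areas, and flow-cache entries are deleted from an area whose flow count exceeds the threshold, guided by per-entry hit information.

```python
# Hedged sketch: pick eviction victims from the first division area whose
# flow count exceeds the threshold, preferring entries with the fewest
# hits. The least-hit policy is an assumption for illustration.

def select_victims(division_areas, hit_info, threshold):
    """division_areas: list of lists of flow ids.
    hit_info: flow id -> hit count.
    Returns the flow-cache entries to delete."""
    for area in division_areas:
        if len(area) > threshold:
            # Evict enough of the least-hit entries to get back under
            # the threshold.
            by_hits = sorted(area, key=lambda f: hit_info.get(f, 0))
            return by_hits[: len(area) - threshold]
    return []
```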
ROUTE INFORMATION STORAGE METHOD AND APPARATUS
This application discloses a route information storage method. The method is applied to a wireless mesh network, the wireless mesh network includes a first node, a second node, and at least two stations STAs, the first node is an upper-level node of the second node, the at least two STAs include a first STA and a second STA, and the first STA and the second STA are connected to the second node. In the method, the first node receives a routing request for access requested by the first STA, and if it is determined that a first route entry corresponding to the second STA already exists, the first node no longer generates a new route entry for the first STA, but reuses the first route entry.
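The reuse rule can be sketched briefly. Class and field names are invented for illustration; the point is only that two STAs reached through the same second node share one route entry.

```python
# Minimal sketch of route-entry reuse at the first (upper-level) node:
# when a routing request arrives for a STA reached via a second node that
# already has an entry, reuse that entry instead of generating a new one.

class RouteTable:
    def __init__(self):
        self.entries = {}   # next-hop node -> shared route entry
        self.sta_map = {}   # STA -> next-hop node

    def handle_routing_request(self, sta, via_node):
        if via_node not in self.entries:
            # First STA behind this node: generate the route entry.
            self.entries[via_node] = {"next_hop": via_node}
        # Later STAs behind the same node reuse the existing entry.
        self.sta_map[sta] = via_node
        return self.entries[via_node]
```

Reuse keeps the routing table's size proportional to the number of downstream nodes rather than the number of attached stations.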
MANAGING TUNNEL INTERFACE SELECTION BETWEEN GATEWAYS IN A COMPUTING ENVIRONMENT
Described herein are systems, methods, and software to manage the selection of an edge gateway or edge for processing a packet. In one implementation, a first edge may receive a packet and hash addressing information in the packet to select a second edge to process the packet. The first edge may further forward the packet to the second edge, permitting the second edge to process the packet. Once processed, the second edge may forward the packet to a destination host computing system and notify the host computing system to use the second edge for response packets directed at a source internet protocol (IP) address in the packet.
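The hash-based edge selection can be sketched as follows. The concrete hash function and key layout are assumptions; the abstract says only that addressing information in the packet is hashed to select the second edge.

```python
# Illustrative sketch: hash a packet's addressing information to pick the
# edge gateway that will process it, so the choice is deterministic and
# response traffic can be steered to the same edge.

import hashlib

def select_edge(src_ip, dst_ip, edges):
    """Deterministically map a flow's addressing info to one edge."""
    key = f"{src_ip}-{dst_ip}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return edges[digest % len(edges)]
```

Because the mapping depends only on the addressing information, every edge that receives a packet of the same flow computes the same second edge.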
DATA FLOW TABLE, METHOD AND DEVICE FOR PROCESSING DATA FLOW TABLE, AND STORAGE MEDIUM
Disclosed are a data flow table for high-speed, large-scale concurrent data flows, a method and apparatus for processing the data flow table, and a storage medium. The method includes: acquiring, according to a data flow identifier of a data flow to be inserted, the addresses of candidate buckets in a fingerprint table and a data flow fingerprint; detecting whether there is an idle unit in the candidate buckets in the fingerprint table and, if there is an idle unit, selecting the candidate bucket having the idle unit as a target bucket; and sequentially searching for the idle unit from the bottom unit to the top unit of the target bucket and, when the idle unit is found in the target bucket, writing the data flow fingerprint of the data flow into the found idle unit and writing a data flow record of the data flow into a recording bucket corresponding to the candidate bucket.
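The insertion procedure can be sketched as below. The bucket width and list representation are assumptions for illustration; the essential behavior from the abstract is choosing a candidate bucket with an idle unit and scanning it bottom-to-top.

```python
# Hedged sketch: each candidate bucket holds a fixed number of fingerprint
# units (None = idle). Insertion picks the first candidate bucket with an
# idle unit and scans it from the bottom unit upward.

BUCKET_SIZE = 4  # assumed bucket width for illustration

def insert_fingerprint(buckets, candidate_idxs, fingerprint):
    """buckets: list of BUCKET_SIZE-long lists; None marks an idle unit.
    Returns (bucket_idx, unit_idx) on success, or None if all are full.
    The matching data flow record would go to the corresponding
    recording bucket (not modeled here)."""
    for b in candidate_idxs:
        bucket = buckets[b]
        # Search sequentially from the bottom unit to the top unit.
        for u in range(BUCKET_SIZE - 1, -1, -1):
            if bucket[u] is None:
                bucket[u] = fingerprint
                return (b, u)
    return None
```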
Datapath for multiple tenants
A novel design of a gateway that handles traffic in and out of a network by using a datapath pipeline is provided. The datapath pipeline includes multiple stages for performing various data-plane packet-processing operations at the edge of the network. The processing stages include centralized routing stages and distributed routing stages, and can include service-providing stages such as NAT and firewall. The gateway caches the result of previous packet-processing operations and reapplies the result to subsequent packets that meet certain criteria. For packets that do not have an applicable or valid result from previous packet-processing operations, the gateway datapath daemon executes the pipelined packet-processing stages, records a set of data from each stage of the pipeline, and synthesizes those data into a cache entry for subsequent packets.
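The slow-path/fast-path split can be sketched minimally. Flow keys, stage signatures, and the synthesis step are simplified assumptions; the patent describes recording data from every stage and combining them into one cache entry.

```python
# Illustrative sketch: run the full pipeline once for an unknown flow,
# record each stage's result, synthesize a cache entry, and reapply it
# to later packets of the same flow without rerunning the stages.

def process_packet(packet, stages, cache):
    key = packet["flow"]
    if key in cache:
        return cache[key]  # fast path: reapply the cached result
    # Slow path: execute every pipeline stage and record its output.
    results = [stage(packet) for stage in stages]
    # Synthesize the per-stage data into one cache entry (here, simply
    # the final action; the real entry would combine all stage outputs).
    cache[key] = results[-1]
    return cache[key]
```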
Communication of policy changes in LISP-based software defined networks
Systems, methods, and computer-readable media for communicating policy changes in a Locator/ID Separation Protocol (LISP) based network deployment include receiving, at a first routing device, a first notification from a map server, the first notification indicating a change in a policy for LISP based communication between at least a first endpoint device and at least a second endpoint device, the first endpoint device being connected to a network fabric through the first routing device and the second endpoint device being connected to the network fabric through a second routing device. The first routing device forwards a second notification to the second routing device if one or more entries of a first map cache implemented by the first routing device are affected by the policy change, the second notification indicating a set of one or more endpoints connected to the second routing device that are affected by the policy change.
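The forwarding condition can be expressed compactly. EID naming and the callback shape are assumptions; the key behavior from the abstract is that the second notification is sent only when the first device's map cache is actually affected.

```python
# Hedged sketch: on a policy-change notification from the map server,
# relay a second notification to the other routing device only if this
# device's map cache contains entries affected by the change.

def handle_policy_change(map_cache, changed_eids, notify_peer):
    """map_cache: iterable of endpoint IDs with cached entries here.
    changed_eids: endpoint IDs named in the policy change.
    Calls notify_peer(affected) only when the overlap is non-empty."""
    affected = sorted(set(map_cache) & set(changed_eids))
    if affected:
        notify_peer(affected)  # second notification to the peer device
    return affected
```

Suppressing the notification when no cached entry is affected keeps policy churn from fanning out to routing devices that have nothing to update.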
NETWORK SWITCH WITH AUTOMATED PORT PROVISIONING
In various embodiments, systems and methods for managing a network switch, such as for a VLAN, are disclosed. In one example, a method includes: responsive to a restart of a port of a network switch, obtaining, by the network switch, a current policy applied to the port; determining, based on a parameter associated with the current policy, to apply a default policy to the port; determining a new policy for the port by obtaining an identifier for a device associated with the port, obtaining a key based on the identifier, the key being associated with a plurality of devices of the same type as the device, and determining the new policy for the port using an association between the key and the new policy stored locally at the network switch; and applying the new policy to the port.
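The identifier-to-key-to-policy lookup chain can be sketched as follows. All mapping names are invented for illustration; the abstract specifies only that the key groups devices of the same type and that the key-to-policy association is stored locally at the switch.

```python
# Illustrative sketch: after a port restart, derive a device-type key
# from the attached device's identifier and map the key to a policy
# stored locally at the switch, falling back to the default policy.

def provision_port(device_id, key_for_device, local_policies, default_policy):
    """key_for_device: device identifier -> device-type key.
    local_policies: key -> policy stored locally at the switch."""
    key = key_for_device.get(device_id)
    return local_policies.get(key, default_policy)
```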
SOURCE ROUTING APPARATUS AND METHOD IN ICN
Disclosed herein are a source routing apparatus and method in an ICN. The method includes: receiving an interest packet; extracting a current entry value when the received interest packet includes a forwarding hint; using the extracted current entry value as an index into a path list; extracting a name from the interest packet; reducing the current entry value when the interest packet is transmitted to a network area of the path list; performing a FIB lookup with the extracted name; determining an output port using the FIB lookup; and transmitting the interest packet through the output port.