Patent classifications
H04L45/742
METHODS FOR UPDATING ROUTE, ACCESS DEVICE, AND CONVERGENCE DEVICE
Provided are methods and apparatuses for updating a route. In the present disclosure, when an access device receives a first host route and a first sequence number sent by a BGP peer of the access device, regardless of the value of the first sequence number, the first host route is used as the route by which the access device forwards data to a target host, and a recorded first ARP entry corresponding to the target host is deleted. This triggers the BGP peer of the access device, for example a convergence device, to withdraw host routes associated with the first ARP entry based on the existing route withdrawal mechanism.
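The behaviour described above can be pictured with a short, non-authoritative sketch. The tables and the helper withdraw_routes_for_arp_entry are hypothetical stand-ins for the access device's route table, ARP table, and the existing BGP withdrawal mechanism; they are not taken from the patent.

```python
# Hypothetical tables kept by the access device (names are illustrative).
routes = {}      # host IP -> (next hop, sequence number) used for forwarding
arp_table = {}   # host IP -> MAC address recorded from ARP

def on_host_route_received(host_ip, next_hop, seq_no):
    """Handle a host route and sequence number advertised by a BGP peer.

    Regardless of the sequence number's value, the received route becomes
    the route used to forward data to the target host, and the recorded
    ARP entry for that host is deleted, which triggers withdrawal of the
    host routes derived from it."""
    routes[host_ip] = (next_hop, seq_no)
    if host_ip in arp_table:
        del arp_table[host_ip]
        withdraw_routes_for_arp_entry(host_ip)

def withdraw_routes_for_arp_entry(host_ip):
    # Stand-in for the existing route-withdrawal mechanism on the BGP peer.
    print(f"withdraw host routes associated with the ARP entry for {host_ip}")
```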
Prime re-shuffled assisted CARP
Systems and methods for improving the distribution of traffic from clients to servers are provided. A device intermediary to a plurality of clients and a plurality of servers can receive a request from a client of the plurality of clients to access one of the plurality of servers. The device can determine a hash value based on at least a portion of the request received from the client. The device can identify an index of a plurality of indices listing the plurality of servers repeated a plurality of times in a deterministic shuffled order. The device can apply a cache array routing protocol (CARP) algorithm to a second plurality of servers listed in a subset of the plurality of indices around the index. The device can select, from the second plurality of servers, the server with the highest hash value based on the application of the CARP algorithm.
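As a rough illustration of the shuffled-index plus CARP selection described above, here is a minimal Python sketch. The repeat count, shuffle seed, window size, and MD5-based hash are assumptions made for the example, not details from the patent.

```python
import hashlib
import random

def build_shuffled_index(servers, repeats=3, seed=42):
    # List the servers several times, then shuffle deterministically.
    index = list(servers) * repeats
    random.Random(seed).shuffle(index)
    return index

def _hash(text):
    return int(hashlib.md5(text.encode()).hexdigest(), 16)

def select_server(request_key, servers, window=4):
    index = build_shuffled_index(servers)
    # Map the request's hash value to a position in the shuffled index.
    start = _hash(request_key) % len(index)
    # Candidate servers come from a window of indices around that position.
    candidates = {index[(start + i) % len(index)] for i in range(window)}
    # CARP-style selection: the highest combined request/server hash wins.
    return max(candidates, key=lambda server: _hash(request_key + server))

if __name__ == "__main__":
    print(select_server("/videos/cat.mp4", ["srv-a", "srv-b", "srv-c", "srv-d"]))
```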
AUTOMATED ROUTE PROPAGATION AMONG NETWORKS ATTACHED TO SCALABLE VIRTUAL TRAFFIC HUBS
Metadata indicating that a virtual traffic hub enabling connectivity between a plurality of isolated networks has been established is stored. A determination is made that a first routing table entry of a first isolated network attached to the hub is to be represented in a second routing table of a second isolated network attached to the hub, e.g., to enable network packets originating at resources of the second isolated network to be transmitted via the hub to the first isolated network. A new entry corresponding to the first entry is included in the second routing table.
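A minimal sketch of the propagation step, assuming routing tables are simple lists of destination/next-hop entries; the field names and hub identifier are illustrative only.

```python
def propagate_route(first_entry, second_routing_table, hub_id):
    """Represent an entry from the first isolated network in the second
    network's routing table, with the virtual traffic hub as next hop."""
    new_entry = {"destination": first_entry["destination"], "next_hop": hub_id}
    second_routing_table.append(new_entry)
    return new_entry

second_table = []
propagate_route({"destination": "10.0.1.0/24", "next_hop": "local"},
                second_table, hub_id="vhub-1")
print(second_table)   # packets for 10.0.1.0/24 now leave via the hub
```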
Centralized path computation for information-centric networking
This disclosure describes techniques for implementing centralized path computation for routing in hybrid information-centric networking protocols implemented as a virtual network overlay. A method includes receiving an interest packet header from a forwarding router node of a network overlay. The method further includes determining an interest path of the interest packet and one or more destination router nodes of the network overlay. The method further includes computing one or more paths over the network overlay. The method further includes determining an addressing method for the one or more computed paths over the network overlay. The method further includes performing at least one of encoding each computed path in a data packet header, and encoding each computed path as state entries of each router node of the network overlay on each respective path. The method further includes returning the computed path information to the forwarding router node.
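The workflow lends itself to a small sketch: compute a path over the overlay, then choose between encoding the path in the data packet header or installing per-node state. The breadth-first search and the dictionary-based overlay below are stand-ins for whatever path computation and topology representation the disclosure actually uses.

```python
from collections import deque

def shortest_path(overlay, src, dst):
    # Breadth-first search over an adjacency map {node: [neighbors]}.
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in overlay.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

def address_path(path, use_source_routing):
    # Either encode the whole computed path in the data packet header, or
    # emit per-node state entries for each router node on the path.
    if use_source_routing:
        return {"header_path": path}
    return {"state_entries": {node: nxt for node, nxt in zip(path, path[1:])}}

overlay = {"A": ["B"], "B": ["C"], "C": []}
print(address_path(shortest_path(overlay, "A", "C"), use_source_routing=True))
```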
Search apparatus and method
This application provides a search apparatus, including a global dispatcher, a global arbiter, and N search engines. The N search engines can access a first search table. The global dispatcher is configured to: determine that a first search keyword corresponds to the first search table, and dispatch the first search keyword to the N search engines. Each search engine is configured to: search, according to a first search algorithm, one subtable to determine whether an entry that matches the first search keyword exists; and output a search result to the global arbiter. The global arbiter is configured to arbitrate the search results output by the search engines, to obtain a search result corresponding to the first search table.
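A toy model of the dispatch/search/arbitrate flow, assuming each sub-table is a plain dictionary and the engines run as threads; the first-match arbitration policy is an assumption for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def search_engine(subtable, keyword):
    # Each engine searches only its own sub-table for the keyword.
    return subtable.get(keyword)

def parallel_search(subtables, keyword):
    # Global dispatcher: send the keyword to all N engines in parallel.
    with ThreadPoolExecutor(max_workers=len(subtables) or 1) as pool:
        results = list(pool.map(lambda table: search_engine(table, keyword),
                                subtables))
    # Global arbiter: pick the first matching result (illustrative policy).
    for result in results:
        if result is not None:
            return result
    return None

subtables = [{"k1": "entry-1"}, {"k7": "entry-7"}, {}]
print(parallel_search(subtables, "k7"))   # -> entry-7
```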
ALGORITHMIC TCAM BASED TERNARY LOOKUP
An algorithmic TCAM based ternary lookup method is provided. The method stores entries for ternary lookup into several sub-tables. All entries in each sub-table have a sub-table key that includes the same common portion of the entry. No two sub-tables are associated with the same sub-table key. The method stores the keys in a sub-table keys table in TCAM. Each key has a different priority. The method stores the entries for each sub-table in random access memory. Each entry in a sub-table has a different priority. The method receives a search request to perform a ternary lookup for an input data item. A ternary lookup into the ternary sub-table key table stored in TCAM is performed to retrieve a sub-table index. The method performs a ternary lookup across the entries of the sub-table associated with the retrieved index to identify the highest priority matched entry for the input data item.
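A software approximation of the two-stage lookup, with TCAM entries modeled as (value, mask) pairs ordered by priority; the integer-key representation and the sample entries are assumptions made to keep the example short.

```python
def ternary_match(key, value, mask):
    # A ternary entry matches when every unmasked bit of the key equals
    # the corresponding bit of the stored value.
    return (key & mask) == (value & mask)

def algorithmic_tcam_lookup(key, subtable_keys, subtables):
    """subtable_keys: priority-ordered (value, mask, sub-table index) tuples,
    standing in for the TCAM. subtables: index -> priority-ordered
    (value, mask, result) entries, standing in for random access memory."""
    for value, mask, idx in subtable_keys:          # stage 1: pick sub-table
        if ternary_match(key, value, mask):
            for v, m, result in subtables[idx]:     # stage 2: search sub-table
                if ternary_match(key, v, m):
                    return result                   # highest-priority match
            return None
    return None

subtable_keys = [(0b1010_0000, 0b1111_0000, 0)]     # common portion "1010xxxx"
subtables = {0: [(0b1010_0110, 0b1111_1111, "exact"),
                 (0b1010_0000, 0b1111_0000, "wildcard")]}
print(algorithmic_tcam_lookup(0b1010_0110, subtable_keys, subtables))  # exact
```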
DISTRIBUTED DATA GRID ROUTING FOR CLUSTERS MANAGED USING CONTAINER ORCHESTRATION SERVICES
A cloud-native architecture for containerized systems using consistent hashing routing is described. A reverse proxy server executing on a container-based cluster of compute nodes managed using a container orchestration service may determine a current data grid topology. The reverse proxy server may receive a first request from a first client device to retrieve first data from the container-based cluster of compute nodes. The request may be parsed to determine a key of a key-value pair and a hash value may be computed using the key. A consistent hashing algorithm may be executed to determine a node associated with the hash value. The first data may be retrieved from the node using the hash value. The first data may be sent to the first client device.
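A bare-bones version of the key-hash-route step, assuming a SHA-1 hash ring with virtual nodes; the vnode count and ring construction are illustrative choices rather than details of the described architecture.

```python
import bisect
import hashlib

class ConsistentHashRouter:
    def __init__(self, nodes, vnodes=100):
        # Place several virtual points per node on the hash ring.
        self.ring = sorted((self._hash(f"{node}#{i}"), node)
                           for node in nodes for i in range(vnodes))
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(text):
        return int(hashlib.sha1(text.encode()).hexdigest(), 16)

    def node_for(self, key):
        # The node owning the first ring point at or after the key's hash.
        i = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

router = ConsistentHashRouter(["node-0", "node-1", "node-2"])
print(router.node_for("user:42"))   # the reverse proxy would fetch from here
```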
Preemptive caching of content in a content-centric network
Preemptive caching within a content/name/information centric networking environment is contemplated. The preemptive caching may be performed within content/name/information centric networking environments of the type having a branching structure or other architecture sufficient to facilitate routing data, content, etc. such that one or more nodes other than the node soliciting a content object also receive the content object.
METHOD AND APPARATUS TO AGGREGATE OBJECTS TO BE STORED IN A MEMORY TO OPTIMIZE THE MEMORY BANDWIDTH
A network device performs packet processing operations on packets received from a network and includes a write back cache to store data (for example, counters) used to perform the packet processing operations. The data stored in the write back cache in the network device are evicted from the write back cache to an external memory from time to time using a write-back operation that includes a read-modify-write of a line in the external memory. Instead of performing a separate read-modify-write for each data item stored in the cache line, a single read-modify-write operation is performed for all data stored in the cache line in the write back cache. The aggregation of relatively close data into the single read-modify-write operation reduces the number of memory accesses to the external memory and improves the bandwidth to the external memory.
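The aggregation can be pictured with a short sketch in which the external memory is a byte array and dirty objects sharing a line are flushed with one read-modify-write; the sizes, offsets, and class names are invented for the example.

```python
class ExternalMemory:
    def __init__(self, size):
        self.data = bytearray(size)
    def read(self, addr, length):
        return bytearray(self.data[addr:addr + length])
    def write(self, addr, line):
        self.data[addr:addr + len(line)] = line

def evict_cache_line(mem, line_addr, dirty_objects, line_size=64):
    """Flush every dirty object that falls in one external-memory line with
    a single read-modify-write instead of one per object."""
    line = mem.read(line_addr, line_size)            # one read
    for offset, value in dirty_objects:              # modify all objects
        line[offset:offset + len(value)] = value
    mem.write(line_addr, line)                       # one write back

mem = ExternalMemory(256)
evict_cache_line(mem, 64, [(0, b"\x01\x02"), (8, b"\xff")])
print(mem.data[64:74])
```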
Invalidating cached flow information in a cloud infrastructure
Techniques for managing the distribution of configuration information that supports the flow of packets in a cloud environment are described. In an example, a virtual network interface card (VNIC) hosted on a network virtualization device (NVD) receives a first packet from a compute instance associated with the VNIC. The VNIC determines that flow information to send the first packet on a virtual network is unavailable from a memory of the NVD. The VNIC sends, via the NVD, the first packet to a network interface service, where the network interface service maintains configuration information to send packets on the substrate network and is configured to send the first packet on the substrate network based on the configuration information. The NVD receives the flow information from the network interface service, where the flow information is a subset of the configuration information. The NVD stores the flow information in the memory.
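A simplified model of the miss path, with the network interface service stubbed out as an in-memory map from flow keys to substrate forwarding information; the class and method names are invented for the illustration.

```python
class NetworkInterfaceService:
    """Stub for the service holding the full configuration information."""
    def __init__(self, configuration):
        self.configuration = configuration        # flow key -> flow info

    def forward_and_resolve(self, packet):
        flow = self.configuration[packet["flow_key"]]
        print("service forwarded", packet["id"], "via", flow)
        return flow                                # subset returned to the NVD

class Vnic:
    def __init__(self, service):
        self.flow_cache = {}                       # flow info in NVD memory
        self.service = service

    def send(self, packet):
        flow = self.flow_cache.get(packet["flow_key"])
        if flow is None:
            # Miss: the service sends the packet and returns flow information,
            # which is then stored on the NVD for subsequent packets.
            flow = self.service.forward_and_resolve(packet)
            self.flow_cache[packet["flow_key"]] = flow
        else:
            print("NVD forwarded", packet["id"], "via", flow)

vnic = Vnic(NetworkInterfaceService({("10.0.0.2", 443): "substrate-hop-7"}))
vnic.send({"id": 1, "flow_key": ("10.0.0.2", 443)})   # miss, sent via service
vnic.send({"id": 2, "flow_key": ("10.0.0.2", 443)})   # hit, cached flow used
```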