Patent classifications
H04L45/7453
Distributed resilient load-balancing for multipath transport protocols
Techniques are described for providing a distributed application load-balancing architecture that supports multipath transport protocol for client devices connecting to an application service. Rather than having client devices generate new network five-tuples for new subflows to the application servers, the techniques described herein include shifting the burden to the application servers to ensure that the new network five-tuples land in the same bucket in the consistent hashing table. The application servers may receive a hashing function utilized by the load balancers to generate the hash of the network five-tuple. By having the application servers generate the hashes, the load balancers are able to continue stateless, low-level processing of the packets to route them to the correct application servers. In this way, additional subflows can be opened for client devices according to a multipath transport protocol while ensuring that the subflows are routed to the correct application server.
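The core idea — having the application server search for a five-tuple that hashes into the same consistent-hash bucket as the original flow — can be illustrated with a minimal sketch. The hash function, bucket count, and port-selection strategy below are illustrative assumptions, not details from the patent:

```python
import hashlib

def bucket_of(five_tuple, num_buckets):
    """Hash a network five-tuple into a consistent-hash bucket,
    as a stateless load balancer would."""
    key = "|".join(str(f) for f in five_tuple).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_buckets

def pick_subflow_port(src_ip, dst_ip, dst_port, proto, target_bucket,
                      num_buckets, port_range=range(49152, 65536)):
    """Server-side search for an ephemeral source port whose new
    five-tuple lands in the same bucket as the original flow."""
    for port in port_range:
        if bucket_of((src_ip, port, dst_ip, dst_port, proto),
                     num_buckets) == target_bucket:
            return port
    return None  # no suitable port in the range

# The original flow's bucket determines which server owns the flow;
# the new subflow's five-tuple is chosen to hash to the same bucket.
orig_bucket = bucket_of(("10.0.0.1", 5555, "10.0.0.2", 443, "tcp"), 64)
subflow_port = pick_subflow_port("10.0.0.1", "10.0.0.2", 443, "tcp",
                                 orig_bucket, 64)
```

With 16,384 ephemeral ports and 64 buckets, roughly 256 candidate ports map to any given bucket, so the search terminates quickly in practice.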
SOURCE ROUTING WITH SHADOW ADDRESSES
Various example embodiments for supporting source routing are presented herein. Various example embodiments for supporting source routing may be configured to support source route compression for source routing. Various example embodiments for supporting source route compression for source routing may be configured to support source route compression based on use of shadow addresses. Various example embodiments for supporting source route compression based on use of shadow addresses may be configured to support source routing of packets based on use of shadow addresses of hops, in place of actual addresses of hops, to encode source routes within source routed packets, thereby compressing the source routes within the source routed packets and, thus, providing source route compression.
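The substitution of short shadow addresses for full hop addresses can be sketched as a simple lookup-table encoding. The table contents, one-byte shadow width, and IPv6 addresses below are illustrative assumptions; the embodiments do not prescribe a specific encoding:

```python
# Hypothetical shadow-address table: each hop advertises a short shadow
# identifier that stands in for its full (e.g., 128-bit IPv6) address.
SHADOW_TABLE = {
    "2001:db8::a1": 0x01,
    "2001:db8::b2": 0x02,
    "2001:db8::c3": 0x03,
}
REVERSE_TABLE = {v: k for k, v in SHADOW_TABLE.items()}

def compress_route(hops):
    """Encode a source route as one byte per hop instead of a
    full address per hop, compressing the route in the packet."""
    return bytes(SHADOW_TABLE[h] for h in hops)

def expand_hop(shadow_byte):
    """A hop along the path resolves a shadow address back to the
    actual address of the next hop."""
    return REVERSE_TABLE[shadow_byte]
```

A three-hop IPv6 source route shrinks from 48 bytes of addresses to 3 bytes of shadows under this scheme, at the cost of each hop maintaining the shadow mapping.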
DATA FLOW TABLE, METHOD AND DEVICE FOR PROCESSING DATA FLOW TABLE, AND STORAGE MEDIUM
Disclosed are a data flow table for high-speed, large-scale concurrent data flows, a method and apparatus for processing the data flow table, and a storage medium. The method includes: acquiring, according to a data flow identifier of a data flow to be inserted, addresses of candidate buckets in a fingerprint table and a data flow fingerprint; detecting whether there is an idle unit in the candidate buckets in the fingerprint table and, if there is an idle unit, selecting the candidate bucket having the idle unit as a target bucket; and sequentially searching for the idle unit from the bottom unit to the top unit of the target bucket and, when the idle unit is found in the target bucket, writing the data flow fingerprint of the data flow into the found idle unit and writing a data flow record of the data flow into a recording bucket corresponding to the candidate bucket.
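The insertion procedure — derive candidate buckets and a fingerprint from the flow identifier, then scan a candidate bucket bottom-up for an idle unit — resembles cuckoo-filter-style placement. The hash derivations, bucket sizes, and the two-candidate scheme below are illustrative assumptions used to make the sketch concrete:

```python
import hashlib

BUCKET_SIZE = 4
NUM_BUCKETS = 8
EMPTY = None

fingerprint_table = [[EMPTY] * BUCKET_SIZE for _ in range(NUM_BUCKETS)]
record_table = [dict() for _ in range(NUM_BUCKETS)]  # recording buckets

def _hash(data):
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def candidates_and_fingerprint(flow_id):
    """Derive two candidate bucket addresses and a short fingerprint
    from the data flow identifier."""
    fp = _hash(b"fp:" + flow_id) & 0xFFFF or 1  # nonzero 16-bit fingerprint
    b1 = _hash(flow_id) % NUM_BUCKETS
    b2 = (b1 ^ (_hash(fp.to_bytes(2, "big")) % NUM_BUCKETS)) % NUM_BUCKETS
    return (b1, b2), fp

def insert_flow(flow_id, record):
    """Find a candidate bucket with an idle unit, scanning from the
    bottom unit to the top unit, then write fingerprint and record."""
    (b1, b2), fp = candidates_and_fingerprint(flow_id)
    for b in (b1, b2):
        bucket = fingerprint_table[b]
        for unit in range(BUCKET_SIZE - 1, -1, -1):  # bottom to top
            if bucket[unit] is EMPTY:
                bucket[unit] = fp
                record_table[b][fp] = record  # corresponding recording bucket
                return True
    return False  # both candidate buckets full
```

Splitting fingerprints (small, cache-friendly) from full records (large, accessed once per lookup hit) is what allows the table to scale to high-speed concurrent flows.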
Compression of route tables using key values
Described herein are systems, methods, and software to manage the compression of route tables for communication between networking elements. In one implementation, a network device identifies route keys for a route table by replacing attributes in the table with values. The network device further generates a compressed route table using the route keys, associating each of the route keys with one or more additional attributes. The network device also generates a dictionary to associate each of the values for the route keys with a corresponding attribute of the attributes.
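The compression amounts to dictionary encoding: distinct attribute sets are replaced by small key values, and a dictionary maps each value back to its attributes. The route-entry shape below (prefix, shared attributes, per-route extras) is an illustrative assumption:

```python
def compress_route_table(routes):
    """Replace each distinct attribute tuple with a small key value;
    the dictionary maps key values back to the attributes.

    routes: iterable of (prefix, shared_attrs_tuple, extra_attr)."""
    dictionary = {}   # key value -> attribute tuple
    reverse = {}      # attribute tuple -> key value
    compressed = {}   # prefix -> (route key, additional attributes)
    for prefix, attrs, extra in routes:
        if attrs not in reverse:
            key = len(dictionary)
            reverse[attrs] = key
            dictionary[key] = attrs
        compressed[prefix] = (reverse[attrs], extra)
    return compressed, dictionary
```

Since many routes in a large table share identical attribute sets (communities, AS paths, policies), storing each set once in the dictionary and a small key per route can shrink the table substantially.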
SELECTING INTERFACES FOR DEVICE-GROUP IDENTIFIERS
In one embodiment, a computer networking device calculates a first hash value for an identifier of a group of computing devices, as well as a second hash value for the identifier of the group of computing devices, with each hash value being based at least in part on the identifier of the group of computing devices and an identifier of the respective interface. The computer networking device may also analyze the first hash value with respect to the second hash value and select the first interface for association with the identifier of the group of computing devices based at least in part on the analyzing. The computer networking device may further store an indication that the identifier of the group of computing devices is associated with the first interface.
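Hashing the group identifier together with each interface identifier and comparing the results is the pattern behind rendezvous (highest-random-weight) hashing. A minimal sketch, with the specific hash function and "pick the highest score" comparison as illustrative assumptions:

```python
import hashlib

def _score(group_id, interface):
    """Hash value based at least in part on the group identifier
    and the identifier of the respective interface."""
    key = f"{group_id}|{interface}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def select_interface(group_id, interfaces):
    """Analyze the per-interface hash values against each other and
    select the winning interface for this device-group identifier."""
    return max(interfaces, key=lambda itf: _score(group_id, itf))
```

Because every device computes the same scores from the same inputs, all devices independently agree on the selected interface without coordination, and removing one interface only remaps the groups that had selected it.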
SCALING EDGE SERVICES WITH MINIMAL DISRUPTION
Some embodiments provide a method for forwarding data messages between edge nodes that perform stateful processing on flows between a logical network and an external network. At a particular edge node, the method receives a data message belonging to a flow. The edge nodes use a deterministic algorithm to select one of the edge nodes to perform processing for each flow. The method identifies a first edge node to perform processing for the flow in a previous configuration and a second edge node to perform processing for the flow in a new configuration according to the algorithm. When the first and second edge nodes are different, the method uses a probabilistic filter and a stateful connection tracker to determine whether the flow existed prior to a particular time. When the flow did not exist prior to that time, the method selects the second edge node for the received data message.
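The decision logic — consult a probabilistic filter of pre-cutoff flows, confirm with the stateful connection tracker, and only then keep the old edge node — can be sketched as follows. The Bloom filter parameters and a plain set standing in for the connection tracker are illustrative assumptions:

```python
import hashlib

class BloomFilter:
    """Probabilistic filter over flows seen before the cutoff time."""
    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)
    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits
    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def select_edge_node(flow, old_node, new_node, pre_cutoff_filter, conn_tracker):
    """When old and new configurations disagree, keep the old node only
    for flows that (probably) existed before the cutoff, as confirmed
    by the stateful connection tracker; otherwise use the new node."""
    if old_node == new_node:
        return new_node
    if flow in pre_cutoff_filter and flow in conn_tracker:
        return old_node   # existing flow: preserve its state on the old node
    return new_node       # new flow: apply the new configuration
```

The filter is compact enough to sync between edge nodes, and its occasional false positives are harmless here because the stateful connection tracker provides the authoritative confirmation.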
Expansion of packet data within processing pipeline
Some embodiments provide a network forwarding IC with packet processing pipelines, at least one of which includes a parser, a set of match-action stages, and a deparser. The parser is configured to receive a packet and generate a PHV including a first number of data containers storing data for the packet. A first match-action stage is configured to receive the PHV from the parser and expand the PHV to a second, larger number of data containers storing data for the packet. Each of a set of intermediate match-action stages is configured to receive the expanded PHV from a previous stage and provide the expanded PHV to a subsequent stage. A final match-action stage is configured to receive the expanded PHV and reduce the PHV to the first number of data containers. The deparser is configured to receive the reduced PHV from the final match-action stage and reconstruct the packet.
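The parse → expand → process → reduce → deparse flow of the packet header vector (PHV) can be modeled in software, with containers as byte strings. The container counts and byte-slicing are illustrative assumptions; real pipelines operate on fixed-width hardware containers:

```python
def parse(packet, n_containers=4):
    """Parser: split packet bytes into a PHV with a first number
    of data containers."""
    step = max(1, -(-len(packet) // n_containers))  # ceil division
    return [packet[i * step:(i + 1) * step] for i in range(n_containers)]

def expand(phv, n_expanded=8):
    """First match-action stage: widen the PHV to a second, larger
    number of containers, giving later stages scratch space."""
    return phv + [b""] * (n_expanded - len(phv))

def reduce_phv(phv, n_containers=4):
    """Final match-action stage: fold the PHV back to the first
    number of containers before deparsing."""
    return phv[:n_containers]

def deparse(phv):
    """Deparser: reconstruct the packet from the reduced PHV."""
    return b"".join(phv)
```

The expanded containers exist only between the first and final match-action stages, so intermediate stages get extra working state without the parser or deparser ever handling the larger PHV.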