H04L12/743

CONTROL PACKET TRANSMISSION SYSTEM
20210126855 · 2021-04-29 ·

A control packet transmission system includes a first switch device that, during a first time period, generates and transmits first control packets to a second switch device. Furthermore, a third switch device is provided that, during the first time period, generates and transmits third control packets to the second switch device, and transmits a copy of those third control packets to the first switch device. The first switch device then generates respective first hash values using each of the first and third control packets, and generates a first consolidated hash value using each of the respective first hash values. During a subsequent second time period, the first switch device may determine that control data exchanged during the first and second time periods is the same and, in response, transmit the first consolidated hash value to the second switch device in place of any control packets transmitted to the second switch device.
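The hashing scheme described above can be sketched as follows. This is a minimal illustration only: the patent does not name a hash function (SHA-256 is an assumption here), and sorting the per-packet digests before consolidation is one assumed way to make the consolidated value independent of packet arrival order.

```python
import hashlib

def packet_hash(packet: bytes) -> bytes:
    # Hash each control packet individually (SHA-256 is an assumption;
    # the abstract does not specify a hash function).
    return hashlib.sha256(packet).digest()

def consolidated_hash(packets: list[bytes]) -> bytes:
    # Combine the per-packet hashes into one consolidated value by
    # hashing their concatenation in a deterministic (sorted) order.
    h = hashlib.sha256()
    for digest in sorted(packet_hash(p) for p in packets):
        h.update(digest)
    return h.digest()

# If the control data exchanged in a later period is unchanged, the
# first switch can send this single digest instead of all the packets.
period1 = [b"lldp:sw1->sw2", b"lldp:sw3->sw2"]
period2 = [b"lldp:sw3->sw2", b"lldp:sw1->sw2"]  # same data, new order
assert consolidated_hash(period1) == consolidated_hash(period2)
```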

IMPLEMENTING MULTI-TABLE OPENFLOW USING A PARALLEL HARDWARE TABLE LOOKUP ARCHITECTURE
20210119926 · 2021-04-22 ·

Techniques for implementing multi-table OpenFlow using a parallel hardware table lookup architecture are provided. In certain embodiments, these techniques include receiving, at a network device from a software-defined networking (SDN) controller, flow entries for installation into flow tables of the network device, where the flow entries are structured in a manner that assumes the flow tables can be looked up serially by a packet processor of the network device, but where the flow tables are implemented using hardware lookup tables (e.g., TCAMs) that can only be looked up in parallel by the packet processor. The techniques further include converting, by the network device, the received flow entries into a format that enables the packet processor to process ingress network traffic correctly using the flow entries, despite the packet processor's parallel lookup architecture, and installing the converted flow entries into the flow tables/hardware lookup tables.
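One way a serial-to-parallel conversion could work is a cross-product of the chained tables, merging match fields and concatenating actions so a single parallel lookup reproduces the serial pipeline's result. This construction is an illustrative assumption, not the patent's disclosed algorithm.

```python
from itertools import product

def cross_product(tables):
    """Merge serially-chained flow tables into one set of entries that a
    parallel lookup can apply. Each entry is (match_dict, action_list).
    Illustrative sketch only; the patent's conversion is not disclosed
    in the abstract."""
    merged = []
    for combo in product(*tables):
        match, actions, ok = {}, [], True
        for m, a in combo:
            for field, value in m.items():
                if match.get(field, value) != value:
                    ok = False          # conflicting match fields: prune
                    break
                match[field] = value
            if not ok:
                break
            actions.extend(a)           # serial pipeline applies both
        if ok:
            merged.append((match, actions))
    return merged

# Two chained tables become two merged parallel entries.
t0 = [({"in_port": 1}, ["set_vlan:10"])]
t1 = [({"eth_dst": "aa:bb"}, ["output:2"]), ({"eth_dst": "cc:dd"}, ["output:3"])]
entries = cross_product([t0, t1])
```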

DISTRIBUTED FAULT TOLERANT SERVICE CHAIN

Some embodiments of the invention provide novel methods for performing services on data messages passing through a network connecting one or more datacenters, such as software defined datacenters (SDDCs). The method of some embodiments uses service containers executing on host computers to perform different chains (e.g., ordered sequences) of services on different data message flows. For a data message of a particular data message flow that is received or generated at a host computer, the method in some embodiments uses a service classifier executing on the host computer to identify a service chain that specifies several services to perform on the data message. For each service in the identified service chain, the service classifier identifies a service container for performing the service. The service classifier then forwards the data message to a service forwarding element to forward the data message through the service containers identified for the identified service chain. The service classifier and service forwarding element are implemented in some embodiments as processes that are defined as hooks in the virtual interface endpoints (e.g., virtual Ethernet ports) of the host computer's operating system (e.g., Linux operating system) over which the service containers execute.
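The classifier/forwarder flow above can be sketched in a few lines. The chain, service, and container names here are hypothetical, and the flow-id hash used to pin a flow to containers is an assumed detail.

```python
# Hypothetical service chains and container pools (not from the patent).
SERVICE_CHAINS = {"web": ["firewall", "load_balancer"]}
CONTAINERS = {"firewall": ["fw-1", "fw-2"], "load_balancer": ["lb-1"]}

def classify(flow_id: str, app: str) -> list[str]:
    # Service classifier: identify the chain for the flow, then pick one
    # container per service. A deterministic hash of the flow id keeps
    # all messages of the flow on the same containers.
    return [CONTAINERS[svc][sum(flow_id.encode()) % len(CONTAINERS[svc])]
            for svc in SERVICE_CHAINS[app]]

def forward(message: str, path: list[str]) -> str:
    # Service forwarding element: pass the message through each
    # identified service container in chain order.
    for container in path:
        message = f"{container}({message})"
    return message
```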

PINNING BI-DIRECTIONAL NETWORK TRAFFIC TO A SERVICE DEVICE
20210135993 · 2021-05-06 ·

Techniques are provided for ensuring that, in the context of network traffic load-balanced across a plurality of service devices connected to a network device, all of the bi-directional traffic between a given pair of hosts residing in different domains is sent to the same service device, where a domain is a group of one or more hosts/subnets that is reachable by a service device via an interface of that device. In one set of embodiments, these techniques can include (1) creating a load balancer group on the network device for each domain defined on the service devices, such that the load balancer group for a given domain D includes all of the service device interfaces mapped to D, (2) enabling symmetric hashing with respect to each load balancer group, and (3) synchronizing the hash tables of the load balancer groups such that a given hash bucket (across all hash tables) maps to an interface of a single service device.
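The core of symmetric hashing is that both directions of a flow hash to the same bucket, which a common construction achieves by sorting the endpoint pair before hashing. A minimal sketch, assuming SHA-256 and IP addresses as the hash key (the abstract does not fix either detail):

```python
import hashlib

def symmetric_bucket(src_ip: str, dst_ip: str, n_buckets: int) -> int:
    # Sort the endpoint pair before hashing so traffic in both
    # directions between the same two hosts lands in the same bucket
    # (and, with synchronized hash tables, on the same service device).
    key = "|".join(sorted((src_ip, dst_ip))).encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_buckets
```

Because the bucket index is direction-independent, synchronizing the bucket-to-interface mapping across load balancer groups is sufficient to pin a host pair to one service device.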

ADAPTIVE PRIVATE NETWORK (APN) BANDWIDTH ENHANCEMENTS

Techniques are described to automatically activate and deactivate standby backup paths in response to changing bandwidth requirements in an adaptive private network (APN). The APN includes one or more regular active wide area network (WAN) links in an active mode and an on-demand WAN link in a standby mode. The on-demand WAN link is activated to supplement the conduit bandwidth when an available bandwidth of the conduit falls below a pre-specified trigger bandwidth threshold and the conduit bandwidth usage exceeds a usage threshold of the conduit bandwidth supplied by the active paths (BWc). The on-demand WAN link is deactivated to standby mode when an available bandwidth of the conduit is above the pre-specified trigger bandwidth threshold and the conduit bandwidth usage drops below the usage threshold of BWc. Techniques for adaptive and active bandwidth testing of WAN links in an APN are also described.
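The activation/deactivation rule can be expressed as a small state function. The 0.8 usage fraction below is an assumed example value; the abstract only says a usage threshold of BWc exists, not what it is.

```python
def on_demand_state(active: bool, avail_bw: float, usage: float,
                    trigger_bw: float, bw_active_paths: float,
                    usage_fraction: float = 0.8) -> bool:
    """Return whether the on-demand WAN link should be active.
    usage_fraction * bw_active_paths stands in for the usage threshold
    of BWc; 0.8 is an illustrative assumption."""
    threshold = usage_fraction * bw_active_paths
    if not active and avail_bw < trigger_bw and usage > threshold:
        return True     # activate to supplement conduit bandwidth
    if active and avail_bw > trigger_bw and usage < threshold:
        return False    # both conditions cleared: back to standby
    return active       # otherwise hold the current state
```

Requiring both conditions in each direction gives hysteresis, so the link does not flap when bandwidth hovers near a single threshold.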

Data differencing across peers in an overlay network
10951739 · 2021-03-16 ·

A data differencing technique enables a response from a server to the request of a client to be composed of data differences from previous versions of the requested resource. To this end, data differencing-aware processes are positioned, one at or near the origin server (on the sending side) and the other at the edge closest to the end user (on the receiving side), and these processes maintain object dictionaries. The data differencing-aware processes each execute a compression and differencing engine. Whenever requested objects flow through the sending end, the engine replaces the object data with pointers into the object dictionary. On the receiving end of the connection, when the data arrives, the engine reassembles the data using the same object dictionary. The approach is used for version changes within a same host/path, using the data differencing-aware processes to compress data being sent from the sending peer to the receiving peer.
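A toy version of the compression and differencing engine: chunks already present in the shared object dictionary are replaced by pointers (their digests) on the sending end and reassembled from an identical dictionary on the receiving end. Chunk granularity and the digest-as-pointer encoding are illustrative assumptions.

```python
import hashlib

class DiffEngine:
    """Sketch of a dictionary-based differencing engine. Sending and
    receiving peers each run one instance over the same traffic, so
    their dictionaries stay in step."""
    def __init__(self):
        self.dictionary: dict[bytes, bytes] = {}

    def encode(self, chunks: list[bytes]) -> list:
        out = []
        for c in chunks:
            key = hashlib.sha256(c).digest()
            if key in self.dictionary:
                out.append(("ptr", key))     # pointer into the dictionary
            else:
                self.dictionary[key] = c
                out.append(("raw", c))       # first occurrence: send data
        return out

    def decode(self, stream: list) -> bytes:
        data = b""
        for kind, val in stream:
            if kind == "ptr":
                data += self.dictionary[val]
            else:
                # Learn new chunks so later pointers can resolve.
                self.dictionary[hashlib.sha256(val).digest()] = val
                data += val
        return data
```

On a second request for a changed resource, only the chunks that actually differ travel as raw data; unchanged chunks travel as fixed-size pointers.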

Content set based deltacasting
10951671 · 2021-03-16 ·

Methods, apparatuses, and systems are provided for improving utilization of the satellite communications system through various deltacasting techniques for handling content sets (e.g., feeds or websites). Embodiments operate in a client-server context, including a server optimizer, a client optimizer, and, in some embodiments, a pre-positioning client. Within this client-server context, content sets are multicast (e.g., anticipatorily pre-positioned in a local dictionary) to end users of the communications system and are handled at the content set level, according to set-level metadata and/or user preferences. In some embodiments, when locally stored information from the content sets is requested by a user, deltacasting techniques are used to generate fingerprints for use in identifying and exploiting multicasting and/or other opportunities for increased utilization of links of the communications system.
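The fingerprint comparison above might reduce to a set difference: multicast only the blocks whose fingerprints are not already in the client's pre-positioned dictionary. This is one illustrative reading; block granularity, the digest choice, and the comparison itself are all assumptions.

```python
import hashlib

def fingerprints(blocks: list[bytes]) -> set:
    # Per-block digests stand in for the patent's "fingerprints".
    return {hashlib.sha256(b).digest() for b in blocks}

def multicast_delta(content_set: list[bytes], client_dictionary: list[bytes]) -> list[bytes]:
    # Send only the blocks the client does not already hold locally.
    have = fingerprints(client_dictionary)
    return [b for b in content_set if hashlib.sha256(b).digest() not in have]
```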

Handling of acknowledgement in wireless radio ad-hoc networks

There are provided mechanisms for handling acknowledgements from nodes in a wireless radio ad-hoc network. A method is performed by a gateway of the wireless radio ad-hoc network. The method comprises transmitting radio signalling to nodes in the wireless radio ad-hoc network. The transmitted radio signalling is addressed to, and requires acknowledgement from, at least one node in the wireless radio ad-hoc network. The method comprises receiving radio signalling from a node in the wireless radio ad-hoc network. The received radio signalling comprises one in-packet Bloom Filter comprising acknowledgement of the transmitted radio signalling from at least one of the nodes.
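An in-packet Bloom filter lets one received packet carry acknowledgements from many nodes at once. A minimal sketch, with filter size and hash count chosen for illustration (the abstract specifies neither):

```python
import hashlib

class BloomAck:
    """In-packet Bloom filter of acknowledgements. Each acknowledging
    node sets k bit positions derived from its identifier; the gateway
    tests the same positions. Sizes here are illustrative."""
    def __init__(self, m: int = 128, k: int = 3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, node_id: str):
        for i in range(self.k):
            d = hashlib.sha256(f"{i}:{node_id}".encode()).digest()
            yield int.from_bytes(d[:4], "big") % self.m

    def add_ack(self, node_id: str):
        for p in self._positions(node_id):
            self.bits |= 1 << p

    def acked(self, node_id: str) -> bool:
        # Bloom filters can yield false positives (a node wrongly seen
        # as having acknowledged) but never false negatives.
        return all(self.bits >> p & 1 for p in self._positions(node_id))
```

The fixed-size bitfield is what makes the filter fit "in-packet": the acknowledgement payload stays constant regardless of how many nodes contributed.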

Link Aggregation Based on Estimated Time of Arrival
20210067436 · 2021-03-04 ·

The present disclosure relates to a communication arrangement (110, 130) adapted for link aggregation of a plurality of communication links (120a, 120b, 120c), comprised in an Aggregation Group, AG, (121). The communication arrangement (110, 130) is adapted to communicate via the plurality of communication links (120a, 120b, 120c) and comprises a traffic handling unit (112, 132) that is adapted to obtain data segments (414-423) to be transmitted, and to determine a risk of re-ordering of data segments within a certain data flow (401, 404) comprising a certain data segment (416, 417; 421). Said risk is associated with transmitting said certain data segment via a certain communication link out of the plurality of communication links (120a, 120b, 120c, 120d). The traffic handling unit (112, 132) is furthermore adapted to buffer said certain data segment (416, 417; 421) until the risk of re-ordering satisfies a predetermined criterion, prior to transmitting said certain data segment (416, 417; 421) via the selected communication link.
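One concrete reading of the re-ordering risk: track the latest estimated time of arrival (ETA) per flow, and buffer a segment while sending it on the chosen link would let it arrive before an earlier segment of the same flow. The per-link latency model and the ETA comparison are illustrative assumptions.

```python
class AggregationGroup:
    """Sketch of a traffic handling unit that gates transmission on
    estimated time of arrival. Link latencies stand in for whatever
    ETA estimation the arrangement actually uses."""
    def __init__(self, link_latencies: dict[str, float]):
        self.latency = link_latencies
        self.last_eta: dict[str, float] = {}   # per-flow latest ETA

    def try_send(self, flow: str, link: str, now: float) -> bool:
        eta = now + self.latency[link]
        if eta < self.last_eta.get(flow, float("-inf")):
            return False        # would overtake an earlier segment: buffer
        self.last_eta[flow] = eta
        return True             # safe to transmit on this link now
```

A buffered segment is simply retried later; once enough time has passed, its ETA no longer precedes the earlier segment's and the criterion is satisfied.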

Content delivery from home networks

A method for retrieving content on a network comprising a first device and a second device is described. The method includes receiving in the network a request for content from the first device, the request identifying the content using an IPv6 address for the content, and determining whether the content is stored in a cache of the second device. Upon determining the content is stored in the cache of the second device, a request is sent to the second device for the content using the IPv6 address of the content. The content is forwarded to the first device from the second device, wherein the first and second devices are part of the same layer 2 domain. Methods of injecting content to a home network and packaging content are also described.
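The lookup path above can be sketched with content named by IPv6 addresses. The prefix, device names, and cache layout below are illustrative assumptions, not details from the patent.

```python
import ipaddress

# Hypothetical prefix under which content items are addressed.
CONTENT_PREFIX = ipaddress.ip_network("2001:db8:c0de::/48")

class Device:
    def __init__(self, name: str, l2_domain: str):
        self.name, self.l2_domain, self.cache = name, l2_domain, {}

def retrieve(addr: str, first: Device, second: Device):
    # The request identifies content by its IPv6 address, not a host.
    ip = ipaddress.ip_address(addr)
    assert ip in CONTENT_PREFIX
    # Serve from the second device's cache only when both devices are
    # in the same layer-2 domain, so the content never leaves the home
    # network; otherwise fall back to upstream retrieval (not sketched).
    if first.l2_domain == second.l2_domain and addr in second.cache:
        return second.cache[addr]
    return None
```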