H04L12/743

Methods and apparatus for configuring a standby WAN link in an adaptive private network
11108677 · 2021-08-31

Techniques for providing a backup network path using a standby wide area network (WAN) link with reduced monitoring. Packet loss and latency metrics are monitored for network paths in an adaptive private network (APN) connecting a first user and a second user, with control traffic operating at a first control bandwidth on each network path. A determination is made that a first network path uses a standby WAN link, has packet loss and latency metrics indicative of a good quality state, and has at least one characteristic that identifies it as a backup network path. The control traffic on the backup network path is then reduced to a second control bandwidth substantially less than the first. The backup network path is made active when the number of active network paths falls to or below a minimum number.
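The policy the abstract describes can be sketched as a two-step procedure: throttle control traffic on qualifying backup paths, then activate a backup when too few paths remain active. The following is a hypothetical sketch only; the class, thresholds, and rates are assumptions for illustration, not the patent's implementation.

```python
FULL_CONTROL_BW = 100   # control packets/sec at the first control bandwidth (assumed)
REDUCED_CONTROL_BW = 1  # substantially lower second control bandwidth (assumed)

class Path:
    def __init__(self, name, standby_link, loss, latency_ms, backup):
        self.name = name
        self.standby_link = standby_link  # path uses a standby WAN link
        self.loss = loss                  # measured packet loss ratio
        self.latency_ms = latency_ms      # measured latency
        self.backup = backup              # characteristic marking a backup path
        self.active = not backup
        self.control_bw = FULL_CONTROL_BW

    def good_quality(self):
        # Illustrative thresholds for a "good quality" state.
        return self.loss < 0.01 and self.latency_ms < 100

def apply_backup_policy(paths, min_active=1):
    # Step 1: throttle control traffic on qualifying backup paths.
    for p in paths:
        if p.standby_link and p.backup and p.good_quality():
            p.control_bw = REDUCED_CONTROL_BW
    # Step 2: activate a backup if the active-path count is at the minimum.
    if sum(p.active for p in paths) <= min_active:
        for p in paths:
            if p.backup and not p.active:
                p.active = True
                p.control_bw = FULL_CONTROL_BW
                break
```

With one active path and `min_active=1`, a good-quality backup is promoted to active and its control bandwidth restored; with two active paths it stays throttled and inactive.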

SERVER APPARATUS, EDGE EQUIPMENT, PROCESS PATTERN SPECIFYING METHOD, AND CONTROL PROGRAM
20210281513 · 2021-09-09

In a server 10, a communication unit 12 receives a signal, transmitted from edge equipment 20-2 directly connected to the server 10, that includes processed data and a Bloom filter generated in accordance with the process pattern executed on the processed data. A process pattern specifying unit 13 specifies the process pattern executed on the received processed data based on a “process pattern list” and the Bloom filter received by the communication unit 12.
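A minimal sketch of this scheme, assuming the edge encodes the steps of the executed process pattern into a Bloom filter and the server tests each candidate pattern from its "process pattern list" against that filter. The filter parameters and step names are illustrative, not from the patent.

```python
import hashlib

class BloomFilter:
    def __init__(self, size=256, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = 0

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(("%d:%s" % (i, item)).encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # No false negatives; false positives are possible but rare here.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def specify_pattern(bloom, pattern_list):
    """Return the patterns whose every step is (probably) in the filter."""
    return [name for name, steps in pattern_list
            if all(bloom.might_contain(s) for s in steps)]

# Edge side: encode the executed pattern's steps into the filter.
bloom = BloomFilter()
for step in ("resize", "compress"):
    bloom.add(step)

# Server side: specify the executed pattern from the process pattern list.
patterns = [("resize+compress", ["resize", "compress"]),
            ("encrypt+compress", ["encrypt", "compress"])]
```

Because a Bloom filter never produces false negatives, the pattern actually executed always survives the check; the filter keeps the signal compact compared with sending the step list itself.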

SCAN PROTECTION WITH RATE LIMITING
20210274013 · 2021-09-02

Techniques described herein improve network security and traffic management. In an embodiment, a request associated with an identifier (ID) is received. It is determined whether the ID exists in a first membership database (MDB). If the ID exists in the first MDB, the request is serviced subject to a rate limit. If the ID does not exist in the first MDB, it is determined whether the ID exists in a second MDB. If the ID exists in the second MDB, the request is serviced. If the ID does not exist in the second MDB, the request is serviced subject to another rate limit. A response is received. The first and second MDBs can be updated based on the type of received response. In an embodiment, the response is classified as indicative of degraded or typical network performance, and the first and second MDBs are updated accordingly.
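The decision flow above can be sketched as follows, using a simple token bucket as the rate limiter. The bucket parameters, the set-based membership databases, and the degraded/typical classification hook are assumptions for illustration, not the patent's design.

```python
import time

class TokenBucket:
    """A basic token-bucket rate limiter standing in for the rate limits."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def handle_request(req_id, mdb1, mdb2, strict_limit, default_limit):
    """Decide whether to service a request per the two-MDB scheme."""
    if req_id in mdb1:            # ID in first MDB: service under a rate limit
        return strict_limit.allow()
    if req_id in mdb2:            # ID in second MDB: service unconditionally
        return True
    return default_limit.allow()  # unknown ID: service under another limit

def update_on_response(req_id, degraded, mdb1, mdb2):
    """Reclassify the ID based on the type of received response."""
    if degraded:                  # response indicates degraded performance
        mdb1.add(req_id)
        mdb2.discard(req_id)
    else:                         # response indicates typical performance
        mdb2.add(req_id)
        mdb1.discard(req_id)
```

An ID seen to cause degraded responses migrates into the first MDB and is throttled thereafter, while well-behaved IDs migrate into the second MDB and bypass the limits.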

Techniques for reducing the overhead of providing responses in a computing network

An endpoint in a network may make posted or non-posted write requests to another endpoint in the network. For a non-posted write request, the target endpoint provides a response to the requesting endpoint indicating that the write request has been serviced. For a posted write request, the target endpoint does not provide such an acknowledgment. Hence, posted write requests have lower overhead, but they suffer from potential synchronization and resiliency issues. While non-posted write requests do not have those issues, they cause increased load on the network because such requests require the target endpoint to acknowledge each write request. Introduced herein is a network operation technique that uses non-posted transactions while maintaining the load overhead of the network at a manageable level. The introduced technique reduces the load overhead of non-posted write requests by collapsing the responses, reducing their number.
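One way to collapse responses is for the target to acknowledge writes cumulatively, similar in spirit to cumulative ACKs in TCP: one response covers a batch of writes rather than one response per write. The sketch below is a hypothetical illustration of that idea; the batching rule and names are assumptions, not the patent's mechanism.

```python
class CollapsingTarget:
    """Target endpoint that coalesces non-posted write responses."""
    def __init__(self, batch=4):
        self.batch = batch
        self.pending = []    # sequence numbers awaiting acknowledgment
        self.responses = []  # cumulative responses actually emitted

    def on_write(self, seq, flush=False):
        self.pending.append(seq)
        if len(self.pending) >= self.batch or flush:
            # One response acknowledges everything up to the highest
            # pending sequence number, collapsing per-write acks.
            self.responses.append(max(self.pending))
            self.pending.clear()

target = CollapsingTarget(batch=4)
for seq in range(1, 11):          # ten non-posted writes
    target.on_write(seq, flush=(seq == 10))
# Ten writes produce only three cumulative responses: [4, 8, 10]
```

The requester retains the synchronization guarantee of non-posted writes (every write is eventually acknowledged) while the response traffic shrinks by roughly the batch factor.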

Service assurance of ECMP using virtual network function hashing algorithm

Techniques are presented for evaluating Equal Cost Multi-Path (ECMP) performance in a network that includes a plurality of nodes. According to an example embodiment, a method is provided that includes obtaining information indicating equal cost multi-path (ECMP) paths in the network and a branch node in the network. For the branch node in the network, the method includes instantiating a virtual network function that simulates an ECMP hashing algorithm employed by the branch node to select one of multiple egress interfaces of the branch node; providing, to the virtual network function for the branch node, a query containing entropy information as input to the ECMP hashing algorithm, which returns interface selection results; and obtaining from the virtual network function a reply that includes the interface selection results. The method further includes evaluating ECMP performance in the network based on the interface selection results obtained for the branch node.
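A hedged sketch of this flow: a function standing in for the branch node's virtual network function hashes a flow's entropy fields (the five-tuple here) and maps the digest onto one of the node's egress interfaces, and a driver queries it for many flows to evaluate the spread. The hash below is illustrative; per the abstract, a real VNF would replicate the branch node's actual hashing algorithm.

```python
import hashlib

def ecmp_select(entropy, interfaces):
    """Simulate an ECMP hashing algorithm for a given flow's entropy fields."""
    key = "|".join(str(field) for field in entropy)
    digest = hashlib.sha256(key.encode()).digest()
    return interfaces[int.from_bytes(digest[:4], "big") % len(interfaces)]

def evaluate_ecmp(flows, interfaces):
    """Query the simulated VNF for each flow and tally interface selections."""
    counts = {iface: 0 for iface in interfaces}
    for flow in flows:
        counts[ecmp_select(flow, interfaces)] += 1
    return counts

interfaces = ["eth0", "eth1", "eth2"]
# 300 flows varying only in destination address (five-tuple entropy).
flows = [("10.0.0.1", "10.0.1.%d" % i, 6, 12345, 80) for i in range(300)]
spread = evaluate_ecmp(flows, interfaces)
```

Because the selection is deterministic in the entropy fields, the same flow always maps to the same interface, and the tally over many flows reveals how evenly (or unevenly) the hashing spreads traffic.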

Stateful network router for managing network appliances

Disclosed are various embodiments of a stateful network router. In one embodiment, a stateful network router intercepts a network data connection between a first host and a second host on a network. The stateful network router routes first data packets from the network data connection sent by the first host to the second host to a target. The stateful network router also routes second data packets from the network data connection sent by the second host to the first host to the target.

Traffic load balancing between a plurality of points of presence of a cloud computing infrastructure

Methods and systems for traffic load balancing between a plurality of Points of Presence (PoPs) of a cloud computing infrastructure are described. A first PoP of multiple PoPs of a cloud computing infrastructure that provides a cloud computing service receives a packet. The packet includes, as its destination address, an anycast address advertised by the first PoP for reaching the cloud computing service. The first PoP identifies a network address of a second PoP that is different from the first PoP. The first PoP forwards the packet as an encapsulated packet to the second PoP, to be processed there according to the cloud computing service.
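The redirection step can be sketched as follows: the first PoP receives a packet addressed to the service's anycast address, picks a second PoP (here by a simple least-loaded rule, which is an assumption), and forwards the original packet as the payload of an outer packet addressed to that PoP's unicast address. Addresses, the PoP table, and the selection rule are all illustrative.

```python
ANYCAST = "198.51.100.10"  # anycast address advertised by the PoPs (example)

POPS = {                   # unicast addresses and a notional load metric
    "pop-east": {"addr": "192.0.2.1", "load": 0.9},
    "pop-west": {"addr": "192.0.2.2", "load": 0.2},
}

def handle_at_pop(local_pop, packet):
    """First PoP: pick a different PoP and encapsulate the packet toward it."""
    assert packet["dst"] == ANYCAST
    # Choose a different, less-loaded PoP to actually serve the flow.
    target = min((name for name in POPS if name != local_pop),
                 key=lambda name: POPS[name]["load"])
    # Encapsulation: the original packet becomes the inner payload of an
    # outer packet whose destination is the target PoP's unicast address.
    return {"outer_dst": POPS[target]["addr"], "inner": packet}

pkt = {"src": "203.0.113.7", "dst": ANYCAST, "payload": b"GET /"}
encapsulated = handle_at_pop("pop-east", pkt)
```

Encapsulating rather than rewriting the packet preserves the original anycast destination for the second PoP, which can decapsulate and process the flow as if it had received the packet directly.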

Port allocation at distributed network address translators

A node of a network address translator obtains a first packet. A particular port number to be used as a substitute port for the packet flow associated with the first packet is determined using at least a first intermediate hash result, a particular flow hash value range assigned to the node, and a lookup table. The first intermediate hash result is obtained from a flow tuple of the first packet, and the lookup table comprises an entry indicating a mapping between the particular port number and a second intermediate hash result. A second packet, in which the source port is set to the substitute port number, is transmitted to a recipient indicated in the first packet.
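A hedged sketch of this allocation: a first intermediate hash of the flow tuple decides whether the flow falls in this node's assigned flow-hash range, and a lookup table mapping second intermediate hash results to port numbers yields the substitute port. The hash construction, the range check, and the nearest-hash selection rule below are assumptions for illustration, not the patent's exact scheme.

```python
import hashlib

def intermediate_hash(values, salt):
    """Illustrative 32-bit intermediate hash over a tuple of values."""
    key = ("%d|" % salt + "|".join(map(str, values))).encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

def build_lookup_table(ports):
    """Map each port this node may assign to its second intermediate hash."""
    return {intermediate_hash((port,), salt=1): port for port in ports}

def allocate_port(flow_tuple, node_range, table):
    """Pick a substitute source port for the flow, or None if out of range."""
    h1 = intermediate_hash(flow_tuple, salt=0)  # first intermediate hash
    lo, hi = node_range
    if not (lo <= h1 <= hi):
        return None  # flow hashes into another NAT node's assigned range
    # Choose the table entry whose second intermediate hash is nearest the
    # flow's first hash (a consistent-hashing-style rule; an assumption).
    nearest = min(table, key=lambda h2: abs(h2 - h1))
    return table[nearest]

flow = ("10.0.0.1", "93.184.216.34", 6, 43211, 443)  # flow five-tuple
table = build_lookup_table(range(1024, 1034))        # ports this node owns
substitute = allocate_port(flow, (0, 2**32 - 1), table)
```

Because both hashes are deterministic, every node that owns the flow's hash range computes the same substitute port without coordinating with the other NAT nodes.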

Scalable network function virtualization service

A network function virtualization service includes an action implementation layer and an action decisions layer. On a flow of network traffic received at the service, the action implementation layer performs a packet processing action determined at the action decisions layer.

SELECTIVELY CONNECTABLE CONTENT-ADDRESSABLE MEMORY
20210266260 · 2021-08-26

A switching system includes a content-addressable memory (CAM) and several processing nodes. The CAM can be selectively connected to any one or more of the processing nodes during operation of the switching system, without powering down or otherwise rebooting the system. The connection is selective in that electrical paths between the CAM and the processing nodes can be established, torn down, and re-established while the switching system is running. The switching system can include a connection matrix to selectively establish these electrical paths.