H04L12/803

Systems and methods for adaptive routing
09853884 · 2017-12-26

Systems and methods for performing routing are described. For each of a plurality of messages transmitted over a primary route, an application receives a message transmission indication. The application further receives, for at least one of the messages, a conversion indication based on the transmitted message. The quality of the primary route is determined from some or all of the transmission indications and some or all of the conversion indications, and based on this determination an alternate route is selected to replace the primary route.
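The quality determination and route switch described above can be sketched as follows. This is a minimal illustration, not the patented method: the quality metric (conversion rate among transmitted messages), the threshold, and all names are assumptions.

```python
def route_quality(transmissions, conversions):
    """Fraction of transmitted messages that produced a conversion indication."""
    if not transmissions:
        return 0.0
    return len(conversions) / len(transmissions)

def select_route(primary, alternates, transmissions, conversions, threshold=0.2):
    """Keep the primary route unless its quality drops below the threshold,
    in which case fall back to the first alternate route (if any)."""
    if route_quality(transmissions, conversions) >= threshold:
        return primary
    return alternates[0] if alternates else primary
```

A route with 4 transmissions and 1 conversion (quality 0.25) would be kept at a 0.2 threshold; one with 10 transmissions and 1 conversion (quality 0.1) would be replaced.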

Using consistent hashing for ECMP routing

ECMP routing is carried out in a fabric of network entities by representing valid and invalid destinations in a group of the entities with a member vector. The order of the elements in the member vector is permuted and fanned out. A portion of the elements in the fanned-out vector is pseudo-randomly masked, and a flow of packets is transmitted to the first valid destination in the masked member vector.
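The permute-fan-out-mask selection can be sketched as below. This is a hedged illustration of the idea, not the patented hardware scheme: the hash choice, fan-out factor, and mask probability are assumptions, and the flow key seeds the permutation so a given flow always resolves to the same destination.

```python
import hashlib
import random

def pick_destination(member_vector, flow_key, fanout=4):
    """Pick a destination index for a flow: permute the member vector with a
    flow-keyed seed, fan it out, pseudo-randomly mask a portion of the
    elements, and return the first destination still marked valid."""
    seed = int.from_bytes(hashlib.sha256(flow_key.encode()).digest()[:8], "big")
    rng = random.Random(seed)                  # deterministic per flow
    members = list(enumerate(member_vector))   # (index, is_valid) pairs
    rng.shuffle(members)                       # permute element order
    fanned = members * fanout                  # fan out the vector
    masked = [m for m in fanned if rng.random() > 0.5]  # pseudo-random mask
    for idx, valid in masked + fanned:         # fall back to unmasked copy
        if valid:
            return idx
    return None                                # no valid destination exists
```

Because the permutation is keyed by the flow, packets of one flow consistently reach one valid destination, while different flows spread across the group.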

CLUSTERING LAYERS IN MULTI-NODE CLUSTERS
20170366624 · 2017-12-21

Examples include a multi-node cluster having a node with a clustering layer. The clustering layer may be located between an application programming interface (API) layer and a service layer, and the multi-node cluster may be associated with a database. In some examples, the clustering layer may discover whether the number of nodes associated with the multi-node cluster has changed. Based, at least in part, on a determination that the number of nodes has changed, the database may be sharded at the clustering layer and a new API call may be issued to the API layer.
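A minimal sketch of the discover-then-reshard behavior is below. The class name, the membership check, and the CRC32-based shard mapping are all illustrative assumptions, not the patent's mechanism.

```python
import zlib

class ClusteringLayer:
    """Illustrative clustering layer that re-shards when membership changes."""

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def discover(self, current_nodes):
        """Return True if the set of cluster nodes has changed, and adopt
        the new membership (triggering a new shard mapping)."""
        changed = set(current_nodes) != set(self.nodes)
        self.nodes = list(current_nodes)
        return changed

    def shard_for(self, key):
        """Map a database key to a node via a stable hash modulo node count."""
        return self.nodes[zlib.crc32(key.encode()) % len(self.nodes)]
```

In this sketch a changed `discover()` result is the point where a real system would migrate shards and issue the new API call to the API layer.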

Network load balancing and overload control
09847942 · 2017-12-19

Load balancing and overload control techniques are disclosed for use in a SIP-based network or other type of network comprising a plurality of servers. In a load balancing technique, a first server receives feedback information from at least first and second downstream servers associated with respective first and second paths between the first server and a target server, the feedback information comprising congestion measures for the respective downstream servers. The first server dynamically adjusts a message routing process based on the received feedback information to compensate for imbalance among the congestion measures of the downstream servers. In an overload control technique, the first server utilizes feedback information received from at least one downstream server to generate a blocking message for delivery to a user agent.
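The feedback-driven adjustment can be sketched as weighting each downstream path inversely to its reported congestion measure. This is an assumed weighting rule for illustration only; the patent does not specify this formula.

```python
def routing_weights(congestion):
    """Given {path: congestion_measure} feedback from downstream servers,
    return normalized routing weights that favor less-congested paths."""
    inverse = {path: 1.0 / max(c, 1e-9) for path, c in congestion.items()}
    total = sum(inverse.values())
    return {path: w / total for path, w in inverse.items()}
```

With congestion measures of 1.0 and 3.0 on two paths, the less congested path receives three quarters of the traffic, compensating for the imbalance the feedback reports.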

Systems and methods for traffic load balancing on multiple WAN backhauls and multiple distinct LAN networks

In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for traffic aggregation on multiple WAN backhauls and multiple distinct LAN networks; for traffic load balancing on multiple WAN backhauls and multiple distinct LAN networks; and for performing self-healing operations utilizing multiple WAN backhauls serving multiple distinct LAN networks. For example, in one embodiment, a first Local Area Network (LAN) access device is to establish a first LAN; a second LAN access device is to establish a second LAN; a first Wide Area Network (WAN) backhaul connection is to provide the first LAN access device with WAN connectivity; a second WAN backhaul connection is to provide the second LAN access device with WAN connectivity; a management device is communicatively interfaced with each of the first LAN access device, the second LAN access device, the first WAN backhaul connection, and the second WAN backhaul connection; and the management device routes a first portion of traffic originating from the first LAN over the first WAN backhaul connection and routes a second portion of that traffic over the second WAN backhaul connection.
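The management device's split of one LAN's traffic across two backhauls can be sketched as a weighted, flow-consistent assignment. The hash-to-bucket scheme and all names are assumptions introduced for illustration.

```python
import zlib

def assign_backhaul(flow_id, backhauls):
    """backhauls: list of (name, weight) pairs. Hash the flow to a point in
    [0, 1) and pick the backhaul whose cumulative-weight bucket contains it,
    so each flow sticks to one backhaul while traffic splits by weight."""
    point = (zlib.crc32(flow_id.encode()) & 0xFFFFFFFF) / 2**32
    total = sum(weight for _, weight in backhauls)
    cumulative = 0.0
    for name, weight in backhauls:
        cumulative += weight / total
        if point < cumulative:
            return name
    return backhauls[-1][0]  # guard against floating-point rounding
```

Setting one weight to zero degenerates to routing everything over the other backhaul, which is also how a self-healing failover could reuse the same mechanism.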

Apparatus and method for hardware-accelerated packet processing

Devices and techniques for hardware accelerated packet processing are described herein. A device can communicate with one or more hardware switches. The device can detect characteristics of a plurality of packet streams. The device may distribute the plurality of packet streams between the one or more hardware switches and software data plane components based on the detected characteristics of the plurality of packet streams, such that at least one packet stream is designated to be processed by the one or more hardware switches. Other embodiments are also described.
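One plausible reading of the distribution step is sketched below: high-rate streams are steered to the hardware switches up to their capacity, and the remainder to the software data plane. The rate threshold, the capacity model, and the stream fields are illustrative assumptions.

```python
def distribute(streams, hw_capacity, rate_threshold=1_000_000):
    """Split packet streams between hardware switches and software data
    plane components based on a detected characteristic (packets/sec).
    streams: list of {"id": str, "pps": int}. Returns (hw_ids, sw_ids)."""
    hw, sw = [], []
    # Consider the heaviest streams first so hardware capacity goes to them.
    for s in sorted(streams, key=lambda s: s["pps"], reverse=True):
        if s["pps"] >= rate_threshold and len(hw) < hw_capacity:
            hw.append(s["id"])
        else:
            sw.append(s["id"])
    return hw, sw
```

This guarantees the abstract's condition that at least one stream is designated for hardware processing whenever any stream exceeds the threshold and capacity exists.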

MULTI-MOBILE CORE NETWORKS AND VALUE-ADDED SERVICES

A method is provided in one example embodiment and includes receiving at a network element a packet associated with a flow and determining whether a flow cache of the network element includes an entry for the flow indicating a classification for the flow. The method further includes, if the network element flow cache does not include an entry for the flow, punting the packet over a default path to a classifying service function, in which the classifying service function classifies the flow and determines a control plane service function for handling the flow, and receiving from the classifying service function a service path identifier (“SPI”) of a service path leading to the determined control plane service function. The flow is subsequently offloaded from the classifying service function to the network element.
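The punt-then-offload pattern described above can be sketched with a flow cache that consults a classifying service function only on a miss. The class and names are illustrative; the patent's data plane and service-path machinery are abstracted into a single callable.

```python
class FlowCache:
    """Sketch of punt-then-offload: a flow with no cache entry is punted to
    a classifying service function, which returns a service path identifier
    (SPI); the SPI is cached so later packets of the flow stay in the fast
    path instead of revisiting the classifier."""

    def __init__(self, classifier):
        self.entries = {}            # flow -> SPI
        self.classifier = classifier # stands in for the default-path punt

    def handle(self, flow, packet):
        spi = self.entries.get(flow)
        if spi is None:
            spi = self.classifier(flow, packet)  # punt over the default path
            self.entries[flow] = spi             # offload back to the cache
        return spi
```

After the first packet of a flow, the classifier is no longer on the path, which is the offload the abstract describes.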

PROACTIVE LOAD BALANCING BASED ON FRACTAL ANALYSIS

The disclosure relates to technology for load balancing link utilization of a networking device based on fractal analysis. In one embodiment, link utilization of switches, routers, etc. in a data center is balanced based on a fractal model of the link utilization. Techniques disclosed herein are proactive. For example, instead of reacting to link congestion, the technique predicts future link utilization based on fractal analysis. Then, packet flows (or flowlets) may be assigned to links based on the predicted future link utilization. Hence, congestion on links may be reduced or prevented.
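The proactive assignment step can be sketched as: predict each link's future utilization, then place the next flow on the link with the lowest prediction. The predictor below is a simple linear extrapolation standing in for the patent's fractal model, purely for illustration.

```python
def predict_utilization(history):
    """Placeholder predictor (linear extrapolation of the last two samples)
    standing in for a fractal model of link utilization."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    return history[-1] + (history[-1] - history[-2])

def assign_flow(links):
    """links: {link_name: [utilization samples]}. Assign the next flow (or
    flowlet) to the link with the lowest predicted future utilization."""
    return min(links, key=lambda name: predict_utilization(links[name]))
```

Note how this differs from reactive schemes: a link at 0.4 but trending upward is predicted busier than a flat link at 0.5, so the flow goes to the flat link before congestion occurs.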

Distributed control system and control method thereof

In a distributed control system including a central communication device, terminal communication devices to which the target devices to be controlled are connected, and a network of multiple communication paths connecting the central and terminal communication devices, each terminal communication device includes a calculation input/output performance storage unit, storing the input/output performance of a calculation unit that controls the target devices, and a control input/output performance storage unit, storing the performance of the input/output control units of the target devices. The central communication device collects the input/output performance information previously stored in these storage units and, on the basis of it, determines the communication paths of the terminal communication devices and a packet division method such that the network's communication data volume and the response performance required of the distributed control system are satisfied, then sets the determination results in the terminal communication devices.
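One simple reading of a packet division method is splitting a payload across the available communication paths in proportion to each path's capacity, so fragments arrive at roughly the same time. This proportional rule is an assumption for illustration; the patent leaves the division method to the collected performance information.

```python
def divide_packet(payload_bytes, paths):
    """paths: {path_name: relative bandwidth}. Split the payload across
    communication paths in proportion to bandwidth, so transfer times on
    all paths are roughly equal and response deadlines are easier to meet."""
    total_bw = sum(paths.values())
    return {name: round(payload_bytes * bw / total_bw)
            for name, bw in paths.items()}
```

A 1000-byte payload over a 3:1 pair of paths, for example, divides into 750- and 250-byte fragments.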

Transparent network-services elastic scale-out

In a network with at least a first device already configured to provide a network service to a network application, scaling service capacity includes: configuring one or more second devices to provide the network service to the network application. In embodiments where an upstream network device supports Equal-Cost Multi-Path (ECMP) routing, the upstream network device is configured, including storing a plurality of paths to reach an address associated with a network application, wherein the plurality of paths are equal in cost. In embodiments where the upstream network device does not support ECMP routing, the second device is configured not to respond to an Address Resolution Protocol (ARP) request associated with an Internet Protocol (IP) address of the network application, and the first device is instructed to perform load balancing on network traffic destined for the network application among the first device and the one or more second devices.
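The two configuration branches can be summarized in a short sketch. The dictionary shape and function name are invented for illustration; the real configuration would live in router and device control planes.

```python
def scale_out(upstream_supports_ecmp, devices, vip):
    """Sketch of the two scale-out modes for adding service devices behind
    one virtual IP (vip). devices: ordered list, first device = the one
    already configured."""
    if upstream_supports_ecmp:
        # ECMP mode: the upstream router stores one equal-cost path to the
        # VIP per device and spreads traffic itself.
        return {"ecmp_paths": [(vip, d) for d in devices]}
    # Non-ECMP mode: only the first device answers ARP for the VIP; the
    # new devices stay ARP-silent, and the first device load-balances
    # traffic across the whole group.
    return {"arp_responder": devices[0], "balance_over": devices}
```

Both modes keep the scale-out transparent to the network application: the VIP never changes, only how traffic behind it is distributed.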