Patent classifications
H04L12/911
DYNAMIC LOOKUP OPTIMIZATION FOR PACKET CLASSIFICATION
A method is implemented by a network device to dynamically optimize lookup speed in a packet processing table maintained at the network device while the network device is in operation. The method includes determining one or more runtime metrics of the packet processing table, selecting a lookup algorithm for the packet processing table from a set of lookup algorithms supported by the network device based on the one or more runtime metrics of the packet processing table, and configuring the network device to match incoming packets against rules in the packet processing table using the selected lookup algorithm for the packet processing table.
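As a minimal sketch of the selection step described above (the function name, the supported algorithm set, and the thresholds are illustrative assumptions, not taken from the patent), a device might pick a lookup strategy from runtime metrics of the table like this:

```python
# Hypothetical sketch: choose a lookup algorithm from runtime table metrics.
# The algorithm names and thresholds below are illustrative, not from the patent.

def select_lookup_algorithm(rule_count: int, wildcard_fraction: float) -> str:
    """Select one of a set of supported lookup algorithms based on
    runtime metrics of the packet processing table."""
    if rule_count < 16:
        return "linear_scan"        # tiny tables: scanning beats any index
    if wildcard_fraction == 0.0:
        return "exact_match_hash"   # all-exact rules: O(1) hash lookup
    return "tuple_space_search"     # mixed wildcard rules: partitioned hashing

# Reconfigure as the table evolves while the device is in operation:
algo = select_lookup_algorithm(rule_count=10_000, wildcard_fraction=0.3)
```

Because the metrics are sampled at runtime, the device can switch algorithms as the rule mix changes, rather than committing to one lookup structure at configuration time.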
LOW-REDISTRIBUTION LOAD BALANCING
A load-balancing computing device receives a load-balance request for processing of a workload request associated with a workload. The load-balancing computing device selects a member node of a distributed computing system to process the workload request. The member node is selected from amongst a pool of member nodes of the distributed computing system. The selecting includes: determining a member node as a baseline assignment for the workload; and selecting a member node based on an outcome of a mathematical operation performed on an identifier of the workload, on the baseline cardinality of member nodes, and on the cardinality of member nodes currently in the pool. The processing of the workload request is then assigned to the selected member node.
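One way the selection could work, sketched under the assumption that the mathematical operation is modular arithmetic (a choice of mine, not stated in the abstract): workloads whose baseline node is still in the pool stay put, so only workloads whose baseline node has left are redistributed.

```python
def select_member(workload_id: int, baseline_count: int, pool: list):
    """Hypothetical low-redistribution selection: a modular operation on the
    workload identifier, the baseline cardinality, and the current pool
    cardinality. Workloads keep their baseline node whenever it remains
    in the pool, minimizing reassignment as the pool shrinks."""
    baseline = workload_id % baseline_count   # baseline assignment
    if baseline < len(pool):
        return pool[baseline]                 # baseline node still present
    return pool[workload_id % len(pool)]      # redistribute only the rest
```

With a baseline of four nodes and one node removed, only workloads that mapped to the departed node are moved; the others keep their baseline assignment.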
MANAGING CLUSTER-LEVEL PERFORMANCE VARIABILITY WITHOUT A CENTRALIZED CONTROLLER
Systems, apparatuses, and methods for managing cluster-level performance variability without a centralized controller are described. Each node of a multi-node cluster tracks the maximum and minimum progress across the plurality of nodes for a workload executed by the cluster. Each node also tracks its local progress on its current task. Each node then compares its local progress to the reported maximum and minimum progress across the cluster to identify whether it is a critical, or slow, node and whether to increase or reduce the amount of power allocated to it. The nodes append information about the maximum and minimum progress to messages sent to other nodes, sharing their knowledge of cluster-wide progress. A node updates its local information if it receives a message from another node with more up-to-date information about the state of progress across the cluster.
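The gossip-style merge and the power decision can be sketched as follows (the 25% band used to classify a node as slow or fast is an illustrative assumption; the abstract does not give thresholds):

```python
from dataclasses import dataclass

@dataclass
class ProgressView:
    """A node's current knowledge of cluster-wide max/min progress."""
    max_progress: float
    min_progress: float

def merge_view(local: ProgressView, received: ProgressView) -> ProgressView:
    """Update local knowledge from progress info piggybacked on a message."""
    return ProgressView(
        max(local.max_progress, received.max_progress),
        min(local.min_progress, received.min_progress),
    )

def power_adjustment(local_progress: float, view: ProgressView) -> str:
    """A node lagging near the cluster minimum is critical: boost its power.
    A node near the maximum can shed power. The 25% band is illustrative."""
    span = view.max_progress - view.min_progress
    if span == 0:
        return "hold"
    if local_progress - view.min_progress < 0.25 * span:
        return "increase"   # slow/critical node
    if view.max_progress - local_progress < 0.25 * span:
        return "reduce"     # ahead of the pack
    return "hold"
```

Because each message carries the sender's max/min view, knowledge of the slowest and fastest nodes spreads without any node acting as a central controller.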
NETWORK CHANNEL PRIMITIVES
Network primitives are provided for establishing and maintaining channels and secure channels. In one embodiment, requests to open a new channel are handled only in a listen mode and, after authentication, the channel provides secure communication. In one embodiment, a secure channel is initialized and repaired if broken so that a plurality of threads may share it. In one embodiment, a no-listen mode is applied if the number of new channels handled per time period exceeds a threshold.
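The no-listen threshold in the last embodiment amounts to rate-limiting channel opens over a sliding window. A minimal sketch, assuming a sliding-window counter (the class and parameter names are mine):

```python
from collections import deque

class ChannelListener:
    """Hypothetical sketch: stop accepting new channels (no-listen mode)
    when the number of opens per time period exceeds a threshold."""

    def __init__(self, max_per_period: int, period_s: float):
        self.max_per_period = max_per_period
        self.period_s = period_s
        self.opens = deque()   # timestamps of recent channel opens

    def accepting(self, now: float) -> bool:
        # Drop opens that fell out of the sliding window.
        while self.opens and now - self.opens[0] > self.period_s:
            self.opens.popleft()
        return len(self.opens) < self.max_per_period

    def try_open(self, now: float) -> bool:
        if not self.accepting(now):
            return False       # no-listen mode: threshold exceeded
        self.opens.append(now)
        return True
```

Once the window empties out, the listener leaves no-listen mode and accepts new channel requests again.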
LOAD BALANCING BACK-END APPLICATION SERVICES UTILIZING DERIVATIVE-BASED CLUSTER METRICS
Some embodiments include a back-end routing engine. The engine can receive traffic data that characterizes an amount of service requests from front-end servers to a server group of one or more back-end servers corresponding to a geographical tier in a server group hierarchy. The engine can receive metric measurements in a performance metric dimension for the server group, along with a performance threshold corresponding to the performance metric dimension and the geographical tier. The engine can estimate a linear derivative between variable traffic data and a variable performance metric in the performance metric dimension based on collected sample points respectively representing the traffic data and the metric measurements. The engine can then compute, based on the linear derivative and the performance threshold, a threshold traffic capacity of the server group. The engine can then generate a routing table based on the threshold traffic capacity.
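The derivative estimate and capacity computation can be sketched with an ordinary least-squares fit over the sample points (an assumption on my part; the abstract says only "linear derivative"). The threshold capacity is then the traffic level at which the fitted line crosses the performance threshold:

```python
def linear_fit(xs, ys):
    """Least-squares slope and intercept of performance vs. traffic."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def threshold_capacity(samples, perf_threshold):
    """samples: (traffic, metric) pairs for one server group.
    Returns the traffic level at which the fitted performance metric
    reaches the performance threshold."""
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    slope, intercept = linear_fit(xs, ys)
    return (perf_threshold - intercept) / slope
```

A routing table can then cap the traffic sent to each server group at its computed threshold capacity.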
Fog Computing Network Resource Partitioning
Various implementations disclosed herein enable improved allocation of fog node resources, which supports performance driven partitioning of competing client applications. In various implementations, methods are performed by a fog orchestrator configured to determine allocations of fog resources for competing client applications and partition the competing client applications based on the fog resource allocations. Methods include receiving reservation priority values (RPVs) associated with a plurality of client applications competing for a contested fog node resource, transmitting, to a subset of client devices, a request to provide updated RPVs, and awarding the contested fog node resource to one of the plurality of client applications based on the received RPVs and any updated RPVs. In various implementations, methods also include determining, for each of the plurality of client applications, a respective mapping for a respective plurality of separable components of the client application based on the awarded contested fog node resource.
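The awarding step resembles a priority auction: the contested resource goes to the application with the highest reservation priority value after any updated bids are considered. A hedged sketch (function and key names are illustrative):

```python
def award_resource(initial_rpvs: dict, updated_rpvs: dict = None) -> str:
    """Award the contested fog node resource to the client application
    with the highest reservation priority value (RPV), letting updated
    RPVs from re-polled client devices override initial ones."""
    final = dict(initial_rpvs)
    if updated_rpvs:
        final.update(updated_rpvs)   # updated bids replace initial bids
    return max(final, key=final.get)
```

An application that loses the initial round can still win the resource by returning a higher updated RPV when the orchestrator requests new bids.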
RDMA-OVER-ETHERNET STORAGE SYSTEM WITH CONGESTION AVOIDANCE WITHOUT ETHERNET FLOW CONTROL
An apparatus for data storage management includes one or more processors, and an interface for connecting to a communication network that connects one or more servers and one or more storage devices. The one or more processors are configured to receive a configuration of the communication network, including a definition of multiple network connections that are used by the servers to access the storage devices using a remote direct memory access protocol transported over a lossy layer-2 protocol, to calculate, based on the configuration, respective maximum bandwidths for allocation to the network connections, and to reduce a likelihood of congestion in the communication network, notwithstanding the lossy layer-2 protocol, by instructing the servers and the storage devices to comply with the maximum bandwidths.
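One simple way to calculate the per-connection maximum bandwidths, sketched under the assumption of an equal fair share on every link (the abstract does not specify the allocation rule): cap each connection at the fair share of its most contended link, so no link can be oversubscribed even without Ethernet flow control.

```python
from collections import defaultdict

def allocate_max_bandwidths(connections: dict, link_capacity: dict) -> dict:
    """connections: conn_id -> list of link names the connection traverses.
    link_capacity: link name -> capacity (e.g. Gb/s).
    Returns a maximum bandwidth per connection such that the sum of
    allocations on every link never exceeds that link's capacity."""
    users = defaultdict(int)
    for links in connections.values():
        for link in links:
            users[link] += 1           # count connections sharing each link
    return {cid: min(link_capacity[l] / users[l] for l in links)
            for cid, links in connections.items()}
```

Because every server and storage device complies with its computed cap, congestion is avoided proactively rather than reacted to with layer-2 flow control.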
Systems and methods for traffic load balancing on multiple WAN backhauls and multiple distinct LAN networks
In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for traffic aggregation on multiple WAN backhauls and multiple distinct LAN networks; for traffic load balancing on multiple WAN backhauls and multiple distinct LAN networks; and for performing self-healing operations utilizing multiple WAN backhauls serving multiple distinct LAN networks. For example, in one embodiment, a first Local Area Network (LAN) access device is to establish a first LAN; a second LAN access device is to establish a second LAN; a first Wide Area Network (WAN) backhaul connection is to provide the first LAN access device with WAN connectivity; a second WAN backhaul connection is to provide the second LAN access device with WAN connectivity; a management device is communicatively interfaced with each of the first LAN access device, the second LAN access device, the first WAN backhaul connection, and the second WAN backhaul connection; and the management device routes a first portion of traffic originating from the first LAN over the first WAN backhaul connection and routes a second portion of the traffic originating from the first LAN over the second WAN backhaul connection.
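Splitting a LAN's traffic into portions across the backhauls can be sketched as weighted flow hashing (the hashing scheme and weights are my illustrative assumption; the abstract does not say how portions are chosen). Hashing on a flow identifier keeps each flow pinned to one backhaul while the aggregate split follows the weights:

```python
import hashlib

def route_flow(flow_id: int, weights: list) -> int:
    """Pick a WAN backhaul index for a LAN flow by weighted hashing.
    weights: relative share of traffic per backhaul (sum must be > 0).
    The same flow always maps to the same backhaul."""
    h = int(hashlib.sha256(str(flow_id).encode()).hexdigest(), 16) % 1000
    point = (h / 1000.0) * sum(weights)   # deterministic point in [0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if point < acc:
            return i
    return len(weights) - 1
```

Setting a backhaul's weight to zero drains it, which is one way the self-healing behavior described above could redirect traffic when a backhaul fails.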
Method of dynamic discontinuous operation from a distribution point
Methods and apparatus to transmit data are disclosed. An embodiment comprises providing transmission opportunities for data to be transmitted. A transmission opportunity can comprise a payload portion for carrying payload data. The method comprises transmitting the payload portion. The payload portion comprises a beginning portion extending from the beginning of the payload portion and a completion portion extending to the completion of the payload portion. An embodiment comprises transmitting control information after the beginning portion is transmitted and before the completion portion of the payload portion is transmitted. In an embodiment, the control information is indicative of a future completion of the transmission of the payload portion.
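The ordering constraint — control information sent after the beginning portion but before the completion portion — can be illustrated with a small sketch (the function and the split parameter are hypothetical, purely to show the interleaving):

```python
def frame_sequence(payload: bytes, split: int, control: bytes) -> list:
    """Hypothetical sketch: emit the beginning portion of the payload,
    then control information, then the completion portion, so the
    control information is transmitted mid-payload and can announce
    the upcoming completion of the payload transmission."""
    return [("payload", payload[:split]),   # beginning portion
            ("control", control),           # in-band control information
            ("payload", payload[split:])]   # completion portion
```

The receiver thus learns about the future completion of the payload before the payload has finished arriving.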
Dynamic bandwidth allocation systems and methods using content identification in a software-defined networking controlled multi-layer network
A method, a Software-Defined Networking (SDN) controller, and a network include operating a multi-layer SDN network; uniquely identifying streaming content at higher layers relative to the multi-layer SDN network through deep packet inspection; associating the streaming content with a multi-layer service on the SDN network; and monitoring the streaming content on the SDN network over the multi-layer service. This can include dynamically adjusting the bandwidth of the multi-layer service utilizing OpenFlow on the SDN network based on the monitoring. The deep packet inspection can utilize a Bloom filter embedded in a resource identifier of the streaming content by the content provider, wherein the embedded Bloom filter is transparent to content players and does not require changes to storage on associated web servers for the streaming content.
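A Bloom filter embedded in a resource identifier can be sketched as follows (the filter size, hash count, URI form, and `bf` query parameter are illustrative assumptions; the abstract does not specify the encoding). The filter is an opaque query-string suffix that content players ignore, while the controller can probe it for content attributes:

```python
import hashlib

def bloom_bits(tokens, m: int = 64, k: int = 3) -> int:
    """Build a small Bloom filter (m bits, k hashes) over content tokens."""
    bits = 0
    for t in tokens:
        for i in range(k):
            h = int(hashlib.sha256(f"{i}:{t}".encode()).hexdigest(), 16) % m
            bits |= 1 << h
    return bits

def embed_in_uri(base_uri: str, bits: int) -> str:
    """Embed the filter in the resource identifier as a hex suffix."""
    return f"{base_uri}?bf={bits:016x}"

def might_contain(bits: int, token: str, m: int = 64, k: int = 3) -> bool:
    """Bloom filter membership test: no false negatives, rare false positives."""
    for i in range(k):
        h = int(hashlib.sha256(f"{i}:{token}".encode()).hexdigest(), 16) % m
        if not bits & (1 << h):
            return False
    return True
```

Deep packet inspection can then test the identifier's filter bits against known content tokens without touching server-side storage, matching the transparency claim above.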