Patent classifications
H04L49/501
Estimating model parameters for automatic deployment of scalable micro services
One aspect of the disclosure relates to, among other things, a method for optimizing and provisioning a software-as-a-service (SaaS). The method includes determining a graph comprising interconnected stages for the SaaS, wherein each stage has a replication factor and one or more metrics that are associated with one or more service level objectives of the SaaS; determining a first replication factor associated with a first one of the stages which meets a first service level objective of the SaaS; adjusting the replication factor associated with the first one of the stages based on the determined first replication factor; and provisioning the SaaS onto networked computing resources based on the graph and the replication factors associated with each stage.
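The per-stage tuning the abstract describes can be sketched as a loop that raises each stage's replication factor until its metric meets the stage's SLO. This is a minimal illustration under assumed names (`Stage`, `latency_ms`, `slo_ms`) and an assumed inverse-scaling latency model; none of these come from the patent.

```python
# Hypothetical sketch: tune each stage's replication factor until its
# latency metric meets the stage's service level objective (SLO).
# The inverse-scaling latency model is an illustrative assumption.

class Stage:
    def __init__(self, name, base_latency_ms, slo_ms):
        self.name = name
        self.replicas = 1
        self.base_latency_ms = base_latency_ms  # latency at 1 replica
        self.slo_ms = slo_ms

    def latency_ms(self):
        # Simplified model: latency scales inversely with replica count.
        return self.base_latency_ms / self.replicas

def tune_replication(stages, max_replicas=64):
    """Raise each stage's replication factor until its SLO is met."""
    for stage in stages:
        while stage.latency_ms() > stage.slo_ms and stage.replicas < max_replicas:
            stage.replicas += 1
    return {s.name: s.replicas for s in stages}
```

A provisioner could then deploy each stage with the returned replica counts, e.g. `tune_replication([Stage("frontend", 40, 10)])` settles at 4 replicas once 40 ms / 4 meets the 10 ms objective.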
Uplink port oversubscription determination
In some examples, a method can include monitoring data traffic along an uplink port and along at least a subset of a plurality of host ports, determining whether the uplink port is oversubscribed based on the monitored data traffic, determining whether a given host port of the at least a subset of host ports is receiving excessive data traffic in response to determining that the uplink port is oversubscribed, and flagging a host port that is determined to be receiving excessive data traffic.
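The two-step check above can be sketched as follows; the fair-share heuristic for "excessive" host-port traffic is an illustrative assumption, not the patent's definition.

```python
# Hypothetical sketch: an uplink is oversubscribed when observed traffic
# exceeds its capacity; only then are host ports examined, and any port
# whose traffic exceeds an (assumed) fair share is flagged.

def flag_excessive_hosts(uplink_bps, uplink_capacity_bps, host_traffic_bps):
    """Return host ports receiving excessive traffic, or [] if the
    uplink is not oversubscribed. host_traffic_bps maps port -> bps."""
    if uplink_bps <= uplink_capacity_bps:
        return []  # uplink not oversubscribed; nothing to flag
    fair_share = uplink_capacity_bps / max(len(host_traffic_bps), 1)
    return [port for port, bps in host_traffic_bps.items() if bps > fair_share]
```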
Devices and methods of using network function virtualization and virtualized resources performance data to improve performance
Devices and methods of providing performance measurements (PMs) for Network Function Virtualization are generally described. A Virtual Network Function (VNF) PM job is scheduled at a VNF and VNF PM data received in response. From the VNF PM data, it is determined that virtualized resource (VR) management may be a cause of poor VNF performance. A VR PM job is scheduled and results in VR PM data. The VR PM and VNF PM data are analyzed to determine whether to increase the VR at the VNF. If an increase is determined, a request for the increase is transmitted from an element manager to a VNF manager or the VNF PM and/or VR PM data are provided to a Network Manager (NM) for the NM to request the increase by a Network Function Virtualization Orchestrator (NFVO).
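The decision the abstract walks through, using VNF PM data and VR PM data together to conclude that VR management is the likely cause and request an increase, might be sketched like this. The thresholds and field names are illustrative assumptions.

```python
# Hypothetical sketch of the analysis step: a VR increase is requested
# when the VNF misses its latency SLO while the underlying virtualized
# resource is near saturation (pointing to VR management, not the VNF
# itself, as the cause). SLO and utilization limits are assumptions.

def needs_vr_increase(vnf_pm, vr_pm, latency_slo_ms=50, util_limit=0.9):
    """Combine VNF and VR performance measurements into a yes/no
    decision on requesting more virtualized resources."""
    return vnf_pm["latency_ms"] > latency_slo_ms and vr_pm["cpu_util"] > util_limit
```

When this returns `True`, the element manager would send the increase request to the VNF manager, or forward the PM data to the NM for an NFVO-mediated request, as the abstract describes.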
PREDICTIVE HANDOVER OF TRAFFIC IN AN AGGREGATION NETWORK
An MC-LAG system may operate to monitor load conditions existing in two network switches and to compute a load index value based on the detected load conditions. If the computed load index value for a first switch is determined to exceed a predetermined threshold, the overloaded switch may predictively cause traffic to be routed to a second switch prior to a reboot of the first switch. Load index values may be computed based upon factors including excessive inter-switch link (ISL) flapping, excessive MAC flush or MAC move operations in a switch, and excessive processing resource utilization in a switch.
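One way to combine the listed factors into a single load index is a weighted sum compared against the threshold; the weights and threshold below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: fold ISL flapping, MAC flush/move counts, and CPU
# utilization into one load index, and hand traffic over to the MC-LAG
# peer before rebooting when it exceeds a threshold. Weights and the
# threshold are illustrative assumptions.

WEIGHTS = {"isl_flaps": 2.0, "mac_flushes": 1.0, "mac_moves": 1.0, "cpu_util": 0.5}

def load_index(counters):
    """Weighted sum of the per-switch load counters."""
    return sum(WEIGHTS[k] * counters.get(k, 0) for k in WEIGHTS)

def should_hand_over(counters, threshold=100.0):
    """True when the switch should predictively shift traffic to its
    MC-LAG peer prior to rebooting."""
    return load_index(counters) > threshold
```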
PREDICTABLE VIRTUALIZED NIC
A method for controlling congestion at a server in a datacenter network is described. The server includes a processor configured to host a plurality of virtual machines and an ingress engine configured to maintain a plurality of per-virtual-machine queues that store received packets. The processor is also configured to execute a CPU-fair fair queuing process to control the processing of the packets by the processor, and to selectively trigger temporary packet-per-second transmission limits on top of a substantially continuously enforced bit-per-second transmission limit upon detection of a per-virtual-machine queue overload.
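The overload-triggered pps cap on top of an always-on bps cap can be sketched with a per-VM queue that arms the temporary limit when its depth crosses a threshold and releases it with hysteresis. All class names, limits, and the hysteresis rule are illustrative assumptions.

```python
# Hypothetical sketch: each VM has its own receive queue; a bits-per-
# second limit is always enforced, and a temporary packets-per-second
# limit is additionally triggered when the queue depth signals overload.

class VmQueue:
    def __init__(self, bps_limit, overload_depth, pps_limit):
        self.packets = []
        self.bps_limit = bps_limit          # continuously enforced
        self.overload_depth = overload_depth
        self.pps_limit = pps_limit          # enforced only under overload
        self.pps_active = False

    def enqueue(self, packet):
        self.packets.append(packet)
        if len(self.packets) > self.overload_depth:
            self.pps_active = True          # arm the temporary pps cap

    def drain(self, n):
        taken, self.packets = self.packets[:n], self.packets[n:]
        if len(self.packets) <= self.overload_depth // 2:
            self.pps_active = False         # hysteresis: release the cap
        return taken
```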
Dynamically reconfiguring data plane of forwarding element to adjust data plane throughput based on detected conditions
Some embodiments of the invention provide a network forwarding element that can be dynamically reconfigured to adjust its data message processing to stay within a desired operating temperature or power consumption range. In some embodiments, the network forwarding element includes (1) a data-plane forwarding circuit (data plane) to process data tuples associated with data messages received by the forwarding element's integrated circuit (IC), and (2) a control-plane circuit (control plane) for configuring the data-plane forwarding circuit. The data plane includes several data processing stages to process the data tuples. The data plane also includes an idle-signal injecting circuit that receives from the control plane configuration data that the control plane generates based on the IC's temperature. Based on the received configuration data, the idle-signal injecting circuit generates idle control signals for the data processing stages. Each stage that receives an idle control signal enters an idle state during which the majority of that stage's components perform no operations, reducing the power consumed and the heat generated by that stage.
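The control-plane policy, deciding from the IC temperature how many stages should receive idle signals, might look like the following. The temperature bands and the proportional rule are illustrative assumptions.

```python
# Hypothetical sketch of the control-plane policy: based on the measured
# IC temperature, decide how many of the data plane's processing stages
# receive an idle control signal. Temperature bands are assumptions.

def idle_stage_count(temp_c, num_stages, target_c=85.0, band_c=5.0):
    """Return how many stages to idle: none at or below the target
    temperature, all of them at or above target + band, and a
    proportional fraction in between."""
    if temp_c <= target_c:
        return 0
    if temp_c >= target_c + band_c:
        return num_stages
    frac = (temp_c - target_c) / band_c
    return round(num_stages * frac)
```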
Host device with multi-path layer configured for detection and resolution of oversubscription conditions
An apparatus comprises a host device configured to communicate over a network with a storage system comprising a plurality of storage devices. The host device comprises a set of input-output queues and a multi-path input-output driver configured to select input-output operations from the set of input-output queues for delivery to the storage system over the network. The multi-path input-output driver is further configured to maintain payload size counters to track outstanding command payload for respective ones of a plurality of paths from the host device to the storage system, to detect an oversubscription condition relating to at least one of the paths based at least in part on values of one or more of the payload size counters, and to initiate one or more automated actions responsive to the detected oversubscription condition. For example, automated deployment of one or more additional paths associated with respective spare communication links between the host device and the storage system may be initiated.
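The payload-size accounting can be sketched as a per-path counter updated on dispatch and completion, with a fixed outstanding-bytes limit standing in for whatever condition the driver actually uses. Names and the limit are illustrative assumptions.

```python
# Hypothetical sketch: the multi-path driver keeps a payload-size
# counter per path and declares oversubscription when a path's
# outstanding command payload exceeds a limit.

class PathMonitor:
    def __init__(self, limit_bytes):
        self.limit_bytes = limit_bytes
        self.outstanding = {}  # path id -> outstanding payload bytes

    def dispatch(self, path, payload_bytes):
        """Record an in-flight IO's payload on its path."""
        self.outstanding[path] = self.outstanding.get(path, 0) + payload_bytes

    def complete(self, path, payload_bytes):
        """Retire a completed IO's payload from its path."""
        self.outstanding[path] -= payload_bytes

    def oversubscribed_paths(self):
        return [p for p, b in self.outstanding.items() if b > self.limit_bytes]
```

On a non-empty result, the driver would take an automated action such as bringing up a spare communication link, as the abstract describes.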
SWITCHING AND LOAD BALANCING TECHNIQUES IN A COMMUNICATION NETWORK
A source access network device multicasts copies of a packet to multiple core switches for switching to a same target access network device. The core switches are selected for the multicast based on a load balancing algorithm managed by a central controller. The target access network device receives at least one of the copies of the packet, records at least one metric indicative of a level of traffic congestion at the core switches, and feeds back information regarding the recorded at least one metric to the controller. The controller adjusts the load balancing algorithm, based on the fed-back information, for selection of core switches for a subsequent data flow.
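The controller's feedback loop might be sketched as per-switch weights that decay toward the inverse of reported congestion, with the next flow multicast to the highest-weight switches. The exponential update rule and all names are illustrative assumptions.

```python
# Hypothetical sketch of the controller side: congestion metrics fed
# back by target access devices adjust per-core-switch weights, and
# the next flow is multicast to the least-congested switches.

class Controller:
    def __init__(self, core_switches):
        self.weights = {sw: 1.0 for sw in core_switches}

    def feedback(self, switch, congestion):
        """congestion in [0, 1]; higher congestion lowers the weight
        via an exponential moving average."""
        self.weights[switch] = 0.8 * self.weights[switch] + 0.2 * (1.0 - congestion)

    def select(self, k):
        """Pick the k highest-weight core switches for the next flow."""
        return sorted(self.weights, key=self.weights.get, reverse=True)[:k]
```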
DYNAMICALLY RECONFIGURING DATA PLANE OF FORWARDING ELEMENT TO ACCOUNT FOR OPERATING TEMPERATURE
Some embodiments of the invention provide a network forwarding element that can be dynamically reconfigured to adjust its data message processing to stay within a desired operating temperature or power consumption range. In some embodiments, the network forwarding element includes (1) a data-plane forwarding circuit (data plane) to process data tuples associated with data messages received by the forwarding element's integrated circuit (IC), and (2) a control-plane circuit (control plane) for configuring the data-plane forwarding circuit. The data plane includes several data processing stages to process the data tuples. The data plane also includes an idle-signal injecting circuit that receives from the control plane configuration data that the control plane generates based on the IC's temperature. Based on the received configuration data, the idle-signal injecting circuit generates idle control signals for the data processing stages. Each stage that receives an idle control signal enters an idle state during which the majority of that stage's components perform no operations, reducing the power consumed and the heat generated by that stage.
Bufferbloat recovery and avoidance systems and methods
Systems and methods for bufferbloat recovery and avoidance are provided herein. A portion of the buffer can be compressed based on one or more thresholds without changing an order of packet transmission and without dropping packets. The method includes storing, by a device, a plurality of packets received by the device to a buffer. The buffer can be configured with a minimum threshold and a maximum threshold. The method includes detecting that a size of the buffer has reached at least the maximum threshold and compressing one or more packets of the plurality of packets stored between the minimum threshold and the maximum threshold while transmitting, during compression, at least a portion of one or more packets of the plurality of packets stored in the buffer below the minimum threshold.
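The threshold-bounded, order-preserving compression can be sketched over a buffer modeled as a list of packets, with `zlib` standing in for whatever compressor the device would use; the list model and threshold semantics are illustrative assumptions.

```python
# Hypothetical sketch: when the buffer reaches its maximum threshold,
# compress the packets stored between the minimum and maximum
# thresholds in place, preserving order and dropping nothing; packets
# below the minimum threshold remain untouched for transmission.

import zlib

def maybe_compress(buffer, min_thresh, max_thresh):
    """buffer is a list of bytes objects, oldest first. Returns the
    buffer with entries in [min_thresh, max_thresh) compressed."""
    if len(buffer) < max_thresh:
        return buffer  # below the maximum threshold; leave as-is
    head = buffer[:min_thresh]                      # transmit-soon region
    mid = [zlib.compress(p) for p in buffer[min_thresh:max_thresh]]
    return head + mid + buffer[max_thresh:]
```

Because only the middle region is rewritten and its position is unchanged, transmission order is preserved and no packet is dropped, matching the recovery behavior the abstract claims.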