Patent classification: H04L43/024
Systems and methods for detecting large network flows
In a system for efficiently detecting large ("elephant") flows in a network, the rate at which received packets are sampled is adjusted according to the measured heavy-tailedness of the arriving traffic, so that the measured heavy-tailedness reaches a specified target level. Heavy-tailedness is measured using the estimated sizes of the flows associated with the arriving packets. When the measured heavy-tailedness reaches and remains at the specified target level, the flows with the largest estimated sizes are likely to be the largest (elephant) flows in the network.
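The feedback loop this abstract describes can be sketched as follows. Both the metric (share of traffic carried by the top flows) and the proportional controller are illustrative assumptions; the patent does not fix either.

```python
def heavy_tailedness(flow_sizes, top_fraction=0.1):
    """Fraction of total traffic carried by the largest flows
    (one possible heavy-tailedness metric)."""
    sizes = sorted(flow_sizes.values(), reverse=True)
    if not sizes:
        return 0.0
    k = max(1, int(len(sizes) * top_fraction))
    return sum(sizes[:k]) / sum(sizes)

def adjust_sampling_rate(rate, measured, target, gain=0.5, lo=1e-4, hi=1.0):
    """Proportional step that nudges the packet-sampling rate until the
    measured heavy-tailedness settles at the specified target level."""
    rate *= 1.0 + gain * (target - measured)
    return min(hi, max(lo, rate))
```

Repeatedly re-estimating flow sizes from the sampled packets and feeding the metric back through `adjust_sampling_rate` drives the measurement toward the target.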
SYSTEM AND METHOD FOR REDUCTION OF DATA TRANSMISSION BY INFORMATION CONTROL WITH REINFORCED LEARNING
Methods and systems for managing data collection are disclosed. To manage data collection, a system may include a data aggregator and data collectors. The data aggregator may utilize an inference model to predict the future operation of the data collectors, and a pattern selection model to sample data from the data collectors with a specific frequency and order. The pattern may specify that some data collectors are not to be sampled at various points in time. By doing so, the system may transmit less data, consume less network bandwidth, and consume less energy throughout a distributed system while still providing access to aggregated data.
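A minimal sketch of such a sampling pattern, assuming the inference model yields a predicted-change score per collector per time step and that collectors below a threshold are skipped (the threshold policy is an assumption, not taken from the abstract):

```python
def build_sampling_pattern(predicted_change, horizon, threshold=0.1):
    """For each future time step, list only the collectors worth sampling;
    collectors whose predicted change is below the threshold are skipped,
    so their data need not be transmitted at that step."""
    return [
        sorted(c for c, series in predicted_change.items() if series[t] > threshold)
        for t in range(horizon)
    ]
```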
Adaptive in-band network telemetry for full network coverage
A mechanism for adaptively performing in-band network telemetry (INT) by a network controller is disclosed. The mechanism includes receiving one or more congestion indicators from a collector. An adjusted sampling rate is generated. The adjusted sampling rate is a specified rate of insertion of instruction headers for INT and is generated based on the congestion indicators. The adjusted sampling rate is transmitted to a head node, which is configured to perform INT via instruction header insertion into user packets.
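One way the controller might derive the adjusted rate from the congestion indicators; the scaling policy here (grow with the worst indicator in [0, 1]) is an illustrative assumption:

```python
def adjusted_int_rate(base_rate, congestion_indicators, max_rate=1.0):
    """Compute a new INT instruction-header insertion rate from the
    congestion indicators reported by the collector, capped at max_rate."""
    worst = max(congestion_indicators, default=0.0)
    return min(max_rate, base_rate * (1.0 + worst))
```

The resulting rate would then be transmitted to the head node, which inserts INT instruction headers into that fraction of user packets.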
AUTONOMOUS CLOUD-NODE SCOPING FRAMEWORK FOR BIG-DATA MACHINE LEARNING USE CASES
Systems, methods, and other embodiments associated with autonomous cloud-node scoping for big-data machine learning use cases are described. In some example embodiments, an automated scoping tool, method, and system are presented that, for each of multiple combinations of parameter values, (i) set a combination of parameter values describing a usage scenario, (ii) execute a machine learning application according to the combination of parameter values on a target cloud environment, and (iii) measure the computational cost for the execution of the machine learning application. A recommendation regarding configuration of central processing unit(s), graphics processing unit(s), and memory for the target cloud environment to execute the machine learning application is generated based on the measured computational costs.
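The scoping loop of steps (i) to (iii) can be sketched as a grid search. Here `run_and_measure` is a hypothetical stand-in for deploying the machine learning application on the target cloud environment and measuring its computational cost:

```python
import itertools

def scope_cloud_configuration(param_grid, run_and_measure):
    """For each combination of usage-scenario parameter values, execute the
    workload and record its computational cost, then recommend the cheapest
    configuration (e.g., CPU count, GPU count, memory)."""
    combos = [dict(zip(param_grid, values))
              for values in itertools.product(*param_grid.values())]
    costs = [(run_and_measure(combo), combo) for combo in combos]
    return min(costs, key=lambda pair: pair[0])[1]
```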
TECHNOLOGIES FOR CAPTURING PROCESSING RESOURCE METRICS AS A FUNCTION OF TIME
Technologies for collecting metrics associated with a processing resource (e.g., central processing unit (CPU) resources, accelerator device resources, and the like) over a time window are disclosed. According to an embodiment presented herein, a network device receives, in an edge network, a request to provide one or more metrics associated with a processing resource, the request specifying a window indicative of a time period to capture the one or more metrics. The network device obtains the one or more metrics from the processing resource for the specified window and provides the obtained one or more metrics in response to the request.
METHOD, DEVICE, AND SYSTEM FOR CONFIGURING PARAMETERS, COMPUTER DEVICE, MEDIUM, AND PRODUCT
The present disclosure relates to a method, device, and system for configuring parameters, as well as a computer device, a medium, and a product. A configuration device for configuring parameter sampling with respect to an edge device includes: an information acquiring unit, configured to acquire information about the purpose and operating environment of the edge device; a transmitting unit, configured to transmit the information to a cloud platform; and a configuration information determining unit, configured to receive, from the cloud platform, configuration information for parameter sampling with respect to the edge device, where the configuration information is determined by the cloud platform, using a configuration model stored on it, to match the transmitted information.
PRACTICAL OVERLAY NETWORK LATENCY MEASUREMENT IN DATACENTER
Some embodiments provide a method of identifying packet latency in a software defined datacenter (SDDC) that includes a network and multiple host computers executing multiple machines. At a first host computer, the method identifies and stores (i) multiple time values associated with several packet processing operations performed on a particular packet sent by a first machine executing on the first host computer, and (ii) a time value associated with packet transmission through the SDDC network from the first host computer to a second host computer that is a destination of the particular packet. The method provides the stored time values to a set of one or more controllers to process to identify multiple latencies experienced by multiple packets processed in the SDDC.
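The controller-side processing of the stored time values can be sketched as pairwise differences over the recorded stages. The stage names and time units below are illustrative, not taken from the patent:

```python
def stage_latencies(records):
    """Derive per-hop latencies for one packet from its stored
    (stage, timestamp) records, in recording order."""
    return {f"{a}->{b}": tb - ta
            for (a, ta), (b, tb) in zip(records, records[1:])}
```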
Network flow sampling fairness
In one embodiment, a network flow sampling system includes packet processing circuitry to process data packets of multiple network flows, and an adaptive policer. For each network flow, the adaptive policer computes a quantity of flow-specific sampling credits to be assigned to that flow, responsively to the quantity of network flows currently being processed by the packet processing circuitry, and assigns those credits to the flow. The policer samples data packets of the flow responsively to the availability of the flow's sampling credits, yielding sampled data while applying sampling fairness among the network flows, and removes at least one of the flow's sampling credits from availability responsively to sampling a data packet of that flow.
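A minimal sketch of such a credit-based policer. The global budget and the equal-share policy are assumptions used for illustration; the claim only ties the credit quantity to the number of flows being processed:

```python
class AdaptivePolicer:
    """Credit-based fair sampling: every currently processed flow receives
    a share of a global credit budget, a packet is sampled only while its
    flow has credits left, and each sample spends one credit."""

    def __init__(self, total_credits):
        self.total_credits = total_credits
        self.credits = {}

    def assign_credits(self, active_flows):
        """Recompute per-flow credits from the number of active flows."""
        share = self.total_credits // max(1, len(active_flows))
        self.credits = {flow: share for flow in active_flows}

    def maybe_sample(self, flow_id, packet):
        """Return the packet if it is sampled, None if the flow has no credits."""
        if self.credits.get(flow_id, 0) > 0:
            self.credits[flow_id] -= 1  # remove a credit on sampling
            return packet
        return None
```

Because every active flow starts each interval with the same share, no single heavy flow can monopolize the sampling budget.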
Adaptive flow monitoring
An example network device includes memory, a communication unit, and processing circuitry coupled to the memory and the communication unit. The processing circuitry is configured to receive first samples of flows from an interface of another network device, sampled at a first sampling rate, and to determine a first parameter based on the first samples. The processing circuitry is configured to receive second samples of flows from the interface, sampled at a second sampling rate different from the first sampling rate, and to determine a second parameter based on the second samples. The processing circuitry is configured to determine a third sampling rate based on the first parameter and the second parameter, to control the communication unit to transmit a signal indicative of the third sampling rate to the other network device, and to receive third samples of flows from the interface sampled at the third sampling rate.
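One plausible way to derive the third rate from the two (rate, parameter) observations is linear interpolation toward a target parameter value. The linear model and the target are assumptions; the abstract only states that the third rate is based on the two parameters:

```python
def third_sampling_rate(rate1, param1, rate2, param2, target_param):
    """Pick a new sampling rate from two (rate, parameter) observations by
    linearly interpolating (or extrapolating) to a target parameter value."""
    if param2 == param1:
        return rate2  # parameter insensitive to rate; keep the latest rate
    slope = (rate2 - rate1) / (param2 - param1)
    return rate1 + slope * (target_param - param1)
```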
Computer network service providing system including self adjusting volume enforcement functionality
A computer network service providing system including self-adjusting volume enforcement functionality, and methods for diminishing or minimizing volume leakage.