Patent classifications
H04L49/506
DISTRIBUTED STORAGE SYSTEM AND METHOD FOR MANAGING STORAGE ACCESS BANDWIDTH FOR MULTIPLE CLIENTS
A system and method for managing storage requests issued from multiple sources in a distributed storage system utilizes different queues at a host computer in the distributed storage system to hold different classes of storage requests for access to a virtual storage area network. The storage requests in the queues are processed using a fair scheduling algorithm. For each queue, when the number of storage requests in the queue exceeds a threshold, a backpressure signal is generated and transmitted to at least one source of the class of storage requests queued in that queue, delaying issuance of new storage requests of that class.
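The per-class queuing and threshold-triggered backpressure described above can be sketched as follows. All names, the round-robin fair policy, and the threshold value are illustrative assumptions, not details from the patent:

```python
from collections import deque

class ClassQueue:
    """One queue per storage-request class; raises a backpressure
    flag when its depth exceeds a threshold."""
    def __init__(self, name, threshold=4):
        self.name = name
        self.requests = deque()
        self.threshold = threshold
        self.backpressure = False  # signal sent to this class's sources

    def enqueue(self, request):
        self.requests.append(request)
        # Past the threshold, signal sources of this class to delay
        # issuing new storage requests.
        if len(self.requests) > self.threshold:
            self.backpressure = True

    def dequeue(self):
        req = self.requests.popleft() if self.requests else None
        if len(self.requests) <= self.threshold:
            self.backpressure = False
        return req

def fair_schedule(queues):
    """Round-robin, one simple instance of a fair scheduling policy:
    take at most one request from each class queue per pass."""
    served = []
    for q in queues:
        req = q.dequeue()
        if req is not None:
            served.append(req)
    return served
```

Round-robin is only one possible "fair scheduling algorithm"; the abstract does not name a specific policy.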
CONTROL WAVELET FOR ACCELERATED DEEP LEARNING
Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow based computations on wavelets of data. Each processing element has a compute element and a routing element. Each compute element has memory. Each router enables communication via wavelets with nearest neighbors in a 2D mesh. A compute element receives a wavelet. If a control specifier of the wavelet is a first value, then instructions are read from the memory of the compute element in accordance with an index specifier of the wavelet. If the control specifier is a second value, then instructions are read from the memory of the compute element in accordance with a virtual channel specifier of the wavelet. Then the compute element initiates execution of the instructions.
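The control-specifier branch described above can be sketched as follows; the field names, the 0/1 encoding of the two control values, and the lookup-table shapes are assumptions for illustration:

```python
def instruction_address(wavelet, index_table, vc_table):
    """Choose where to fetch instructions from, based on a wavelet's
    control specifier.

    `wavelet` is a hypothetical dict with 'control', 'index', and 'vc'
    fields; the two tables map specifiers to instruction memory addresses
    in the compute element.
    """
    if wavelet['control'] == 0:
        # First control value: fetch according to the index specifier.
        return index_table[wavelet['index']]
    else:
        # Second control value: fetch according to the virtual channel
        # specifier instead.
        return vc_table[wavelet['vc']]
```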
NETWORK PATH MEASUREMENT METHOD, APPARATUS, AND SYSTEM
A network path measurement method. The method includes: obtaining a first aggregate available bandwidth of a path from a first switching node to a second switching node; obtaining a first available bandwidth of a path from a first target port of a third switching node to the first switching node, where the third switching node is a next-stage switching node connected to the first switching node; obtaining a second available bandwidth of a path from the second switching node to a fourth switching node, where the fourth switching node is a next-stage switching node connected to the second switching node; and determining a second aggregate available bandwidth of a path from the first target port of the third switching node to the fourth switching node, where the second aggregate available bandwidth is the smallest available bandwidth among the first aggregate available bandwidth, the first available bandwidth, and the second available bandwidth.
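The determination step reduces to a bottleneck computation: the end-to-end available bandwidth is the minimum over the segment bandwidths. A minimal sketch, with hypothetical Gbit/s figures:

```python
def aggregate_available_bandwidth(segment_bandwidths):
    """End-to-end available bandwidth of a multi-stage path is its
    bottleneck: the minimum over the per-segment available bandwidths."""
    return min(segment_bandwidths)

# Hypothetical values for the three segments in the abstract:
# third->first switch, the first->second aggregate, second->fourth switch.
second_aggregate = aggregate_available_bandwidth([40, 25, 100])
```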
PROCESSING METHOD, SYSTEM, PHYSICAL DEVICE AND STORAGE MEDIUM BASED ON DISTRIBUTED STREAM COMPUTING
A processing method based on distributed stream computing is performed by a computing device. After receiving flow data sent by an upstream computing node, the computing device stores the flow data into a flow data pool. The computing device collects a data ratio of the flow data in the flow data pool to a total capacity of the flow data pool. When the collected data ratio is greater than or equal to a first threshold, the computing device performs an operation of adding a mark for enabling the upstream computing node to decrease a flow data sending rate; when the collected data ratio is less than or equal to a second threshold, the second threshold being less than the first threshold, the computing device performs an operation of deleting the mark for enabling the upstream computing node to increase the flow data sending rate.
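The two-threshold mark mechanism above is a hysteresis (watermark) scheme. A minimal sketch, assuming illustrative class and field names and arbitrary 0.8/0.3 thresholds:

```python
class FlowDataPool:
    """Pool with high/low watermarks; a mark tells the upstream
    computing node to slow down its sending rate."""
    def __init__(self, capacity, high=0.8, low=0.3):
        assert low < high
        self.capacity = capacity
        self.used = 0
        self.high, self.low = high, low
        self.marked = False  # mark visible to the upstream node

    def store(self, size):
        self.used += size
        self._update()

    def drain(self, size):
        self.used = max(0, self.used - size)
        self._update()

    def _update(self):
        ratio = self.used / self.capacity
        if ratio >= self.high:
            self.marked = True   # add mark: upstream decreases its rate
        elif ratio <= self.low:
            self.marked = False  # delete mark: upstream may speed up
        # Between the two thresholds the previous state is kept, which
        # is what prevents the mark from flapping (hysteresis).
```

Keeping the second threshold strictly below the first is what gives the scheme its stability: a pool hovering near one threshold does not toggle the mark on every sample.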
DATA PROCESSING METHOD AND PHYSICAL MACHINE
The present invention provides a data processing method: predicting traffic of a to-be-processed data stream of a first executor in a first time period according to historical information about data processed by the first executor, so as to obtain traffic prediction information for the first time period, where the historical information includes traffic information of data processed by the first executor in a historical time period, and the traffic prediction information includes predicted values of traffic at multiple moments in the first time period; if the traffic prediction information includes a predicted value that exceeds a threshold, reducing a data obtaining velocity of the first executor from a first velocity to a second velocity; and obtaining a first data set of the to-be-processed data stream at the second velocity.
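The predict-then-throttle step can be sketched as below. The patent does not specify a predictor, so a moving average with a flat forecast stands in for it; all names and the window size are assumptions:

```python
def adjust_rate(history, threshold, first_rate, second_rate, window=3):
    """Predict traffic for the next time period from history and
    throttle the executor's data-obtaining velocity if any predicted
    value exceeds the threshold."""
    avg = sum(history[-window:]) / min(window, len(history))
    # Naive flat forecast: repeat the recent average for each future
    # moment in the first time period (stand-in predictor).
    predictions = [avg] * window
    if any(p > threshold for p in predictions):
        return second_rate  # reduce from first to second velocity
    return first_rate       # keep obtaining data at the first velocity
```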
Network Interface Device
Roughly described: a network interface device has an interface. The interface is coupled to first network interface device circuitry, host interface circuitry and host offload circuitry. The host interface circuitry is configured to interface to a host device and has a scheduler configured to schedule providing and/or receiving of data to/from the host device. The interface is configured to allow at least one of: data to be provided to said host interface circuitry from at least one of said first network interface device circuitry and said host offload circuitry; and data to be provided from said host interface circuitry to at least one of said first network interface device circuitry and said host offload circuitry.
FINE-GRANULARITY ADMISSION AND FLOW CONTROL FOR RACK-LEVEL NETWORK CONNECTIVITY
A system for admission and flow control is disclosed. In some embodiments, the system includes a switch for routing network traffic, having multiple classes of service (CoSs), from multiple ingress ports to one or more of multiple egress ports. The system also includes multiple ingress-level class of service queues (InCoS-Qs) and one or more egress-level class of service queues (EgCoS-Qs), each InCoS-Q and EgCoS-Q corresponding to one of CoSs. The switch is configured to detect congestion in a particular EgCoS-Q, corresponding to a particular CoS, the particular EgCoS-Q being associated with a particular host; identify an InCoS-Q corresponding to that particular CoS, and associated with that particular host; and block that InCoS-Q, while allowing routing of the network traffic from one or more InCoS-Qs corresponding to that particular CoS, the one or more InCoS-Qs corresponding to one or more other hosts.
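The selective blocking above — stalling only the ingress queue for the congested (host, CoS) pair while other hosts' queues for the same CoS keep flowing — can be sketched with an illustrative class (names hypothetical):

```python
class CoSSwitch:
    """Tracks which (host, CoS) ingress-level queues are blocked
    in response to egress-level congestion."""
    def __init__(self):
        self.blocked = set()  # (host, cos) pairs currently blocked

    def on_egress_congestion(self, host, cos):
        # Congestion detected in the egress-level CoS queue for this
        # host: block only the matching ingress-level queue.
        self.blocked.add((host, cos))

    def on_egress_relief(self, host, cos):
        self.blocked.discard((host, cos))

    def can_forward(self, host, cos):
        # Traffic of the same CoS from other hosts is unaffected.
        return (host, cos) not in self.blocked
```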
BACK-PRESSURE CONTROL IN A TELECOMMUNICATIONS NETWORK
Back-pressure control in a telecommunications network, in which a method of back-pressure control in a transport network is provided. A buffer state of a buffer is monitored. A condition indicative of back-pressure is determined in response to a change of the buffer state passing a predetermined limit. In response to determining the condition indicative of back-pressure, a back-pressure notification message is created and subsequently transmitted to at least one second network node.
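The trigger condition is on the change in buffer state, not on its absolute level. A minimal sketch, with an illustrative message shape and field names:

```python
def check_backpressure(prev_fill, curr_fill, limit):
    """Create a back-pressure notification message when the change in
    buffer state passes a predetermined limit; otherwise return None."""
    if curr_fill - prev_fill > limit:
        # A sudden fill increase indicates back-pressure; the message
        # would then be transmitted to at least one second network node.
        return {"type": "BACKPRESSURE_NOTIFY", "fill": curr_fill}
    return None
```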
HIGH-PERFORMANCE DATA REPARTITIONING FOR CLOUD-SCALE CLUSTERS
Techniques herein partition data using data repartitioning that is store-and-forward, content-based, and phasic. In embodiments, one or more computers map network elements (NEs) to grid points (GPs) in a multidimensional hyperrectangle. Each NE contains data items (DIs). For each particular dimension (PD) of the hyperrectangle, the computers perform, for each particular NE (PNE), various activities including: determining a linear subset (LS) of NEs that are mapped to GPs in the hyperrectangle at a same position as the GP of the PNE along all dimensions of the hyperrectangle except the PD, and data repartitioning that includes, for each DI of the PNE, the following activities. The PNE determines a bit sequence based on the DI. The PNE selects, based on the PD, a bit subset of the bit sequence. The PNE selects, based on the bit subset, a receiving NE of the LS. The PNE sends the DI to the receiving NE.
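The phasic routing above — one hop per dimension, with each dimension consuming its own bit field of a content-derived bit sequence — can be sketched as follows. The SHA-256 hash, the per-dimension bit layout, and the power-of-two grid sides are assumptions for illustration:

```python
import hashlib

def bit_subset(item, dim, bits_per_dim):
    """Bit sequence from the data item's content (here, a hash), with
    one contiguous bit field selected per dimension."""
    h = int(hashlib.sha256(item.encode()).hexdigest(), 16)
    return (h >> (dim * bits_per_dim)) & ((1 << bits_per_dim) - 1)

def repartition(items, grid_shape):
    """Phasic store-and-forward: one routing phase per hyperrectangle
    dimension, moving each item only along that dimension's linear
    subset. `grid_shape` holds power-of-two side lengths."""
    bits = {d: n.bit_length() - 1 for d, n in enumerate(grid_shape)}
    # Start every item at the origin grid point.
    placement = {item: [0] * len(grid_shape) for item in items}
    for dim in range(len(grid_shape)):
        for item, coord in placement.items():
            # The receiving NE in this dimension's linear subset is
            # chosen by the item's bit field for this dimension.
            coord[dim] = bit_subset(item, dim, bits[dim])
    return {item: tuple(c) for item, c in placement.items()}
```

Because the destination coordinate depends only on the item's content, every NE computes the same final placement independently, which is what makes the scheme content-based.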