H04L12/825

System and method for determining cell congestion level

A metric of cell congestion is determined using an average physical resource block (PRB) rate that is expected to be allocated to a very active user equipment bearer, i.e., a persistent average rate. The average PRB rate is mapped to congestion levels, and this information is exported to an application function or a radio access network function in order to mitigate congestion. An average bearer throughput for the user equipment can be calculated from the average PRB rate.
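A minimal sketch of the mapping and throughput steps, in Python; the level thresholds and the bits-per-PRB figure are illustrative assumptions, not values from the abstract:

```python
def congestion_level(avg_prb_rate, thresholds=(50.0, 20.0, 5.0)):
    """Map the average PRB rate (PRBs/s) expected for a very active
    bearer to a congestion level. A lower rate means fewer resources
    are available per bearer, i.e., more congestion."""
    high, mid, low = thresholds
    if avg_prb_rate >= high:
        return "none"
    if avg_prb_rate >= mid:
        return "moderate"
    if avg_prb_rate >= low:
        return "high"
    return "severe"

def avg_bearer_throughput(avg_prb_rate, bits_per_prb=1000):
    """Estimate average bearer throughput (bit/s) from the PRB rate,
    assuming a nominal number of bits carried per PRB."""
    return avg_prb_rate * bits_per_prb
```

The level returned here is the value that would be exported to the application or RAN function.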

METHOD AND USER EQUIPMENT FOR EFFECTIVE SIGNAL-TO-NOISE RATIO (SNR) COMPUTATION IN RATE ADAPTATION

A method to regulate a signal-to-noise ratio (SNR) in rate adaptation includes: transmitting a frame; determining a status of the transmitted frame; computing a probability of a channel of the transmitted frame being in an idle mode; computing an SNR offset based on the status of the transmitted frame and the probability; and regulating an SNR for transmission, based on the SNR offset.
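The offset computation might look like the following sketch; the step sizes and the idle-probability estimator are assumptions, not values specified by the abstract:

```python
def idle_probability(idle_slots, total_slots):
    """Estimate the probability that the channel was idle as the
    fraction of observed slots in which it carried no energy."""
    return idle_slots / total_slots if total_slots else 0.0

def snr_offset(frame_ok, p_idle, step_up=0.25, step_down=1.0):
    """Illustrative offset rule: on a failed frame over a mostly idle
    channel, the loss is attributed to the link rather than to a
    collision, so the effective SNR is penalised in proportion to the
    idle probability; on success, the offset creeps back up."""
    if frame_ok:
        return step_up
    return -step_down * p_idle

def regulate_snr(measured_snr, offset):
    """Effective SNR used by rate adaptation for the next frame."""
    return measured_snr + offset
```

Scaling the penalty by `p_idle` is what keeps collision losses from dragging the rate down unnecessarily.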

Method and system for controlling messages between communicating entities

A method for controlling messages between communicating entities (CE) having computing devices, each CE sending messages to other neighboring CEs with an entity-dependent message rate (CEMR) and an entity-dependent transmission power, the messages being transmitted via one or more channels having a maximum channel capacity, and the CEMR defining a rate interval between a minimum and a maximum rate, includes determining the CEMR within the rate interval by: (a) using a utility function for each CE; (b) assigning an initial price for each CE; (c) adjusting the CEMR of each CE, accounting for prices received from other CEs; (d) computing a new price for each CE based on the difference between the initial price and the available channel load for the respective CE; and (e) checking a termination condition on the difference and, if it is unfulfilled, using the new price as the initial price and repeating (c)-(e) until the termination condition is fulfilled.
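Steps (a)-(e) amount to an iterative price/rate adjustment. A sketch under an assumed logarithmic utility U_i(r) = w_i·log(r) (the abstract does not fix the utility function), for which the price-optimal rate is w_i divided by the price, clamped to the rate interval:

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def price_based_rates(weights, r_min, r_max, capacity,
                      price=1.0, step=0.05, tol=1e-3, max_iter=10000):
    """Iteratively adjust entity message rates (CEMR) via a shared
    price. Each entity picks the rate maximising its utility at the
    current price; the price then moves with the gap between total
    load and channel capacity until the termination condition holds."""
    rates = [clamp(w / price, r_min, r_max) for w in weights]
    for _ in range(max_iter):
        rates = [clamp(w / price, r_min, r_max) for w in weights]  # step (c)
        excess = sum(rates) - capacity                             # load vs. capacity
        if abs(excess) < tol:                                      # step (e)
            break
        price = max(1e-9, price + step * excess)                   # step (d)
    return rates, price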

Link speed downshifting for error determination and performance enhancements

Various embodiments for regulating link speed for performance enhancement and port diagnosis are provided. In response to identifying an amount of errors in a communications link above a predetermined threshold, an applicable transmission speed is selectively reduced. If errors identified at the reduced transmission speed are found to decrease, a communications port incorporating the communications link is flagged as potentially dirty, and if the errors identified at the reduced transmission speed are found to remain constant, the communications port is flagged as potentially bad.
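The downshift-and-compare diagnosis can be sketched as a simple decision rule; the threshold and tolerance parameters are assumptions:

```python
def diagnose_port(errors_full_speed, errors_reduced_speed,
                  threshold=10, equal_tol=0):
    """Downshift-based port diagnosis. If errors at full speed exceed
    the threshold, the link is retried at a reduced speed: fewer
    errors at the lower speed suggest a dirty port (e.g., contaminated
    optics), while a roughly constant error count suggests a bad port."""
    if errors_full_speed <= threshold:
        return "ok"
    if errors_reduced_speed < errors_full_speed - equal_tol:
        return "potentially dirty"
    return "potentially bad"
```

The intuition is that dirt attenuates the signal, so a slower (more robust) rate recovers; a genuinely faulty port fails regardless of speed.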

NETWORK OPTIMIZATION AND CLIENT STEERING BASED ON WIRELESS DATA RATE CAPABILITIES OF MIXED CLIENT DEVICES
20170289837 · 2017-10-05 ·

A wireless local area network (WLAN) access point may receive a steering policy from a WLAN controller, the steering policy matching various data rate capabilities to various quality of service (QoS) levels. When a client device attempts to connect to the access point (AP), the AP responds via a default virtual access point (VAP) so that the client device transmits its client data rate capability to the AP via an association request. The AP then checks the steering policy and either allows the connection to the default VAP if the QoS of the default VAP matches the client data rate or identifies a second VAP (which the AP may generate if it doesn't already exist) whose QoS does match the client data rate. The AP may then initiate WLAN communications between the client device and the matching VAP. Client devices with higher data rate capabilities may thus receive higher priority.
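The policy lookup and VAP selection could be sketched as follows; representing VAPs by their QoS label and the policy as (minimum rate, QoS) pairs is an assumption made for illustration:

```python
def steer_client(client_rate, default_vap, vaps, policy):
    """Match a client's data rate capability (Mbit/s) to a QoS level
    via the steering policy, then return the VAP the client should
    associate with. `policy` is a list of (min_rate, qos) pairs sorted
    by descending min_rate, ending with a catch-all (0, ...) entry."""
    qos = next(q for min_rate, q in policy if client_rate >= min_rate)
    if qos == default_vap:
        return default_vap          # default VAP already matches
    if qos not in vaps:
        vaps.append(qos)            # AP generates the VAP if missing
    return qos
```

A usage example: with `policy = [(867, "high"), (300, "medium"), (0, "low")]`, an 802.11ac-class client reporting 867 Mbit/s is steered to the "high" VAP while a legacy client stays on the default.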

FACILITATING COMMUNICATION OF DATA PACKETS USING CREDIT-BASED FLOW CONTROL
20170289066 · 2017-10-05 ·

Apparatuses and methods are described that provide for credit-based flow control in a network in which a public buffer is supported at a receiver node, where a transmitter node can control the use of the public buffer. In particular, the transmitter node determines a buffer credit value (TCRi) for each virtual lane of the transmitter node. The buffer credit value (TCRi) is negative (i.e., less than zero) in an instance in which a respective virtual lane private buffer is fully used and thus reflects a loan of credits from the public buffer. In addition, the transmitter node knows the needed buffer size per virtual lane for transmitting a packet in advance, based on the round trip time (RTT) and maximum transmission unit (MTU) for the packet, and is precluded from consuming more space on the public buffer than required to meet the RTT.
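Transmitter-side accounting along these lines might look like the following sketch; the class and field names are hypothetical, credit repayment by the receiver is omitted, and the RTT/MTU-derived cap is passed in precomputed:

```python
class CreditTracker:
    """Illustrative transmitter-side credit accounting. Each virtual
    lane tracks a credit value (TCRi); when it goes negative, the
    shortfall is a loan from the receiver's public buffer, capped at
    the RTT-derived amount needed to keep the lane busy."""

    def __init__(self, private_credits, public_capacity, max_loan):
        self.tcr = dict(private_credits)   # per-lane credit, bytes
        self.public_free = public_capacity # unloaned public buffer
        self.max_loan = max_loan           # RTT*rate cap per lane

    def can_send(self, lane, size):
        after = self.tcr[lane] - size
        extra_loan = max(0, -after) - max(0, -self.tcr[lane])
        return extra_loan <= self.public_free and -after <= self.max_loan

    def send(self, lane, size):
        """Consume credits for a packet; returns False if it would
        overdraw the public buffer or exceed the per-lane loan cap."""
        if not self.can_send(lane, size):
            return False
        after = self.tcr[lane] - size
        self.public_free -= max(0, -after) - max(0, -self.tcr[lane])
        self.tcr[lane] = after
        return True
```

The `max_loan` check is what enforces the abstract's constraint that a lane never consumes more public-buffer space than its RTT requires.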

MULTI-TAGGED MULTI-TENANT RATE LIMITING
20170289053 · 2017-10-05 ·

A rate limiting module receives a first request at a first time that comprises a first tag associated with a first attribute and a second tag associated with a second attribute. A second request is received at a second time that occurs after the first time that includes the first tag and the second tag. Responsive to determining that the second request violates a first rate limit for the first attribute, the rate limiting module rejects the second request. A third request is received at a third time that occurs after the second time that includes the first tag and the second tag. The rate limiting module determines that the third request violates a second rate limit for the second attribute, determines that the second rate limit is to be bypassed, and forwards the third request.
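A toy version of the tag-wise check with a bypassable limit; fixed counters stand in for the time windows a real limiter would track, and the names are illustrative:

```python
class MultiTagRateLimiter:
    """Each request carries several tags (e.g., tenant and user). It
    is rejected if any tag's limit is exceeded, unless that tag's
    limit is marked as bypassable, in which case the request is
    forwarded despite the violation."""

    def __init__(self, limits, bypass=()):
        self.limits = limits        # tag -> max requests allowed
        self.bypass = set(bypass)   # tags whose limit may be bypassed
        self.counts = {}

    def allow(self, tags):
        for tag in tags:
            at_limit = self.counts.get(tag, 0) >= self.limits.get(tag, float("inf"))
            if at_limit and tag not in self.bypass:
                return False        # reject: non-bypassable limit hit
        for tag in tags:            # accept: charge every tag
            self.counts[tag] = self.counts.get(tag, 0) + 1
        return True
```

Charging all tags only on acceptance keeps a rejected request from consuming quota under its other attributes.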

Methods and apparatus for throttling unattended applications at user devices
09781628 · 2017-10-03 ·

Aspects of the present disclosure provide methods, apparatus and computer program products for throttling unattended applications at user devices (e.g., in an effort to limit transmission resource consumption by a user equipment (UE)). According to an aspect, the UE may receive an indication to restrict (throttle down) flow for traffic that appears to be unattended by a user. The UE may determine if a particular application is subject to flow restriction; and restrict flow of uplink traffic generated by the application, if the application is subject to flow restriction. Numerous other aspects are provided.
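The per-application decision might be sketched as follows; the field names, exempt-application list, and rate figures are illustrative assumptions:

```python
def uplink_allowance(app, throttle_active, full_rate, throttled_rate,
                     exempt=frozenset({"voip", "emergency"})):
    """Return the uplink rate (bit/s) granted to an application.
    When the network signals that unattended traffic should be
    restricted, an app that is not in the foreground (i.e., appears
    unattended by the user) and is not exempt is throttled down."""
    if not throttle_active or app["foreground"] or app["name"] in exempt:
        return full_rate
    return throttled_rate
```

In this sketch, "foreground" is the proxy for user attention; a real UE could also consider screen state or recent input.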

System and method for cache management

A method, computer program product, and computing system for processing one or more data chunks on a host server. The one or more data chunks are destined for storage within a portion of a data array coupled to the host server. The one or more data chunks are stored within a host cache system included within the host server. Storage criteria concerning the portion of the data array are reviewed. The storage criteria include an array bandwidth allotment that defines a maximum bandwidth between the host server and the portion of the data array. The one or more data chunks are written to the portion of the data array based, at least in part, upon the storage criteria.
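The bandwidth-allotment check before destaging might be sketched as follows; treating the allotment as a per-interval byte budget is an assumption:

```python
def flush_chunks(chunks, bw_allotment, used_bw):
    """Decide which cached chunks can be written to the array portion
    in the current interval without exceeding its bandwidth allotment
    (bytes per interval). Chunks that do not fit remain in the host
    cache and are deferred to a later flush."""
    written, deferred = [], []
    budget = bw_allotment - used_bw
    for chunk in chunks:
        if len(chunk) <= budget:
            written.append(chunk)
            budget -= len(chunk)
        else:
            deferred.append(chunk)
    return written, deferred
```

The host cache thus absorbs bursts that would otherwise exceed the allotted bandwidth to that portion of the array.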

MACHINE-LEARNING OPTIMIZATION FOR COMPUTING NETWORKS
20170250875 · 2017-08-31 ·

A machine-learning optimization of a plurality of networks is provided. The machine-learning optimization includes interconnecting an online platform providing a machine learning module, a core network of computers deploying novel software, and a plurality of Internet network service providers. The platform collects, via the software, performance data of the Internet networks, which the machine learning module uses to enhance performance and reduce latency in those networks by taking into account thousands of real-time and historical latency and bandwidth metrics. The software thereby continually selects an optimal path through the plurality of Internet networks.
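As a stand-in for the learned selection, a toy scorer over per-path latency and bandwidth metrics; the fixed weights here are assumptions where the machine learning module would learn them from the collected data:

```python
def select_path(paths, w_latency=0.7, w_bandwidth=0.3):
    """Pick the path with the best (lowest) score over the collected
    metrics. Each path is a dict with average latency (ms) and
    available bandwidth (Mbit/s); lower latency and higher bandwidth
    both improve the score."""
    def score(p):
        return w_latency * p["latency_ms"] - w_bandwidth * p["bandwidth_mbps"]
    return min(paths, key=score)
```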