Patent classifications
H04L49/501
USER NOTIFICATION OF CELLULAR SERVICE IMPAIRMENT
When a user of a mobile device attempts to use an application with high data demands, the mobile device queries its cellular data provider for the currently available data transfer rates, based on the device's geographic location and the load on the cellular base station serving it. If the base station is experiencing data congestion likely to degrade the user experience, the mobile device displays a warning suggesting that the user postpone use of the application or try it in a different, less congested geographic location.
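The decision logic described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the rate comparison, and the notification text are all assumptions.

```python
# Hypothetical sketch: the device asks its provider for the available rate
# at its location and warns the user when the serving base station is
# congested. All names and thresholds here are illustrative.

def should_warn_user(app_required_mbps, available_mbps):
    """True when the available rate cannot satisfy the app's demand,
    i.e. congestion at the base station would degrade the experience."""
    return available_mbps < app_required_mbps

def build_notification(congested, alternate_location=None):
    """Compose the warning shown to the user, optionally suggesting a
    less congested location to try instead."""
    if not congested:
        return None
    msg = "Network congestion detected: consider postponing this app"
    if alternate_location:
        msg += f" or trying it near {alternate_location}"
    return msg
```

A streaming app requiring 50 Mbps against an available 10 Mbps would trigger the warning; the reverse would not.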
Supporting multiple virtual switches on a single host
System and method for supporting multiple vSwitches on a single host server. In one aspect, a set of packet processor threads is instantiated to process data packets on behalf of all vSwitches deployed on the host server. For a data packet received at a port of the host server, a packet processor determines the datapath from a mapping table and processes the packet according to the rules defined for that datapath. In one aspect, ports (physical and/or virtual) can be assigned to specified vSwitches dynamically.
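The port-to-datapath lookup described above can be sketched as a small table-driven dispatcher. The table names, rule format, and port identifiers are illustrative assumptions, not the patent's data structures.

```python
# Minimal sketch of the shared-processor model: one pool of packet
# processor threads serves every vSwitch on the host, and a mapping table
# resolves an ingress port to the datapath whose rules apply.

PORT_TO_DATAPATH = {          # port id -> vSwitch datapath id (assumed)
    "eth0": "vswitch-a",
    "vnic3": "vswitch-b",
}

DATAPATH_RULES = {            # datapath id -> ordered (match, action) rules
    "vswitch-a": [(lambda pkt: pkt["dst"] == "10.0.0.2", "forward:vnic1")],
    "vswitch-b": [(lambda pkt: True, "drop")],
}

def process_packet(ingress_port, pkt):
    """Resolve the datapath for the ingress port, then apply its rules."""
    datapath = PORT_TO_DATAPATH.get(ingress_port)
    if datapath is None:
        return "drop"                       # unmapped port
    for match, action in DATAPATH_RULES[datapath]:
        if match(pkt):
            return action
    return "drop"                           # no rule matched

def attach_port(port, datapath):
    """Ports can be (re)assigned to a vSwitch at runtime."""
    PORT_TO_DATAPATH[port] = datapath
```

Because every processor thread consults the same mapping table, any thread can handle a packet for any vSwitch, which is the point of sharing one thread pool across all vSwitches on the host.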
Method and System for Balancing Storage Data Traffic in Converged Networks
Methods for balancing storage data traffic in a system in which at least one computing device (server) coupled to a converged network accesses at least one storage device coupled (by at least one adapter) to the network, systems configured to perform such methods, and devices configured to implement such methods or for use in such systems. Typically, the system includes servers and adapters; server agents implemented on the servers and adapter agents implemented on the adapters are configured to detect and respond to imbalances in storage data traffic in the network, and to redirect the storage data traffic to reduce the imbalances and thereby improve overall network performance (for both data communications and storage traffic). Typically, each agent operates autonomously (except that an adapter agent may respond to a request or notification from a server agent), and no central computer or manager directs operation of the agents.
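The autonomous detect-and-redirect behavior of each agent can be sketched as below. The tolerance value and the per-path load representation are assumptions for illustration; the abstract does not specify them.

```python
# Hedged sketch: an agent inspects its own per-path traffic counters and,
# when one path carries a disproportionate share, redirects new flows to
# the least-loaded path. No central manager is involved.

def detect_imbalance(path_loads, tolerance=0.2):
    """True when the busiest path exceeds the mean load by `tolerance`.
    path_loads: dict mapping path id -> observed load (e.g. MB/s)."""
    mean = sum(path_loads.values()) / len(path_loads)
    return max(path_loads.values()) > mean * (1 + tolerance)

def rebalance(path_loads):
    """Pick the least-loaded path as the redirect target."""
    return min(path_loads, key=path_loads.get)
```

Each agent runs this loop against only its local counters, which matches the abstract's claim that agents operate autonomously with no central coordinator.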
Blind mobility load balancing between source and target cells
An example method is provided and includes monitoring a plurality of neighboring nodes to detect attempts to perform mobility load balancing by the neighboring nodes; detecting an attempt to perform mobility load balancing by at least one neighboring node; determining an identity of the at least one neighboring node; and setting an offset parameter for the identified neighboring node(s) to an optimal value. The monitoring may include monitoring a number of handover requests received within a predetermined time period. The detecting may include detecting that the number of handover requests received within the predetermined time period exceeds a predetermined number. The determining may include determining a source of each of the received handover requests, which may be accomplished by examining a user equipment history included in each of the received handover requests or by performing an iterative elimination process with respect to the neighboring nodes.
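The count-threshold detection and offset-setting steps can be sketched as follows. The window representation (a list of source-node ids) and the threshold are illustrative assumptions.

```python
# Illustrative sketch: count handover requests per source node within a
# time window and flag a neighbor as attempting mobility load balancing
# when its count exceeds a threshold; then set that neighbor's offset.

from collections import Counter

def detect_mlb_sources(handover_requests, threshold):
    """handover_requests: iterable of source-node ids seen in the window
    (e.g. recovered from the UE history in each request).
    Returns the set of neighbors whose count exceeds `threshold`."""
    counts = Counter(handover_requests)
    return {node for node, n in counts.items() if n > threshold}

def set_offsets(offsets, mlb_nodes, optimal_value):
    """Set the per-neighbor offset parameter for each identified node."""
    for node in mlb_nodes:
        offsets[node] = optimal_value
    return offsets
```

With five requests from node "a" and one from "b" against a threshold of three, only "a" is flagged and receives the new offset.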
PACKET BUFFER LATENCY MITIGATION IN A NETWORK DEVICE
A network device includes a plurality of network interfaces and an ingress processor configured to process packets received by the network device to determine network interfaces, among the plurality of network interfaces, via which the packets are to be transmitted by the network device. The network device also includes a memory device configured to buffer packet data corresponding to the packets while the packets are being processed by the network device and a memory controller configured to select a buffering scheme for buffering a packet in the memory device based on a congestion state of a network interface via which the packet is to be transmitted. The buffering scheme is selected among a first buffering scheme having a first latency associated with buffering packet data and a second buffering scheme having a second latency, smaller than the first latency, associated with buffering packet data.
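One plausible reading of the memory controller's selection rule is sketched below: packets destined for a congested interface must wait anyway, so they can take the higher-latency (e.g. larger, external) buffer, while packets for uncongested interfaces take the lower-latency scheme. The scheme names, the queue-depth congestion test, and the threshold are all assumptions.

```python
# Hedged sketch of per-packet buffering-scheme selection keyed on the
# congestion state of the packet's egress interface.

HIGH_LATENCY = "dram_buffer"    # first scheme: larger buffer, higher latency
LOW_LATENCY = "on_chip_buffer"  # second scheme: smaller buffering latency

def is_congested(queue_depth, threshold=64):
    """Hypothetical congestion test: egress queue deeper than a threshold."""
    return queue_depth > threshold

def select_buffering_scheme(egress_queue_depth):
    """Congested egress -> the packet waits anyway, so the high-latency
    scheme is acceptable; otherwise use the low-latency scheme."""
    if is_congested(egress_queue_depth):
        return HIGH_LATENCY
    return LOW_LATENCY
```

This keeps the scarce low-latency buffer available for packets that can actually benefit from it.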
Dynamically reconfiguring data plane of forwarding element to account for power consumption
Some embodiments of the invention provide a network forwarding element that can be dynamically reconfigured to adjust its data message processing to stay within a desired operating temperature or power consumption range. In some embodiments, the network forwarding element includes (1) a data-plane forwarding circuit (data plane) to process data tuples associated with data messages received by the forwarding element, and (2) a control-plane circuit (control plane) for configuring the data-plane forwarding circuit. The data plane includes several data processing stages to process the data tuples.
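The control-plane feedback loop implied above can be sketched as a simple hysteresis rule: shed data-plane capacity when power (or temperature) exceeds the target range, and restore it when there is headroom. The stage-count knob and the watt thresholds are illustrative assumptions, not the patent's mechanism.

```python
# Hedged sketch: the control plane periodically reads a power (or thermal)
# sensor and reconfigures how many data-plane processing stages are
# enabled, keeping the device inside its target operating range.

def reconfigure(active_stages, max_stages, power_watts, lo=80.0, hi=100.0):
    """Return the new number of enabled data-plane stages."""
    if power_watts > hi and active_stages > 1:
        return active_stages - 1     # over budget: shed load to cool down
    if power_watts < lo and active_stages < max_stages:
        return active_stages + 1     # headroom available: restore a stage
    return active_stages             # within range: leave as configured
```

Adjusting by one stage per control-loop tick avoids oscillating between extremes when the load hovers near a threshold.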
Low overhead error correction code
Memory requests are protected by encoding them to include error correction codes. A subset of bits in a memory request is compared to a pre-defined pattern to determine whether the subset matches the pattern, where a match indicates that a compression can be applied to the memory request. The error correction code is generated for the memory request, and the memory request is encoded to remove the subset of bits, add the error correction code, and add at least one metadata bit that identifies whether the compression was applied, producing a protected version of the memory request.
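The encode/decode flow can be illustrated with a toy bit-level sketch. A single even-parity bit stands in for a real error correction code, and the 4-bit all-zero pattern is an arbitrary assumption; the point is the structure: pattern match, bit removal, and the metadata bit recording whether removal happened.

```python
# Toy sketch of the described flow: if a fixed subset of the request's bits
# matches a known pattern, those bits are dropped (compression), freeing
# room for the ECC; a metadata bit records whether the drop happened.

PATTERN = [0, 0, 0, 0]          # assumed pre-defined pattern (top 4 bits)

def parity(bits):
    """Single even-parity bit, standing in for a real error correction code."""
    return sum(bits) % 2

def encode(request_bits):
    subset, rest = request_bits[:4], request_bits[4:]
    ecc = parity(request_bits)  # ECC always covers the full original request
    if subset == PATTERN:
        return [1, ecc] + rest          # metadata bit = 1: subset removed
    return [0, ecc] + request_bits      # metadata bit = 0: stored verbatim

def decode(protected):
    meta, ecc, payload = protected[0], protected[1], protected[2:]
    bits = (PATTERN + payload) if meta else payload
    assert parity(bits) == ecc, "error detected"
    return bits
```

When the pattern matches, the protected request carries two extra bits (metadata plus ECC) but omits four, for a net saving; that headroom is what makes the scheme "low overhead".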