Patent classifications
H04L12/865
SYSTEM AND METHOD OF LOAD BALANCING ACROSS A MULTI-LINK GROUP
A method and apparatus of a device that queues an out-of-order packet received on a multi-link group is described. In an exemplary embodiment, the device receives a packet on a link of the multi-link group of a network element, where the packet is part of a data flow. The device further examines the packet if the packet is associated with a re-orderable route. In addition, the device examines the packet by retrieving a packet sequence number from the packet and comparing the packet sequence number with the last received sequence number for this data flow. The device transmits the packet if the packet is the next packet in the data flow. If the packet is out-of-order, the device queues the packet.
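The receive-examine-queue logic described above can be sketched as a small per-flow reorderer. This is an illustrative sketch, not the patented implementation; the class and field names are assumptions.

```python
from collections import defaultdict

class FlowReorderer:
    """Sketch of per-flow reordering on a multi-link group: in-order
    packets are transmitted immediately, out-of-order packets are
    queued until the sequence gap is filled."""

    def __init__(self):
        self.next_seq = defaultdict(int)  # next expected sequence number per flow
        self.pending = defaultdict(dict)  # per flow: seq -> queued packet
        self.transmitted = []             # packets released, in order

    def receive(self, flow, seq, packet):
        if seq == self.next_seq[flow]:
            # Next packet in the data flow: transmit it, then drain any
            # queued successors that are now in order.
            self.transmitted.append(packet)
            self.next_seq[flow] += 1
            while self.next_seq[flow] in self.pending[flow]:
                self.transmitted.append(
                    self.pending[flow].pop(self.next_seq[flow]))
                self.next_seq[flow] += 1
        else:
            # Out-of-order: hold the packet until the gap closes.
            self.pending[flow][seq] = packet
```

Packets arriving on different links of the group may interleave arbitrarily; the reorderer restores per-flow order before transmission.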
METHOD FOR TRAFFIC SHAPING OF DATA FRAMES IN NETWORK AND DEVICE AND COMPUTER PROGRAM PRODUCT THEREFOR
The present invention relates to packet-switched networks, such as Ethernet, and more particularly to a method for traffic shaping of data frames to transmit in such a telecommunication network, the frames to transmit being distinguished as: express frames, which need to be sent within predetermined time windows, and normal frames, which are intended to be sent at times outside said time windows. More particularly, for a current normal frame, the method comprises the steps of: determining whether said normal frame can be fragmented, and if yes: determining whether a remaining time to a next time window opening is enough to transmit one or several fragments of said normal frame, and if yes: transmitting said one or several fragments.
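The fragmentation decision above can be sketched as a guard-band check before the next express window. The minimum fragment size and overhead parameter are assumptions for illustration (frame-preemption schemes typically also require the remainder to be a valid fragment).

```python
def schedule_normal_frame(frame_len, remaining_time, line_rate,
                          min_fragment=64, overhead=0):
    """Return how many bytes of a fragmentable normal frame to send
    before the next express time window opens (0 if nothing fits).

    frame_len      -- normal frame length in bytes
    remaining_time -- seconds until the next express window opens
    line_rate      -- link rate in bytes per second
    """
    sendable = int(remaining_time * line_rate) - overhead
    if sendable >= frame_len:
        return frame_len      # the whole frame fits before the window
    if sendable >= min_fragment and frame_len - sendable >= min_fragment:
        return sendable       # send a fragment; the remainder waits
    return 0                  # hold the frame until after the window
```

This keeps the express windows free for express frames while still using otherwise-idle time before a window to move fragments of normal traffic.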
Method and system for enforcing multiple rate limits with limited on-chip buffering
The present application describes a system and method for rate limiting traffic of a virtual machine (VM). In this regard, a VM bypasses a hypervisor and enqueues a packet on an assigned transmission queue. Based on information contained in the packet, the NIC determines whether the packet is to be delayed or transmitted immediately. If the NIC determines that the packet is to be transmitted immediately, the packet is moved to one of a plurality of primary output queues to be transmitted to the external network. If the packet is to be delayed, the packet is moved to one of a plurality of rate limited secondary output queues. In this way, the NIC classifies the packets, thereby improving performance by allowing high-rate flows to bypass the hypervisor.
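The classify-then-queue step can be sketched as below. The token-bucket accounting is an assumption for illustration; the abstract only says the NIC decides, per packet, between immediate transmission and a rate-limited secondary queue.

```python
class NicRateLimiter:
    """Sketch of per-flow NIC classification: packets within the rate
    limit go to a primary output queue (sent immediately); packets
    exceeding it go to a rate-limited secondary output queue."""

    def __init__(self, rate_bytes_per_s, burst):
        self.rate = rate_bytes_per_s
        self.burst = burst
        self.tokens = burst   # start with a full bucket
        self.last = 0.0
        self.primary, self.secondary = [], []

    def enqueue(self, packet, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= len(packet):
            self.tokens -= len(packet)
            self.primary.append(packet)    # transmit immediately
        else:
            self.secondary.append(packet)  # delay in rate-limited queue
```

Because the decision happens on the NIC, the hypervisor stays out of the per-packet path, which is where the claimed performance gain comes from.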
Data communications network for an aircraft
A method for servicing multiple data queues in communication with a data communications network. The multiple data queues may receive data of differing priority and/or the data queues may be arranged for data with a predetermined priority. The data in the data queues may be serviced by the same processor. A schedule may be applied to the data in the data queues to control the servicing of the data in the data queues.
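A minimal sketch of schedule-driven servicing, assuming the schedule is a fixed list of queue names that a single processor walks in order (the abstract does not specify the schedule's form):

```python
def service_queues(queues, schedule):
    """Service multiple priority queues with one processor according
    to a schedule: each schedule entry names the queue to take the
    next data item from. Returns items in the order serviced."""
    handled = []
    for name in schedule:
        q = queues.get(name)
        if q:  # skip empty or unknown queues
            handled.append(q.pop(0))
    return handled
```

A schedule like `["high", "low", "high"]` gives the high-priority queue two service slots for every one the low-priority queue gets, without starving either.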
Transmission device and transmission method
A transmission device includes: a first counter; a counter control unit configured to increment the first counter at a specified rate; a frame buffer configured to store a received frame; and a buffer control unit configured to read a frame from the frame buffer when a value of the first counter is larger than a specified threshold and output the frame. When a length of an output frame read from the frame buffer by the buffer control unit is shorter than a specified reference frame length, the counter control unit decrements the first counter by a value indicating the reference frame length. When the length of the output frame is longer than or equal to the reference frame length, the counter control unit decrements the first counter by a value indicating the length of the output frame.
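The counter behavior described in the abstract can be sketched directly: increment at a fixed rate, release a frame when the counter exceeds the threshold, and charge short frames the reference length so that streams of minimum-size frames cannot exceed the shaped rate. Class and parameter names are illustrative.

```python
class ShaperCounter:
    """Sketch of the counter-controlled frame buffer from the
    abstract: tick() models the rate-based increment, try_output()
    models the threshold-gated read and the decrement rule."""

    def __init__(self, rate, threshold, reference_len):
        self.counter = 0
        self.rate = rate                  # counter units added per tick
        self.threshold = threshold
        self.reference_len = reference_len
        self.buffer = []                  # FIFO of received frames

    def tick(self):
        self.counter += self.rate         # increment at the specified rate

    def try_output(self):
        if self.buffer and self.counter > self.threshold:
            frame = self.buffer.pop(0)
            # Frames shorter than the reference length are charged the
            # reference length; longer frames are charged their length.
            self.counter -= max(len(frame), self.reference_len)
            return frame
        return None
```

The `max(...)` decrement is the key detail: without it, a burst of short frames would drain the counter too slowly and overshoot the intended output rate.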
NETWORK FLOW CONTROL
Aspects of the present disclosure include a content delivery network (CDN) for delivering content associated with a plurality of different types of applications/devices. Using a CDN flow application, a plurality of network flow parameters are generated for content delivery unique to different types of applications or devices. The network flow parameters include customized data transmission rates. The network flow parameters include predetermined settings for transmission control protocol (TCP) connections between the CDN and devices using a TCP flow control mechanism. Upon receiving a content request, the CDN fulfills the content request based upon first network flow parameters. The network flow parameters may be adjusted for each of the plurality of different types of applications/devices. The network flow parameters may be generated based upon requests or based upon the performance of each of the plurality of applications/devices.
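The per-application parameter selection can be sketched as a lookup table keyed by device/application type. The table entries and field names below are placeholders, not values from the abstract.

```python
# Hypothetical per-device-type network flow parameters; real entries
# would be generated and adjusted per the CDN flow application.
FLOW_PARAMS = {
    "mobile_video": {"max_rate_kbps": 4000, "tcp_init_cwnd": 10},
    "iot_sensor":   {"max_rate_kbps": 64,   "tcp_init_cwnd": 4},
}
DEFAULT_PARAMS = {"max_rate_kbps": 1000, "tcp_init_cwnd": 10}

def params_for(device_type):
    """Select the network flow parameters used to fulfill a content
    request, falling back to defaults for unrecognized types."""
    return FLOW_PARAMS.get(device_type, DEFAULT_PARAMS)
```

In the described system these parameters would then feed the TCP flow control mechanism for the connection, and could be re-tuned based on observed per-device performance.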
MANAGING NETWORK TRAFFIC
Examples relate to managing network traffic. In one example, a computing device may: receive voice network traffic from each of a plurality of voice clients; enqueue the received voice network traffic into a voice Wi-Fi Multimedia (WMM) queue; determine a measure of WMM queue utilization based on data queued in the voice WMM queue; determine a measure of radio congestion for a surrounding area; and determine, based on the measure of WMM queue utilization and the measure of radio congestion, to: stop prioritization of newly received voice traffic from new voice clients, or transition at least one of the plurality of voice clients to a neighboring computing device.
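The two-input decision at the end of the abstract can be sketched as a small policy function. The thresholds and the rule for choosing between the two actions are assumptions; the abstract only says the decision is based on both measures.

```python
def voice_admission(queue_util, radio_congestion,
                    util_limit=0.8, congestion_limit=0.7):
    """Decide how to handle voice traffic given voice WMM queue
    utilization and surrounding radio congestion (both in 0..1).

    Returns one of:
      "prioritize"                     -- normal operation
      "stop_prioritizing_new_voice"    -- queue is full but air is clear
      "transition_client_to_neighbor"  -- both queue and radio loaded
    """
    if queue_util > util_limit and radio_congestion > congestion_limit:
        return "transition_client_to_neighbor"
    if queue_util > util_limit:
        return "stop_prioritizing_new_voice"
    return "prioritize"
```

Splitting the two remedies this way reflects the idea that moving clients to a neighboring device only helps when the local radio itself is congested, not just the queue.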
Techniques for enabling packet prioritization without starvation in communications networks
A method is provided in one example embodiment and includes determining whether a packet received at a network node in a communications network is a high priority packet; determining whether a low priority queue of the network node has been deemed to be starving; if the packet is a high priority packet and the low priority queue has not been deemed to be starving, adding the packet to a high priority queue, wherein the high priority queue has strict priority over the low priority queue; and if the packet is a high priority packet and the low priority queue has been deemed to be starving, adding the packet to the low priority queue.
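The enqueue rule in the claim can be sketched in a few lines: high-priority packets normally take the strict-priority queue, but are diverted to the low-priority queue once that queue is deemed starving, so the strict-priority scheduler is forced to drain it. How "starving" is detected is left open here.

```python
def enqueue(packet, high_q, low_q, low_q_starving):
    """Starvation-aware priority enqueue per the claim: a high-priority
    packet joins the high queue (which has strict priority) unless the
    low queue has been deemed starving, in which case it joins the low
    queue. Low-priority packets always join the low queue."""
    if packet["prio"] == "high" and not low_q_starving:
        high_q.append(packet)
    else:
        low_q.append(packet)
```

Because the high-priority traffic itself ends up behind the stalled low-priority packets, the strict-priority scheduler drains the low queue without needing a separate anti-starvation scheduler.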
Flow-based adaptive private network with multiple WAN-paths
Systems and techniques are described which improve performance, reliability, and predictability of networks without having costly hardware upgrades or replacement of existing network equipment. An adaptive communication controller provides WAN performance and utilization measurements to another network node over multiple parallel communication paths across disparate asymmetric networks which vary in behavior frequently over time. An egress processor module receives communication path quality reports and tagged path packet data and generates accurate arrival times, send times, sequence numbers and unutilized byte counts for the tagged packets. A control module generates path quality reports describing performance of the multiple parallel communication paths based on the received information and generates heartbeat packets for transmission on the multiple parallel communication paths if no other tagged data has been received in a predetermined period of time to ensure performance is continually monitored. An ingress processor module transmits the generated path quality reports and heartbeat packets.
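The heartbeat rule in the abstract reduces to a per-path idle check, sketched below; the data structure and parameter names are illustrative.

```python
def paths_needing_heartbeat(last_tagged_rx_by_path, now, idle_limit):
    """Return the communication paths on which a heartbeat packet
    should be generated: those with no tagged data received within
    idle_limit seconds, so path performance stays continually
    monitored even on quiet paths.

    last_tagged_rx_by_path -- path name -> timestamp of last tagged data
    """
    return [path for path, t in last_tagged_rx_by_path.items()
            if now - t >= idle_limit]
```

On busy paths the tagged data packets themselves carry the timing and sequence information for quality reports, so heartbeats are only generated to fill gaps.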
Controlling notification based on power expense and social factors
In one embodiment, a computer system receives an indication of a power state of a client device, identifies messages to be sent to the client device, determines a transmit cost and a value of each of the messages, and stores at least one of the messages in a queue based on the transmit cost and value of the message.
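A minimal sketch of that cost/value gating, assuming a simple size-based cost model and a low-battery power state (both are placeholders; the abstract does not define how cost, value, or power state are computed):

```python
def dispatch(messages, power_state, cost_limit):
    """Split pending messages for a client into (sent, queued) based on
    transmit cost, message value, and the device's power state.
    Each message is a dict with placeholder "size" and "value" fields."""
    sent, queued = [], []
    for msg in messages:
        # Hypothetical cost model: transmission is twice as expensive
        # when the device reports a low battery.
        cost = msg["size"] * (2 if power_state == "low_battery" else 1)
        if power_state == "low_battery" and cost > cost_limit * msg["value"]:
            queued.append(msg)  # defer until the power state improves
        else:
            sent.append(msg)
    return sent, queued
```

The effect is that large, low-value notifications are held back on a power-constrained device while small or high-value ones still go through.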