Patent classifications
H04L47/626
Method and system for storing packets for bonded communication links
Method and system for storing packets received from bonded communication links according to the latency of the communication link that has the largest latency among all communication links of the bonded communication links. Embodiments of the present invention can be applied to bonded communication links, including wireless connections, Ethernet connections, Internet Protocol connections, asynchronous transfer mode, virtual private networks, WiFi, high-speed downlink packet access, GPRS, LTE, and X.25. The present invention presents methods comprising the step of estimating the storage size of a queue, wherein the queue is for storing the one or more packets received from the bonded communication links. The storage size is based on one or more factors, including the largest latency, the bandwidth of each of the plurality of communication links, and the allowed time duration of packet storage.
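The abstract's sizing rule can be read as: packets from faster links must wait up to the largest link latency, so the queue must absorb that much traffic across all bonded links, capped by the allowed storage duration. A minimal sketch under that reading (the function name and the min-cap behaviour are assumptions, not the claimed method):

```python
def estimate_queue_size(latencies_s, bandwidths_bps, max_storage_s):
    """Estimate queue storage (bytes) for packets from bonded links.

    Packets arriving over the fastest links may wait up to the largest
    link latency for in-order release, so the queue must hold roughly
    that much aggregate traffic, capped by the allowed storage duration.
    """
    largest_latency = max(latencies_s)
    total_bandwidth = sum(bandwidths_bps)
    hold_time = min(largest_latency, max_storage_s)
    return int(hold_time * total_bandwidth / 8)  # bits -> bytes

# Three bonded links: 20 ms, 50 ms, 120 ms latency; 10/5/2 Mbit/s;
# storage allowed for at most 100 ms.
size = estimate_queue_size([0.020, 0.050, 0.120], [10e6, 5e6, 2e6], 0.100)
```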
Monitoring and Surveillance System Arranged for Processing Video Data Associated with a Vehicle, as well as Corresponding Devices and Method
A monitoring and surveillance system arranged for processing video data associated with a vehicle, wherein said system is arranged to operate in at least two operating modi, a first modus of said two modi being associated with a first latency requirement for said video data and a second modus of said two modi being associated with a second latency requirement, said system comprising a camera unit, arranged to be installed in said vehicle, wherein said camera unit is arranged for capturing video data; a streaming unit, arranged to be installed in said vehicle, and arranged for receiving said video data and for transmitting said video data over a telecommunication network to a video processing server; said video processing server arranged for selecting a modus of said at least two operating modi, and for communicating said selected modus, over said telecommunication network, to said camera unit such that said streaming unit can be tuned to said selected modus. Complementary systems and methods are also presented herein.
SYSTEM AND METHOD FOR FACILITATING EFFICIENT PACKET FORWARDING IN A NETWORK INTERFACE CONTROLLER (NIC)
A network interface controller (NIC) capable of efficient packet forwarding is provided. The NIC can be equipped with a host interface, a packet generation logic block, and a forwarding logic block. During operation, the packet generation logic block can obtain, via the host interface, a message from the host device destined for a remote device. The packet generation logic block may generate a plurality of packets for the remote device from the message. The forwarding logic block can then send a first subset of packets of the plurality of packets based on ordered delivery. If a first condition is met, the forwarding logic block can send a second subset of packets of the plurality of packets based on unordered delivery. Furthermore, if a second condition is met, the forwarding logic block can send a third subset of packets of the plurality of packets based on ordered delivery.
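The ordered/unordered/ordered split described above can be sketched as a simple planner. The two predicates stand in for the patent's unspecified first and second conditions (e.g., congestion or credit state); they are illustrative assumptions, not the NIC's actual logic:

```python
def forward_message(packets, switch_to_unordered, switch_back):
    """Partition a message's packets into ordered / unordered / ordered
    subsets, switching mode when the given condition predicates fire.

    Returns a list of (subset, mode) pairs in transmission order.
    """
    plan = []
    mode = "ordered"
    current = []
    for i, pkt in enumerate(packets):
        if mode == "ordered" and switch_to_unordered(i):
            plan.append((current, mode))
            current, mode = [], "unordered"
        elif mode == "unordered" and switch_back(i):
            plan.append((current, mode))
            current, mode = [], "ordered"
        current.append(pkt)
    plan.append((current, mode))
    return plan

# First condition fires at packet 3, second condition at packet 7.
plan = forward_message(list(range(10)), lambda i: i == 3, lambda i: i == 7)
```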
Flow-based adaptive private network with multiple WAN-paths
Systems and techniques are described which improve performance, reliability, and predictability of networks without requiring costly hardware upgrades or replacement of existing network equipment. An adaptive communication controller provides WAN performance and utilization measurements to another network node over multiple parallel communication paths across disparate asymmetric networks which vary in behavior frequently over time. An egress processor module receives communication path quality reports and tagged path packet data and generates accurate arrival times, send times, sequence numbers and unutilized byte counts for the tagged packets. A control module generates path quality reports describing performance of the multiple parallel communication paths based on the received information, and generates heartbeat packets for transmission on the multiple parallel communication paths if no other tagged data has been received in a predetermined period of time, to ensure performance is continually monitored. An ingress processor module transmits the generated path quality reports and heartbeat packets.
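The heartbeat rule in the control module above reduces to: emit a heartbeat on a path only if no tagged packet has been seen within the predetermined period. A minimal sketch of that rule (the 1-second default and the heartbeat dict format are illustrative assumptions):

```python
import time

class PathMonitor:
    """Emit a heartbeat for a path if no tagged packet was seen recently,
    so path quality is continually measurable even on idle paths."""

    def __init__(self, period_s=1.0, clock=time.monotonic):
        self.period_s = period_s
        self.clock = clock
        self.last_tagged = clock()

    def on_tagged_packet(self):
        # Any tagged data resets the idle timer; no heartbeat needed.
        self.last_tagged = self.clock()

    def maybe_heartbeat(self):
        now = self.clock()
        if now - self.last_tagged >= self.period_s:
            self.last_tagged = now
            return {"type": "heartbeat", "send_time": now}
        return None
```

A fake clock makes the behaviour easy to exercise deterministically in tests.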
Link aggregation based on estimated time of arrival
The present disclosure relates to a communication arrangement (110, 130) adapted for link aggregation of a plurality of communication links (120a, 120b, 120c), comprised in an Aggregation Group, AG, (121). The communication arrangement (110, 130) is adapted to communicate via the plurality of communication links (120a, 120b, 120c) and comprises a traffic handling unit (112, 132) that is adapted to obtain data segments (414-423) to be transmitted, and to determine a risk of re-ordering of data segments within a certain data flow (401, 404) comprising a certain data segment (416, 417; 421). Said risk is associated with transmitting said certain data segment via a certain communication link out of the plurality of communication links (120a, 120b, 120c). The traffic handling unit (112, 132) is furthermore adapted to buffer said certain data segment (416, 417; 421) until the risk of re-ordering satisfies a predetermined criterion, prior to transmitting said certain data segment (416, 417; 421) via the selected communication link.
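One concrete way to quantify the re-ordering risk above is via estimated arrival times: a segment sent on a fast link would be re-ordered if it arrived before an earlier segment of the same flow still in flight on a slower link, so it is held until its ETA is no earlier. This helper is a hypothetical illustration, not the claimed algorithm:

```python
def reorder_hold_time(prev_eta, candidate_latency_s, now):
    """How long to buffer a segment before sending on a candidate link.

    If the segment would arrive (now + candidate link latency) before a
    previously sent segment of the same flow (prev_eta), it risks
    re-ordering; hold it until its own ETA catches up.
    """
    eta = now + candidate_latency_s
    return max(0.0, prev_eta - eta)

# An earlier segment on a slow link arrives at t=0.30 s; the fast
# candidate link has 0.05 s latency and the current time is t=0.10 s.
hold = reorder_hold_time(prev_eta=0.30, candidate_latency_s=0.05, now=0.10)
```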
SYSTEMS AND METHODS FOR LATENCY REDUCTION USING MAP STAGGERING
A scheduling unit is provided for managing upstream message allocation in a communication network. The scheduling unit includes a processor configured to determine (i) a number of channels communicating in one directional stream of the communication network, and (ii) a MAP interval duration of the communication network. The scheduling unit further includes a media access control (MAC) domain configured to (i) calculate a staggered allocation start time for each separate channel of the number of channels, and (ii) assign a different allocation start time, within the MAP interval duration, to each separate channel.
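A plausible reading of the staggering above is evenly spaced offsets within one MAP interval: channel i starts at i × (interval / number of channels). The even spacing is an assumption for illustration; the abstract only requires the start times to differ:

```python
def staggered_start_times(num_channels, map_interval_ms):
    """Assign each upstream channel a distinct allocation start time,
    evenly staggered within one MAP interval (even spacing assumed)."""
    step = map_interval_ms / num_channels
    return [round(i * step, 6) for i in range(num_channels)]

# 4 channels with a 2 ms MAP interval: starts at 0, 0.5, 1.0, 1.5 ms,
# so no two channels begin their allocation at the same instant.
starts = staggered_start_times(4, 2.0)
```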
MANAGING DATA FLOW BETWEEN SOURCE NODE AND RECIPIENT NODE
There is provided a method of managing a data flow between a source node and a recipient node. The method comprises storing, at the source node, data frames into a buffer for transmission to the recipient node over a host-to-host protocol connection; measuring, at the source node, a connection quality of the host-to-host protocol connection; adjusting, at the source node, one or more target parameters of the transmission on the basis of the measured connection quality; and transmitting, by the source node, data frames from the buffer to the recipient node on the basis of a Last-In, First-Out (LIFO) method and the adjusted one or more target parameters.
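The LIFO transmission above can be sketched as a sender that always pops the newest frame and lets quality measurements tune a target parameter. The age-based drop rule and the RTT-derived target are illustrative assumptions standing in for the unspecified "target parameters":

```python
class LifoSender:
    """Transmit buffered frames Last-In, First-Out, skipping frames
    older than a target age tuned from measured connection quality."""

    def __init__(self, max_age_s):
        self.max_age_s = max_age_s
        self.buffer = []  # list of (timestamp, frame), appended in order

    def store(self, ts, frame):
        self.buffer.append((ts, frame))

    def adjust(self, rtt_s):
        # Worse measured connection -> keep only fresher frames (assumed rule).
        self.max_age_s = max(0.05, 2 * rtt_s)

    def next_frame(self, now):
        while self.buffer:
            ts, frame = self.buffer.pop()  # newest first (LIFO)
            if now - ts <= self.max_age_s:
                return frame
        return None
```

LIFO favours freshness: the most recently produced frame goes out first, which suits real-time flows where stale frames lose value.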
System and method for using soft lock with virtual channels in a network-on-chip (NoC)
A system and method for soft locking on an ingress port of a networking device in a network, such as a network-on-chip (NoC). Once a soft lock is established, the port is given transmitting priority so long as the port has an available packet or packet parts that can make forward progress in the network. When the soft lock port's packet parts, which can make forward progress in the network, are not available, the networking device may choose another port. The system transmits packet parts from the other port until the soft locked port has packet parts available that can make forward progress in the network. Any arbitration scheme may be used to select the port that is soft locked and to select the other ports to transmit from when the soft locked port does not have packet parts that can make forward progress in the network. Once the packet (or all the packet parts) on the soft locked port has completed transmission, the soft lock of the soft locked port is released.
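The arbitration rule above boils down to: prefer the soft-locked port whenever it can make forward progress, otherwise fall back to any other ready port. Since the abstract says any arbitration scheme may be used, the lowest-port-id fallback here is purely an illustrative choice:

```python
def select_port(ports, soft_locked):
    """Pick the next ingress port to transmit from.

    ports maps port id -> number of packet parts that can make forward
    progress. The soft-locked port wins whenever it has work; otherwise
    fall back to the lowest-numbered ready port (arbitrary scheme).
    """
    if soft_locked is not None and ports.get(soft_locked):
        return soft_locked
    for p in sorted(ports):
        if p != soft_locked and ports[p]:
            return p
    return None

# Port 0 holds the soft lock but is stalled; ports 1 and 2 have work.
choice = select_port({0: 0, 1: 2, 2: 1}, soft_locked=0)
```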
Systems and methods for queue protection
A scheduling device for managing a packet queue of a communication gateway includes a receiving portion configured to receive data packets according to at least one communication protocol, a processor, and a classification module configured to separate the received data packets into a first traffic queue and a second traffic queue separate from the first traffic queue. The first traffic queue includes a low latency service flow classified to have strict priority. The second traffic queue includes a primary service flow classified as having a classic priority. The classification module separates the received data packets so that those with a first indicator are sent to the first traffic queue and those without the first indicator are sent to the second traffic queue.
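The classification step above is a two-way split on a single indicator. A minimal sketch, assuming packets are dicts and the indicator is a boolean field (the field name "low_latency" is a hypothetical stand-in for the patent's first indicator):

```python
def classify(packets, indicator="low_latency"):
    """Split packets into a strict-priority low-latency queue and a
    classic-priority queue based on the presence of a first indicator."""
    low_latency, classic = [], []
    for pkt in packets:
        (low_latency if pkt.get(indicator) else classic).append(pkt)
    return low_latency, classic

# Packet 1 carries the indicator; packet 2 does not.
ll, cl = classify([{"id": 1, "low_latency": True}, {"id": 2}])
```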
Channelized rate adaptation
Apparatus and method relating generally to a channelized communication system are disclosed. In such a method, a read signal and a switch control signal are generated by a controller. Received by channelized buffers are data words from multiple channels associated with groups of information and the read signal. The data words are read out from the channelized buffers responsive to the read signal. A switch receives the data words from the channelized buffers responsive to the read signal. A gap is inserted between the groups of information by the switch. One or more control words are selectively inserted in the gap by the switch responsive to the switch control signal. The switch control signal has indexes for selection of the data words and the control words.
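The switch behaviour above (gaps between groups, control words selectively filling the gaps) can be modelled on a word stream. The fixed gap length and the "IDLE" filler marker are assumptions for illustration; the real apparatus operates on hardware signals, not Python lists:

```python
def channelize(groups, control_words, gap_len=2):
    """Concatenate groups of data words, inserting a gap between groups
    and selectively filling it with control words; remaining gap slots
    are padded with idle markers."""
    out = []
    for i, group in enumerate(groups):
        out.extend(group)
        if i < len(groups) - 1:
            gap = list(control_words.get(i, []))[:gap_len]
            gap += ["IDLE"] * (gap_len - len(gap))
            out.extend(gap)
    return out

# One control word is selected for the gap after the first group.
stream = channelize([["d0", "d1"], ["d2"]], {0: ["CW"]})
```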