Patent classifications
H04L47/626
Systems and methods for latency reduction using map staggering
A scheduling unit is provided for managing upstream message allocation in a communication network. The scheduling unit includes a processor configured to determine (i) a number of channels communicating in an upstream direction of the communication network, and (ii) a MAP interval duration of the communication network. The scheduling unit further includes a media access control (MAC) domain configured to (i) calculate a staggered allocation start time for each separate channel of the number of channels, and (ii) assign a different allocation start time, within the MAP interval duration, to each separate channel.
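The staggering described above can be pictured as dividing the MAP interval evenly across the upstream channels so that no two channels begin their allocations at the same instant. A minimal sketch, assuming microsecond units and an equal-spacing policy (the function name and units are illustrative, not taken from the patent):

```python
def staggered_start_times(num_channels: int, map_interval_us: float) -> list[float]:
    """Offset each channel's allocation start time by an equal fraction
    of the MAP interval so grants do not all begin at the same instant."""
    step = map_interval_us / num_channels
    return [ch * step for ch in range(num_channels)]

# Four upstream channels in a 2000-microsecond MAP interval:
# channel 0 starts at 0, channel 1 at 500, channel 2 at 1000, channel 3 at 1500.
```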
METHOD AND SYSTEM FOR FACILITATING LOSSY DROPPING AND ECN MARKING
Methods and systems are provided for performing lossy dropping and ECN marking in a flow-based network. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform per-flow packet dropping and ECN marking.
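The per-flow behavior described above can be sketched with two queue-depth thresholds: a lower one at which packets are ECN-marked and a higher one at which they are dropped. The threshold values and class layout are assumptions for illustration, not figures from the patent:

```python
from collections import deque

ECN_THRESHOLD = 8     # mark packets once the flow queue reaches this depth (assumed)
DROP_THRESHOLD = 16   # drop packets once the flow queue is full (assumed)

class FlowQueue:
    """Flow-specific input queue that marks ECN under moderate congestion
    and drops packets under severe congestion."""
    def __init__(self):
        self.packets = deque()

    def enqueue(self, packet: dict) -> str:
        if len(self.packets) >= DROP_THRESHOLD:
            return "dropped"                 # lossy dropping
        if len(self.packets) >= ECN_THRESHOLD:
            packet["ecn_ce"] = True          # mark Congestion Experienced
            self.packets.append(packet)
            return "marked"
        self.packets.append(packet)
        return "accepted"
```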
QoS management for multi-user and single user EDCA transmission mode in wireless networks
A communication method in a communication network comprising a plurality of nodes, at least one node comprising a plurality of traffic queues for serving data traffic at different priorities, each traffic queue being associated with a respective queue backoff value computed from respective queue contention parameters having first and second values in, respectively, first and second contention modes, the method comprising: obtaining quality of service requirements of data stored in a traffic queue of the node; checking whether the quality of service requirements can be fulfilled when accessing the communication channel using the second contention mode; if the requirements cannot be fulfilled as a result of the checking, disabling access to resource units provided by another node within one or more transmission opportunities granted to that other node on the communication channel; and transmitting data stored in the traffic queue using the first contention mode.
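The decision step in the method above reduces to a feasibility check: stay in the second (multi-user) contention mode only if it can meet the queue's QoS budget, otherwise fall back to the first (single-user) mode. A minimal sketch, where the delay-based criterion and both parameter names are illustrative assumptions:

```python
def choose_contention_mode(queue_delay_budget_ms: float,
                           expected_mu_delay_ms: float) -> str:
    """Pick the contention mode for a traffic queue: fall back to the
    first (single-user) mode when the second (multi-user) mode cannot
    meet the queue's delay budget."""
    if expected_mu_delay_ms <= queue_delay_budget_ms:
        return "second (multi-user EDCA)"
    # Requirements not met: decline resource units offered within another
    # node's transmission opportunities and contend for the channel directly.
    return "first (single-user EDCA)"
```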
BANDWIDTH ALLOCATION
An optical line terminal is disclosed. The optical line terminal comprises at least one processor; and at least one memory including machine-readable instructions. The at least one memory and the machine-readable instructions are configured to, with the at least one processor, cause the optical line terminal to determine based on one or more variables a relationship between bandwidth efficiency and latency for communication of contents of a queue buffer of an optical network unit with the optical line terminal via an optical distribution network, and determine a burst schedule for the queue buffer based on the determined relationship.
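The efficiency/latency relationship above reflects a basic PON trade-off: longer burst intervals amortize per-burst overhead (better efficiency) at the cost of queuing latency. A toy sketch of how a burst schedule might be picked from that relationship, where the overhead model, candidate intervals, and efficiency floor are all assumptions:

```python
def pick_burst_interval(candidate_intervals_us, burst_overhead_us, min_efficiency):
    """Choose the shortest burst interval whose bandwidth efficiency
    (payload time / total time) still meets a minimum efficiency target.
    Shorter intervals lower latency; longer intervals raise efficiency."""
    for interval in sorted(candidate_intervals_us):
        efficiency = (interval - burst_overhead_us) / interval
        if efficiency >= min_efficiency:
            return interval
    return max(candidate_intervals_us)  # no candidate meets the target

# With 50 us of guard/preamble overhead per burst and a 0.9 efficiency
# floor, a 250 us interval fails (0.8) but a 500 us interval passes (0.9).
```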
Managing data flow between source node and recipient node
There is provided a method of managing a data flow between a source node and a recipient node. The method comprises storing, at the source node, data frames into a buffer for transmission to the recipient node over a host-to-host protocol connection; measuring, at the source node, a connection quality of the host-to-host protocol connection; adjusting, at the source node, one or more target parameters of the transmission on the basis of the measured connection quality; and transmitting, by the source node, data frames from the buffer to the recipient node on the basis of a Last-In, First-Out (LIFO) method and the adjusted one or more target parameters.
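The LIFO transmission with quality-adjusted parameters can be sketched as a stack of timestamped frames sent newest-first, with a maximum frame age as the adjusted target parameter. The class name, the age-limit parameter, and the RTT-based adjustment rule are illustrative assumptions, not details from the claims:

```python
class LifoSender:
    """Buffer frames and transmit newest-first (LIFO), skipping frames
    older than a quality-dependent age limit."""
    def __init__(self, max_age_s: float = 1.0):
        self.buffer = []          # stack: newest frame on top
        self.max_age_s = max_age_s

    def store(self, frame, now: float):
        self.buffer.append((now, frame))

    def adjust(self, measured_rtt_s: float):
        # Worse measured connection quality -> tolerate only fresher frames.
        self.max_age_s = max(0.1, 1.0 - measured_rtt_s)

    def next_frames(self, now: float):
        """Drain the buffer newest-first, keeping only fresh frames."""
        frames = [f for (t, f) in reversed(self.buffer)
                  if now - t <= self.max_age_s]
        self.buffer.clear()
        return frames
```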
BUFFER DETERMINING METHOD AND APPARATUS
This application provides a buffer determining method and apparatus, to resolve the problem of how a terminal device on a sidelink calculates a buffer size. The method includes: a terminal device determines a sidelink data rate, and determines a buffer size based on the sidelink data rate. In this embodiment of this application, the terminal device may determine the buffer size based on the sidelink data rate, so that a buffer size of the terminal device in sidelink communication can be calculated.
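One common way to derive a buffer size from a data rate is as rate multiplied by a round-trip time. A minimal sketch under that assumption (the rate-times-RTT rule and the 50 ms default are illustrative, not values from the application):

```python
def sidelink_buffer_bytes(sl_data_rate_bps: float, rtt_s: float = 0.05) -> int:
    """Derive a buffer size from the sidelink data rate as
    rate x round-trip time, converted from bits to bytes."""
    return int(sl_data_rate_bps * rtt_s / 8)

# A 100 Mbit/s sidelink rate with a 50 ms RTT yields a 625000-byte buffer.
```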
Link-based autonomous cell scheduling device and method for improved traffic throughput in TSCH protocol
Disclosed is a link-based autonomous cell scheduling device including: a routing information manager that records and manages information on a node's preferred parent node and child nodes in a routing information table; a slot frame manager that generates and modifies a number of slot frames by referring to routing information in the routing information table, packet queue information, and transmission/reception result information; a slot frame schedule determiner that integrates a number of slot frames to generate one integrated slot frame corresponding to a global slot frame size and record the same in an integrated slot frame table, by referring to a link unicast slot frame table, a broadcast slot frame table, and an EB slot frame table; and a TSCH MAC layer driver that operates after checking for a cell assigned to the integrated slot frame by referring to the integrated slot frame table at the TSCH MAC layer.
Monitoring and surveillance system arranged for processing video data associated with a vehicle, as well as corresponding devices and method
A monitoring and surveillance system arranged for processing video data associated with a vehicle, wherein said system is arranged to operate in at least two operating modes, a first mode of said two modes being associated with a first latency requirement for said video data and a second mode of said two modes being associated with a second latency requirement, said system comprising a camera unit, arranged to be installed in said vehicle, wherein said camera unit is arranged for capturing video data; a streaming unit, arranged to be installed in said vehicle, and arranged for receiving said video data and for transmitting said video data over a telecommunication network to a video processing server; said video processing server arranged for selecting a mode of said at least two operating modes, and for communicating said selected mode, over said telecommunication network, to said camera unit such that said streaming unit can be tuned to said selected mode. Complementary systems and methods are also presented herein.
Method for operating a first network device, first network device, and method for operating a communications network
A method for operating a first network device of a communications network, wherein the method comprises: determining or receiving, by means of an ingress interface, an ingress data stream comprising payload data to be transmitted towards a second network device; determining or receiving, by means of a correlation observer, at least one correlation value for a plurality of communication paths between the first network device and the second network device, wherein each of the plurality of communication paths comprises a different one of a plurality of physical channels; determining, by means of a multi-connectivity controller, a plurality of egress data streams in dependence on the at least one correlation value and in dependence on the ingress data stream; and transmitting each of the plurality of egress data streams via a respective one of a plurality of egress queues, wherein each egress data stream is associated with a different one of the plurality of communication paths.
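One plausible use of the correlation value above is to decide between duplicating and striping: weakly correlated channels fail independently, so duplication adds reliability, while strongly correlated channels fade together, so splitting for throughput is the better use of capacity. A sketch under that assumption (the 0.5 threshold and the two policies are illustrative, not taken from the claims):

```python
def plan_egress_streams(payload_packets, path_correlation: float, num_paths: int):
    """Map an ingress stream onto per-path egress streams based on a
    correlation value: weakly correlated channels get duplicates for
    reliability, strongly correlated channels get a round-robin split."""
    if path_correlation < 0.5:
        # Channels fail independently: duplicate every packet on each path.
        return [list(payload_packets) for _ in range(num_paths)]
    # Channels fade together: duplication buys little, so stripe instead.
    streams = [[] for _ in range(num_paths)]
    for i, pkt in enumerate(payload_packets):
        streams[i % num_paths].append(pkt)
    return streams
```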
SYSTEM AND METHOD FOR FACILITATING DATA-DRIVEN INTELLIGENT NETWORK WITH INGRESS PORT INJECTION LIMITS
Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic while applying injection limits to different traffic classes at an ingress edge port. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. Furthermore, an edge switch can dynamically allocate the ingress port bandwidth among the traffic classes that are active at a given moment.
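The dynamic allocation in the last sentence can be sketched as a weighted share of the ingress port's capacity computed over only the currently active traffic classes. The weight values and class names below are illustrative assumptions:

```python
def allocate_ingress_bandwidth(port_capacity_bps: float,
                               class_weights: dict,
                               active_classes: set) -> dict:
    """Divide an ingress edge port's bandwidth among only the traffic
    classes that are active at a given moment, in proportion to
    per-class weights; the result is each class's injection limit."""
    active = {c: w for c, w in class_weights.items() if c in active_classes}
    total = sum(active.values())
    return {c: port_capacity_bps * w / total for c, w in active.items()}

# With classes A(4), B(2), C(2) configured but only A and B active on a
# 100 Gbit/s port, A is limited to 4/6 and B to 2/6 of the port capacity.
```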