H04L47/6295

QUEUE SCHEDULING METHOD, APPARATUS, AND SYSTEM
20230155954 · 2023-05-18

A queue scheduling method, apparatus, and system are provided to manage queues flexibly, meet actual transmission requirements, and reduce resource consumption. The queue scheduling method, implemented by a processing apparatus, includes: generating a hierarchical QoS (HQoS) scheduling tree comprising a plurality of leaf nodes, each of which identifies a queue on a traffic management (TM) hardware entity that includes a plurality of queues; obtaining, based on the plurality of leaf nodes, traffic characteristics of the data flows transmitted by the plurality of queues; determining, based on those traffic characteristics, a scheduling parameter of at least one of the queues; and sending, to a scheduling apparatus, a scheduling message corresponding to the at least one queue in the TM hardware entity, the message including the scheduling parameter used to schedule the at least one queue.
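The control loop the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the flat tree, the field names, and the rate-proportional weight used as the scheduling parameter are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class LeafNode:
    queue_id: int                 # identifies a queue on the TM hardware entity
    observed_rate_bps: float = 0.0

@dataclass
class SchedulingMessage:
    queue_id: int
    weight: int                   # the scheduling parameter for this queue

def build_tree(queue_ids):
    """Generate a (here, flat) HQoS scheduling tree: one leaf node per TM queue."""
    return [LeafNode(qid) for qid in queue_ids]

def collect_traffic(leaves, rate_samples):
    """Obtain traffic characteristics of the queues via their leaf nodes."""
    for leaf in leaves:
        leaf.observed_rate_bps = rate_samples.get(leaf.queue_id, 0.0)

def derive_messages(leaves, total_weight=100):
    """Determine a per-queue scheduling parameter (a share weight proportional
    to the observed rate) and wrap it in a message for the scheduling apparatus."""
    total = sum(l.observed_rate_bps for l in leaves) or 1.0
    return [SchedulingMessage(l.queue_id,
                              max(1, round(total_weight * l.observed_rate_bps / total)))
            for l in leaves]

leaves = build_tree([0, 1, 2])
collect_traffic(leaves, {0: 4e6, 1: 1e6, 2: 3e6})
msgs = derive_messages(leaves)
```

Here the busiest queue receives the largest share of the schedule, which is one plausible way a processing apparatus could translate observed traffic into scheduling parameters.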

VIRTUAL DUAL QUEUE CORE STATELESS ACTIVE QUEUE MANAGEMENT (AQM) FOR COMMUNICATION NETWORKS
20230142425 · 2023-05-11

A method for handling data packets by a communication node in a communication network. The method comprises storing received data packets in at least two physical queues, wherein a first of the physical queues is associated with low latency data packets and a second with high latency data packets, and each data packet is stored in one of the queues based on a delay characteristic associated with that packet. For each received data packet, an associated information record is stored in at least two virtual queues (VQs): associated information for packets stored in the high latency physical queue is stored in a second of the virtual queues, while associated information for packets stored in the low latency physical queue is stored in both the first and second virtual queues. Data packets are served from the at least two physical queues using at least two Congestion Threshold Values (CTVs), wherein a first CTV is applicable to data packets in the low latency physical queue, and both the first and second CTVs are applicable to data packets in the low latency physical queue and data packets in the high latency physical queue. The CTVs are used for at least one of dropping and marking packets based on their associated information.
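The dual physical queue / dual virtual queue bookkeeping might be sketched as below. The class name, the threshold values, and the simple length-based marking rule are assumptions for illustration, not the patented algorithm.

```python
import collections

class VirtualDualQueueAQM:
    """Sketch: two physical queues, two virtual queues, two congestion
    threshold values (CTVs). Low-latency records go into both VQs;
    high-latency records go into the second VQ only."""

    def __init__(self, ctv_low=10, ctv_both=20):
        self.low_pq = collections.deque()    # low-latency physical queue
        self.high_pq = collections.deque()   # high-latency physical queue
        self.vq1 = collections.deque()       # first VQ: low-latency records only
        self.vq2 = collections.deque()       # second VQ: records for both queues
        self.ctv1 = ctv_low                  # applies to low-latency packets
        self.ctv2 = ctv_both                 # applies to packets of both queues

    def enqueue(self, pkt, low_latency):
        record = {"pkt": pkt, "low": low_latency}
        if low_latency:
            self.low_pq.append(pkt)
            self.vq1.append(record)          # low-latency info goes to both VQs
            self.vq2.append(record)
        else:
            self.high_pq.append(pkt)
            self.vq2.append(record)          # high-latency info to second VQ only

    def should_mark(self, low_latency):
        """Drop/mark decision from the CTVs: the first CTV covers the
        low-latency queue; the second covers packets of both queues."""
        if low_latency and len(self.vq1) > self.ctv1:
            return True
        return len(self.vq2) > self.ctv2

q = VirtualDualQueueAQM(ctv_low=1, ctv_both=3)
q.enqueue("pkt-a", low_latency=True)
q.enqueue("pkt-b", low_latency=True)
q.enqueue("pkt-c", low_latency=False)
```

Because low-latency records appear in both virtual queues, the second VQ sees the combined load, which is what lets a single coupled threshold govern both traffic classes.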

Adaptive flow prioritization

A method for communication includes receiving packets in multiple flows and forwarding them to respective egress interfaces of a switching element for transmission to a network. For each of one or more of the egress interfaces, in each of a succession of arbitration cycles, the number of packets in each flow that are queued for transmission through the egress interface is assessed; flows for which this number is zero are assigned to a first group, while flows for which it is non-zero are assigned to a second group. Packets that have been forwarded to the egress interface and belong to flows in the first group are transmitted with higher priority than those of flows in the second group.
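One arbitration cycle of this grouping rule could be sketched as follows; the flow names and data layout are illustrative assumptions:

```python
def arbitration_cycle(queued_counts):
    """One arbitration cycle for an egress interface: flows with zero packets
    currently queued go in the first (higher-priority) group, flows with
    packets still queued go in the second group."""
    first, second = [], []
    for flow, n in queued_counts.items():
        (first if n == 0 else second).append(flow)
    return first, second

def transmit_order(queued_counts):
    """Packets of first-group flows are served ahead of second-group flows."""
    first, second = arbitration_cycle(queued_counts)
    return first + second

groups = arbitration_cycle({"f1": 0, "f2": 3, "f3": 0})
order = transmit_order({"f1": 0, "f2": 3, "f3": 0})
```

The effect is that sparse or newly arriving flows (nothing queued yet) jump ahead of flows that are already backlogged, which keeps heavy flows from starving light ones.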

Arbitration of multiple-thousands of flows for convergence enhanced ethernet

In one embodiment, a method includes selecting a flow from a head of a first control queue or a second control queue. The method also includes providing service to the selected flow. Moreover, the method includes decreasing a service credit of the selected flow by an amount corresponding to an amount of service provided to the selected flow. In another embodiment, a computer program product includes a computer readable storage medium having program code embodied therewith. The embodied program code is readable/executable by a device to select, by the device, a flow from a head of a first control queue or a second control queue. The embodied program code is also readable/executable to provide, by the device, service to the selected flow, and decrease, by the device, a service credit of the selected flow by an amount corresponding to an amount of service provided to the selected flow.
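The select-serve-decrement step might look like the sketch below. The two-queue discipline (try the first control queue, then the second) and the credit bookkeeping are assumptions for illustration:

```python
from collections import deque

def serve_next(q1, q2, credits, service_amount):
    """Select a flow from the head of the first control queue if available,
    otherwise from the second; provide service to it and decrease its service
    credit by the amount of service provided."""
    if q1:
        flow = q1.popleft()
    elif q2:
        flow = q2.popleft()
    else:
        return None
    credits[flow] -= service_amount
    return flow

q1 = deque(["flow-a"])
q2 = deque(["flow-b"])
credits = {"flow-a": 10, "flow-b": 10}
served = serve_next(q1, q2, credits, service_amount=3)
```

Tracking a per-flow credit that shrinks with service received is what lets an arbiter scale to thousands of flows: only the queue heads and a credit counter per flow need to be consulted each round.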

Method for operating a first network device, first network device, and method for operating a communications network

A method for operating a first network device of a communications network, the method comprising: determining or receiving, by means of an ingress interface, an ingress data stream comprising payload data to be transmitted towards a second network device; determining or receiving, by means of a correlation observer, at least one correlation value for a plurality of communication paths between the first network device and the second network device, wherein each of the plurality of communication paths comprises a different one of a plurality of physical channels; determining, by means of a multi-connectivity controller, a plurality of egress data streams in dependence on the at least one correlation value and the ingress data stream; and transmitting the plurality of egress data streams, each via a respective one of a plurality of egress queues, wherein each egress data stream is associated with a different one of the plurality of communication paths.
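A multi-connectivity controller of this kind might use the correlation value as follows. The policy here is purely an illustrative assumption: weakly correlated paths fail independently, so duplicating chunks across them buys reliability, while strongly correlated paths tend to fail together, so the load is split between them for throughput instead.

```python
def make_egress_streams(payload_chunks, path_correlation, corr_threshold=0.5):
    """Determine two egress data streams from the ingress stream, in
    dependence on the observed correlation between the two paths.
    (Threshold and policy are assumptions, not taken from the patent.)"""
    if path_correlation < corr_threshold:
        # Independent paths: duplicate every chunk on both for diversity.
        return [list(payload_chunks), list(payload_chunks)]
    # Correlated paths: duplication gains little, so split the load.
    return [payload_chunks[0::2], payload_chunks[1::2]]

split = make_egress_streams([1, 2, 3, 4], path_correlation=0.9)
dup = make_egress_streams([1, 2], path_correlation=0.1)
```

Each returned stream would then be handed to its own egress queue, one per communication path.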

Data flow processing method, electronic device, and storage medium

In a data flow processing method, multiple data flow queues are obtained, each comprising one or more data flow sub-queues. A priority is determined for each sub-queue, and the multiple data flow queues are integrated into a single target data flow queue according to those priorities. The target data flow queue is then sent to a target switch. By consolidating data flows as much as possible, the method improves the efficiency of data flow scheduling.
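The priority-ordered merge might be sketched as below; representing each queue as a list of `(priority, sub_queue)` pairs, with lower numbers meaning higher priority, is an assumption for illustration:

```python
import heapq

def integrate(queues):
    """Merge the sub-queues of several data flow queues into one target
    queue ordered by sub-queue priority (lower number = higher priority).
    Ties are broken by queue and sub-queue position for a stable order."""
    heap = []
    for qi, queue in enumerate(queues):
        for si, (prio, sub) in enumerate(queue):
            heapq.heappush(heap, (prio, qi, si, sub))
    target = []
    while heap:
        _prio, _qi, _si, sub = heapq.heappop(heap)
        target.extend(sub)
    return target

target = integrate([[(2, ["a1", "a2"]), (1, ["b1"])], [(1, ["c1"])]])
```

The single integrated queue can then be sent to the target switch in one stream rather than as several independent queues.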

Apparatus for managing data queues in a network

An apparatus for managing data queues is disclosed. The apparatus includes at least one sensor for collecting data, a data interface for receiving data from the sensor(s) and for placing the collected data in a set of data queues, and a priority sieve for organizing the set of data queues according to data priority of a specific task. The priority sieve includes a scoreboard for identifying queue priority and a system timer for synchronization.
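A software analogue of the priority sieve might look like the sketch below; the class and method names, the score semantics, and the use of a monotonic clock for the system timer are all illustrative assumptions:

```python
import time

class PrioritySieve:
    """Organizes a set of data queues for a specific task using a
    scoreboard of per-queue priorities and a system timer reference."""

    def __init__(self):
        self.scoreboard = {}                  # queue name -> priority score
        self.timer_epoch = time.monotonic()   # system timer for synchronization

    def set_priority(self, queue_name, score):
        """Record a queue's priority on the scoreboard (higher = more urgent)."""
        self.scoreboard[queue_name] = score

    def ordered_queues(self, queues):
        """Return the queues ordered by scoreboard priority, highest first;
        queues without an entry default to priority 0."""
        return sorted(queues, key=lambda q: -self.scoreboard.get(q, 0))

sieve = PrioritySieve()
sieve.set_priority("telemetry", 5)
sieve.set_priority("bulk", 1)
ranked = sieve.ordered_queues(["bulk", "telemetry", "debug"])
```

A data interface feeding sensor readings into named queues could consult `ordered_queues` each cycle to decide which queue to drain first for the task at hand.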