H04L47/6265

Port-based fairness protocol for a network element

Methods, apparatuses, and computer-readable media for providing a fairness protocol in a network element are disclosed herein. An example method includes receiving one or more packets at each of a plurality of ingress ports of the network element, and scheduling the packets into a plurality of queues, wherein each of the queues is associated with packets that are sourced from one of the ingress ports. The method also includes monitoring a bandwidth of traffic sourced from each of the ingress ports, identifying a port among the ingress ports that sources a smallest bandwidth of traffic, and arbitrating among the queues when transmitting packets from an egress port of the network element by giving precedence to the identified port that sources the smallest bandwidth of traffic. Additionally, arbitrating among the queues distributes a bandwidth of the egress port equally among the ingress ports.
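
The arbitration described above can be sketched as follows. This is an illustrative simulation, not the patented implementation: per-ingress-port queues, a running byte counter as the monitored bandwidth proxy, and an egress arbiter that always serves the non-empty queue whose source port has sent the least so far.

```python
from collections import deque

class FairnessArbiter:
    """Sketch of least-bandwidth-first egress arbitration (class and method
    names are assumptions): serving the port that has sourced the smallest
    bandwidth drives per-port shares toward an equal split of egress bandwidth."""

    def __init__(self, num_ports):
        self.queues = [deque() for _ in range(num_ports)]  # one queue per ingress port
        self.bytes_sent = [0] * num_ports                  # monitored bandwidth proxy

    def enqueue(self, port, packet_len):
        self.queues[port].append(packet_len)

    def dequeue(self):
        # Arbitrate: precedence to the port sourcing the smallest bandwidth.
        candidates = [p for p, q in enumerate(self.queues) if q]
        if not candidates:
            return None
        port = min(candidates, key=lambda p: self.bytes_sent[p])
        packet_len = self.queues[port].popleft()
        self.bytes_sent[port] += packet_len
        return port, packet_len
```

With one port sending small packets and another sending large ones, the arbiter interleaves so that the light sender is never starved behind the heavy one.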

METHOD FOR PACKET SCHEDULING USING MULTIPLE PACKET SCHEDULERS

A method comprising: receiving, by a first network packet scheduler, from each other network packet scheduler of a plurality of network packet schedulers, a virtual packet for each traffic class of a plurality of traffic classes defining relative transmission priority of network packets; receiving, by the first network packet scheduler, a network packet of a first traffic class of the plurality of traffic classes; transmitting, by the first network packet scheduler, each virtual packet into a virtual connection of a plurality of virtual connections created for each traffic class; scheduling, by the first network packet scheduler, a network packet or a virtual packet as a next packet in a buffer for transmission; determining, by the first network packet scheduler, that the next packet in the buffer is a virtual packet; and discarding, by the first network packet scheduler, the virtual packet, responsive to the determination that the next packet in the buffer is a virtual packet.
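
A minimal sketch of the claimed discard behavior, under assumed semantics (the claim does not fix a data structure): real and virtual packets share one priority-ordered buffer, and when a virtual packet is scheduled as the next packet it is discarded rather than transmitted.

```python
import heapq

class VirtualPacketScheduler:
    """Sketch (names are assumptions): peers' virtual packets occupy slots in
    the shared ordering so scheduling accounts for their load, but only real
    packets are ever transmitted."""

    def __init__(self):
        self._buf = []   # min-heap ordered by (traffic_class, arrival order)
        self._seq = 0

    def submit(self, traffic_class, payload=None):
        # payload is None for a virtual packet received from a peer scheduler.
        heapq.heappush(self._buf, (traffic_class, self._seq, payload))
        self._seq += 1

    def next_transmission(self):
        # Pop the next packet; discard virtual packets instead of sending them.
        while self._buf:
            _, _, payload = heapq.heappop(self._buf)
            if payload is not None:
                return payload
        return None
```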

COFLOWS FOR GEO-DISTRIBUTED COMPUTER SITES THAT COMMUNICATE VIA WIDE AREA NETWORK

A coflow is mapped to a plurality of geo-distributed computer sites that can communicate via wide area network (WAN), where the mapping is subject to one or more location-dependent constraints. Multiple candidate data paths are identified for each of a plurality of source-destination pairs of the plurality of geo-distributed computer sites. A mathematical optimization is performed to find a set of paths from the candidate data paths based on total flow completion time and at least one additional objective of the coflow.
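
The optimization can be illustrated with a brute-force toy version. The model here is an assumption, not the patent's formulation: each pair's transfer time is demand divided by path bandwidth, the coflow completion time is the slowest pair, and total path cost stands in for the additional objective.

```python
import itertools

def select_paths(candidates, demand, bandwidth, cost):
    """Hypothetical sketch: for each source-destination pair, choose one of its
    candidate paths so that coflow completion time (the slowest pair's
    demand/bandwidth) is minimized, breaking ties on total path cost."""
    pairs = list(candidates)
    best = None
    for choice in itertools.product(*(candidates[p] for p in pairs)):
        cct = max(demand[p] / bandwidth[path] for p, path in zip(pairs, choice))
        total_cost = sum(cost[path] for path in choice)
        key = (cct, total_cost)
        if best is None or key < best[0]:
            best = (key, dict(zip(pairs, choice)))
    return best[1], best[0][0]
```

A real system would replace the exhaustive search with a solver, but the objective structure is the same.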

Mobility network slice selection

Core network slices that belong to a given operator community are efficiently tracked at the network control/user plane function level, with rich real-time data analytics based on their geographic instantiations. In one aspect, an enhanced, vendor-agnostic orchestration mechanism connects a unified management layer with an integrated slice-components data analytics engine (SDAE), a slice performance engine (SPE), and a network slice selection function (NSSF) in a closed-loop feedback system with the serving network functions of one or more core network slices. This tight-knit orchestration mechanism provides economies of scale to mobile carriers in the optimal deployment and utilization of their critical core network resources while serving their customers with superior quality.
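
The closed loop can be caricatured in a few lines. Everything here is an assumption for illustration (class, method names, and the latency-based score): serving network functions report metrics to the analytics engine, a performance score is computed per slice, and the selection function steers toward the best-scoring slice instance.

```python
class SliceClosedLoop:
    """Highly simplified sketch of the described feedback loop: report() stands
    in for SDAE ingestion, score() for the SPE, select_slice() for the NSSF."""

    def __init__(self, slices):
        self.metrics = {s: [] for s in slices}   # fed by serving network functions

    def report(self, slice_id, latency_ms):
        self.metrics[slice_id].append(latency_ms)

    def score(self, slice_id):
        # SPE stand-in: lower average latency means a better slice.
        samples = self.metrics[slice_id]
        return -(sum(samples) / len(samples)) if samples else 0.0

    def select_slice(self):
        # NSSF stand-in: route new sessions to the best-performing slice.
        return max(self.metrics, key=self.score)
```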

Method and apparatus for bandwidth allocation in a sliced network

In one embodiment, the apparatus includes at least one memory configured to store instructions and at least one processor configured to execute the instructions and cause the apparatus to perform: obtaining a first parameter indicating a contention situation of a first network including a plurality of virtual network operators, VNOs, as participants; obtaining a second parameter indicating a historical bandwidth utilization of a respective one of the VNOs; determining, based on the first parameter and the second parameter, a first scheduler parameter and/or a first shaper parameter to be provided to an output of the apparatus, wherein the first scheduler parameter and/or the first shaper parameter is related to allocating bandwidth to the one of the VNOs; and transmitting, to a controller of the one of the VNOs, the first scheduler parameter and/or the first shaper parameter.
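
One plausible way to combine the two parameters, sketched below. The blending formula is an assumption (the claim fixes no formula): each VNO's scheduler weight tracks its historical utilization, and the shaper rate is pulled toward an equal fair share as contention rises.

```python
def determine_allocation(contention, history, capacity):
    """Hypothetical policy: contention in [0, 1], history maps VNO -> past
    bandwidth usage, capacity is the link rate. Returns per-VNO scheduler
    weights and shaper rate caps."""
    total = sum(history.values()) or 1.0
    fair = capacity / len(history)          # equal share under full contention
    params = {}
    for vno, used in history.items():
        weight = used / total               # scheduler parameter
        # Shaper parameter: historical share when idle, fair share when contended.
        rate = (1 - contention) * weight * capacity + contention * fair
        params[vno] = {"scheduler_weight": weight, "shaper_rate": rate}
    return params
```

The resulting parameters would then be transmitted to each VNO's controller, as in the claim.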

FLOW CONTROL METHOD, APPARATUS, AND ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM
20250337695 · 2025-10-30

A flow control method is proposed, including: calculating a first quantity of allocable bandwidth resources for user sets with different priorities in a current resource allocation cycle, according to the quantity of bandwidth resources allocated to those user sets in a previous resource allocation cycle; further calculating, from the first quantity, a second quantity of allocable bandwidth resources for each user in those user sets; and, in response to a data access request initiated by a user and received in the current resource allocation cycle, performing access control for the request based on the second quantity of allocable bandwidth resources corresponding to that user. Through this process, bandwidth resources can be dynamically adjusted and rationally allocated, and precise flow control can be achieved to reduce the affected scope when physical resources are limited.
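
The two-stage calculation can be sketched as follows. The proportional split is an assumption for illustration (the abstract does not specify the formula): stage one divides the total budget across priority sets in proportion to their previous-cycle usage, and stage two divides each set's share equally among its users.

```python
def allocate(prev_usage_by_priority, users_by_priority, total):
    """Hypothetical sketch of the two quantities: a per-priority-set budget
    (first quantity) and a per-user budget (second quantity), plus the
    admission check used for access control."""
    total_prev = sum(prev_usage_by_priority.values()) or 1.0
    per_user = {}
    for prio, prev in prev_usage_by_priority.items():
        set_share = total * prev / total_prev             # first quantity
        users = users_by_priority[prio]
        for u in users:
            per_user[u] = set_share / max(len(users), 1)  # second quantity

    def admit(user, requested):
        # Access control: admit a request only within the user's allocable quantity.
        return requested <= per_user.get(user, 0.0)

    return per_user, admit
```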

Queue bandwidth estimation for management of shared buffers and allowing visibility of shared buffer status

A network device includes a memory and a memory management circuit. The memory is to store a shared buffer. The memory management circuit is to estimate respective bandwidth measures for one or more queues used in processing packets in the network device, and to allocate and deallocate segments of the shared buffer to at least one of the queues based on the bandwidth measures.
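
A sketch of one way the estimation and allocation could interact, under stated assumptions (an EWMA estimator and proportional segment shares; the patent fixes neither): per-queue bandwidth is estimated from bytes dequeued per measurement interval, and shared-buffer segments are sized in proportion to each queue's estimated share.

```python
class SharedBufferManager:
    """Hypothetical sketch: bw[] holds exponentially weighted moving-average
    bandwidth estimates; segment_allocation() splits the shared buffer in
    proportion to those estimates."""

    def __init__(self, num_queues, total_segments, alpha=0.25):
        self.bw = [0.0] * num_queues
        self.total_segments = total_segments
        self.alpha = alpha                  # EWMA smoothing factor

    def observe(self, queue, bytes_in_interval):
        # Update the queue's bandwidth estimate for one measurement interval.
        self.bw[queue] = (1 - self.alpha) * self.bw[queue] + self.alpha * bytes_in_interval

    def segment_allocation(self):
        # Allocate/deallocate segments in proportion to estimated bandwidth.
        total = sum(self.bw)
        if total == 0:
            share = self.total_segments // len(self.bw)
            return [share] * len(self.bw)
        return [int(self.total_segments * b / total) for b in self.bw]
```

Exposing `bw` and `segment_allocation()` directly is one simple way to give visibility into shared-buffer status, as the title suggests.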