H04L47/6265

Technologies for offloaded management of communication

Technologies for offloaded management of communication are disclosed. In order to manage communication with information that may be available to applications in a compute device, the compute device may offload communication management to a host fabric interface using a credit management system. A credit limit is established, and each message to be sent is added to a queue with a corresponding number of credits required to send the message. The host fabric interface of the compute device may send out messages as credits become available and decrease the number of available credits based on the number of credits required to send a particular message. When an acknowledgement of receipt of a message is received, the number of credits required to send the corresponding message may be added back to an available credit pool.
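The credit mechanism described above can be sketched in Python. This is a minimal illustration, not the patent's implementation; all names (`CreditManager`, `try_send`, `on_ack`) are invented for the example:

```python
from collections import deque

class CreditManager:
    """Minimal sketch of credit-managed message sending (illustrative names)."""

    def __init__(self, credit_limit):
        self.available = credit_limit   # established credit limit / free credits
        self.queue = deque()            # (msg_id, credits_required), send order
        self.in_flight = {}             # msg_id -> credits held until acknowledged

    def enqueue(self, msg_id, credits_required):
        # Each message is added to the queue with the credits it needs to send.
        self.queue.append((msg_id, credits_required))

    def try_send(self):
        """Send queued messages while enough credits are available."""
        sent = []
        while self.queue and self.queue[0][1] <= self.available:
            msg_id, cost = self.queue.popleft()
            self.available -= cost      # decrease available credits by the cost
            self.in_flight[msg_id] = cost
            sent.append(msg_id)
        return sent

    def on_ack(self, msg_id):
        # Acknowledgement of receipt returns the message's credits to the pool.
        self.available += self.in_flight.pop(msg_id)
```

In this sketch the head-of-queue message blocks until its credits are free, which keeps sends in order; a real host fabric interface might make different ordering choices.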

Controlling fair bandwidth allocation efficiently

Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers: by a predefined first deadline, it allocates to each active client that client's subscribed portion of bandwidth; by a predefined second deadline, residual bandwidth left unused by the clients is shared proportionately among the active clients. Coordination among micro-schedulers is minimized by having the macro-scheduler periodically adjust the bandwidth allocation of each micro-scheduler.
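The two-deadline allocation can be sketched as a single periodic adjustment step. This is an assumption-laden illustration (the function name, the fractional-subscription encoding, and the proportional-sharing rule are all inferred, not quoted from the patent):

```python
def allocate_bandwidth(link_capacity, subscriptions, active):
    """Sketch of the macro-scheduler's periodic allocation step.

    subscriptions: client -> subscribed fraction of the link (sums to <= 1)
    active: set of clients currently sending
    Returns client -> allocated bandwidth.
    """
    # First deadline: each active client receives its subscribed portion.
    alloc = {c: link_capacity * subscriptions[c] for c in active}
    # Second deadline: residual bandwidth (unsubscribed, or subscribed by
    # idle clients) is shared proportionately among the active clients.
    residual = link_capacity - sum(alloc.values())
    active_total = sum(subscriptions[c] for c in active)
    if residual > 0 and active_total > 0:
        for c in active:
            alloc[c] += residual * subscriptions[c] / active_total
    return alloc
```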

MOBILITY NETWORK SLICE SELECTION
20200052991 · 2020-02-13

Core network slices that belong to a given operator community are efficiently tracked at the network control/user plane functions level, with rich data analytics in real-time based on their geographic instantiations. In one aspect, an enhanced vendor agnostic orchestration mechanism is utilized to connect a unified management layer with an integrated slice-components data analytics engine (SDAE), a slice performance engine (SPE), and a network slice selection function (NSSF) in a closed-loop feedback system with the serving network functions of one or more core network slices. The tight-knit orchestration mechanism provides economies of scale to mobile carriers in optimal deployment and utilization of their critical core network resources while serving their customers with superior quality.

Scalable ingress arbitration for merging control and payload

Approaches, techniques, and mechanisms are disclosed for improving the efficiency with which data units are handled within a device, such as a networking device. Received data units, or portions thereof, are temporarily stored within one or more memories of a merging component, while the merging component waits to receive control information for the data units. Once received, the merging component merges the control information with the associated data units. The merging component dispatches the merged data units, or portions thereof, to an interconnect component, which forwards the merged data units to destinations indicated by the control information. The device is configured to intelligently schedule the dispatching of merged data units to the interconnect component. To this end, the device includes a scheduler configured to select which merged data units to dispatch at which times based on a variety of factors described herein.
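The buffer-then-merge flow described above can be sketched as follows. This is an illustrative model only; the class and field names are invented, and the scheduler's "variety of factors" is reduced here to simple FIFO dispatch of whatever has merged:

```python
class MergingComponent:
    """Sketch: buffer payloads until control information arrives, then merge."""

    def __init__(self):
        self.payloads = {}   # data_unit_id -> payload held in memory
        self.controls = {}   # data_unit_id -> control information
        self.ready = []      # merged data units awaiting dispatch

    def _try_merge(self, uid):
        # Merge as soon as both the data unit and its control info are present.
        if uid in self.payloads and uid in self.controls:
            self.ready.append((uid, self.controls.pop(uid), self.payloads.pop(uid)))

    def receive_payload(self, uid, payload):
        self.payloads[uid] = payload
        self._try_merge(uid)

    def receive_control(self, uid, control):
        self.controls[uid] = control
        self._try_merge(uid)

    def dispatch(self):
        """Hand merged units to the interconnect; destination comes from control info."""
        out = [(ctrl["dest"], payload) for uid, ctrl, payload in self.ready]
        self.ready.clear()
        return out
```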

Determining rate differential weighted fair output queue scheduling for a network device

A network device may receive packets and may calculate, during a time interval, an arrival rate and a departure rate, of the packets, at one of multiple virtual output queues. The network device may calculate a current oversubscription factor based on the arrival rate and the departure rate, and may calculate a target oversubscription factor based on an average of previous oversubscription factors associated with the multiple virtual output queues. The network device may determine whether a difference exists between the target oversubscription factor and the current oversubscription factor and may calculate, when the difference exists, a scale factor based on the current oversubscription factor and the target oversubscription factor. The network device may calculate new scheduling weights based on prior scheduling weights and the scale factor, and may process packets received by the multiple virtual output queues based on the new scheduling weights.
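The weight-update pipeline above (per-queue oversubscription factor, average-based target, scale factor, new weights) can be sketched numerically. The exact formulas are assumptions chosen to match the abstract's structure, not the patent's claimed math:

```python
def new_weights(prior_weights, arrivals, departures):
    """Sketch of the rate-differential weight update for virtual output queues.

    arrivals/departures: per-VOQ rates measured over one time interval.
    Returns normalized new scheduling weights.
    """
    # Current oversubscription factor per VOQ: arrival rate / departure rate.
    current = [a / d for a, d in zip(arrivals, departures)]
    # Target oversubscription factor: average across the VOQs (standing in
    # for the abstract's average of previous factors).
    target = sum(current) / len(current)
    weights = []
    for w, cur in zip(prior_weights, current):
        if cur != target:
            # Scale factor from current vs. target oversubscription.
            w = w * (cur / target)
        weights.append(w)
    # Normalize so the weights remain a comparable distribution.
    total = sum(weights)
    return [w / total for w in weights]
```

A queue arriving at twice its departure rate ends up with a proportionally larger weight, pulling its oversubscription back toward the group average on the next interval.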

MANAGING DATA TRAFFIC IN A SUBSCRIPTION-BASED NETWORK ACCESS SYSTEM

Described herein are systems and methods that dynamically manage network traffic for individual subscribers based on past and current data usage rates. The disclosed systems and methods operate to control data traffic for a group of subscribers that share a common access network or that share a common access link to an access network. Prior to an individual subscriber reaching their data plan limit, the disclosed systems and methods track individual subscribers' past and current data rates and manage individual subscribers' current usage rates so that each subscriber's continually or periodically updating past usage rate stays within a provisioning rate for the group. This can improve user experience because rather than waiting until a subscriber has reached their plan data limit to impose strict data usage restrictions, the disclosed systems and methods use modest restrictions continuously or intermittently during the plan period.
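One way to model "modest restrictions" driven by a running past-usage rate is an exponentially weighted average with a small throttle. The smoothing constant and the 10% restriction below are assumptions for illustration, not values from the patent:

```python
class SubscriberRateManager:
    """Sketch: track a subscriber's past usage rate and apply modest throttling."""

    def __init__(self, provisioning_rate, alpha=0.2):
        self.provisioning_rate = provisioning_rate  # per-subscriber budget rate
        self.alpha = alpha                          # smoothing for the usage average
        self.avg_rate = 0.0                         # continually updated past rate

    def update(self, current_rate):
        """Fold the latest measured rate into the running past-usage average."""
        self.avg_rate = self.alpha * current_rate + (1 - self.alpha) * self.avg_rate
        return self.avg_rate

    def allowed_rate(self, current_rate):
        """Modest restriction: shave 10% off the current rate whenever the
        running average exceeds the provisioning rate; otherwise no limit."""
        if self.avg_rate > self.provisioning_rate:
            return current_rate * 0.9
        return current_rate
```

The point of the continuous small correction is that the subscriber never hits the hard cliff of a plan-limit cutoff; the average drifts back under the provisioning rate instead.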

Method and apparatus for controlling scheduling packet
10389648 · 2019-08-20

Embodiments provide a method and an apparatus for controlling a scheduling packet. The method is applied to an HFC network system. The method includes: determining, by a network device, a transmission bandwidth of a first scheduling packet; determining a target quantity according to a first control threshold when the transmission bandwidth of the first scheduling packet is greater than or equal to the first control threshold, where the target quantity is less than or equal to a quantity of IEs included in the first scheduling packet; and generating a second scheduling packet according to the target quantity, where a quantity of IEs included in the second scheduling packet is less than the target quantity, the second scheduling packet includes an IE used to carry resource allocation information for a second uplink period, and the second uplink period follows the first uplink period.
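The threshold-driven trimming of IEs can be sketched as below. The mapping from the control threshold to a target quantity is invented for illustration (the abstract does not specify it), and the "fewer than the target quantity" condition is taken literally from the abstract:

```python
def build_second_packet(first_packet_ies, bandwidth, control_threshold, ie_size):
    """Sketch: cap the IEs carried in the next scheduling packet.

    first_packet_ies: IEs of the first scheduling packet.
    bandwidth: transmission bandwidth of the first scheduling packet.
    ie_size: assumed fixed per-IE size used to derive the target quantity.
    """
    if bandwidth < control_threshold:
        return list(first_packet_ies)        # below threshold: no trimming
    # Target quantity from the control threshold, capped at the number of
    # IEs in the first packet (illustrative derivation).
    target = min(len(first_packet_ies), control_threshold // ie_size)
    # Second packet carries fewer IEs than the target quantity; each IE
    # holds resource allocation information for the following uplink period.
    return list(first_packet_ies[: max(target - 1, 0)])
```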

METHOD AND APPARATUS FOR BANDWIDTH ALLOCATION IN A SLICED NETWORK

In one embodiment, the apparatus includes at least one memory configured to store instructions and at least one processor configured to execute the instructions and cause the apparatus to perform: obtaining a first parameter indicating a contention situation of a first network including a plurality of virtual network operators (VNOs) as participants; obtaining a second parameter indicating a historical bandwidth utilization of a respective one of the VNOs; determining, based on the first parameter and the second parameter, a first scheduler parameter and/or a first shaper parameter to be provided to an output of the apparatus, wherein the first scheduler parameter and/or the first shaper parameter is related to allocating bandwidth to the one of the VNOs; and transmitting, to a controller of the one of the VNOs, the first scheduler parameter and/or the first shaper parameter.
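One plausible reading of "determining a shaper parameter from contention and historical utilization" is a blend between the VNO's subscribed rate and its demonstrated demand. The blending rule below is entirely an assumption for illustration:

```python
def derive_shaper_rate(contention, historical_utilization, subscribed_rate):
    """Sketch: map contention and a VNO's historical utilization to a shaper rate.

    contention: 0.0 (idle network) .. 1.0 (fully contended), clamped.
    historical_utilization: fraction of the subscribed rate the VNO has used.
    """
    contention = min(max(contention, 0.0), 1.0)
    demand_estimate = historical_utilization * subscribed_rate
    # Low contention: lend the VNO its full subscription (or more, if desired);
    # high contention: pull the shaper back toward what the VNO actually uses.
    return (1 - contention) * subscribed_rate + contention * demand_estimate
```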

BIDIRECTIONAL DATA TRAFFIC CONTROL
20190190808 · 2019-06-20

A system includes an egress apparatus communicatively coupled with an ingress apparatus via at least one bi-directional network connection established for a given site. Each of the ingress and egress apparatuses includes a packet categorizer to categorize each of the data packets based on packet evaluation thereof with respect to prioritization rules. Packet routing control places each outgoing data packet (from the ingress or egress apparatus) in one of multiple queues according to the categorization of each respective packet, to control sending the packets according to the priority of the respective queue into which each packet is placed.
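The categorize-then-queue behavior can be sketched as follows. The rule representation (ordered predicates mapping to queue indices) and all names are illustrative assumptions:

```python
class PacketRouter:
    """Sketch: categorize packets against prioritization rules and drain
    queues in priority order (0 = highest priority)."""

    def __init__(self, rules, num_queues):
        # rules: list of (predicate, queue_index), checked in order.
        self.rules = rules
        self.queues = [[] for _ in range(num_queues)]

    def place(self, packet):
        """Categorize a packet and place it in the matching priority queue."""
        for predicate, q in self.rules:
            if predicate(packet):
                self.queues[q].append(packet)
                return q
        self.queues[-1].append(packet)       # unmatched: lowest priority
        return len(self.queues) - 1

    def send_next(self):
        """Send from the highest-priority non-empty queue."""
        for q in self.queues:
            if q:
                return q.pop(0)
        return None
```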
