Patent classifications
H04L47/568
Internal packet steering within a wireless access gateway
In general, techniques are described for steering data traffic for a subscriber session from a network interface of a wireless access gateway to an anchoring one of a plurality of forwarding units of the wireless access gateway using a layer 2 (L2) address of the data traffic. For example, a wireless access gateway for a wireless local area network (WLAN) access network is described as having a decentralized data plane that includes multiple forwarding units for implementing subscriber sessions. Each forwarding unit may present a network interface for sending and receiving network packets and includes packet processing capabilities to enable subscriber data packet processing to perform the functionality of the wireless access gateway. The techniques enable steering data traffic for a given subscriber session to a particular one of the forwarding units of the wireless access gateway using an L2 address of the data traffic.
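The anchoring behavior described above can be sketched as a stable hash of the L2 (MAC) address onto a fixed set of forwarding units, so that every packet of a subscriber session is steered to the same unit without a per-packet session lookup. This is a minimal Python sketch; the function name, unit count, and use of SHA-256 are illustrative assumptions, not details from the patent.

```python
import hashlib

NUM_FORWARDING_UNITS = 4  # hypothetical size of the decentralized data plane

def anchor_forwarding_unit(mac: str, num_units: int = NUM_FORWARDING_UNITS) -> int:
    """Map a subscriber's L2 (MAC) address to a stable forwarding-unit index.

    Because the mapping depends only on the MAC address, all traffic for a
    given subscriber session lands on one "anchoring" forwarding unit.
    """
    digest = hashlib.sha256(mac.lower().encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_units

# Case differences in the MAC notation do not change the chosen unit.
unit = anchor_forwarding_unit("aa:bb:cc:dd:ee:01")
```

A real gateway would likely derive the index in hardware from header fields, but the invariant is the same: identical L2 address, identical forwarding unit.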
Shared traffic manager
A traffic manager is shared amongst two or more egress blocks of a network device, thereby allowing traffic management resources to be shared between the egress blocks. Schedulers within a traffic manager may generate and queue read instructions for reading buffered portions of data units that are ready to be sent to the egress blocks. The traffic manager may be configured to select a read instruction for a given buffer bank from the read instruction queues based on a scoring mechanism or other selection logic. To avoid sending too much data to an egress block during a given time slot, once a data unit portion has been read from the buffer, it may be temporarily stored in a shallow read data cache. Alternatively, a single, non-bank specific controller may determine all of the read instructions and write operations that should be executed in a given time slot.
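The per-bank read-instruction queues and the shallow read data cache can be modeled as follows. This is a toy Python sketch under assumed simplifications: the "scoring mechanism" is reduced to oldest-first selection, and the cache depth is an invented constant, neither of which is specified by the abstract.

```python
import collections

class SharedTrafficManager:
    """Toy model of one traffic manager shared by several egress blocks.

    Read instructions are queued per buffer bank. Each time slot, at most one
    instruction per bank is executed, and the data read is parked in a shallow
    per-egress-block cache so no block receives more than CACHE_DEPTH
    portions before draining.
    """
    CACHE_DEPTH = 2  # assumed depth of the shallow read data cache

    def __init__(self, num_banks):
        self.read_queues = [collections.deque() for _ in range(num_banks)]
        self.read_cache = collections.defaultdict(list)

    def enqueue_read(self, bank, egress_block, portion):
        self.read_queues[bank].append((egress_block, portion))

    def run_slot(self):
        """Execute at most one read per bank this time slot."""
        delivered = []
        for queue in self.read_queues:
            if not queue:
                continue
            egress_block, portion = queue[0]
            if len(self.read_cache[egress_block]) >= self.CACHE_DEPTH:
                continue  # cache full: defer so the egress block isn't overrun
            queue.popleft()
            self.read_cache[egress_block].append(portion)
            delivered.append((egress_block, portion))
        return delivered
```

The single-controller alternative mentioned in the abstract would replace the per-bank loop with one decision over all queued instructions per slot.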
Fine grain traffic shaping offload for a network interface card
A network interface card with traffic shaping capabilities and methods of network traffic shaping with a network interface card are provided. The network interface card and method can shape traffic originating from one or more applications executing on a host network device. The applications can execute in a virtual machine or containerized computing environment. The network interface card and method can perform or include several traffic shaping mechanisms including, for example and without limitation, a delayed completion mechanism, a time-indexed data structure, a packet builder, and a memory manager.
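The time-indexed data structure mentioned above is commonly realized as a timing wheel: packets are inserted into the slot for their computed transmit time, and advancing the wheel one tick releases everything due at that tick. The sketch below is an assumed minimal form; slot count, tick granularity, and the clamping policy are illustrative choices, not details from the patent.

```python
class TimingWheel:
    """Time-indexed structure for pacing packets (a 'timing wheel' sketch)."""

    def __init__(self, num_slots, tick_ns):
        self.num_slots = num_slots
        self.tick_ns = tick_ns
        self.slots = [[] for _ in range(num_slots)]
        self.now_tick = 0

    def schedule(self, packet, transmit_time_ns):
        """Place a packet in the slot for its target transmit time."""
        delay_ticks = max(0, (transmit_time_ns // self.tick_ns) - self.now_tick)
        if delay_ticks >= self.num_slots:
            delay_ticks = self.num_slots - 1  # clamp to the wheel's horizon
        slot = (self.now_tick + delay_ticks) % self.num_slots
        self.slots[slot].append(packet)

    def advance(self):
        """Advance one tick; return packets whose transmit time has come."""
        slot = self.now_tick % self.num_slots
        due, self.slots[slot] = self.slots[slot], []
        self.now_tick += 1
        return due
```

On a NIC this structure pairs naturally with delayed completions: a completion is reported to the host only once the wheel releases the packet.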
Online task dispatching and scheduling system and method thereof
The present disclosure relates to an online task dispatching and scheduling system. The system includes an end device; an access point (AP) configured to receive a task from the end device; one or more edge servers configured to receive the task from the AP, the one or more edge servers including a task waiting queue, a processing pool, a task completion queue, and a scheduler, wherein the AP further includes a dispatcher utilizing Online Learning (OL) for determining a real-time state of network conditions and server loads; and the AP selects a target edge server from the one or more edge servers to which the task is to be dispatched; and wherein the scheduler utilizes Deep Reinforcement Learning (DRL) in generating a task scheduling policy for the one or more edge servers.
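One simple way the dispatcher's online learning could work is an epsilon-greedy bandit over edge servers: track a running latency estimate per server, mostly exploit the best-known server, and occasionally explore. This is a hedged sketch of that idea only; the abstract does not specify the OL algorithm, and the class and parameter names are invented.

```python
import random

class OLDispatcher:
    """Epsilon-greedy sketch of an AP's online-learning dispatcher.

    Maintains a running mean latency per edge server, updated from observed
    task completion times, and dispatches to the lowest-estimate server
    except when exploring.
    """
    def __init__(self, num_servers, epsilon=0.1):
        self.epsilon = epsilon
        self.estimates = [0.0] * num_servers
        self.counts = [0] * num_servers

    def dispatch(self):
        # Explore randomly with probability epsilon, or until every
        # server has been tried at least once.
        if random.random() < self.epsilon or 0 in self.counts:
            return random.randrange(len(self.estimates))
        return min(range(len(self.estimates)), key=self.estimates.__getitem__)

    def observe(self, server, latency):
        """Incorporate an observed completion latency (incremental mean)."""
        self.counts[server] += 1
        self.estimates[server] += (latency - self.estimates[server]) / self.counts[server]
```

The DRL-based scheduler on the edge servers is a separate component; it would consume queue state rather than per-dispatch latencies.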
Time allocation for network transmission
Methods and systems for managing data transmissions are disclosed. An example method can comprise determining a plurality of time allocations for a time cycle. The plurality of time allocations can comprise a first time allocation which can be determined based on an information rate, a committed information rate, an excess information rate, an effective bandwidth rate, other factors, or a combination thereof. Data can be received from multiple sources into a buffer, for example, and can be processed within a time cycle if processing the data will not exceed the time allocation.
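A minimal reading of the scheme above is: split a time cycle into allocations proportional to each source's configured rate, then admit buffered data for processing only if it fits the remaining allocation. The sketch below assumes rate-proportional division and microsecond units; both are illustrative choices, not details from the patent.

```python
def build_allocations(cycle_us, rates):
    """Split a time cycle among sources in proportion to their rates.

    `rates` maps source -> a configured rate (e.g. a committed information
    rate); each source's time allocation is its proportional share.
    """
    total = sum(rates.values())
    return {src: cycle_us * r / total for src, r in rates.items()}

def admit(used_us, needed_us, allocation_us):
    """Process an item this cycle only if it fits the remaining allocation."""
    return used_us + needed_us <= allocation_us
```

Data that is not admitted simply stays buffered and competes again in the next cycle.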
Delay-based automatic queue management and tail drop
Approaches, techniques, and mechanisms are disclosed for improving operations of a network switching device and/or network-at-large by utilizing queue delay as a basis for measuring congestion for the purposes of Automated Queue Management (“AQM”) and/or other congestion-based policies. Queue delay is an exact or approximate measure of the amount of time a data unit waits at a network device as a consequence of queuing, such as the amount of time the data unit spends in an egress queue while the data unit is being buffered by a traffic manager. Queue delay may be used as a substitute for queue size in existing AQM, Weighted Random Early Detection (“WRED”), Tail Drop, Explicit Congestion Notification (“ECN”), reflection, and/or other congestion management or notification algorithms. Or, a congestion score calculated based on the queue delay and one or more other metrics, such as queue size, may be used as a substitute.
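Substituting queue delay for queue size in a WRED-style policy can be sketched as a drop-probability curve over measured delay: no drops below a minimum delay, unconditional (tail) drop above a maximum, and a linear ramp between. This is a minimal Python sketch; the threshold parameters and the linear ramp are standard WRED conventions applied to delay, not values from the patent.

```python
def wred_drop_probability(queue_delay_us, min_delay_us, max_delay_us, max_p=0.1):
    """WRED-style drop curve driven by queue delay rather than queue size.

    Below min_delay_us: never drop. At or above max_delay_us: always drop
    (the tail-drop region). In between: probability ramps linearly to max_p.
    """
    if queue_delay_us <= min_delay_us:
        return 0.0
    if queue_delay_us >= max_delay_us:
        return 1.0
    frac = (queue_delay_us - min_delay_us) / (max_delay_us - min_delay_us)
    return frac * max_p
```

The same curve can drive ECN marking instead of dropping, or take a blended congestion score (delay combined with queue size) as its input, as the abstract suggests.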
Methods and apparatus for isochronous data delivery within a network
Methods and apparatus for efficiently servicing isochronous streams (such as media data streams) associated with a network. In one embodiment, an Isochronous Cycle Manager (ICM) receives multiple independent streams of packets that include isochronous packets arriving according to different time bases (e.g., where each stream has a different time base). The packets are sorted by the ICM into a buffering mechanism according to their required presentation time. Additionally, the ICM calculates a launch time for each packet. The NIC transmits the packets from the queue according to an access scheme, such as a time division multiplexed (TDM) scheme where each of a plurality of cycles is subdivided into time slots. During appropriate time slots, the NIC transmits the packets in chronological order, as read out of the buffering mechanism.
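The ICM's behavior can be sketched with a heap ordered by presentation time: each stream's local time base is normalized to a common clock, a launch time is derived per packet, and a TDM slot drains packets whose launch time has arrived, in chronological order. The fixed launch-time lead and the per-stream offset normalization below are assumptions for illustration; the abstract does not specify how launch times are computed.

```python
import heapq

class IsochronousCycleManager:
    """Sketch of an ICM merging streams with different time bases.

    Packets are buffered in a heap keyed by presentation time on a common
    clock, and each gets a launch time (here: presentation time minus an
    assumed fixed lead).
    """
    LEAD_US = 500  # assumed transmit lead ahead of presentation time

    def __init__(self):
        self._heap = []

    def ingest(self, stream_offset_us, local_ts_us, packet):
        # Normalize the stream's local time base to the common clock.
        presentation = stream_offset_us + local_ts_us
        launch = presentation - self.LEAD_US
        heapq.heappush(self._heap, (presentation, launch, packet))

    def transmit_slot(self, now_us, budget):
        """During a TDM slot, send up to `budget` packets whose launch time
        has arrived, in chronological (presentation-time) order."""
        sent = []
        while self._heap and budget and self._heap[0][1] <= now_us:
            _, _, packet = heapq.heappop(self._heap)
            sent.append(packet)
            budget -= 1
        return sent
```

Because the heap is keyed by presentation time, packets from different streams interleave correctly even though they arrived on independent time bases.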