H04L47/6255

Dynamic resource allocation aided by reinforcement learning

A communication system in which dynamic resource allocation (DRA) control is aided by reinforcement learning (RL). An example embodiment may control one or more buffer queues populated by downstream and/or upstream data streams. The egress rates of the buffer queues can be dynamically controlled using an RL technique, according to which a learning agent can adaptively change the state-to-action mapping function of the DRA controller while circumventing the RL exploration phase and instead relying on extrapolation from actions already taken. This feature may result in at least two benefits: (i) cancellation of the performance penalty typically associated with RL exploration; and (ii) faster learning of the environment, as the learning agent can determine the performance metrics of many actions per state from a single occurrence of the state. In an example embodiment, the communication system may be a DSL system, a PON system, or a wireless communication system.
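
As a rough illustration of the exploration-free idea, the sketch below (names such as DRAAgent and choose_rate are hypothetical, not from the abstract) records the measured reward of egress rates actually applied in a given queue state and extrapolates a simple linear trend over them to rank untried rates, instead of sampling random actions.

```python
# Illustrative sketch only; DRAAgent, choose_rate, and the reward model
# are assumptions, not taken from the patent abstract.
import numpy as np

class DRAAgent:
    """Maps a queue state to an egress rate without an exploration phase:
    it extrapolates the measured reward of actions already taken."""

    def __init__(self, rates):
        self.rates = np.asarray(rates, dtype=float)  # candidate egress rates
        self.history = {}                            # state -> list of (rate, reward)

    def update(self, state, rate, reward):
        self.history.setdefault(state, []).append((rate, reward))

    def choose_rate(self, state):
        seen = self.history.get(state, [])
        if len(seen) < 2:
            return float(self.rates[len(self.rates) // 2])  # neutral default, no random exploration
        taken, rewards = (np.array(v) for v in zip(*seen))
        # Fit a linear trend to rewards of actions already taken and extrapolate
        # it to every candidate rate, so many actions are ranked per state occurrence.
        slope, intercept = np.polyfit(taken, rewards, 1)
        predicted = slope * self.rates + intercept
        return float(self.rates[int(np.argmax(predicted))])

agent = DRAAgent(rates=[10, 20, 40, 80, 160])        # Mb/s, illustrative values
agent.update("queue_high", rate=20, reward=0.4)
agent.update("queue_high", rate=40, reward=0.7)
print(agent.choose_rate("queue_high"))               # 160.0 under the upward trend
```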

Time interleaver, time deinterleaver, time interleaving method, and time deinterleaving method
11611515 · 2023-03-21

A convolutional interleaver included in a time interleaver and performing convolutional interleaving includes: a first switch that switches a connection destination of an input of the convolutional interleaver to one end of one of a plurality of branches; FIFO memories provided in the plurality of branches except one branch, wherein the number of FIFO memories differs among the plurality of branches; and a second switch that switches a connection destination of an output of the convolutional interleaver to another end of one of the plurality of branches. The first and second switches switch the connection destination each time as many cells as there are codewords per frame have passed, by advancing the connected branch sequentially and repeatedly among the plurality of branches.
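
A minimal sketch of the described structure, assuming a cell is any Python object and that branch i carries i FIFO stages of a common depth; the class and parameter names are illustrative. Both commutators advance to the next branch after a fixed number of cells, standing in for the number of codewords per frame.

```python
# Illustrative sketch; class and parameter names are not from the patent.
from collections import deque

class ConvolutionalInterleaver:
    """Branch i holds i FIFO stages (branch 0 has none), so different branches
    impose different delays. The input- and output-side switches advance to the
    next branch together after `cells_per_switch` cells have passed."""

    def __init__(self, num_branches, cells_per_switch, fifo_depth_per_branch):
        self.fifos = [deque([None] * (i * fifo_depth_per_branch))
                      for i in range(num_branches)]
        self.cells_per_switch = cells_per_switch
        self.branch = 0
        self.count = 0

    def push(self, cell):
        fifo = self.fifos[self.branch]
        if fifo:
            fifo.append(cell)
            out = fifo.popleft()       # delayed cell leaves the branch
        else:
            out = cell                 # the branch without FIFOs passes cells through
        self.count += 1
        if self.count == self.cells_per_switch:
            self.count = 0
            self.branch = (self.branch + 1) % len(self.fifos)  # both switches advance
        return out

il = ConvolutionalInterleaver(num_branches=4, cells_per_switch=2, fifo_depth_per_branch=1)
print([il.push(cell) for cell in range(8)])   # None entries are not-yet-filled FIFO slots
```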

Auto load balancing

Automatic load-balancing techniques in a network device are used to select, from a multipath group, a path to assign to a flow based on observed state attributes such as path state(s), device state(s), port state(s), or queue state(s) of the paths. A mapping of the path previously assigned to a flow or group of flows (e.g., on account of having then been optimal in view of the observed state attributes) is maintained, for example, in a table. So long as the flow(s) are active and the path is still valid, the mapped path is selected for subsequent data units belonging to the flow(s), which may, among other effects, avoid or reduce packet re-ordering. However, if the flow(s) go idle, or if the mapped path fails, a new optimal path may be assigned to the flow(s) from the multipath group.
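
A toy sketch of the sticky flow-to-path mapping, assuming the observed state attributes reduce to a per-path load number; AutoLoadBalancer, idle_timeout, and path_load are illustrative names, not the patent's terminology.

```python
# Illustrative sketch; AutoLoadBalancer and its arguments are assumed names.
import time

class AutoLoadBalancer:
    """Remembers the path assigned to each flow and keeps using it while the
    flow stays active and the path stays valid, avoiding packet re-ordering."""

    def __init__(self, paths, idle_timeout=1.0):
        self.paths = set(paths)
        self.idle_timeout = idle_timeout
        self.table = {}                               # flow_id -> (path, last_seen)

    def best_path(self, path_load):
        # Stand-in for the observed state attributes: pick the least-loaded valid path.
        return min(self.paths, key=lambda p: path_load.get(p, 0))

    def select(self, flow_id, path_load, now=None):
        now = time.monotonic() if now is None else now
        entry = self.table.get(flow_id)
        if entry:
            path, last_seen = entry
            if path in self.paths and now - last_seen <= self.idle_timeout:
                self.table[flow_id] = (path, now)     # sticky mapping for active flows
                return path
        path = self.best_path(path_load)              # flow idle or path failed: re-pick
        self.table[flow_id] = (path, now)
        return path

    def path_failed(self, path):
        self.paths.discard(path)

lb = AutoLoadBalancer(paths={"p0", "p1", "p2"})
first = lb.select("flow-42", path_load={"p0": 3, "p1": 1, "p2": 5})
again = lb.select("flow-42", path_load={"p0": 0, "p1": 9, "p2": 0})
print(first, again)                                   # same path while the flow is active
```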

Technologies for selecting non-minimal paths and throttling port speeds to increase throughput in a network

Technologies for improving throughput in a network include a node switch. The node switch is to obtain expected performance data indicative of an expected data transfer performance of the node switch, and to obtain measured performance data indicative of a measured data transfer performance of the node switch. The node switch compares the measured performance data to the expected performance data to determine whether the measured data transfer performance satisfies the expected data transfer performance, determines, as a function of that comparison, whether to force a unit of data through a non-minimal path to a destination, and, in response to a determination to force the unit of data through a non-minimal path, sends the unit of data to an output port of the node switch associated with the non-minimal path. Other embodiments are also described.
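
One possible reading of the decision rule, sketched with hypothetical names (choose_output_port, tolerance): when measured transfer performance falls short of the expected performance, the data unit is steered to the port of a non-minimal path.

```python
# Illustrative sketch; choose_output_port and tolerance are assumed names.
def choose_output_port(measured_bw, expected_bw, minimal_port, non_minimal_port,
                       tolerance=0.9):
    """Force the unit of data onto the non-minimal path when measured transfer
    performance does not satisfy the expected performance; otherwise stay minimal."""
    if measured_bw < tolerance * expected_bw:
        return non_minimal_port        # congestion suspected along the minimal route
    return minimal_port

print(choose_output_port(measured_bw=7.2, expected_bw=10.0,
                         minimal_port=1, non_minimal_port=5))   # -> 5
```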

Method and apparatus for order entry in an electronic trading system

Orders received by an electronic trading system are processed in batches based on the instrument to which an order relates. An incoming order is assigned to a queue of a queue set that makes up the batch according to a random process. Where orders are received from related trading parties, they are assigned to the same queue set according to their time of receipt. The batch has a random duration within defined minimum and maximum durations and, at the end of the batch, the orders held in the queues are transferred to a matching thread of the trading system sequentially, with one order being removed from each queue on each pass and passes of the queues repeated until all orders have been removed.
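
A simplified sketch of the batching scheme, with illustrative names (OrderBatch, related_key) and with the related-party rule approximated by hashing a party key onto a queue; the drain loop removes one order from each queue per pass until the queues are empty.

```python
# Illustrative sketch; OrderBatch and related_key are assumed names.
import random

class OrderBatch:
    """Queue set for one instrument. Orders land in a randomly chosen queue
    (orders from related parties share a queue here, keyed by the party), and
    the batch lasts a random duration between the configured bounds. Draining
    removes one order per queue per pass until the queues are empty."""

    def __init__(self, num_queues, min_duration, max_duration):
        self.queues = [[] for _ in range(num_queues)]
        self.duration = random.uniform(min_duration, max_duration)

    def add(self, order, related_key=None):
        if related_key is not None:
            idx = hash(related_key) % len(self.queues)   # related parties -> same queue
        else:
            idx = random.randrange(len(self.queues))     # random assignment
        self.queues[idx].append(order)

    def drain(self):
        transferred = []
        while any(self.queues):
            for q in self.queues:                        # one pass over the queue set
                if q:
                    transferred.append(q.pop(0))         # one order per queue per pass
        return transferred

batch = OrderBatch(num_queues=3, min_duration=0.5, max_duration=2.0)
for order in ["A1", "A2", "B1", "C1"]:
    batch.add(order)
print(batch.duration, batch.drain())
```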

Multi-destination traffic handling optimizations in a network device

When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease the enqueue rate of certain classes of traffic, such as multicast traffic, whose reception may have caused and/or be likely to exacerbate garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to more optimally handle multi-destination traffic, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. If a high-priority multi-destination data unit is admitted to any queue, the high-priority multi-destination data unit can no longer be dropped, and is linked to a replication queue for replication.
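
A compact sketch of the threshold-driven throttling, using hypothetical high/low watermarks and rate values; a real device would act on hardware enqueue logic rather than returning a number.

```python
# Illustrative sketch; the thresholds and rates are hypothetical values.
class MulticastThrottle:
    """Reduces the multicast enqueue rate while the buffer space queued for
    garbage collection exceeds a high threshold, and restores the normal rate
    once the backlog shrinks back below a low threshold."""

    def __init__(self, high_threshold, low_threshold, normal_rate, reduced_rate):
        self.high = high_threshold
        self.low = low_threshold
        self.normal_rate = normal_rate
        self.reduced_rate = reduced_rate
        self.throttled = False

    def enqueue_rate(self, gc_backlog_bytes):
        if not self.throttled and gc_backlog_bytes > self.high:
            self.throttled = True          # take the rate-reducing action
        elif self.throttled and gc_backlog_bytes < self.low:
            self.throttled = False         # reverse it once the backlog is acceptable
        return self.reduced_rate if self.throttled else self.normal_rate

t = MulticastThrottle(high_threshold=1_000_000, low_threshold=250_000,
                      normal_rate=100, reduced_rate=25)
print(t.enqueue_rate(1_200_000), t.enqueue_rate(600_000), t.enqueue_rate(100_000))
```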

Method and Apparatus for Queue Scheduling
20230117851 · 2023-04-20

Embodiments of this application disclose a method and an apparatus for queue scheduling, to reduce network latency in a packet transmission process. The method includes: a first device obtains a first packet balance when scheduling a first queue, where the first packet balance indicates a volume of packets that can be dequeued from the first queue; and the first device schedules a second queue based on the first packet balance.
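
A minimal sketch of one way the two-queue interaction could look, with illustrative names (first_quota, slack): the amount of service granted to the second queue is derived from the first queue's packet balance.

```python
# Illustrative sketch; first_quota and slack are assumed names.
from collections import deque

def schedule(first_queue, second_queue, first_quota):
    """Dequeue up to the first queue's packet balance, then grant the second
    queue an amount of service derived from that balance (here, the unused part)."""
    sent = []
    balance = min(first_quota, len(first_queue))    # packets that can leave queue 1
    for _ in range(balance):
        sent.append(first_queue.popleft())
    slack = first_quota - balance                   # schedule queue 2 based on the balance
    for _ in range(min(slack, len(second_queue))):
        sent.append(second_queue.popleft())
    return sent

q1, q2 = deque(["a1", "a2"]), deque(["b1", "b2", "b3"])
print(schedule(q1, q2, first_quota=4))              # ['a1', 'a2', 'b1', 'b2']
```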

Low Latency Queuing System
20230164088 · 2023-05-25

Disclosed herein are methods and apparatuses for processing network traffic by a queuing system, which may include: receiving pointers to chunks of memory allocated responsive to receipt of network traffic, the chunks of memory each including a portion of a queue batch, wherein the queue batch includes a plurality of queue requests; generating a data structure including the pointers and a reference count; assigning a first queue request of the plurality of queue requests to a second core; generating a first structured message for the first queue request; and storing the first structured message in a structured message passing queue associated with the second core, wherein a second processing thread associated with the second core, responsive to receiving the first structured message, processes the first queue request by retrieving it from at least one of the chunks of memory.
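
A toy sketch using a Python thread as the "second core" and a standard queue as the structured message passing queue; Batch, StructuredMessage, and the byte-string "chunks" are illustrative stand-ins for the memory chunks and reference-counted data structure.

```python
# Illustrative sketch; Batch, StructuredMessage, and the worker thread are
# stand-ins for the reference-counted data structure and the second core.
import queue
import threading
from dataclasses import dataclass

@dataclass
class Batch:
    chunks: list                 # "pointers" to chunks of memory (byte buffers here)
    ref_count: int               # dropped as queue requests are processed

@dataclass
class StructuredMessage:
    batch: Batch
    request_index: int           # which queue request inside the batch to process

def worker(inbox, results):
    """Second processing thread: pulls structured messages and retrieves each
    queue request from the referenced chunk of memory."""
    while True:
        msg = inbox.get()
        if msg is None:
            break
        request = msg.batch.chunks[msg.request_index]
        results.append(request.decode())
        msg.batch.ref_count -= 1

inbox, results = queue.Queue(), []
second_core = threading.Thread(target=worker, args=(inbox, results))
second_core.start()

batch = Batch(chunks=[b"req-0", b"req-1"], ref_count=2)
for i in range(len(batch.chunks)):
    inbox.put(StructuredMessage(batch=batch, request_index=i))  # message-passing queue
inbox.put(None)
second_core.join()
print(results, batch.ref_count)    # ['req-0', 'req-1'] 0
```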

Congestion control method and network device
11652752 · 2023-05-16

A network device adds a fixed value to a congestion threshold (CT) when a first period ends. The network device detects whether a difference obtained by subtracting the average traffic load of a queue in the first period from the average traffic load of the queue in a second period is greater than a target increase value, and sets the CT based on the detection result when the second period ends, where the first period precedes the second period. The network device marks a received packet when the quantity of packets buffered in the queue is greater than the CT, enqueues the marked packet, and sends the marked packet to a receiving device.
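
A rough sketch of one reading of the CT update and marking steps, with hypothetical parameter names (fixed_step, target_increase); the period bookkeeping and the exact way the detection result sets the CT are simplified.

```python
# Illustrative sketch of one reading of the described sequence; parameter
# names (fixed_step, target_increase) are assumptions.
def update_ct(ct, fixed_step, load_first_period, load_second_period, target_increase):
    """The CT was raised by a fixed value at the end of the first period; at the
    end of the second period, keep the raise only if average traffic load grew
    by more than the target increase, otherwise revert to the previous CT."""
    raised_ct = ct + fixed_step
    if load_second_period - load_first_period > target_increase:
        return raised_ct
    return ct

def handle_packet(queue_depth, ct, packet):
    """Mark the packet when more packets are buffered than the CT allows, then
    enqueue it and forward it toward the receiving device."""
    return {"payload": packet, "marked": queue_depth > ct}

ct = update_ct(ct=100, fixed_step=10, load_first_period=0.55,
               load_second_period=0.72, target_increase=0.10)
print(ct, handle_packet(queue_depth=115, ct=ct, packet=b"data"))
```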

Fair Arbitration Between Multiple Sources Targeting a Destination
20230144797 · 2023-05-11

A hardware module comprises at least a first ingress buffer and a second ingress buffer, where the second ingress buffer holds data packets from a plurality of source components. To ensure fairness between one or more sources providing data to the first ingress buffer and the plurality of sources providing data to the second ingress buffer, processing circuitry examines source identifiers in packets held in the second ingress buffer and selects between the buffers so as to arbitrate between the sources. In some embodiments, the examination of the source identifiers provides statistics for a weighted round robin between the ingress buffers. In other embodiments, the source identifier of whichever packet is currently at the head of the second ingress buffer is used to perform a simple round robin between the sources.
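
A small sketch of the weighted variant, assuming each packet carries a (source_id, payload) pair; buffer B's weight is taken from the distinct source identifiers currently visible in its packets, which is only an approximation of the statistics-gathering the abstract describes.

```python
# Illustrative sketch; packets are (source_id, payload) pairs and the weighting
# rule is an approximation of the statistics described in the abstract.
from collections import deque

def arbitrate(buf_a, buf_b, rounds):
    """Weighted round robin between two ingress buffers: buffer B aggregates
    several sources, so it is weighted by the number of distinct source IDs
    currently visible in its packets, keeping per-source service roughly fair."""
    granted = []
    for _ in range(rounds):
        weight_b = len({src for src, _ in buf_b}) or 1
        if buf_a:
            granted.append(buf_a.popleft())            # one grant for buffer A's source
        for _ in range(min(weight_b, len(buf_b))):
            granted.append(buf_b.popleft())            # one grant per source behind buffer B
    return granted

buf_a = deque([("S0", "a0"), ("S0", "a1")])
buf_b = deque([("S1", "b0"), ("S2", "c0"), ("S1", "b1")])
print(arbitrate(buf_a, buf_b, rounds=2))
```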