H04L47/62

Fair arbitration between multiple sources targeting a destination

A hardware module comprises at least a first ingress buffer and a second ingress buffer, where the second ingress buffer holds data packets from a plurality of source components. To ensure fairness between one or more sources providing data to the first ingress buffer and the plurality of sources providing data to the second ingress buffer, processing circuitry examines source identifiers in packets held in the second ingress buffer and selects between the buffers so as to arbitrate between the sources. In some embodiments, the examination of the source identifiers provides statistics for a weighted round robin between the ingress buffers. In other embodiments, the source identifier of whichever packet is currently at the head of the second ingress buffer is used to perform a simple round robin between the sources.
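The weighted variant described above can be sketched in software: the grant count for the multi-source buffer is derived from the distinct source identifiers currently visible in it. This is a minimal illustrative model (the packet dictionaries and the `src` field are assumptions, not the patented hardware):

```python
from collections import deque

def drain_weighted(buf1, buf2):
    """Weighted round robin between two ingress buffers: per round, buf1
    gets one grant, while buf2 gets one grant per distinct source ID it
    currently holds, so its many sources are not starved by buf1."""
    out = []
    while buf1 or buf2:
        # Weight of buf2 = number of distinct source IDs examined in it.
        w2 = len({p["src"] for p in buf2})
        if buf1:
            out.append(buf1.popleft())
        for _ in range(w2):
            if buf2:
                out.append(buf2.popleft())
    return out
```

With one source feeding `buf1` and two sources interleaved in `buf2`, each of the three sources ends up with the same service rate.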

Out-of-order packet handling in 5G/new radio

A user equipment (UE) can receive a first data stream and a second data stream; store data units of the second data stream, as stored data units, in a buffer while a retransmission operation is performed for the first data stream; determine that a threshold is satisfied with regard to the buffer, wherein the threshold is associated with a counter that is maintained based on the storing of the data units; and provide the stored data units based on determining that the threshold is satisfied.
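The buffer-and-counter behavior can be modeled in a few lines. This is a sketch of the claimed flow only; the class name, the flush policy, and the threshold semantics are illustrative assumptions:

```python
class ReorderBuffer:
    """Buffers second-stream data units while a retransmission is pending
    on the first stream; a counter tracks stored units and the buffered
    units are delivered once the counter satisfies the threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.buffer = []       # stored data units of the second stream
        self.counter = 0       # maintained based on the storing of units
        self.delivered = []    # units provided to upper layers

    def store(self, unit):
        self.buffer.append(unit)
        self.counter += 1
        if self.counter >= self.threshold:
            self.flush()

    def flush(self):
        # Provide the stored units once the threshold is satisfied.
        self.delivered.extend(self.buffer)
        self.buffer.clear()
        self.counter = 0
```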

DATA PROCESSING METHOD, DATA PROCESSING APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT
20220417169 · 2022-12-29

Provided in the present disclosure are a data processing method and apparatus, an electronic device, a storage medium, and a program product. The method includes: determining a plurality of candidate data pieces, where each candidate data piece is provided by a corresponding data source; and determining a target data piece based on the priorities, in a current processing cycle, of the data sources corresponding to the plurality of candidate data pieces, wherein a same data source has different priorities in different processing cycles, and the priority sequence numbers of a same data source in different processing cycles satisfy a nonlinear relationship.
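One way to read the nonlinear-priority constraint is that a source's priority sequence number is not a linear function of the cycle index. The quadratic offset below is purely an assumed example of such a schedule, not the formula from the disclosure:

```python
def priority_of(source_idx, cycle, n_sources):
    """One possible nonlinear priority schedule (assumption): rotate
    priorities by a quadratic offset, so the sequence of priority numbers
    a given source sees across cycles is not linear in the cycle index."""
    return (source_idx + cycle * cycle) % n_sources

def pick_target(candidates, cycle):
    """candidates: list of (source_idx, data_piece); the candidate whose
    source has the best (lowest) priority number in this cycle wins."""
    n = len(candidates)
    return min(candidates, key=lambda c: priority_of(c[0], cycle, n))[1]
```

With three sources, source 0's priority sequence across cycles is 0, 1, 1, 0, 1, 1, …: the same source leads in some cycles and not in others, and the sequence is not an arithmetic progression.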

AI ENGINE-SUPPORTING DOWNLINK RADIO RESOURCE SCHEDULING METHOD AND APPARATUS

An Artificial Intelligence (AI) engine-supporting downlink radio resource scheduling method and apparatus are provided. The method includes: constructing an AI engine, establishing a Socket connection between the AI engine and an Open Air Interface (OAI) system, and configuring the AI engine into the OAI running environment so that the AI engine replaces the Round-Robin and fair Round-Robin scheduling algorithms adopted by Long Term Evolution (LTE) at the Media Access Control (MAC) layer in the OAI system and takes over the downlink radio resource scheduling process; sending scheduling information to the AI engine through the Socket during the downlink radio resource scheduling process of the OAI system; and utilizing the AI engine to carry out resource allocation according to the scheduling information and return the resource allocation result to the OAI system.
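The socket exchange between scheduler and engine can be sketched with a loopback stub. Everything here is an assumption for illustration: the JSON message shape, the field names (`rbs`, `ues`, `buffer`), and the trivial buffer-proportional policy standing in for the AI engine's model:

```python
import json
import socket

def ai_engine(server_sock):
    """Hypothetical AI-engine stub: accepts one connection, reads UE
    scheduling info as JSON, and returns a resource allocation. The policy
    (resource blocks proportional to reported buffer size) is a placeholder
    for the engine's learned scheduler."""
    conn, _ = server_sock.accept()
    with conn:
        info = json.loads(conn.recv(4096).decode())
        total = sum(ue["buffer"] for ue in info["ues"]) or 1
        alloc = {ue["id"]: int(info["rbs"] * ue["buffer"] / total)
                 for ue in info["ues"]}
        conn.sendall(json.dumps(alloc).encode())

def request_allocation(info, port):
    """OAI side of the exchange: send scheduling info, receive allocation."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(json.dumps(info).encode())
        return json.loads(s.recv(4096).decode())
```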

Connection management in a network adapter

A network adapter includes a network interface, a host interface and processing circuitry. The network interface connects to a communication network for communicating with remote targets. The host interface connects to a host that accesses a Multi-Channel Send Queue (MCSQ) storing Work Requests (WRs) originating from client processes running on the host. The processing circuitry is configured to retrieve WRs from the MCSQ and distribute them among multiple Send Queues (SQs) accessible by the processing circuitry, and to retrieve WRs from the multiple SQs and execute the data transmission operations specified in those WRs.
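The two-stage MCSQ-to-SQ flow can be sketched as follows. The WR dictionaries, the `channel` field, and the round-robin drain order are illustrative assumptions, not the adapter's actual scheduling policy:

```python
from collections import deque
from itertools import cycle

def distribute(mcsq, num_sqs):
    """Stage 1: fan WRs out from the shared Multi-Channel Send Queue into
    per-channel Send Queues (here: hashed by originating channel)."""
    sqs = [deque() for _ in range(num_sqs)]
    for wr in mcsq:
        sqs[wr["channel"] % num_sqs].append(wr)
    return sqs

def execute_all(sqs):
    """Stage 2: drain the SQs (round robin, an assumed policy) and
    'execute' each WR's data transmission operation."""
    sent = []
    rr = cycle(range(len(sqs)))
    remaining = sum(len(q) for q in sqs)
    while remaining:
        q = sqs[next(rr)]
        if q:
            sent.append(q.popleft()["op"])
            remaining -= 1
    return sent
```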

Inter-packet communication of machine learning information

A network switch includes one or more queues to hold packets received from a first input flow and a second input flow. The network switch also includes a packet communication switch configured to access a first header of a first packet in the one or more queues and a second header of a second packet in the one or more queues. The first header includes first machine learning (ML) information that represents a first set of state transition probabilities under a set of actions performed at the network switch. The second header includes second ML information that represents a second set of state transition probabilities under the set of actions performed at the network switch. The packet communication switch is configured to selectively modify the first header or the second header based on a comparison of the first ML information and the second ML information.
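The compare-and-modify step can be sketched on plain dictionaries. The header layout (`ml` field with `samples` and `probs`) and the keep-the-better-supported-estimate policy are assumptions standing in for whatever comparison the switch actually applies:

```python
def merge_ml_headers(pkt_a, pkt_b):
    """Selectively modify one of two packet headers carrying ML state:
    each header holds state-transition probabilities observed under the
    same action set; here the header backed by fewer observations is
    overwritten with the better-supported estimate (assumed policy)."""
    ha, hb = pkt_a["ml"], pkt_b["ml"]
    if ha["samples"] >= hb["samples"]:
        pkt_b["ml"] = dict(ha)   # modify the second header
    else:
        pkt_a["ml"] = dict(hb)   # modify the first header
    return pkt_a, pkt_b
```

This gives packets of the two flows a way to exchange learned state in-band, without a side channel between the flows.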

TIME INTERLEAVER, TIME DEINTERLEAVER, TIME INTERLEAVING METHOD, AND TIME DEINTERLEAVING METHOD
20220393991 · 2022-12-08

A convolutional interleaver included in a time interleaver, which performs convolutional interleaving, includes: a first switch that switches a connection destination of an input of the convolutional interleaver to one end of one of a plurality of branches; FIFO memories provided in all of the plurality of branches except one branch, wherein the number of FIFO memories differs among the plurality of branches; and a second switch that switches a connection destination of an output of the convolutional interleaver to the other end of one of the plurality of branches. The first and second switches switch the connection destination each time as many cells as the number of codewords per frame have passed, stepping through the plurality of branches sequentially and repeatedly.
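A minimal software model of the branch structure: branch 0 has no FIFO cells, and each subsequent branch adds one more, with the input and output switches stepping together. The per-cell switching and single-cell FIFO granularity are simplifying assumptions (the claim switches after a whole frame's worth of cells):

```python
from collections import deque

class ConvolutionalInterleaver:
    """Branch i carries i FIFO cells of delay (branch 0 has none); the
    input and output switches step to the next branch after each cell,
    sequentially and repeatedly among the branches."""

    def __init__(self, num_branches, fill=0):
        # Branch i is modeled as a deque pre-filled with i cells.
        self.branches = [deque([fill] * i) for i in range(num_branches)]
        self.idx = 0

    def push(self, cell):
        br = self.branches[self.idx]
        br.append(cell)            # first switch: feed the current branch
        out = br.popleft()         # second switch: read the delayed cell
        self.idx = (self.idx + 1) % len(self.branches)
        return out
```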

HARDWARE-IMPLEMENTED TABLES AND METHODS OF USING THE SAME FOR CLASSIFICATION AND COLLISION RESOLUTION OF DATA PACKETS
20220385593 · 2022-12-01

Introduced here are approaches to classifying traffic that comprises data packets. For each data packet, a classification engine implemented on a computing device can identify an appropriate class from amongst multiple classes using a lookup table implemented in a memory. The memory could be, for example, static random-access memory (SRAM). Moreover, the classification engine may associate an identifier with each data packet that specifies the class into which the data packet has been assigned. For example, each data packet could have an identifier appended thereto (e.g., in the form of metadata). Then, the data packets can be placed into queues based on the identifiers. Each queue may be associated with a different identifier (and thus a different class).
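The classify-tag-enqueue pipeline can be sketched with a dictionary standing in for the SRAM lookup table. The choice of DSCP as the lookup key and the class names are illustrative assumptions:

```python
from collections import defaultdict, deque

# Lookup table mapping a header field (here, an assumed DSCP value) to a
# class identifier; in hardware this would live in SRAM.
CLASS_TABLE = {0x00: "best-effort", 0x2E: "voice", 0x1A: "video"}

def classify_and_enqueue(packets, queues=None):
    """Tag each packet with its class identifier (appended as metadata)
    and place it in the queue associated with that class."""
    queues = queues if queues is not None else defaultdict(deque)
    for pkt in packets:
        pkt["class"] = CLASS_TABLE.get(pkt["dscp"], "best-effort")
        queues[pkt["class"]].append(pkt)
    return queues
```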

Incremental data processing

Incremental data processing at a computerized device includes determining a number of data sets from a plurality of data sets, each comprising values in at least two dimensions. The device accesses priority lists for a subset of the data sets. The priority lists specify data values for an ordered number of dimension value sets. Each priority list is sequentially processed to determine the specified data values for combinations of dimension values that apply to device requirements. Processing is aborted when a data value is determined for each combination of the dimension values that apply to the device requirements. A data value is selected among the determined data values. A number of data sets is determined based on the selected data values. A network route from a source device to a target device can be determined in this manner.
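The sequential scan with early abort can be sketched as follows. The tuple encoding of dimension-value combinations and the first-hit-wins rule are assumptions used only to make the control flow concrete:

```python
def resolve(priority_lists, needed):
    """Walk each ordered priority list in turn, recording the first data
    value seen for every required dimension-value combination, and abort
    as soon as all required combinations have a value."""
    resolved = {}
    for plist in priority_lists:
        for combo, value in plist:
            if combo in needed and combo not in resolved:
                resolved[combo] = value
        if len(resolved) == len(needed):
            break   # abort: every required combination is covered
    return resolved
```

In the routing application, each combination might identify a (source, target) pair and the value a route metric, so later, lower-priority lists are never touched once the needed routes are known.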