Techniques to facilitate a hardware based table lookup

Techniques to facilitate a hardware-based table lookup of a table maintained in one or more types of memories or memory domains include examples of receiving a search request forwarded from a queue management device. Examples also include implementing table lookups to obtain a result and sending the result to an output queue of the queue management device, for the queue management device to forward the result to the requestor of the search request.
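
The request/result flow described above can be sketched in software with two queues standing in for the queue management device's input and output sides. The table contents, requestor names, and `lookup_worker` function are illustrative assumptions, not from the patent.

```python
from queue import Queue

# Hypothetical in-memory stand-in for a table maintained across memory domains.
TABLE = {"10.0.0.1": "port-1", "10.0.0.2": "port-2"}

def lookup_worker(in_q: Queue, out_q: Queue) -> None:
    """Drain search requests forwarded by a queue management device,
    perform the table lookup, and enqueue (requestor, result) so the
    queue manager can route the result back to the requestor."""
    while not in_q.empty():
        requestor, key = in_q.get()
        result = TABLE.get(key)  # the lookup itself; hardware would probe memory here
        out_q.put((requestor, result))

in_q, out_q = Queue(), Queue()
in_q.put(("client-A", "10.0.0.2"))  # a forwarded search request
lookup_worker(in_q, out_q)
```

The output queue now holds the result tagged with its requestor, ready to be forwarded back.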

METHOD AND APPARATUS FOR PROCESSING DATA PACKETS, DEVICE, AND STORAGE MEDIUM
20210399993 · 2021-12-23

Provided are method and apparatus for processing data packets, a device, and a storage medium that relate to the field of communications. The method includes: receiving multiple data packets of an identical service transmitted in multiple frequency bands, where each of the data packets carries arrangement indication information; and sorting the data packets based on the arrangement indication information carried in each of the data packets.
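As a minimal sketch of the sorting step, assume the arrangement indication information is a per-packet sequence number (the patent does not specify its form); packets of one service arriving out of order across frequency bands are then restored to order:

```python
def reassemble(packets):
    """Sort packets of one service, received over multiple frequency bands,
    by the arrangement indication (here assumed to be a sequence number)
    that each packet carries."""
    return [payload for _, payload in sorted(packets, key=lambda p: p[0])]

# Packets arrive out of order because they traversed different bands.
received = [(2, "seg-C"), (0, "seg-A"), (1, "seg-B")]
segments = reassemble(received)
```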

PREEMPTIVE PACKET TRANSMISSION
20210377180 · 2021-12-02

Disclosed herein is technology to reduce the latency of frames through a network device supporting a plurality of priorities. In an implementation, a method comprises configuring one or more of the plurality of priorities with a preemptive right over one or more other priorities of the plurality; receiving frames in a sequence, each of the frames having a frame priority comprising one of the plurality of priorities; queuing the received frames in a predetermined order based on frame arrival time and frame priority; transmitting a current frame based on the current frame priority and current frame arrival time; stopping transmission of the current frame when a later frame in the sequence is received whose priority has a preemptive right over the current frame priority; transmitting an invalid frame check sequence; transmitting the later frame; and restarting transmission of the current frame after the later frame has been transmitted.
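
The preemption sequence (partial transmission, invalid FCS, express frame, restart) can be illustrated with a toy wire timeline. The frame contents, the `BAD-FCS` marker, and the fixed preemption point are assumptions for illustration only:

```python
def transmit(frames, preempt_at, preempting_frame):
    """Toy timeline: begin sending the first frame; at byte index `preempt_at`
    a frame with a preemptive-right priority arrives, so the device emits an
    invalid frame check sequence, sends the express frame, then restarts the
    preempted frame from the beginning (as the abstract describes)."""
    wire = []
    current = frames[0]
    wire.append(current[:preempt_at])   # partial transmission of the current frame
    wire.append("BAD-FCS")              # invalid FCS invalidates the fragment
    wire.append(preempting_frame)       # express frame transmitted in full
    wire.append(current)                # restart the preempted frame
    return wire

out = transmit(["best-effort-frame"], 4, "express-frame")
```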

DYNAMIC RESOURCE ALLOCATION AIDED BY REINFORCEMENT LEARNING
20220200932 · 2022-06-23

A communication system in which dynamic resource allocation (DRA) control is aided by reinforcement learning (RL). An example embodiment may control one or more buffer queues populated by downstream and/or upstream data streams. The egress rates of the buffer queues can be dynamically controlled using an RL technique, according to which a learning agent can adaptively change the state-to-action mapping function of the DRA controller while circumventing the RL exploration phase and relying on extrapolation of the already taken actions instead. This feature may result in at least two benefits: (i) cancellation of a performance penalty typically associated with RL exploration; and (ii) faster learning of the environment, as the learning agent can determine the performance metrics of many actions per state in a single occurrence of the state. In an example embodiment, the communication system may be a DSL system, a PON system, or a wireless communication system.

MESSAGE ORDERING BUFFER

The disclosed embodiments, collectively referred to as the “Message Ordering Buffer” or “MOB”, relate to an improved messaging platform, or processing system, which may also be referred to as a message processing architecture or platform, which routes messages from a publisher to a subscriber ensuring related messages, e.g., ordered messages, are conveyed to a single recipient, e.g., processing thread, without unnecessarily committing resources of the architecture to that recipient or otherwise preventing message transmission to other recipients. The disclosed embodiments further include additional features which improve efficiency and facilitate deployment in different application environments. The disclosed embodiments may be deployed as a message oriented middleware component directly installed, or accessed as a service, and accessed by publishers and subscribers, as described herein, so as to electronically exchange messages therebetween.
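
One common way to convey all related messages to a single recipient without pinning a worker to any one publisher is a stable hash of an ordering key. The abstract does not say MOB works this way; the key names, worker count, and CRC-based mapping below are illustrative assumptions:

```python
import zlib
from collections import defaultdict

def dispatch(messages, n_workers):
    """Sketch of MOB-style routing: every message carrying the same ordering
    key lands on the same worker (preserving relative order), while messages
    with unrelated keys can spread across all workers."""
    per_worker = defaultdict(list)
    for key, payload in messages:
        worker = zlib.crc32(key.encode()) % n_workers  # stable key -> worker map
        per_worker[worker].append(payload)
    return per_worker

msgs = [("order-42", "new"), ("order-7", "new"), ("order-42", "fill")]
routed = dispatch(msgs, 4)
```

Because the mapping is stateless, no worker is ever reserved for a publisher: a worker only holds a key for as long as messages with that key keep arriving.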

Spraying for unequal link connections in an internal switch fabric

In general, techniques are described for facilitating balanced cell handling by fabric cores of a fabric plane for an internal device switch fabric. In some examples, a routing system includes a plurality of fabric endpoints and a switching fabric comprising a fabric plane to switch cells among the fabric endpoints. The fabric plane includes two fabric cores and one or more inter-core links connecting the fabric cores. Each fabric core selects an output port of the fabric core to which to route a received cell of the cells based on (i) an input port of the fabric core on which the received cell was received and (ii) a destination fabric endpoint for the received cell, at least a portion of the selected output ports being connected to the inter-core links, and switches the received cell to the selected output port.
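
The output-port selection rule, keyed on the (input port, destination endpoint) pair, reduces to a table lookup per fabric core. The port names, endpoint labels, and table contents below are hypothetical:

```python
# Hypothetical routing table for one fabric core of a 2-core fabric plane:
# the output port depends on both the input port and the destination endpoint.
ROUTE = {
    (0, "ep-A"): "local-1",       # destination reachable from this core directly
    (0, "ep-B"): "intercore-0",   # destination reached via an inter-core link
    (1, "ep-B"): "intercore-1",   # same destination, different input port
}

def switch_cell(in_port, dest):
    """Select the output port for a received cell per the described rule."""
    return ROUTE[(in_port, dest)]
```

Spreading cells for the same destination across different inter-core links depending on input port is what keeps the load on unequal links balanced.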

Data scheduling method and switching device

Embodiments of this application provide a method that includes: receiving a first data flow that includes a plurality of data units; inputting N₁ data units of the plurality of data units and a first source marking unit into a first source queue; inputting M₁ data units of the plurality of data units and a first target marking unit into a first target queue, where the N₁ data units and the M₁ data units are different data units; scheduling the N₁ data units and the first source marking unit based on the first source marking unit and the first target marking unit; and scheduling the first target marking unit and the M₁ data units, where the first target marking unit and the M₁ data units are scheduled later than the N₁ data units and the first source marking unit.
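
The ordering constraint can be sketched with two queues and marker sentinels. The marker strings, the split point, and the drain-source-then-target policy are assumptions used only to make the scheduling order concrete:

```python
from collections import deque

def schedule(flow, n1):
    """Split one data flow across a source queue and a target queue with
    marking units, then drain so that the N1 units and the source marker
    are always scheduled before the target marker and the M1 units."""
    source_q = deque(flow[:n1] + ["SRC-MARK"])   # N1 units + source marking unit
    target_q = deque(["TGT-MARK"] + flow[n1:])   # target marking unit + M1 units
    out = []
    while source_q:
        out.append(source_q.popleft())           # source side goes first
    while target_q:
        out.append(target_q.popleft())           # target side strictly later
    return out

order = schedule(["u1", "u2", "u3", "u4", "u5"], n1=3)
```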

Cyclic Queuing and Forwarding (CQF) Segmentation
20230328002 · 2023-10-12

A method for communicating time sensitive data streams in a network. The method includes synchronizing queuing and transmission of data in a set of output buffers for buffering streams associated with a certain class of service. The method includes processing data packets of a stream using a cyclic flow meter, wherein the cyclic flow meter limits transfer of the data packets of the stream to the set of output buffers of the network node according to a predetermined amount per cycle based on an output frequency. The method includes transmitting, according to the output frequency, the data packets of the stream from a non-empty output buffer of the set of output buffers, and transferring the data packets of the stream from the cyclic flow meter to an empty output buffer of the set of output buffers.
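
The cyclic flow meter's behavior can be sketched as a ping-pong pair of buffers: each cycle, at most a quota of packets is transferred into the empty buffer while the previously filled buffer is transmitted, and the roles swap. The two-buffer simplification, quota value, and packet names are assumptions for illustration:

```python
from collections import deque

def run_cqf(arrivals, quota, cycles):
    """Ping-pong sketch of CQF: each cycle, the flow meter moves at most
    `quota` packets into the buffer currently being filled, while the other
    (previously filled) buffer is drained at the output frequency."""
    buffers = [deque(), deque()]
    incoming = deque(arrivals)
    sent = []
    fill = 0                                   # buffer being filled this cycle
    for _ in range(cycles):
        tx = buffers[1 - fill]                 # the non-empty buffer transmits
        while tx:
            sent.append(tx.popleft())
        moved = 0
        while incoming and moved < quota:      # meter limits transfer per cycle
            buffers[fill].append(incoming.popleft())
            moved += 1
        fill = 1 - fill                        # swap roles for the next cycle
    return sent

out = run_cqf(["p1", "p2", "p3"], quota=2, cycles=3)
```

Because at most `quota` packets enter per cycle, a packet admitted in one cycle is guaranteed to depart in the next, which bounds per-hop latency to roughly one cycle.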

Rate Limited Scheduler For Solicited Data Transfers

A flow rate control method for solicited data communications includes: receiving, at a first node of a communications network, a request-to-send (RTS) signal from a second node of the communications network, the RTS signal indicating a size of a solicited data transmission of the second node; determining, by the first node, whether a rate-limiting counter is above zero, where the rate-limiting counter is programmed to increase at a programmed rate; and, in response to the rate-limiting counter being above zero, scheduling, by the first node, a clear-to-send (CTS) signal to be sent from the first node to the second node over the communications network, and subtracting, by the first node, a value corresponding to the size of the solicited data transmission of the second node from the rate-limiting counter.
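
The counter mechanics resemble a token bucket that is allowed to go negative: a CTS is granted whenever the counter is above zero, and the full transfer size is then charged. A minimal sketch, with the class name, tick granularity, and rate value as assumptions:

```python
class CtsScheduler:
    """Sketch of the described flow control: the rate-limiting counter refills
    at a programmed rate; a CTS is granted only while the counter is above
    zero, and the solicited transfer's size is then subtracted (possibly
    driving the counter negative, which defers future grants)."""

    def __init__(self, rate):
        self.rate = rate        # credits added per tick (programmed rate)
        self.counter = 0

    def tick(self):
        self.counter += self.rate

    def handle_rts(self, size):
        if self.counter > 0:    # grant CTS and charge the transfer size
            self.counter -= size
            return "CTS"
        return None             # no credit: CTS is not scheduled yet

sched = CtsScheduler(rate=100)
sched.tick()                    # one refill interval elapses
```

Charging the full size only after the grant means a single oversized transfer can overdraw the counter, after which grants pause until refills bring it back above zero.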