H04L47/62

Method for measuring a transmission delay with control of degrees of contention applied to a data frame

The invention relates to a method for transmitting a target data frame (fA) on a path comprising at least one router (R) that has input ports (P1, P2, P3), at least one output port (PS), and an arbitration unit (UA) configured to select a data frame from a plurality of data frames, each coming from a different input port and competing for transmission by one and the same output port. The method comprises specifying, for each of the input ports of the router, data frames (fB, fC) competing with the target data frame for transmission by a target output port of the router. An end-to-end transmission time of the target data frame on the path is then measured while the arbitration unit selects the competing data frame (fB) before the target data frame (fA) for transmission by the target output port (PS).
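The measurement described above amounts to timing the target frame while the arbiter deliberately serves competitors first. A minimal sketch of that worst-case bound, assuming a single store-and-forward router, a per-bit link rate, and no preemption (all names and parameters are illustrative, not the patented method):

```python
def end_to_end_delay(target_len, competing_lens, link_rate):
    """Worst-case sketch: the arbitration unit serves every competing
    frame before the target frame, so the target's delay is the
    transmission time of all competitors plus its own transmission time."""
    wait = sum(competing_lens) / link_rate   # competitors drained first
    return wait + target_len / link_rate     # then the target itself
```

For example, a 1000-bit target frame behind 1500 bits of competing traffic on a 1000 bit/s link sees a 2.5 s end-to-end time.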

Expandable Queue
20230010161 · 2023-01-12 ·

A network device includes packet processing circuitry and queue management circuitry. The packet processing circuitry is configured to transmit and receive packets to and from a network. The queue management circuitry is configured to store, in a memory, a queue for queuing data relating to processing of the packets, the queue including a primary buffer and an overflow buffer, to choose between a normal mode and an overflow mode based on a defined condition, to queue the data only in the primary buffer when operating in the normal mode, and, when operating in the overflow mode, to queue the data in a concatenation of the primary buffer and the overflow buffer.
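The primary/overflow concatenation can be sketched as a small data structure. This is an illustrative reading, assuming the mode-switch condition is "primary buffer full" (the abstract leaves the defined condition open), with hypothetical names throughout:

```python
from collections import deque

class ExpandableQueue:
    """Sketch: fixed primary buffer plus an overflow buffer that is only
    used once the (assumed) mode-switch condition holds."""

    def __init__(self, primary_capacity, overflow_capacity):
        self.primary = deque()
        self.overflow = deque()
        self.primary_capacity = primary_capacity
        self.overflow_capacity = overflow_capacity

    @property
    def overflow_mode(self):
        # Assumed condition: enter overflow mode when the primary is full.
        return len(self.primary) >= self.primary_capacity

    def enqueue(self, item):
        if not self.overflow_mode:
            self.primary.append(item)                 # normal mode
        elif len(self.overflow) < self.overflow_capacity:
            self.overflow.append(item)                # overflow mode
        else:
            raise OverflowError("queue full")

    def dequeue(self):
        item = self.primary.popleft()
        # Refill from the overflow buffer so the logical order of the
        # concatenation (primary first, then overflow) is preserved.
        if self.overflow:
            self.primary.append(self.overflow.popleft())
        return item
```

Dequeue order is unchanged by the mode switch: items come out in the order they were enqueued across both buffers.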

TIME INTERLEAVER, TIME DEINTERLEAVER, TIME INTERLEAVING METHOD, AND TIME DEINTERLEAVING METHOD
20230216807 · 2023-07-06 ·

A convolutional interleaver included in a time interleaver, which performs convolutional interleaving, includes: a first switch that switches a connection destination of an input of the convolutional interleaver to one end of one of a plurality of branches; FIFO memories provided in each of the plurality of branches except one branch, wherein the number of FIFO memories differs among the plurality of branches; and a second switch that switches a connection destination of an output of the convolutional interleaver to the other end of one of the plurality of branches. The first and second switches switch the connection destination each time a number of cells equal to the number of codewords per frame has passed, by advancing the corresponding branch of the connection destination sequentially and repeatedly among the plurality of branches.
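The branch structure follows the classic convolutional interleaver: one branch has no FIFO, and each subsequent branch adds more FIFO cells, while the two switches step through the branches in lockstep. A minimal sketch (the switching granularity here is one cell per step, a simplification of the codewords-per-frame rule in the abstract):

```python
from collections import deque

def convolutional_interleave(cells, num_branches, fifo_depth=1):
    """Sketch of a convolutional interleaver: branch i delays its input
    by i * fifo_depth cells (branch 0 has no FIFO); the input and output
    switches advance through the branches cyclically and in lockstep."""
    # Each branch's shift register is pre-filled with None placeholders,
    # which appear at the output until real data has propagated through.
    fifos = [deque([None] * (i * fifo_depth)) for i in range(num_branches)]
    out = []
    for k, cell in enumerate(cells):
        branch = k % num_branches       # both switches on the same branch
        fifos[branch].append(cell)
        out.append(fifos[branch].popleft())
    return out
```

With two branches, every second cell is delayed by one position, spreading adjacent cells apart in time; a matching deinterleaver applies the complementary delays in reverse branch order.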

Method of Managing Data Transmission for Ensuring Per-Flow Fair Bandwidth Sharing
20230216805 · 2023-07-06 ·

A computer-implemented method includes receiving a data packet; identifying, from a list of virtual queues, the virtual queue to which the data packet pertains; and determining whether the identified virtual queue's size exceeds a threshold maximum size. When the size does not exceed the threshold maximum size, the identified virtual queue is increased based on a size of the data packet and the data packet is forwarded. The method further includes setting a virtual queue from the list of virtual queues as a target queue; determining a service capacity based on an update time interval; and increasing a credit allowance based on the service capacity. The target queue is reduced by an amount based on the credit allowance size, and the credit allowance is reduced by the same amount.

MESSAGE ORDERING BUFFER

The disclosed embodiments, collectively referred to as the “Message Ordering Buffer” or “MOB”, relate to an improved messaging platform, or processing system, also referred to as a message processing architecture or platform, which routes messages from a publisher to a subscriber while ensuring that related messages, e.g., ordered messages, are conveyed to a single recipient, e.g., a processing thread, without unnecessarily committing resources of the architecture to that recipient or otherwise preventing message transmission to other recipients. The disclosed embodiments further include additional features which improve efficiency and facilitate deployment in different application environments. The disclosed embodiments may be deployed as a message-oriented middleware component, directly installed or accessed as a service, and accessed by publishers and subscribers, as described herein, so as to electronically exchange messages therebetween.
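One way to read the core guarantee, sketched with hypothetical names (this is not the patented design, just an illustration of keyed delivery without pinning): messages sharing an ordering key go into a per-key FIFO, and a key is leased to at most one consumer at a time, released on acknowledgement.

```python
from collections import defaultdict, deque

class MessageOrderingBuffer:
    """Sketch: related messages (same key) reach one recipient at a time,
    but a recipient is never permanently bound to a key."""

    def __init__(self):
        self.pending = defaultdict(deque)   # key -> FIFO of messages
        self.leased = set()                 # keys currently in flight

    def publish(self, key, msg):
        self.pending[key].append(msg)

    def take(self):
        """Hand out the next message of any unleased key, leasing that key
        so no other consumer sees the same key until it is acknowledged."""
        for key, q in self.pending.items():
            if q and key not in self.leased:
                self.leased.add(key)
                return key, q.popleft()
        return None

    def ack(self, key):
        self.leased.discard(key)            # key becomes available again
```

While one key is leased, messages for other keys still flow to other consumers, so ordering per key does not block the platform as a whole.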

Messaging system failover

A device receives a notification indicating a failure of a first server device responsible for a primary message queue that includes messages at a time of the failure. A second server device is responsible for a standby message queue to which the messages are replicated, where a position in the standby message queue and a message time are assigned to each of the replicated messages. The device obtains a record time that identifies the message time of one of the messages that was last obtained from the primary message queue prior to the failure, compares an adjusted record time and the message time of one or more of the messages of the standby message queue to determine a starting position in the standby message queue, and processes messages obtained from the standby message queue beginning at one of the messages assigned to the position that matches the starting position.
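The resume logic after failover reduces to a time comparison over the standby queue. A minimal sketch, assuming the adjusted record time is the record time minus a replication-skew allowance (the adjustment rule and all names are assumptions):

```python
def failover_start_position(standby, record_time, skew):
    """Sketch: find where to resume in the standby queue after failover.
    `standby` is a list of (position, message_time) pairs in queue order;
    processing restarts at the first message whose time is at or after
    the adjusted record time."""
    adjusted = record_time - skew
    for position, message_time in standby:
        if message_time >= adjusted:
            return position
    return None   # everything in the standby queue predates the failure
```

Backing off by the skew trades a few duplicate messages for the guarantee that none are silently skipped across the failover.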

NETWORK NODE SIMULATION METHOD BASED ON LINUX CONTAINER

A large-scale network node simulation method based on Linux container is provided, which solves problems of low packet transmission efficiency and multi-thread creation in real-time simulation in a large-scale network scenario. The method includes: scheduling all container nodes in a scenario; managing, by a container node, a dynamic thread through an idle thread management queue, and setting a finite state machine and a function pointer for the dynamic thread; registering, by a source container node, an output queue with a next-hop container node, and informing the next-hop container node to allocate a dynamic thread for receiving and processing the output queue. Packet transmission is realized between the container nodes through data units created in a shared memory. The sending thread and the receiving thread dynamically adjust the number of dynamic threads by checking the state of the output queue.
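The final step, sizing the dynamic-thread pool from the output queue's state, can be sketched as a target-count calculation. The per-thread drain capacity and the sizing rule here are assumptions for illustration:

```python
def target_thread_count(queue_len, per_thread_load, max_threads):
    """Sketch: grow the receiver's dynamic-thread count when the output
    queue backlog exceeds what the active threads can drain, and shrink
    it (never below one) when the queue empties."""
    needed = -(-queue_len // per_thread_load)   # ceil(queue_len / load)
    return min(max(needed, 1), max_threads)
```

Threads above the target would be returned to the idle thread management queue rather than destroyed, which is what avoids the thread-creation cost the method targets.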

RADIO UNIT CASCADING IN RADIO ACCESS NETWORKS
20230217311 · 2023-07-06 ·

The described technology is generally directed towards radio unit cascading in radio access networks. Radio units (RUs) can be configured with processors adapted to support daisy chaining of multiple RUs, so that the multiple RUs can connect to one hardware interface at a distributed unit (DU). An RU processor for a given RU can be configured to receive downlink data, including downlink data for the given RU as well as downlink data for other downstream RUs. The RU processor can extract the downlink data for the given RU and forward the downlink data for other downstream RUs via a southbound interface. The RU processor can also be configured to receive uplink data from the other RUs, multiplex the received uplink data from the other RUs with uplink data from the given RU, and send the resulting multiplexed data towards the DU via a northbound interface.
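The per-RU forwarding behavior splits cleanly into a downlink demultiplex and an uplink merge. A sketch under the assumption that frames carry an explicit destination RU identifier (the actual addressing scheme is not specified in the abstract):

```python
def ru_forward_downlink(frames, my_ru_id):
    """Downlink sketch: keep the frames addressed to this RU and pass the
    rest southbound to downstream RUs. `frames` is a list of
    (ru_id, payload) pairs; the pair encoding is an assumption."""
    mine = [p for ru, p in frames if ru == my_ru_id]
    southbound = [(ru, p) for ru, p in frames if ru != my_ru_id]
    return mine, southbound

def ru_merge_uplink(own_uplink, downstream_uplink):
    """Uplink sketch: multiplex this RU's uplink with what arrived from
    downstream RUs before sending northbound toward the DU."""
    return own_uplink + downstream_uplink
```

Chaining these per RU is what lets the whole daisy chain share a single hardware interface at the DU.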

Managing virtual output queues

A first node of a packet switched network transmits at least one flow of protocol data units of a network to at least one output context of one of a plurality of second nodes of the network. The first node includes X virtual output queues (VOQs). The first node receives, from at least one of the second nodes, at least one fair rate record. Each fair rate record corresponds to a particular second node output context and describes a recommended rate of flow to the particular output context. The first node allocates up to X of the VOQs among flows corresponding to i) currently allocated VOQs, and ii) the flows corresponding to the received fair rate records. The first node operates each allocated VOQ according to the corresponding recommended rate of flow until a deallocation condition obtains for each allocated VOQ.
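The allocation step can be sketched as a budgeted map update: existing VOQs keep their allocation (with rates refreshed), and new flows from fair rate records claim VOQs only while the X-queue budget lasts. Names and the tie-breaking order are assumptions:

```python
def allocate_voqs(current, fair_rates, max_voqs):
    """Sketch: allocate up to `max_voqs` VOQs among currently allocated
    flows and flows named in received fair-rate records; each entry maps
    flow -> recommended rate at which its VOQ is operated."""
    alloc = dict(current)                  # currently allocated VOQs persist
    for flow, rate in fair_rates.items():
        if flow in alloc:
            alloc[flow] = rate             # refresh the recommended rate
        elif len(alloc) < max_voqs:
            alloc[flow] = rate             # claim a free VOQ for a new flow
    return alloc
```

Flows that arrive after the budget is exhausted simply wait until a deallocation condition frees a VOQ, which is why the scheme bounds per-destination state at X queues.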