H04L47/56

Increasing QoS throughput and efficiency through lazy byte batching

Described embodiments improve the performance of a computer network by selectively forwarding packets to bypass quality of service (QoS) processing, avoiding processing delays during critical periods of high demand; throughput and efficiency may be increased by sacrificing a small amount of QoS accuracy. QoS processing may be applied to only a subset of packets of a flow or connection, referred to herein as “lazy” processing or lazy byte batching. Packets that bypass QoS processing may be immediately forwarded with the same QoS settings as packets of the flow for which QoS processing is applied, resulting in substantial overhead savings with only a minimal decline in accuracy.
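The sampling policy described above might be sketched as follows. This is a minimal Python illustration assuming a fixed per-flow sampling interval; the names (`LazyQoSForwarder`, `sample_interval`, `classify`) and the DSCP-based rule are illustrative, not from the publication:

```python
from dataclasses import dataclass, field

@dataclass
class LazyQoSForwarder:
    """Apply full QoS classification only to every sample_interval-th
    packet of a flow; other packets bypass QoS and reuse the flow's
    cached settings."""
    sample_interval: int = 8
    _flows: dict = field(default_factory=dict)  # flow_id -> (count, cached_qos)

    def classify(self, packet):
        # Stand-in for the expensive QoS pipeline (hypothetical rule).
        return "high" if packet.get("dscp", 0) >= 46 else "best-effort"

    def forward(self, packet):
        count, cached = self._flows.get(packet["flow_id"], (0, None))
        if cached is None or count % self.sample_interval == 0:
            cached = self.classify(packet)      # full QoS processing
        # else: lazy path -- forward immediately with cached QoS settings
        self._flows[packet["flow_id"]] = (count + 1, cached)
        return cached
```

Only every `sample_interval`-th packet of a flow pays the cost of full classification; the rest inherit the cached result, trading a bounded amount of QoS accuracy for per-packet overhead.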

Methods and systems for queue and pipeline latency metrology in network devices and smart NICs

Inbound packets can be received by a network device that determines a receive pipeline latency metric based on a plurality of receive pipeline residency times of the inbound packets and determines a receive queue latency metric based on a plurality of receive queue residency times of the inbound packets. The receive queue latency metric and the receive pipeline latency metric can be reported to a data collector. The network device may also receive a plurality of outbound packets on a transmit queue, determine a transmit queue latency metric based on the transmit queue residency times of the outbound packets, and determine a transmit pipeline latency metric based on the transmit pipeline residency times of the outbound packets. The outbound packets may be transmitted toward their destinations. The transmit queue latency metric and the transmit pipeline latency metric can likewise be reported to the data collector.
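One way to summarize per-packet residency times into a reportable metric is sketched below. The mean/max summary and the function name are assumptions; the abstract does not specify how residency times are aggregated (real devices may report histograms or percentiles):

```python
def latency_metric(residency_times_ns):
    """Summarize per-packet residency times (pipeline or queue, receive
    or transmit) into a reportable metric for the data collector."""
    n = len(residency_times_ns)
    return {
        "mean_ns": sum(residency_times_ns) / n,  # average residency
        "max_ns": max(residency_times_ns),       # worst-case residency
    }
```

The same summary would be computed four times, once per measurement point: receive queue, receive pipeline, transmit queue, and transmit pipeline.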

Selectively bypassing a routing queue in a routing device in a fifth generation (5G) or other next generation network

The technologies described herein are generally directed toward shedding processing loads associated with route updates. According to an embodiment, a system can comprise a processor and a memory that can enable performance of operations including receiving a communication from a second routing device via a network. The operations can further comprise, in response to a queueing delay of a routing queue being determined to be less than a threshold, queueing the communication in the queue for a third routing device selected, according to a first selection process, as being on a route to a destination routing device for the communication. The operations can further comprise, in response to the queueing delay of the queue being determined to be equal to or above the threshold, transmitting the communication to a fourth routing device selected according to a second selection process different from the first selection process.
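The threshold test can be sketched as a simple gate; the function and parameter names, and the use of seconds for delay, are assumed for illustration:

```python
def route(communication, routing_queue, queue_delay_s, threshold_s,
          primary_next_hop, fallback_next_hop):
    """Queue the communication for the normally selected next hop while
    queueing delay is low; at or above the threshold, bypass the queue
    and transmit to a hop chosen by a different selection process."""
    if queue_delay_s < threshold_s:
        routing_queue.append((communication, primary_next_hop))
        return primary_next_hop   # queued for the first-process hop
    return fallback_next_hop      # sent immediately via the second process
```

The bypass path trades an optimal route (the first selection process) for immediate transmission when the queue itself has become the bottleneck.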

RADIO UNIT CASCADING IN RADIO ACCESS NETWORKS
20230217311 · 2023-07-06 ·

The described technology is generally directed towards radio unit cascading in radio access networks. Radio units (RUs) can be configured with processors adapted to support daisy chaining of multiple RUs, so that the multiple RUs can connect to one hardware interface at a distributed unit (DU). An RU processor for a given RU can be configured to receive downlink data, including downlink data for the given RU as well as downlink data for other downstream RUs. The RU processor can extract the downlink data for the given RU and forward the downlink data for other downstream RUs via a southbound interface. The RU processor can also be configured to receive uplink data from the other RUs, multiplex the received uplink data from the other RUs with uplink data from the given RU, and send the resulting multiplexed data towards the DU via a northbound interface.
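The demultiplex/multiplex behavior of an RU processor in the chain might look like this in outline; the frame layout (`ru_id` addressing, `seq` ordering) is an assumption:

```python
def handle_downlink(frames, my_ru_id):
    """Demultiplex downlink: keep frames addressed to this RU, pass the
    rest southbound toward downstream RUs in the chain."""
    mine = [f for f in frames if f["ru_id"] == my_ru_id]
    southbound = [f for f in frames if f["ru_id"] != my_ru_id]
    return mine, southbound

def handle_uplink(own_frames, downstream_frames):
    """Multiplex this RU's uplink with downstream uplink for the single
    northbound link to the DU, ordered by sequence number."""
    return sorted(own_frames + downstream_frames, key=lambda f: f["seq"])
```

Because each RU repeats this split-and-merge, an arbitrary chain of RUs appears to the DU as a single stream on one hardware interface.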

DATA TRANSMISSION DEVICE, MEDICAL IMAGING DEVICE AND METHOD FOR TRANSMITTING DATA PACKETS

In an example embodiment a data transmission device for transmitting data packets comprises at least one receive interface configured to receive data packets from a respective data source; a respective receive buffer configured to buffer the data packets received via the respective receive interface; a transfer device configured to transfer the data packets from the respective receive buffer to a transmit buffer, the transmit buffer selected for the respective data packet from a plurality of existing transmit buffers; and a respective transmit interface configured to transmit the data packets stored in the respective transmit buffer to a receiving device, wherein the transfer device is configured to transfer the respective data packet from the respective receive buffer into the selected transmit buffer only when an enable condition exists, the enable condition being based on a fill level of one of the transmit buffers other than the selected transmit buffer.
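A minimal sketch of the enable condition follows, assuming the condition is that some *other* transmit buffer has drained below a watermark; the abstract leaves the exact rule open, so both the rule and the names here are illustrative:

```python
def may_transfer(selected_idx, fill_levels, low_watermark):
    """Enable transfer into the selected transmit buffer only when at
    least one transmit buffer other than the selected one has a fill
    level at or below the watermark."""
    return any(level <= low_watermark
               for i, level in enumerate(fill_levels)
               if i != selected_idx)
```

Gating the transfer on the other buffers' fill levels lets the transfer device balance load across transmit interfaces rather than filling one buffer regardless of the state of the rest.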

Accurate Time-Stamping of Outbound Packets

A network device includes a port, a transmission pipeline and a time-stamping circuit. The port is configured for connecting to a network. The transmission pipeline includes multiple pipeline stages and is configured to process packets and to send the packets to the network via the port. The time-stamping circuit is configured to temporarily suspend at least some processing of at least a given packet in the transmission pipeline, to verify whether a pipeline stage having a variable processing delay, located downstream from the time-stamping circuit, meets an emptiness condition, and, only when the pipeline stage meets the emptiness condition, to time-stamp the given packet and resume the processing of the given packet.
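The emptiness check before stamping can be sketched as a simple gate; `stage_occupancy` and `clock_ns` are illustrative stand-ins for hardware state and the device clock:

```python
def try_timestamp(packet, stage_occupancy, clock_ns):
    """Stamp the packet only when the downstream variable-delay stage is
    empty: with nothing queued ahead, the residual delay to the wire is
    fixed and the stamp is accurate. Otherwise keep processing suspended."""
    if stage_occupancy == 0:          # emptiness condition met
        packet["tx_ts"] = clock_ns()  # time-stamp, then resume processing
        return True
    return False                      # remain suspended; retry later
```

The point of the suspension is that a stamp taken while the variable-delay stage holds other packets would be off by an unpredictable queueing delay; waiting for emptiness makes the stamp-to-wire latency deterministic.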

DATA VALIDITY BASED NETWORK BUFFER MANAGEMENT SYSTEM
20220393990 · 2022-12-08 ·

Systems and methods for data scheduling and queuing. A data network node is configured to transmit data in a store-and-forward fashion. The data network node includes a delay and validity determination module that determines and assigns a validity value to each data packet incoming via an input port based on a time stamp of the data packet, a current time value, an expected delay on a route of the data packet to its destination, and a packet urgency value. A scheduling module and a queue managing module execute their functions based on the validity value assigned to a data packet in a transmission buffer.
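One assumed way to combine the four inputs named above into a single validity value is sketched below; the publication gives no formula, so this particular combination (projected age at delivery, weighted by urgency) is purely illustrative:

```python
def validity_value(ts_ns, now_ns, expected_delay_ns, urgency):
    """Combine packet time stamp, current time, expected route delay,
    and urgency into one validity value: fresher and more urgent
    packets score higher for the scheduler and queue manager."""
    projected_age_ns = (now_ns - ts_ns) + expected_delay_ns
    return urgency / (1 + projected_age_ns)
```

Scheduling and queue management then operate on this scalar, e.g. serving high-validity packets first or dropping packets whose validity has decayed below a floor.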
