Patent classifications
H04L47/30
Out-of-order packet handling in 5G/new radio
A user equipment (UE) can receive a first data stream and a second data stream; store data units of the second data stream, as stored data units, in a buffer while a retransmission operation is performed for the first data stream; determine that a threshold is satisfied with regard to the buffer, wherein the threshold is associated with a counter that is maintained based on the storing of the data units; and provide the stored data units based on determining that the threshold is satisfied.
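The buffering logic described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and method names (`SecondStreamBuffer`, `store`, `provide`) and the reset-on-delivery behavior are assumptions.

```python
class SecondStreamBuffer:
    """Buffers data units of a second stream while a retransmission
    operation is performed for the first stream; provides the stored
    units once a counter-based threshold is satisfied.
    Names and reset behavior are illustrative, not from the patent."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.buffer = []
        self.counter = 0  # maintained based on the storing of data units

    def store(self, data_unit):
        """Store one data unit; return True when the threshold is satisfied."""
        self.buffer.append(data_unit)
        self.counter += 1
        return self.counter >= self.threshold

    def provide(self):
        """Provide the stored data units (e.g. to upper layers) and reset."""
        delivered = self.buffer
        self.buffer = []
        self.counter = 0
        return delivered
```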
Selective tracking of acknowledgments to improve network device buffer utilization and traffic shaping
Systems and methods provide for Selective Tracking of Acknowledgments (STACKing) to improve buffer utilization and traffic shaping for one or more network devices. A network device can identify a first flow that corresponds to a predetermined traffic class and a predetermined congestion state. The device can determine a current window size and congestion threshold of the first flow. In response to a determination to selectively track a portion of acknowledgments of the first flow, the device can track, in main memory, information of a first portion of acknowledgments of the first flow. The device can exclude, from one or more buffers, a second portion of acknowledgments of the first flow. The device can re-generate and transmit segments corresponding to the second portion of acknowledgments at a target transmission rate based on traffic shaping policies for the predetermined traffic class and congestion state.
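The split between tracked and excluded acknowledgments, and the paced re-generation of segments, can be sketched as below. The every-Nth tracking policy, the ratio, and the packets-per-second pacing model are illustrative assumptions, not the patented selection or shaping algorithm.

```python
def selectively_track_acks(acks, track_every=4):
    """Track one in every `track_every` ACKs in main memory and exclude
    the rest from the device buffers; return both portions.
    The split policy and ratio are illustrative assumptions."""
    tracked, excluded = [], []
    for i, ack in enumerate(acks):
        if i % track_every == 0:
            tracked.append(ack)   # first portion: tracked in main memory
        else:
            excluded.append(ack)  # second portion: excluded from buffers
    return tracked, excluded

def regenerate_segments(excluded_acks, target_rate_pps):
    """Re-generate segments for the excluded ACKs, paced to a target
    transmission rate (packets per second) per the shaping policy.
    Returns (ack, send_offset_seconds) pairs."""
    interval = 1.0 / target_rate_pps
    return [(ack, i * interval) for i, ack in enumerate(excluded_acks)]
```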
Telemetry-Based Load-Balanced Fine-Grained Adaptive Routing in High-Performance System Interconnect
A switch is provided for routing packets in an interconnection network. The switch includes egress ports to transmit packets, and ingress ports to receive packets. The switch also includes a buffer capacity circuit configured to obtain local buffer capacity for buffers configured to buffer packets transmitted via the switch. The switch also includes a telemetry circuit configured to receive telemetry flow control units from next switches coupled to the switch. Each telemetry flow control unit corresponds to buffer capacity at a respective next switch. The switch also includes a network capacity circuit configured to compute network capacity for transmitting packets to a destination based on the telemetry flow control units and the local buffer capacity. The switch also includes a routing circuit configured to receive packets via the ingress ports, and route the packets to the destination, via the egress ports, with bandwidth proportional to the network capacity.
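The capacity computation and proportional routing weights can be sketched as follows. Combining local and advertised capacity with `min()` (a bottleneck model) and normalizing to weights are illustrative assumptions; the patent does not specify this exact formula.

```python
def network_capacities(local_capacity, telemetry):
    """Per-egress-port network capacity toward a destination: the
    bottleneck of local buffer headroom and the next switch's capacity
    advertised via telemetry flow control units.
    The min() combination is an illustrative assumption."""
    return {port: min(local_capacity[port], telemetry[port])
            for port in telemetry}

def port_weights(capacities):
    """Routing weights so each egress port carries bandwidth
    proportional to its computed network capacity."""
    total = sum(capacities.values())
    return {port: cap / total for port, cap in capacities.items()}
```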
Accurate Time-Stamping of Outbound Packets
A network device includes a port, a transmission pipeline and a time-stamping circuit. The port is configured for connecting to a network. The transmission pipeline includes multiple pipeline stages and is configured to process packets and to send the packets to the network via the port. The time-stamping circuit is configured to temporarily suspend at least some processing of at least a given packet in the transmission pipeline, to verify whether a pipeline stage having a variable processing delay, located downstream from the time-stamping circuit, meets an emptiness condition, and, only when the pipeline stage meets the emptiness condition, to time-stamp the given packet and resume the processing of the given packet.
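The emptiness-gated timestamping step can be sketched in software terms as below: the packet is held until the downstream variable-delay stage is empty, which makes the remaining path delay deterministic and the timestamp accurate. The `Stage` stand-in, its `is_empty` method, and the busy-wait hold are illustrative assumptions about a hardware mechanism.

```python
import time

class Stage:
    """Minimal stand-in for a downstream pipeline stage with a
    variable processing delay (illustrative, not the patented hardware)."""
    def __init__(self):
        self.queue = []

    def is_empty(self):
        return not self.queue

class TimestampingCircuit:
    """Time-stamps a packet only when the downstream variable-delay
    stage meets the emptiness condition."""
    def __init__(self, downstream_stage):
        self.downstream = downstream_stage

    def timestamp(self, packet):
        # Suspend processing of the packet until the stage is empty.
        while not self.downstream.is_empty():
            time.sleep(0)  # busy-wait sketch of the suspension
        # Only now is the residual delay deterministic; stamp and resume.
        packet["tx_timestamp"] = time.time_ns()
        return packet
```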
Reducing Decode Delay at a Client Device
Various implementations disclosed herein include devices, systems, and methods for reducing a decode delay at a client device. In some implementations, a device includes one or more processors and a non-transitory memory. In some implementations, a method includes determining that a client device is being switched from a real-time content presentation mode in which the client device presents real-time content to a buffered content presentation mode in which the client device presents buffered content. In some implementations, the method includes transmitting, to the client device, video frames corresponding to the buffered content at a first transmission rate. In some implementations, the method includes changing the first transmission rate to a second transmission rate based on an indication that a number of bits stored in a buffer of the client device satisfies a decode threshold.
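The rate-switching decision can be sketched as a single comparison: once the bits buffered at the client satisfy the decode threshold, the server moves from the first transmission rate to the second. The function name, parameters, and the `>=` comparison are illustrative assumptions.

```python
def next_transmission_rate(bits_buffered, decode_threshold,
                           first_rate, second_rate):
    """Change the first transmission rate to the second once the
    client's buffer occupancy satisfies the decode threshold.
    Parameter names and the >= comparison are illustrative."""
    if bits_buffered >= decode_threshold:
        return second_rate  # threshold satisfied: switch rates
    return first_rate       # keep filling the decode buffer
```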
DATA PROCESSING APPARATUS, DATA PROCESSING METHOD, AND DATA PROCESSING PROGRAM
A data processing apparatus (1) includes a signal processing unit (31, 32) and an insertion/deletion unit (33). The signal processing unit performs predetermined signal processing on wirelessly received data, frame by frame, each frame including a predetermined number of samples, and stores the data in buffers (41, 42). When the amount of data accumulated in a buffer falls outside a predetermined range, the insertion/deletion unit (33) performs insertion/deletion processing that inserts or deletes data in units of samples.
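The insertion/deletion processing can be sketched as below. Duplicating the last sample on underrun and dropping the newest sample on overrun are illustrative policy choices; the patent only specifies that insertion or deletion happens in units of samples when occupancy leaves the predetermined range.

```python
def adjust_buffer(samples, low, high):
    """Bring buffer occupancy back into [low, high] by inserting or
    deleting data in units of samples. The specific insert/delete
    policies here are illustrative assumptions."""
    samples = list(samples)
    while samples and len(samples) < low:
        samples.append(samples[-1])  # insert: repeat the last sample
    while len(samples) > high:
        samples.pop()                # delete: drop the newest sample
    return samples
```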
Communication System and Method
A communication system and method that may reduce the communication load and the waste of system resources consumed for communication. Since a communication packet is generated using the packet structure best suited to the data size, and communication is performed through the generated packet, the communication load on the communication bus may be significantly reduced. In addition, since the information requesting device allocates a buffer corresponding to the communication packet and stores the packet in the allocated buffer, the buffer of the information requesting device may be prevented from being wasted.
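The size-matched packet structures and corresponding buffer allocation can be sketched as below. The structure table, names, and sizes are illustrative assumptions; the patent does not define these specific tiers.

```python
# Illustrative packet structures keyed by maximum payload size (bytes).
PACKET_STRUCTURES = [(8, "short"), (64, "medium"), (1024, "long")]

def select_structure(data_size):
    """Pick the smallest packet structure that fits the data, so the
    communication load on the bus is minimized."""
    for max_size, name in PACKET_STRUCTURES:
        if data_size <= max_size:
            return name, max_size
    raise ValueError("data too large for any defined packet structure")

def allocate_buffer(data_size):
    """The information requesting device allocates a buffer sized to
    the chosen packet structure, so no buffer space is wasted."""
    _name, max_size = select_structure(data_size)
    return bytearray(max_size)
```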
DYNAMIC PACKET DATA CONVERGENCE PROTOCOL REORDERING
A method of processing received Packet Data Convergence Protocol (PDCP) data packets in a PDCP layer module of a telecommunications base station includes: receiving, by the PDCP layer module, a plurality of data packets; determining, by an analysis module of the PDCP layer module, the proportion of data packets received out of sequence over a predetermined number of received data packets; setting the expiry time of a reordering timer of a buffering and reordering module of the PDCP layer module according to that proportion; and starting the reordering timer upon receiving an out-of-sequence data packet, which is added to a reordering buffer of the buffering and reordering module. If the reordering timer reaches the expiry time, data packets are removed from the reordering buffer and transferred from the PDCP layer module to another layer module of the base station.
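The dynamic expiry-time computation can be sketched as below. The linear scaling between a base and a maximum expiry, and the specific millisecond values, are illustrative assumptions; the patent only specifies that the expiry time is set according to the out-of-sequence proportion.

```python
def reordering_expiry(recent_flags, base_ms=10.0, max_ms=100.0):
    """Set the reordering-timer expiry from the proportion of packets
    received out of sequence over the last N received packets.
    recent_flags: booleans, True where a packet arrived out of sequence.
    Linear scaling between base_ms and max_ms is an illustrative choice."""
    if not recent_flags:
        return base_ms
    proportion = sum(recent_flags) / len(recent_flags)
    # More reordering observed -> wait longer before releasing the buffer.
    return base_ms + proportion * (max_ms - base_ms)
```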