H04L47/6205

Hardware acceleration techniques using flow selection
11962518 · 2024-04-16

In some embodiments, a method receives a packet for a flow associated with a workload. Based on an indicator for the flow, the method determines whether the flow corresponds to an elephant flow or a mice flow. Only when the flow is determined to be an elephant flow does the method enable a hardware acceleration operation on the packet. The hardware acceleration operation may include hardware operation offload, receive side scaling, and workload migration.
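
The flow-selection step can be sketched as follows. The abstract does not say what the flow indicator is; this sketch assumes a simple cumulative-byte threshold, and all names and the cutoff value are illustrative.

```python
# Illustrative elephant/mice classification for hardware offload.
# The threshold is a hypothetical stand-in for the patent's "indicator".
ELEPHANT_BYTES_THRESHOLD = 1_000_000

class FlowTable:
    def __init__(self):
        self.bytes_seen = {}      # flow id -> cumulative bytes observed
        self.offloaded = set()    # flows with hardware acceleration enabled

    def receive(self, flow_id, packet_len):
        total = self.bytes_seen.get(flow_id, 0) + packet_len
        self.bytes_seen[flow_id] = total
        # Enable hardware acceleration only once the flow looks like an elephant.
        if total >= ELEPHANT_BYTES_THRESHOLD:
            self.offloaded.add(flow_id)
        return flow_id in self.offloaded

table = FlowTable()
table.receive("mouse", 500)            # small flow: stays in software
for _ in range(1000):
    table.receive("elephant", 1500)    # large flow crosses the threshold
```

Mice flows never trigger the offload path, so scarce hardware resources are spent only on the heavy hitters.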

Apparatus and method for reordering data radio bearer packets

Aspects of the present disclosure provide various apparatuses and methods of reordering DRB flow packets using an in-band solution to reduce delay due to data packet buffering in head-of-line blocking scenarios across multiple DRB flows. When a packet of a flow is lost or not received, other flows carried in the same DRB can forward later received packets without waiting for the missing packet to be retransmitted and received.
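
The key idea, per-flow reordering inside one shared DRB, can be sketched as below. The per-flow sequence numbers and class names are illustrative, not taken from the disclosure.

```python
# Sketch of per-flow in-order delivery inside one shared data radio bearer:
# a gap in one flow never blocks delivery for the other flows.
class DrbReorderer:
    def __init__(self):
        self.expected = {}   # flow id -> next expected sequence number
        self.pending = {}    # flow id -> {seq: payload} held for reordering
        self.delivered = []  # (flow id, payload) in delivery order

    def receive(self, flow, seq, payload):
        exp = self.expected.get(flow, 0)
        buf = self.pending.setdefault(flow, {})
        buf[seq] = payload
        # Flush only this flow's in-order prefix; other flows are unaffected.
        while exp in buf:
            self.delivered.append((flow, buf.pop(exp)))
            exp += 1
        self.expected[flow] = exp

r = DrbReorderer()
r.receive("A", 0, "a0")
r.receive("A", 2, "a2")   # A's packet 1 is missing: a2 is held back
r.receive("B", 0, "b0")   # flow B forwards immediately despite A's gap
```

When the missing packet finally arrives (or is retransmitted), the held packets of that one flow flush in order.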

Scheduling of packets in network devices

A network device for transmitting packets having packet properties, comprising: at least two input-output buffers for queuing packets in the network device; a sojourn time calculator for calculating a sojourn-related time for each head packet in the at least two input-output buffers; a sojourn-related time adaptor for adapting, based on an adaptation function assigned to the corresponding input-output buffer, the sojourn-related time into an adapted time for each head packet; and a scheduler for scheduling outgoing packets based on the adapted time.
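
The scheduling loop can be sketched as below. The adaptation functions here are simple illustrative weights; the patent leaves their form open.

```python
import collections

# Sketch of sojourn-time-based scheduling across two buffers.
class SojournScheduler:
    def __init__(self, adapt_fns):
        self.queues = [collections.deque() for _ in adapt_fns]
        self.adapt_fns = adapt_fns   # one adaptation function per buffer

    def enqueue(self, q, packet, arrival_time):
        self.queues[q].append((packet, arrival_time))

    def dequeue(self, now):
        best, best_adapted = None, None
        for q, queue in enumerate(self.queues):
            if not queue:
                continue
            _, arrived = queue[0]
            sojourn = now - arrived            # how long the head packet waited
            adapted = self.adapt_fns[q](sojourn)
            if best_adapted is None or adapted > best_adapted:
                best, best_adapted = q, adapted
        return self.queues[best].popleft()[0] if best is not None else None

# Buffer 0's sojourn time is doubled, giving it effective priority.
sched = SojournScheduler([lambda t: 2 * t, lambda t: t])
sched.enqueue(0, "hi-prio", arrival_time=5)
sched.enqueue(1, "lo-prio", arrival_time=4)
first = sched.dequeue(now=10)
```

At time 10, buffer 0's head has adapted time 2·(10−5) = 10 versus buffer 1's 10−4 = 6, so the "hi-prio" packet is scheduled first despite arriving later.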

Virtual Channel Buffer Bypass
20240163222 · 2024-05-16

A bypass path is provided in the node for reducing the latency and power consumption associated with writing to and reading from the VC buffer, and is enabled when certain conditions are met. Bypass is enabled for a received packet when there is no other data that is ready to be sent from the VC buffer, which is the case when all VCs either have zero credits or an empty partition in the buffer. In this way, data arriving at the node is prevented from using the bypass path to take priority over data already held in the VC buffer and ready for transmission.
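
The enable condition reduces to a simple predicate over the per-VC state, sketched below with illustrative names.

```python
# Sketch of the bypass-enable condition: an arriving packet may take the
# bypass path only when no virtual channel has both buffered data and the
# credits needed to send it.
def bypass_enabled(vcs):
    """vcs: list of (credits, buffered_units) pairs, one per virtual channel."""
    return all(credits == 0 or buffered == 0 for credits, buffered in vcs)

# VC 0 has data but no credits; VC 1 has credits but an empty partition:
# nothing is ready to transmit, so bypass is safe.
ok = bypass_enabled([(0, 3), (2, 0)])

# VC 0 has both data and credits: buffered data must go first, no bypass.
blocked = bypass_enabled([(1, 2), (0, 0)])
```

Because the predicate is false whenever any buffered data is transmittable, arriving data can never jump ahead of data already queued in the VC buffer.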

METHOD AND APPARATUS FOR UPDATING PACKET PROCESSING RULE AT HIGH SPEED IN SDN NETWORK ENVIRONMENT
20240223507 · 2024-07-04

The disclosure relates to a 5G or 6G communication system for supporting a higher data transmission rate. A method performed by a first network entity in a wireless communication system is provided. The method includes: receiving, from a second network entity, a first message including information on at least one of a type, a size, or an identifier of an action buffer, the first message being used for creating the action buffer; receiving, from the second network entity, a second message including the identifier, for adding at least one action to multiple memories assigned to the action buffer created based on the first message; and performing an action on an input packet based on the address of the memory in which that action, among the at least one action, is stored.
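
The two-message update can be sketched as below. The message fields follow the abstract; the action semantics (a TTL decrement) and all names are illustrative.

```python
# Sketch of the create-then-add action-buffer update.
class ActionBufferHost:
    def __init__(self):
        self.buffers = {}   # buffer id -> list of action callables ("memories")

    def handle_create(self, buf_type, size, buf_id):
        # First message: allocate the action buffer.
        self.buffers[buf_id] = []

    def handle_add(self, buf_id, action):
        # Second message: append an action to the buffer identified earlier.
        self.buffers[buf_id].append(action)

    def process(self, buf_id, index, packet):
        # Execute the action stored at the given address (here: a list index).
        return self.buffers[buf_id][index](packet)

host = ActionBufferHost()
host.handle_create("match-action", size=4, buf_id=7)
host.handle_add(7, lambda pkt: {**pkt, "ttl": pkt["ttl"] - 1})
out = host.process(7, 0, {"ttl": 64})
```

Separating buffer creation from rule insertion lets new actions be appended at high speed without re-negotiating the buffer itself.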

System and method for supporting a distributed data structure in a distributed data grid

A system and method can support a distributed data structure in a distributed data grid. The distributed data grid includes a plurality of buckets, wherein each said bucket is configured with a capacity to contain a number of elements in the distributed data structure. Furthermore, the distributed data grid includes a state owner process, which is configured to hold state information for the distributed data structure and provides the state information for the distributed data structure to a client process.
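
The bucket/state-owner split can be sketched as below, with single-process stand-ins for the distributed grid members; the capacity policy and all names are illustrative.

```python
# Toy sketch of a bucketed distributed data structure with a state owner.
class StateOwner:
    """Holds the shared state and serves it to client processes."""
    def __init__(self, capacity):
        self.state = {"bucket_count": 0, "capacity": capacity}

    def get_state(self):
        return dict(self.state)   # clients receive a copy of the state

class Grid:
    def __init__(self, capacity):
        self.owner = StateOwner(capacity)
        self.buckets = []         # each bucket holds up to `capacity` elements

    def add(self, element):
        cap = self.owner.state["capacity"]
        if not self.buckets or len(self.buckets[-1]) >= cap:
            self.buckets.append([])                  # open a fresh bucket
            self.owner.state["bucket_count"] += 1    # state owner tracks it
        self.buckets[-1].append(element)

grid = Grid(capacity=2)
for x in range(5):
    grid.add(x)
```

Clients never scan the buckets to learn the structure's shape; they ask the state owner, which keeps the bookkeeping in one place.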

System and method for providing a distributed queue in a distributed data grid

A system and method can support a distributed queue in a distributed data grid. The distributed queue can include a queue of buckets stored on a plurality of processes, wherein each said bucket is configured to contain a number of elements of the distributed queue. Furthermore, the distributed queue can include a named queue that holds a local version of the state information for the distributed queue, wherein said local version of the state information contains a head pointer and a tail pointer to the queue of buckets in the distributed data grid.
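
The head/tail-pointer mechanics can be sketched as below; bucket identifiers and the capacity are illustrative.

```python
# Sketch of a bucketed queue whose local state is just two bucket pointers.
class BucketQueue:
    def __init__(self, bucket_capacity):
        self.capacity = bucket_capacity
        self.buckets = {}            # bucket id -> list of elements
        self.head = self.tail = 0    # local state: pointers into the buckets

    def offer(self, element):
        bucket = self.buckets.setdefault(self.tail, [])
        bucket.append(element)
        if len(bucket) == self.capacity:
            self.tail += 1           # tail advances to a fresh bucket

    def poll(self):
        bucket = self.buckets.get(self.head)
        if not bucket:
            return None
        element = bucket.pop(0)
        if not bucket and self.head < self.tail:
            self.head += 1           # head bucket drained: advance the pointer
        return element

q = BucketQueue(bucket_capacity=2)
for x in "abcd":
    q.offer(x)
```

Because the head and tail pointers are a compact local view, a client can offer and poll without holding the full queue contents.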

OTN transport over a leaf/spine packet network
12068964 · 2024-08-20

Systems and methods include receiving an Optical Transport Network (OTN) signal; segmenting the OTN signal into one or more flows of packets; and transmitting the one or more flows of packets spread over one or more Ethernet links. The one or more flows can be transmitted over a Leaf/Spine network, and the one or more flows can be elephant and/or mice flows.
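
The segmentation-and-spread step can be sketched as below. Payload size, link count, and the round-robin spreading policy are illustrative; sequence numbers stand in for whatever reordering metadata the real system carries.

```python
# Sketch of segmenting a constant-rate OTN byte stream into packet flows
# spread round-robin over several Ethernet links.
def segment(otn_bytes, payload_size, num_links):
    flows = [[] for _ in range(num_links)]
    seq = 0
    for off in range(0, len(otn_bytes), payload_size):
        chunk = otn_bytes[off:off + payload_size]
        # Each packet carries a sequence number so the far end can reorder.
        flows[seq % num_links].append((seq, chunk))
        seq += 1
    return flows

flows = segment(b"x" * 10, payload_size=3, num_links=2)
```

Spreading the flows over multiple Leaf/Spine links keeps any single OTN client from pinning one link, whether it behaves as an elephant or a mice flow.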

Methods and apparatus for preventing head of line blocking for RTP over TCP
10129163 · 2018-11-13

Methods and apparatus for processing and using TCP packets to communicate RTP packets are described. Head of line blocking is avoided by operating a TCP packet processing module to output RTP packet data to an application irrespective of whether or not a preceding TCP packet was received: delivery is not delayed when a TCP packet is missing. RTP packet data is subjected to pattern matching in order to identify and process RTP packets in the case where RTP header information, such as packet length information, is missing due to the failure to receive a TCP packet. The methods are particularly well suited for the communication of audio and/or video by devices operating behind firewalls which block UDP or other types of packets other than TCP packets.
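
The resynchronization idea can be sketched as below. The framing is deliberately simplified (real RTP-in-TCP interleaving uses 2-byte length prefixes per RFC 4571), and the pattern matched here is just the RTP version bits; all names are illustrative.

```python
# Sketch of head-of-line-blocking avoidance for RTP carried over TCP:
# segments are handed up immediately, and after a gap the parser
# resynchronizes by pattern-matching a plausible RTP packet start.
def extract_rtp(segment):
    """Scan a byte segment for plausible RTP packet starts (version == 2)."""
    return [i for i, b in enumerate(segment) if (b >> 6) == 2]

class RtpOverTcp:
    def __init__(self):
        self.delivered = []

    def on_segment(self, seq, segment, gap_before):
        if gap_before:
            # A TCP segment was lost: length framing is unreliable, so
            # resynchronize by pattern matching instead of waiting for
            # the retransmission.
            offs = extract_rtp(segment)
            payload = segment[offs[0]:] if offs else b""
        else:
            payload = segment
        if payload:
            self.delivered.append((seq, payload))   # no waiting on seq order

rx = RtpOverTcp()
rx.on_segment(0, b"\x80\x60rtp0", gap_before=False)
rx.on_segment(2, b"junk\x80\x60rtp2", gap_before=True)   # segment 1 was lost
```

For real-time audio/video, a late packet is as useless as a lost one, so delivering around the gap beats waiting for TCP's retransmission.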

GENERIC QUEUE

Apparatuses, methods and storage medium associated with the placement of data packets in one or more queues of a switch are described herein. In embodiments, the switch may include a plurality of virtual lane (VL) queues (VLQs) and a plurality of generic queues (GQs). A queue manager may be configured to selectively place a packet of a particular VL in a corresponding VLQ or a GQ. Other embodiments may be described and/or claimed.
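
The queue manager's selective placement can be sketched as below. The spill-on-full policy is an assumption; the abstract leaves the selection criterion open, and all names and depths are illustrative.

```python
# Sketch of selective placement into a per-virtual-lane queue (VLQ) or a
# shared generic queue (GQ).
class Switch:
    def __init__(self, num_vls, vlq_depth, num_gqs, gq_depth):
        self.vlqs = [[] for _ in range(num_vls)]
        self.gqs = [[] for _ in range(num_gqs)]
        self.vlq_depth, self.gq_depth = vlq_depth, gq_depth

    def place(self, vl, packet):
        # Queue manager: prefer the packet's own VLQ, spill to a generic queue.
        if len(self.vlqs[vl]) < self.vlq_depth:
            self.vlqs[vl].append(packet)
            return "VLQ"
        for gq in self.gqs:
            if len(gq) < self.gq_depth:
                gq.append((vl, packet))   # remember the lane for later egress
                return "GQ"
        return "DROP"

sw = Switch(num_vls=2, vlq_depth=1, num_gqs=1, gq_depth=2)
r1 = sw.place(0, "p1")   # fits in VLQ 0
r2 = sw.place(0, "p2")   # VLQ 0 full -> spills into the generic queue
```

The generic queues act as shared overflow capacity, so a burst on one virtual lane borrows buffer space without stealing another lane's dedicated queue.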