H04L49/9089

MEMORY ALLOCATOR FOR I/O OPERATIONS

Some embodiments provide a novel method for sharing data between user-space processes and kernel-space processes without copying the data. The method dedicates, by a driver of a network interface controller (NIC), a memory address space for a user-space process. The method allocates a virtual region of the memory address space for zero-copy operations. The method maps the virtual region to a memory address space of the kernel. The method allows access to the virtual region by both the user-space process and a kernel-space process.
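As a rough illustration of the zero-copy idea in this abstract, the sketch below maps one region into two views so that data written through one view is visible through the other without copying. It is a user-space Python analogue only; the file-backed region, its path, and its size stand in for the NIC driver's dedicated address space and the kernel-side mapping, and none of these names come from the abstract.

```python
import mmap
import os

REGION_SIZE = 4096  # hypothetical size of the shared virtual region

# File-backed region standing in for the address space dedicated by the NIC driver.
fd = os.open("/tmp/zero_copy_region", os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, REGION_SIZE)

# "User-space" view of the region.
producer = mmap.mmap(fd, REGION_SIZE)
producer[0:5] = b"hello"

# Second view of the same pages, standing in for the kernel-side mapping;
# the bytes written above are visible here without being copied.
consumer = mmap.mmap(fd, REGION_SIZE)
print(consumer[0:5])  # b'hello'

producer.close()
consumer.close()
os.close(fd)
```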

DEVICE AND METHOD FOR PROCESSING DATA PACKET
20210297360 · 2021-09-23 ·

An electronic device, according to various embodiments of the present invention, comprises a network connection device, at least one processor, and a memory operatively connected to the at least one processor, wherein the memory stores instructions which, when executed, cause the at least one processor to: receive a data packet from the network connection device; add the data packet to a packet list corresponding to the data packet; and when the number of data packets included in the packet list is less than a threshold value, flush the data packets to a network stack on the basis of a flush time value for controlling a packet aggregation function, wherein the flush time value may be determined on the basis of the network throughput.
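The following sketch shows one way the described flush behaviour could look in code: packets accumulate in a per-flow list and are flushed either when the list reaches a threshold or when a throughput-dependent flush time expires. The threshold, the timeout values, and the throughput-to-timeout mapping are assumptions made for illustration, not values from the abstract.

```python
import time


class PacketAggregator:
    """Illustrative sketch of threshold- and timer-driven packet aggregation."""

    def __init__(self, threshold=8):
        self.threshold = threshold
        self.packets = []
        self.first_arrival = None

    def flush_time(self, throughput_mbps):
        # Assumed policy: flush sooner at low throughput to protect latency,
        # later at high throughput so aggregation pays off.
        return 0.0005 if throughput_mbps < 100 else 0.002

    def on_packet(self, pkt, throughput_mbps):
        now = time.monotonic()
        if not self.packets:
            self.first_arrival = now
        self.packets.append(pkt)
        if len(self.packets) >= self.threshold:
            return self.flush()
        if now - self.first_arrival >= self.flush_time(throughput_mbps):
            return self.flush()  # fewer packets than the threshold, but the timer expired
        return []

    def flush(self):
        out, self.packets = self.packets, []
        return out
```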

Flow Control Device and Method
20210051117 · 2021-02-18 ·

A flow control device includes an analysis unit identifying a flow of a received packet, a plurality of queues temporarily storing packets sorted according to each flow, an allocation information storage unit storing allocation information regarding a queue allocated for each flow, a sorting unit deciding a queue to be the storage destination of the received packet and sorting the packet based on the result identified by the analysis unit and the allocation information, a saved packet holding unit saving a packet belonging to a flow determined by the sorting unit to have no allocation information regarding the queue to be allocated, and a transmission unit transmitting the packet temporarily stored in the plurality of queues and the packet saved in the saved packet holding unit to a processing unit that processes a packet.
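A minimal sketch of the sorting path described above, assuming a simple flow key and per-queue deques: packets from flows with a known queue allocation are sorted into that queue, packets from flows without an allocation are parked in a holding area, and both are drained by the transmission step. The flow key and all names are illustrative, not taken from the abstract.

```python
from collections import deque


class FlowSorter:
    """Illustrative analysis/sorting/saving/transmission structure."""

    def __init__(self, num_queues=4):
        self.queues = [deque() for _ in range(num_queues)]
        self.allocation = {}   # flow key -> queue index (allocation information)
        self.saved = deque()   # packets whose flow has no allocation yet

    def identify_flow(self, pkt):
        # Stand-in for the analysis unit: a simple flow key.
        return (pkt["src"], pkt["dst"], pkt["proto"])

    def sort(self, pkt):
        # Sorting unit: decide the storage destination from the flow key and
        # the allocation information, or save the packet if none exists.
        flow = self.identify_flow(pkt)
        if flow in self.allocation:
            self.queues[self.allocation[flow]].append(pkt)
        else:
            self.saved.append(pkt)

    def allocate(self, flow, queue_idx):
        # Record new allocation information and re-sort saved packets of that flow.
        self.allocation[flow] = queue_idx
        still_saved = deque()
        while self.saved:
            pkt = self.saved.popleft()
            if self.identify_flow(pkt) == flow:
                self.queues[queue_idx].append(pkt)
            else:
                still_saved.append(pkt)
        self.saved = still_saved

    def transmit(self):
        # Transmission unit: hand queued packets, then saved packets, to the processing unit.
        for q in self.queues:
            while q:
                yield q.popleft()
        while self.saved:
            yield self.saved.popleft()
```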

MAINTAINING BANDWIDTH UTILIZATION IN THE PRESENCE OF PACKET DROPS

Examples describe a manner of scheduling packet segment fetches at a rate that is based on one or more of: a packet drop indication, packet drop rate, incast level, operation of queues in SAF or VCT mode, or fabric congestion level. Headers of packets can be fetched faster than payload or body portions of packets and processed prior to queueing of all body portions. In the event a header is identified as droppable, fetching of the associated body portions can be halted and any body portion that is queued can be discarded. Fetch overspeed can be applied for packet headers or body portions associated with packet headers that are approved for egress.
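A toy sketch of the fetch-scheduling idea, assuming a per-packet header/body split and a simple rate model: headers are examined first, body fetches are skipped (and already-queued body segments discarded) when the header is droppable, and an overspeed factor is applied to segments of packets approved for egress. The rate model and field names are assumptions made for the sketch.

```python
def schedule_fetches(packets, drop_check, base_rate, overspeed=2.0):
    """Return (segment, fetch_rate) pairs for body segments that should be fetched."""
    fetches = []
    for pkt in packets:
        header = pkt["header"]        # header segment is fetched and examined first
        if drop_check(header):
            # Header identified as droppable: halt body fetches for this packet
            # and discard any body segments already queued.
            pkt["queued_body"] = []
            continue
        rate = base_rate * overspeed  # approved for egress: apply fetch overspeed
        for segment in pkt["body"]:
            fetches.append((segment, rate))
    return fetches


# Example: the second packet's header is marked droppable, so its body is never fetched.
pkts = [{"header": {"drop": False}, "body": ["b0", "b1"], "queued_body": []},
        {"header": {"drop": True},  "body": ["b2"],       "queued_body": ["b2"]}]
print(schedule_fetches(pkts, drop_check=lambda h: h["drop"], base_rate=1.0))
```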

METHODS AND APPARATUSES FOR PACKET SCHEDULING FOR SOFTWARE-DEFINED NETWORKING IN EDGE COMPUTING ENVIRONMENT

Provided are methods and apparatuses for packet scheduling for software-defined networking in an edge computing environment. A packet scheduling method according to an exemplary embodiment of the present disclosure comprises: receiving packets arriving at a queue connected to a switch in a software-defined network in an edge computing environment; moving the packets in the queue forward one position based on the order of arrival each time a packet is served by the switch; and if a new packet enters the switch while the buffer in the queue is full, pushing out the packet at the front and putting the new packet at the end of the queue.
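The queueing discipline in this abstract maps naturally onto a small push-out queue, sketched below under the assumption of a fixed, illustrative capacity: an arrival at a full buffer pushes out the packet at the front rather than being tail-dropped, and serving a packet moves the rest forward.

```python
from collections import deque


class PushOutQueue:
    """Sketch of the push-out discipline described in the abstract."""

    def __init__(self, capacity=64):
        self.capacity = capacity      # illustrative buffer size
        self.buf = deque()

    def enqueue(self, pkt):
        if len(self.buf) == self.capacity:
            self.buf.popleft()        # buffer full: push out the packet at the front
        self.buf.append(pkt)          # new packet goes to the end of the queue

    def serve(self):
        # Serving a packet implicitly moves the remaining packets forward one position.
        return self.buf.popleft() if self.buf else None
```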

Congestion drop decisions in packet queues
20210021545 · 2021-01-21 ·

A packet switch includes an ingress port; queue admission control circuitry connected to the ingress port; one or more egress queues configured to manage packet buffers; and an egress port connected to the packet buffers, wherein the packet buffers are managed such that already queued lower priority packets are discarded from the packet buffers when higher priority packets that should otherwise be accepted into the packet buffers would have to be dropped. The queue admission control circuitry can be configured to determine whether a packet should be dropped, and it communicates with buffer reallocation circuitry that is configured to discard one or more lower priority packets to support enqueuing the higher priority packet.
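The admission decision can be sketched as follows, assuming a single shared buffer and numeric priorities (both illustrative): when the buffer is full, a higher-priority arrival is admitted by discarding an already-queued lower-priority packet, and the arrival is dropped only when no such victim exists.

```python
class PriorityAdmission:
    """Sketch of drop-versus-evict admission control over a shared buffer."""

    def __init__(self, capacity=8):
        self.capacity = capacity      # illustrative shared-buffer size
        self.queue = []               # (priority, packet) in arrival order

    def admit(self, priority, pkt):
        if len(self.queue) < self.capacity:
            self.queue.append((priority, pkt))
            return True
        # Buffer full: find the lowest-priority packet already queued.
        victim = min(range(len(self.queue)), key=lambda i: self.queue[i][0])
        if self.queue[victim][0] < priority:
            del self.queue[victim]    # buffer reallocation: discard the lower-priority packet
            self.queue.append((priority, pkt))
            return True
        return False                  # nothing lower priority to evict: drop the arrival
```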

SWITCHING AND LOAD BALANCING TECHNIQUES IN A COMMUNICATION NETWORK

A source access network device multicasts copies of a packet to multiple core switches, for switching to a same target access network device. The core switches are selected for the multicast based on a load balancing algorithm managed by a central controller. The target access network device receives at least one of the copies of the packet, generates at least one metric indicative of a level of traffic congestion at the core switches, and feeds information regarding the at least one recorded metric back to the controller. The controller adjusts the load balancing algorithm based on the fed-back information for selection of core switches for a subsequent data flow.
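A compact sketch of the controller-side feedback loop, assuming a per-switch congestion score and a least-congested selection policy (neither is specified in the abstract): the controller picks core switches for each multicast, and metrics fed back by the target access device steer later selections away from congested cores.

```python
class CentralController:
    """Sketch of congestion-feedback-driven core-switch selection."""

    def __init__(self, core_switches):
        self.congestion = {sw: 0.0 for sw in core_switches}  # per-switch congestion score

    def select_cores(self, copies=2):
        # Load balancing: pick the currently least-congested core switches
        # for the next multicast.
        ranked = sorted(self.congestion, key=self.congestion.get)
        return ranked[:copies]

    def report_metric(self, switch, congestion_level):
        # Feedback from the target access network device.
        self.congestion[switch] = congestion_level


controller = CentralController(["core-1", "core-2", "core-3"])
print(controller.select_cores())        # source multicasts to these cores
controller.report_metric("core-1", 0.9)
print(controller.select_cores())        # a subsequent flow avoids the congested core
```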

Multi-destination traffic handling optimizations in a network device

When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease an enqueue rate of certain classes of traffic, such as multicast traffic, whose reception may have caused and/or is likely to exacerbate garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to handle multi-destination traffic more optimally, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. Once a high-priority multi-destination data unit is admitted to any queue, it can no longer be dropped and is linked to a replication queue for replication.
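The threshold-driven throttling could look roughly like the governor below, which applies hysteresis between a high and a low watermark; the watermark values, the rate values, and the use of hysteresis are assumptions made for the sketch rather than details from the abstract.

```python
class GarbageCollectionGovernor:
    """Sketch of GC-backlog-driven throttling of the multicast enqueue rate."""

    def __init__(self, high=1_000_000, low=250_000, normal_rate=1.0, reduced_rate=0.25):
        self.high, self.low = high, low                       # byte watermarks (illustrative)
        self.normal_rate, self.reduced_rate = normal_rate, reduced_rate
        self.throttled = False

    def multicast_enqueue_rate(self, gc_backlog_bytes):
        # Throttle multicast enqueues while the garbage-collection backlog is
        # above the high watermark; reverse the action once it drops below the
        # low watermark.
        if not self.throttled and gc_backlog_bytes > self.high:
            self.throttled = True
        elif self.throttled and gc_backlog_bytes < self.low:
            self.throttled = False
        return self.reduced_rate if self.throttled else self.normal_rate
```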

Flexible packet processing
10848430 · 2020-11-24 ·

Various systems and methods for implementing a flexible packet processing mechanism are provided herein. A network interface device for implementing flexible packet processing includes a packet parser to: receive a packet and determine, by analyzing the packet, a corresponding processing element that is used to process the packet; and a coordinator circuit to: determine whether the processing element is active in a computing unit, load the processing element when it is not active, and forward the packet to the processing element.
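A minimal sketch of the parser/coordinator split, assuming the parser maps a packet to a processing-element identifier and the coordinator loads elements lazily; the protocol-based parsing rule and the loader callable are placeholders, not details from the patent.

```python
class Coordinator:
    """Sketch of lazy loading and dispatch of processing elements."""

    def __init__(self, loader):
        self.loader = loader   # callable: element id -> processing element
        self.active = {}       # processing elements currently loaded

    def parse(self, packet):
        # Stand-in for the packet parser: choose a processing element by protocol.
        return packet.get("proto", "default")

    def process(self, packet):
        element_id = self.parse(packet)
        if element_id not in self.active:
            # Processing element not active in the computing unit: load it.
            self.active[element_id] = self.loader(element_id)
        # Forward the packet to the processing element.
        return self.active[element_id](packet)


coord = Coordinator(loader=lambda eid: (lambda pkt: f"{eid} handled packet {pkt['id']}"))
print(coord.process({"proto": "tcp", "id": 1}))
```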

System and Method of A High Buffered High Bandwidth Network Element
20200344167 · 2020-10-29 ·

A method and apparatus of a network element that processes a packet are described. In an exemplary embodiment, the network element receives, with a packet switch unit, a data packet that includes a destination address, wherein the packet was received by the network element on an ingress interface. The network element further determines whether the packet is to be stored in an external queue. In addition, the network element identifies the external queue for the packet based on one or more characteristics of the packet. The network element additionally forwards the packet to a packet storage unit, wherein the packet storage unit includes storage for the external queue. Furthermore, the network element receives the packet back from the packet storage unit and forwards it to an egress interface corresponding to the external queue.
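Sketched below is one way the external-queue path could be structured, assuming a congestion flag decides whether a packet is parked externally and an (egress interface, traffic class) pair identifies the external queue; both rules are illustrative stand-ins for the "characteristics of the packet" mentioned in the abstract.

```python
from collections import defaultdict, deque


class BufferedForwarder:
    """Sketch of buffering packets in external queues before egress."""

    def __init__(self):
        self.external = defaultdict(deque)    # external queue id -> stored packets

    def needs_external_queue(self, pkt):
        # Assumed policy: park the packet externally when its egress port is congested.
        return pkt.get("egress_congested", False)

    def queue_for(self, pkt):
        # Identify the external queue from characteristics of the packet.
        return (pkt["egress_if"], pkt.get("traffic_class", 0))

    def ingress(self, pkt):
        if self.needs_external_queue(pkt):
            # Forward to the packet storage unit holding the external queue.
            self.external[self.queue_for(pkt)].append(pkt)
            return None
        return pkt["egress_if"]               # otherwise forward directly

    def drain(self, egress_if):
        # Receive packets back from storage and forward them to the egress
        # interface corresponding to their external queue.
        for (iface, _), q in self.external.items():
            if iface == egress_if:
                while q:
                    yield q.popleft()
```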