H04L49/9031

Network device having flexible rate limiter

A network device for a communications network includes a port configured to transmit data to the network at a maximum transmit data rate. The device also includes a transmit buffer configured to buffer data units that are ready for transmission to the network, and a packet buffer configured to buffer data units before they are ready for transmission. The packet buffer is configured to output data units at a maximum packet buffer transmission rate that is faster than the maximum transmit data rate. The device includes a rate controller configured to control the transmission rate of data from the packet buffer to the transmit buffer so that, averaged over a period, that rate is at most equal to the maximum transmit data rate, while allowing the rate to exceed the maximum transmit data rate during one or more time intervals.
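The averaging behavior described above resembles a classic token-bucket policy: credits accrue at the line rate, and a bounded bucket depth lets short intervals run above that rate while the long-run average stays capped. A minimal Python sketch under that assumption (the class name, `burst_bytes` parameter, and explicit `now` argument are illustrative, not from the abstract):

```python
import time

class FlexibleRateLimiter:
    """Token-bucket sketch: short bursts may exceed the line rate, but the
    average over a period is capped at max_rate (bytes per second)."""

    def __init__(self, max_rate, burst_bytes, now=None):
        self.max_rate = max_rate   # averaged ceiling (bytes/sec)
        self.burst = burst_bytes   # how far a burst may run ahead of the average
        self.tokens = burst_bytes  # start with a full bucket
        self.last = time.monotonic() if now is None else now

    def try_send(self, nbytes, now=None):
        now = time.monotonic() if now is None else now
        # Accrue credit at the line rate, capped at the burst depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.max_rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False   # hold the data in the packet buffer for now
```

Passing `now` explicitly makes the behavior deterministic for testing; a real device would derive credit from a hardware clock.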

CIRCULAR QUEUE MANAGEMENT WITH SPLIT INDEXES

Methods and apparatus for managing circular queues are disclosed. A pointer designates the index position of a particular queue element and carries an additional pointer state, whereby two pointer values (split indexes) can designate the same index position. The front and rear pointers are managed by the dequeue and enqueue logic, respectively. The front pointer state and rear pointer state distinguish the full and empty queue states when both pointers designate the same index position. Asynchronous dequeue and enqueue operations are supported; no lock is required, and no queue entry is wasted. Hardware and software embodiments for numerous applications are disclosed.
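A minimal single-producer/single-consumer sketch of the split-index idea, assuming the common realization in which each pointer runs over twice the capacity, so the extra "wrap" state distinguishes full from empty when both pointers land on the same index position (names are illustrative):

```python
class SplitIndexQueue:
    """Ring queue sketch with split indexes: front/rear pointers range over
    2*capacity, so equal index positions with different wrap states
    distinguish full from empty, and every slot is usable."""

    def __init__(self, capacity):
        self.cap = capacity
        self.buf = [None] * capacity
        self.front = 0   # advanced only by the dequeue side
        self.rear = 0    # advanced only by the enqueue side

    def empty(self):
        return self.front == self.rear           # same index, same state

    def full(self):
        # Same index position, opposite wrap state.
        return (self.rear - self.front) % (2 * self.cap) == self.cap

    def enqueue(self, item):
        if self.full():
            return False
        self.buf[self.rear % self.cap] = item    # index = pointer mod capacity
        self.rear = (self.rear + 1) % (2 * self.cap)
        return True

    def dequeue(self):
        if self.empty():
            return None
        item = self.buf[self.front % self.cap]
        self.front = (self.front + 1) % (2 * self.cap)
        return item
```

Because each pointer is written by only one side, an asynchronous hardware or multi-threaded version needs only atomic loads/stores of the two pointers, not a lock.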

Transfer apparatus, transfer method, and program for transporting data from a single source to sinks with different communication requirements

In one aspect of the present invention, a transfer apparatus includes a reception unit configured to receive a packet from a source which distributes data according to a transmission control protocol (TCP); a storage unit configured to store data included in the received packet in a buffer based on a TCP sequence number of the received packet; a TCP transfer unit configured to transfer the received packet to a first sink which requests distribution according to the TCP; and a UDP transfer unit configured to read the data from the buffer and transfer the read data to a second sink which requests distribution according to a user datagram protocol (UDP).
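A minimal sketch of the storage and UDP-side read path described above: packets are indexed by TCP sequence number, and the UDP transfer unit drains only contiguous in-order data, so out-of-order arrivals wait in the buffer. All names are hypothetical, and the TCP pass-through path is omitted for brevity:

```python
class TcpToUdpRelay:
    """Sketch: payload from a TCP source is stored by sequence number;
    the UDP side reads back a contiguous, in-order byte stream."""

    def __init__(self, initial_seq=0):
        self.buffer = {}             # seq -> payload bytes (storage unit)
        self.next_seq = initial_seq  # next byte offset the UDP side expects

    def on_packet(self, seq, payload):
        # Reception unit: index the payload by its TCP sequence number.
        self.buffer[seq] = payload

    def read_for_udp(self):
        """Drain contiguous in-order data for the UDP sink; data beyond a
        sequence gap stays buffered until the gap is filled."""
        out = b""
        while self.next_seq in self.buffer:
            payload = self.buffer.pop(self.next_seq)
            out += payload
            self.next_seq += len(payload)
        return out
```

Even if the source delivers segments out of order, the UDP sink sees the original byte order.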

METHOD AND APPARATUS FOR MANAGING BUFFERING OF DATA PACKET OF NETWORK CARD, TERMINAL AND STORAGE MEDIUM
20230291696 · 2023-09-14 ·

A method and apparatus for managing buffering of data packets of a network card, a terminal and a storage medium are provided. The method includes: setting ring buffer queues, setting a length of each ring buffer queue according to a size of a total buffer space and the number of threads of an upper-layer application, then setting a buffer pool formed by two ring buffer queues, and setting the two ring buffer queues in the buffer pool as a busy queue and an idle queue, respectively; a network card driver receiving data packets from a data link, classifying the data packets, sequentially buffering the classified data packets into the busy queue by using a write pointer of the busy queue, and then sequentially mapping addresses of the buffered data packets in the busy queue into the idle queue; acquiring latest addresses of the buffered data packets in the busy queue by using a read pointer of the idle queue; and the upper-layer application successively acquiring and processing the buffered data packets by using a read pointer of the busy queue, and successively releasing the addresses of the processed buffered data packets in the busy queue by using a write pointer of the idle queue.
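A simplified, single-threaded interpretation of the two-ring scheme: a fixed pool of buffer slots whose addresses circulate between an idle ring (free slots, consumed by the driver) and a busy ring (filled slots, consumed by the application). The class and method names are illustrative, and the pointer mechanics are collapsed into deque operations:

```python
from collections import deque

class BufferPool:
    """Sketch of a busy/idle ring pair over a fixed pool of packet slots.
    The driver fills slots taken from the idle ring and posts them to the
    busy ring; the application drains the busy ring and releases slot
    addresses back to the idle ring."""

    def __init__(self, nslots):
        self.slots = [None] * nslots
        self.idle = deque(range(nslots))  # free slot addresses
        self.busy = deque()               # slots awaiting the application

    def driver_receive(self, packet):
        if not self.idle:
            return False                  # pool exhausted: drop or back-pressure
        i = self.idle.popleft()           # advance the idle ring's read pointer
        self.slots[i] = packet
        self.busy.append(i)               # advance the busy ring's write pointer
        return True

    def app_process(self):
        if not self.busy:
            return None
        i = self.busy.popleft()           # advance the busy ring's read pointer
        packet, self.slots[i] = self.slots[i], None
        self.idle.append(i)               # release via the idle ring's write pointer
        return packet
```

Sizing the pool to the total buffer space divided by the per-packet slot size (scaled by the application's thread count, as the abstract suggests) bounds memory while keeping both rings lock-free in the single-producer/single-consumer case.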

Scalable traffic management using one or more processor cores for multiple levels of quality of service

Packets are differentiated based on their traffic class. A traffic class is allocated bandwidth for transmission. One or more cores or threads can be allocated to process packets of a traffic class for transmission based on the bandwidth allocated to that traffic class. If multiple traffic classes are allocated bandwidth, and a traffic class underutilizes its allocated bandwidth or is allocated insufficient bandwidth, then the allocation can be adjusted for a future transmission time slot. For example, a higher-priority traffic class with excess bandwidth can share the excess with the next-highest-priority traffic class, which can use it to allocate packets for transmission in the same time slot. In the same or another example, the bandwidth allocated to a traffic class depends on the extent of the insufficient allocation or underutilization, so that a traffic class with insufficient allocated bandwidth in one or more prior time slots can be provided more bandwidth in the current time slot, and a traffic class that underutilized its allocated bandwidth can be provided less allocated bandwidth for the current time slot.
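The two adjustments described above can be sketched as two small functions: one that cascades unused bandwidth downward within a time slot (classes ordered from highest to lowest priority), and one that shifts next-slot allocations toward classes that exhausted their budget. The fixed adjustment `step` and the use of demand as "usage" are simplifying assumptions, not from the abstract:

```python
def share_excess(alloc, demand):
    """Within one time slot: each class's effective budget is its allocation
    plus any excess left unused by all higher-priority classes."""
    effective, carry = [], 0
    for a, d in zip(alloc, demand):
        budget = a + carry
        carry = max(0, budget - d)   # unused bandwidth flows to the next class
        effective.append(budget)
    return effective

def rebalance_next_slot(alloc, demand, step=5):
    """Across time slots: grow allocations that were insufficient, shrink
    allocations that went underutilized (hypothetical fixed step)."""
    nxt = []
    for a, d in zip(alloc, demand):
        nxt.append(a + step if d >= a else max(0, a - step))
    return nxt
```

With allocations `[40, 30, 30]` and demands `[20, 50, 30]`, the 20 units the top class leaves unused raise the middle class's effective budget to 50 in the same slot, while the next slot shifts allocation from the underutilizing class to the saturated ones.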

Transport protocol and interface for efficient data transfer over RDMA fabric

Described herein is a system and method for utilizing a protocol over an RDMA network fabric between a first computing node and a second computing node. The protocol identifies a first threshold and a second threshold. A transfer request is received, and a data size associated with the transfer request is determined. Based upon that data size, one of at least three transfer modes is selected to perform the transfer request in accordance with the first threshold and the second threshold. Each transfer mode utilizes flow control and at least one RDMA operation. The selected transfer mode is utilized to perform the transfer request.
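The two-threshold selection reduces to a simple size classifier. The three mode names below follow the usual pattern in RDMA transports (inline/eager for small payloads, a one-sided write for mid-size, and a rendezvous-style read for large) and are illustrative assumptions; the abstract itself does not name the modes:

```python
def select_transfer_mode(size, first_threshold, second_threshold):
    """Sketch of the two-threshold policy: map a transfer's data size to
    one of three transfer modes. Mode names are hypothetical."""
    if size <= first_threshold:
        return "inline"       # payload travels with the request itself
    if size <= second_threshold:
        return "rdma-write"   # one-sided write into a pre-posted buffer
    return "rdma-read"        # rendezvous: receiver pulls data when ready
```

Each mode would still wrap its RDMA operation in flow control (e.g., credits on posted buffers), per the abstract.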

SYSTEM FOR CONTROLLING DATA FLOW BETWEEN MULTIPLE PROCESSORS
20220100633 · 2022-03-31 ·

First and second processors that are in communication with each other are disclosed. The first processor includes a sampling controller, a sampling circuit, and a data flow controller. The sampling controller is configured to receive multiple identifiers and corresponding enable signals associated with data that is to be transmitted to or received from the second processor, and generate an identification signal and a sampling signal based on one of the identifiers and the corresponding enable signal. The sampling circuit is configured to sample multiple data counts to generate corresponding sampled counts based on the identification signal and the sampling signal. The data flow controller is configured to generate a control signal based on the identifiers, the corresponding enable signals, the data counts, and the corresponding sampled counts to control data flow between the first and second processors.

REMOTE DIRECT MEMORY ACCESS BASED NETWORKING GATEWAY
20210240645 · 2021-08-05 ·

A system includes a memory including a plurality of rings, an endpoint associated with a ring of the plurality of rings, and a gateway. The gateway is configured to receive a notification from the endpoint regarding a packet made available in the ring associated with the endpoint, access the ring with an RDMA read request, retrieve the packet made available in the ring, and forward the packet on an external network.

Systems and methods for providing lockless bimodal queues for selective packet capture

In a network system, an application receiving packets can consume one or more packets in two or more stages, where the second and later stages can selectively consume some but not all of the packets consumed by the preceding stage. Packets are transferred between two consecutive stages, called the producer and the consumer, via a fixed-size storage. Both the producer and the consumer can access the storage without locking it and, to facilitate selective consumption of the packets by the consumer, the consumer can transition between awake and sleep modes, where packets are consumed in the awake mode only. The producer may also switch between awake and sleep modes. Lockless access is made possible because both the producer and the consumer control their operation of the storage according to the mode of the consumer, which is communicated via a shared memory location.
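A minimal single-producer/single-consumer sketch of the bimodal scheme: the two stages share only a fixed-size ring, two single-writer indexes, and a consumer-mode flag; no lock is taken on either side. The drop-when-full policy while the consumer sleeps is an assumption typical of selective capture, not stated in the abstract:

```python
class BimodalQueue:
    """Fixed-size storage between a producer stage and a consumer stage.
    head is written only by the producer, tail only by the consumer, and
    'mode' models the shared memory location both sides read."""

    AWAKE, ASLEEP = "awake", "asleep"

    def __init__(self, capacity):
        self.cap = capacity
        self.buf = [None] * capacity
        self.head = 0            # producer-owned write index
        self.tail = 0            # consumer-owned read index
        self.mode = self.ASLEEP  # consumer mode, visible to both sides

    def produce(self, pkt):
        if self.head - self.tail == self.cap:
            return False         # storage full: drop (selective capture)
        self.buf[self.head % self.cap] = pkt
        self.head += 1
        return True

    def consume(self):
        if self.mode != self.AWAKE or self.tail == self.head:
            return None          # a sleeping consumer consumes nothing
        pkt = self.buf[self.tail % self.cap]
        self.tail += 1
        return pkt
```

Since each index has exactly one writer and the mode flag is a plain shared word, a concurrent version needs only atomic loads/stores with appropriate ordering, not mutual exclusion.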