H04L12/883

PACKET DESCRIPTOR STORAGE IN PACKET MEMORY WITH CACHE
20170353403 · 2017-12-07

A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in an order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate that storage of the moved LL elements has changed from the second memory device to the first memory device.
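The split-queue mechanism can be illustrated with a small sketch. This is not the patented implementation; the class, capacities, and the "location" parameter are hypothetical stand-ins for the fast memory (head/tail), slow memory (middle), and the per-element LL parameters the abstract describes.

```python
from collections import deque

class SplitFifo:
    """Hypothetical sketch: a FIFO whose head and tail parts live in fast
    memory while the middle part overflows to a slower memory device."""

    def __init__(self, head_capacity=4, tail_capacity=4):
        self.head = deque()    # fast memory: elements dequeued soonest
        self.middle = deque()  # slow memory: overflow between head and tail
        self.tail = deque()    # fast memory: most recently enqueued elements
        self.head_capacity = head_capacity
        self.tail_capacity = tail_capacity
        self.location = {}     # per-element "LL parameter": which device holds it

    def enqueue(self, item):
        self.tail.append(item)
        self.location[item] = "fast"
        if len(self.tail) > self.tail_capacity:
            spilled = self.tail.popleft()     # spill oldest tail element
            self.middle.append(spilled)
            self.location[spilled] = "slow"

    def _refill_head(self):
        # Move middle elements to the head *before* the head runs empty,
        # updating each element's location parameter as it moves.
        while len(self.head) < self.head_capacity and self.middle:
            moved = self.middle.popleft()
            self.head.append(moved)
            self.location[moved] = "fast"

    def dequeue(self):
        self._refill_head()
        if self.head:
            return self.head.popleft()
        if self.middle:
            return self.middle.popleft()
        return self.tail.popleft() if self.tail else None
```

Because elements only flow tail → middle → head, dequeue order remains strictly FIFO even though three separate structures back the queue.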

Method and apparatus for transmitting CAN frame
09794197 · 2017-10-17

The present invention relates to a method and apparatus for transmitting a CAN frame. A method for transmitting a CAN frame includes receiving an input of a transmission file containing a plurality of CAN frames; detecting the number of the CAN frames contained in the transmission file; comparing the number of the CAN frames with the number of transmission buffers; mapping, when the number of the CAN frames is less than or equal to the number of the transmission buffers, the CAN frames onto the transmission buffers in a one-to-one mapping manner; and mapping, when the number of the CAN frames is greater than the number of the transmission buffers, the CAN frames onto the transmission buffers in a many-to-one mapping manner.
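The two mapping branches can be sketched as follows. This is a minimal illustration of the claimed comparison step, assuming round-robin distribution for the many-to-one case; the function name and the distribution policy are hypothetical.

```python
def map_frames_to_buffers(frames, num_buffers):
    """Hypothetical sketch of the claimed mapping step: one-to-one when the
    CAN frame count fits the transmission buffers, many-to-one otherwise."""
    buffers = [[] for _ in range(num_buffers)]
    if len(frames) <= num_buffers:
        # one-to-one: each frame gets its own transmission buffer
        for i, frame in enumerate(frames):
            buffers[i].append(frame)
    else:
        # many-to-one: multiple frames share a buffer (round-robin here)
        for i, frame in enumerate(frames):
            buffers[i % num_buffers].append(frame)
    return buffers
```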

SENDING PACKETS USING OPTIMIZED PIO WRITE SEQUENCES WITHOUT SFENCES AND OUT OF ORDER CREDIT RETURNS
20170249079 · 2017-08-31

Methods and apparatus for sending packets using optimized PIO write sequences without sfences and out-of-order credit returns. Sequences of Programmed Input/Output (PIO) write instructions to write packet data to a PIO send memory are received by a processor in an original order and executed out of order, resulting in the packet data being written to send blocks in the PIO send memory out of order, while the packets themselves are stored in sequential order once all of the packet data is written. The packets are egressed out of order by egressing packet data contained in the send blocks to an egress block using a non-sequential packet order that is different than the sequential packet order. In conjunction with egressing the packets, corresponding credits are returned in the non-sequential packet order. A block list comprising a linked list and a free list is used to facilitate out-of-order packet egress and corresponding out-of-order credit returns.
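The out-of-order credit-return idea can be sketched with a toy model. This is an assumption-laden simplification, not the patented hardware scheme: the class, method names, and the plain Python lists standing in for the block list and free list are all hypothetical.

```python
class SendMemory:
    """Hypothetical sketch: send blocks tracked per packet (the 'block list')
    plus a free list, so packets can egress, and their credits return,
    in an order different from the order they were written."""

    def __init__(self, num_blocks):
        self.free_list = list(range(num_blocks))  # available send blocks (credits)
        self.block_list = {}                      # packet id -> blocks it occupies
        self.credits_returned = 0

    def write_packet(self, pkt_id, num_blocks):
        if len(self.free_list) < num_blocks:
            raise RuntimeError("insufficient send credits")
        # claim blocks from the free list for this packet
        self.block_list[pkt_id] = [self.free_list.pop(0) for _ in range(num_blocks)]

    def egress_packet(self, pkt_id):
        # Egress may happen in any order; freed blocks return to the free
        # list and the same number of credits is returned immediately.
        blocks = self.block_list.pop(pkt_id)
        self.free_list.extend(blocks)
        self.credits_returned += len(blocks)
        return len(blocks)
```

Egressing a later packet before an earlier one returns its credits first, which is the out-of-order credit return the abstract describes.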

Flow control device and method

A flow control device includes an analysis unit identifying a flow of a received packet; a plurality of queues temporarily storing packets sorted according to each flow; an allocation information storage unit storing allocation information regarding a queue allocated for each flow; a sorting unit deciding a queue to be a storage destination of the received packet and sorting the packet based on a result identified by the analysis unit and the allocation information; a saved packet holding unit saving a packet belonging to a flow determined by the sorting unit to have no allocation information regarding a queue to be allocated; and a transmission unit transmitting the packets temporarily stored in the plurality of queues and the packets saved in the saved packet holding unit to a processing unit that processes the packets.
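A minimal sketch of the sorting and saving behaviour, under assumptions: the class and method names are invented, and a plain list stands in for the saved packet holding unit.

```python
class FlowSorter:
    """Hypothetical sketch: packets of flows with an allocated queue are
    sorted into that queue; packets of unallocated flows are saved aside."""

    def __init__(self, num_queues):
        self.queues = [[] for _ in range(num_queues)]
        self.allocation = {}   # flow id -> allocated queue index
        self.saved = []        # stand-in for the saved packet holding unit

    def sort(self, flow_id, packet):
        if flow_id in self.allocation:
            self.queues[self.allocation[flow_id]].append(packet)
        else:
            self.saved.append((flow_id, packet))  # no allocation information yet

    def allocate(self, flow_id, queue_index):
        self.allocation[flow_id] = queue_index
        # re-sort any saved packets that now have an allocated queue
        still_saved = []
        for fid, pkt in self.saved:
            if fid == flow_id:
                self.queues[queue_index].append(pkt)
            else:
                still_saved.append((fid, pkt))
        self.saved = still_saved
```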

MEMORY PAGE FAULT HANDLING FOR NETWORK INTERFACE DEVICES IN A VIRTUALIZED ENVIRONMENT
20210342272 · 2021-11-04

Systems and methods for supporting memory page fault handling for network devices are disclosed. In one implementation, a processing device may receive, at a network interface device of a host computer system, an incoming packet from a network. The processing device may also select a first buffer from a plurality of buffers associated with a receiving queue of the network interface device. The processing device may attempt to store the incoming packet at the first buffer of the plurality of buffers. Responsive to receiving a notification that attempting to store the incoming packet at the first buffer encountered a page fault, the processing device may assign the first buffer to a wait queue of the network interface device. The processing device may further store the incoming packet at a second buffer of the plurality of buffers associated with the receiving queue.
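The buffer-selection path can be sketched as below. This is a simplified illustration only: the function signature, the `faulting` set standing in for page-fault notifications, and the plain lists standing in for the receive queue and wait queue are all assumptions.

```python
def receive_packet(packet, rx_buffers, wait_queue, faulting):
    """Hypothetical sketch: try buffers from the receiving queue in order;
    a buffer whose store attempt hits a page fault is parked on the wait
    queue, and the packet is stored in the next available buffer instead."""
    while rx_buffers:
        buf = rx_buffers.pop(0)
        if buf in faulting:            # store attempt encountered a page fault
            wait_queue.append(buf)     # park buffer until its page is resident
            continue
        return buf                      # packet stored in this buffer
    return None                         # no usable buffer available
```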

Hybrid packet memory for buffering packets in network devices

A network device processes received packets at least to determine one or more ports of the network device via which to transmit the packets. The network device also classifies the packets into packet flows, the packet flows being further categorized into traffic pattern categories according to traffic pattern characteristics of the packet flows. The network device buffers, according to the traffic pattern categories of the packet flows, packets that belong to the packet flows in a first packet memory or in a second packet memory, the first packet memory having a memory access bandwidth different from a memory access bandwidth of the second packet memory. After processing the packets, the network device retrieves the packets from the first packet memory or the second packet memory in which the packets are buffered, and forwards the packets to the determined one or more ports for transmission.
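The category-based memory choice reduces to a small policy function. The category names and the fast-versus-slow mapping below are purely illustrative assumptions; the abstract only requires that the two memories differ in access bandwidth.

```python
def select_packet_memory(flow_category, fast_memory, slow_memory):
    """Hypothetical buffering policy: latency-sensitive or bursty flows are
    buffered in the higher-bandwidth memory (e.g. on-chip), while bulk
    flows go to the lower-bandwidth memory (e.g. external DRAM)."""
    if flow_category in ("latency_sensitive", "short_burst"):
        return fast_memory
    return slow_memory
```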

Method and apparatus for using multiple linked memory lists

An apparatus and method for queuing data to a memory buffer. The method includes selecting a queue from a plurality of queues; receiving a token of data from the selected queue; and requesting, by a queue module, addresses and pointers from a buffer manager for addresses allocated by the buffer manager for storing the token of data. Subsequently, a memory list is accessed by the buffer manager, and addresses and pointers are generated to allocated addresses in the memory list, which comprises a plurality of linked memory lists for additional address allocation. The method further includes writing into the accessed memory list the pointers for the allocated addresses, where the pointers link together the allocated addresses; migrating to other memory lists for additional address allocations upon receipt of subsequent tokens of data from the queue; and generating additional pointers linking together the allocated addresses in the other memory lists.
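The allocate-link-migrate cycle can be sketched with a toy buffer manager. This is a hypothetical simplification: tuple addresses, the `next_ptr` dictionary standing in for linking pointers, and sequential migration between lists are all assumptions, not the patented design.

```python
class LinkedMemoryLists:
    """Hypothetical sketch: a buffer manager that allocates addresses from
    one memory list at a time, links allocations together with pointers,
    and migrates to the next memory list when the current one is exhausted."""

    def __init__(self, list_size, num_lists):
        # each memory list is a bank of addresses; an address is (list, offset)
        self.free = [[(l, o) for o in range(list_size)] for l in range(num_lists)]
        self.current = 0       # index of the memory list currently in use
        self.next_ptr = {}     # address -> next allocated address (linking pointers)

    def allocate(self, prev_addr=None):
        while not self.free[self.current]:
            self.current += 1  # migrate to the next linked memory list
        addr = self.free[self.current].pop(0)
        if prev_addr is not None:
            self.next_ptr[prev_addr] = addr  # link this allocation to the previous one
        return addr
```

Because the pointers span lists, a token's storage can chain from one memory list into another without the queue module noticing the migration.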

Apparatus and method for processing data packet of electronic device

Disclosed is an electronic device including a wireless communication modem, at least one processor connected with the communication modem and comprising a plurality of cores, and a nonvolatile memory operatively connected with the processor, wherein the nonvolatile memory stores instructions that cause a first core of the processor to receive first data packets having a first size from the wireless communication modem, and to transmit at least a portion of the first data packets to a second core of the processor, and that cause the second core to receive the at least a portion of the first data packets from the first core, to merge the at least a portion of the first data packets into a plurality of second data packets having sizes larger than the first size, based at least in part on a type of the first data packets, and to transmit the second data packets to at least one other core of the processor than the first core and the second core.
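The second core's merge step is similar in spirit to receive-offload coalescing and can be sketched as below. The function name, the byte-string packets, and the size threshold are illustrative assumptions; the abstract additionally conditions merging on packet type, which is omitted here for brevity.

```python
def merge_small_packets(packets, target_size):
    """Hypothetical sketch: coalesce small first data packets into larger
    second data packets no bigger than target_size."""
    merged, current = [], b""
    for pkt in packets:
        if current and len(current) + len(pkt) > target_size:
            merged.append(current)   # flush the merged packet being built
            current = b""
        current += pkt
    if current:
        merged.append(current)       # flush the final partial merge
    return merged
```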

Transmit power control

This disclosure describes systems, methods, and devices related to transmit power control (TPC). A device may identify a link measurement request frame from a first station device. The device may determine, for each transmit chain of the first station device, a TPC action to be performed by the first device. The device may cause to send a link measurement report frame comprising a value indicative of the TPC action for each transmit chain. The device may identify an acknowledgement from the first station device.

Reordering of data for parallel processing

A network interface device, including: an ingress interface; a host platform interface to communicatively couple to a host platform; and a packet preprocessor including logic to: receive via the ingress interface a data sequence including a plurality of discrete data units; identify the data sequence as data for a parallel processing operation; reorder the discrete data units into a reordered data frame, the reordered data frame configured to order the discrete data units for consumption by the parallel processing operation; and send the reordered data frame to the host platform via the host platform interface.
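One plausible reordering is a lane interleave, sketched below. This is an assumption for illustration: the abstract does not specify the target layout, so the round-robin lane assignment and the function name are hypothetical.

```python
def reorder_for_parallel(data_units, lanes):
    """Hypothetical sketch of the preprocessor's reorder step: interleave a
    sequential stream of discrete data units into a frame laid out so that
    unit i lands in lane i % lanes, matching a parallel consumer's layout."""
    frame = [[] for _ in range(lanes)]
    for i, unit in enumerate(data_units):
        frame[i % lanes].append(unit)
    return frame
```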