H04L49/9026

HIGHLY PARALLEL PROGRAMMABLE PACKET EDITING ENGINE WITH A SCALABLE CONTROL INFRASTRUCTURE
20190379767 · 2019-12-12 ·

A highly parallel programmable packet editing engine with a scalable control infrastructure includes receiving an ingress packet having one or more headers; assigning, by one or more processors, the one or more headers of the ingress packet to a number of zones, wherein each zone is a grouping of adjacent headers that are closely related to one another by information content or processing type; performing, by the one or more processors, offset computations for the one or more headers in a zone concurrently with offset computations of headers assigned to other zones; performing, by the one or more processors, different header operations on the one or more headers concurrently with respective ones of a plurality of editing engines; combining, by the one or more processors, the edited one or more headers at the computed offsets to generate a modified egress packet; and providing, for transmission, the modified egress packet.
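
Below is a minimal Go sketch of the zone idea: headers are grouped into zones, each zone's length computation and header edits run concurrently on its own editing engine (here a goroutine), and the results are spliced at the computed offsets. All identifiers (Header, Zone, the XOR edit) are illustrative assumptions, not the patent's implementation.

    package main

    import (
        "fmt"
        "sync"
    )

    type Header struct {
        Name string
        Data []byte
    }

    type Zone []Header // a grouping of adjacent, closely related headers

    func main() {
        // Hypothetical ingress packet whose headers are assigned to three zones.
        zones := []Zone{
            {{"eth", make([]byte, 14)}, {"vlan", make([]byte, 4)}},
            {{"ipv4", make([]byte, 20)}},
            {{"tcp", make([]byte, 20)}},
        }

        lengths := make([]int, len(zones))
        var wg sync.WaitGroup
        for i := range zones {
            wg.Add(1)
            go func(i int) { // one editing engine per zone, all running concurrently
                defer wg.Done()
                n := 0
                for j := range zones[i] {
                    zones[i][j].Data[0] ^= 0xFF // stand-in for a real header edit
                    n += len(zones[i][j].Data)
                }
                lengths[i] = n // zone length feeds the offset computation below
            }(i)
        }
        wg.Wait()

        // Combine: prefix-sum zone lengths into offsets, then splice the egress packet.
        offsets := make([]int, len(zones))
        for i := 1; i < len(zones); i++ {
            offsets[i] = offsets[i-1] + lengths[i-1]
        }
        var egress []byte
        for i, z := range zones {
            fmt.Printf("zone %d placed at offset %d\n", i, offsets[i])
            for _, h := range z {
                egress = append(egress, h.Data...)
            }
        }
        fmt.Println("egress packet length:", len(egress))
    }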

Systems and methods for propagating metadata of in-flight packets within kernel space

The disclosed computer-implemented method may include (1) identifying, in kernel space on a network device, a packet that is destined for a remote device, (2) passing, along with the packet, metadata for the packet to a packet buffer in kernel space on the network device, (3) framing, by a kernel module in kernel space, the packet such that the packet egresses via a tunnel interface driver on the network device, (4) encapsulating, by the tunnel interface driver, the packet with the metadata, and then (5) forwarding, by the tunnel interface driver, the packet to the remote device based at least in part on the metadata with which the packet was encapsulated. Various other methods, systems, and computer-readable media are also disclosed.
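
As a rough illustration of steps (4) and (5), the Go sketch below prepends a small metadata tag to the packet before it crosses the tunnel and strips it on the far side. The 4-byte tag layout and the Meta/encap/decap names are assumptions for illustration, not the disclosed format.

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    type Meta struct {
        EgressPort uint16 // e.g. a forwarding hint chosen in kernel space
        QoSClass   uint16
    }

    // encap prepends the metadata to the packet, as a tunnel interface
    // driver might before forwarding to the remote device.
    func encap(m Meta, pkt []byte) []byte {
        hdr := make([]byte, 4)
        binary.BigEndian.PutUint16(hdr[0:2], m.EgressPort)
        binary.BigEndian.PutUint16(hdr[2:4], m.QoSClass)
        return append(hdr, pkt...)
    }

    // decap recovers the metadata and the inner packet on the remote side.
    func decap(frame []byte) (Meta, []byte) {
        m := Meta{
            EgressPort: binary.BigEndian.Uint16(frame[0:2]),
            QoSClass:   binary.BigEndian.Uint16(frame[2:4]),
        }
        return m, frame[4:]
    }

    func main() {
        frame := encap(Meta{EgressPort: 7, QoSClass: 2}, []byte("payload"))
        m, inner := decap(frame)
        fmt.Printf("forwarded via port %d, class %d, payload %q\n",
            m.EgressPort, m.QoSClass, inner)
    }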

PACKET DESCRIPTOR STORAGE IN PACKET MEMORY WITH CACHE
20190173809 · 2019-06-06 ·

A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in an order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate storage of the moved LL elements changing from the second memory device to the first memory device.
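
A minimal Go sketch of the split FIFO, assuming the head and tail parts live in a small fast memory and the middle part in a larger slow memory; the controller refills the head from the middle before the head drains, and bypasses the middle when it is empty. Sizes and names (splitFIFO, refill) are illustrative.

    package main

    import "fmt"

    type splitFIFO struct {
        head, tail []int // fast (e.g. on-chip) memory parts
        middle     []int // slow (e.g. external) memory part
        cap        int   // capacity of the head and tail parts
    }

    func (q *splitFIFO) enqueue(v int) {
        if len(q.tail) == q.cap { // tail full: spill its elements to the middle
            q.middle = append(q.middle, q.tail...)
            q.tail = q.tail[:0]
        }
        q.tail = append(q.tail, v)
    }

    func (q *splitFIFO) dequeue() (int, bool) {
        q.refill()
        if len(q.head) == 0 {
            return 0, false
        }
        v := q.head[0]
        q.head = q.head[1:]
        return v, true
    }

    // refill moves middle elements into the head before the head runs empty,
    // updating where each element is considered to be stored.
    func (q *splitFIFO) refill() {
        for len(q.head) < q.cap && len(q.middle) > 0 {
            q.head = append(q.head, q.middle[0])
            q.middle = q.middle[1:]
        }
        for len(q.head) < q.cap && len(q.middle) == 0 && len(q.tail) > 0 {
            q.head = append(q.head, q.tail[0]) // bypass the middle when empty
            q.tail = q.tail[1:]
        }
    }

    func main() {
        q := &splitFIFO{cap: 4}
        for i := 1; i <= 10; i++ {
            q.enqueue(i)
        }
        for v, ok := q.dequeue(); ok; v, ok = q.dequeue() {
            fmt.Print(v, " ") // prints 1..10 in FIFO order
        }
        fmt.Println()
    }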

PACKET PROCESSING OF STREAMING CONTENT IN A COMMUNICATIONS NETWORK

Aspects of the present disclosure include devices within a transmission path of streamed content forwarding received data packets of the stream to the next device or hop in the path prior to buffering the data packets at the device. In this method, typical buffering of the data stream may therefore occur at the destination device for presentation at a consuming device, while the devices along the transmission path may transmit a received packet before buffering it. Further, devices along the path may also buffer the content stream after forwarding, to fill subsequent requests for dropped data packets of the content stream. Also, in response to receiving a request for the content stream, a device may first transmit a portion of the contents of its buffer (e.g., a gateway buffer) to the requesting device to fill a respective buffer at the receiving device.
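
The sketch below models one hop under this scheme: a received packet is relayed to the next hop first and only then copied into a local buffer, from which later retransmit requests can be filled. The channel-based link and the hop/refill names are stand-ins, not the disclosed design.

    package main

    import "fmt"

    type hop struct {
        next   chan []byte    // link to the next device in the path
        buffer map[int][]byte // seq -> copy kept after forwarding
    }

    func (h *hop) receive(seq int, pkt []byte) {
        h.next <- pkt // forward first: no store-then-forward delay at this hop
        h.buffer[seq] = append([]byte{}, pkt...) // then buffer a copy
    }

    // refill serves a request for a dropped packet from the local copy.
    func (h *hop) refill(seq int) ([]byte, bool) {
        p, ok := h.buffer[seq]
        return p, ok
    }

    func main() {
        h := &hop{next: make(chan []byte, 8), buffer: map[int][]byte{}}
        h.receive(1, []byte("frame-1"))
        h.receive(2, []byte("frame-2"))
        fmt.Printf("forwarded: %s, %s\n", <-h.next, <-h.next)
        if p, ok := h.refill(2); ok { // downstream asks again for seq 2
            fmt.Printf("refilled from hop buffer: %s\n", p)
        }
    }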

WORK UNIT STACK DATA STRUCTURES IN MULTIPLE CORE PROCESSOR SYSTEM FOR STREAM DATA PROCESSING

Techniques are described in which a device, such as a network device, compute node, or storage device, is configured to utilize a work unit (WU) stack data structure in a multiple core processor system to help manage an event-driven, run-to-completion programming model of an operating system executed by the multiple core processor system. The techniques may be particularly useful when processing streams of data at high rates. The WU stack may be viewed as a stack of continuation work units used to supplement a typical program stack as an efficient means of moving the program stack between cores. The work unit data structure itself is a building block in the WU stack, used to compose a processing pipeline and service execution. The WU stack structure carries state, memory, and other information in auxiliary variables.
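
A minimal sketch of the continuation idea, assuming a WU pairs a run-to-completion handler with its auxiliary state and a handler may push follow-on WUs onto the same stack; the WU/push/run names are illustrative, not the patent's API.

    package main

    import "fmt"

    type WU struct {
        handler func(stack *[]WU, state int)
        state   int // auxiliary state carried in the work unit itself
    }

    func push(stack *[]WU, w WU) { *stack = append(*stack, w) }

    // run is a run-to-completion loop: each popped WU executes fully and may
    // push continuation WUs that compose the next stage of the pipeline.
    func run(stack []WU) {
        for len(stack) > 0 {
            w := stack[len(stack)-1]
            stack = stack[:len(stack)-1]
            w.handler(&stack, w.state)
        }
    }

    func main() {
        stage2 := func(s *[]WU, n int) { fmt.Println("stage2 on packet", n) }
        stage1 := func(s *[]WU, n int) {
            fmt.Println("stage1 on packet", n)
            push(s, WU{stage2, n}) // push a continuation WU for the next stage
        }
        run([]WU{{stage1, 42}})
    }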

Systems and methods for serial input and selective output mechanism for exchanging data at a network device

Embodiments herein use a single buffer that comprises a plurality of serially connected data cells to serially store data attributes and the respective data source identifiers from incoming data requests, such that each stored data source identifier is used to match a response message that corresponds to a respective data request. When a response message is received at the data interface, the data interface searches among the previously stored data attributes in the single buffer and selectively outputs the previously stored data attribute that corresponds to the data request matching the response message. The data interface then uses information from the previously stored data attribute to route the response message to the data source that originated the data request.
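
A rough Go sketch of the mechanism, assuming requests fill the single buffer serially as (source ID, attribute) cells and a response is matched by searching the stored IDs; the matched attribute then routes the response. All names (cell, dataInterface) are illustrative.

    package main

    import "fmt"

    type cell struct {
        sourceID int
        attr     string // e.g. return port or queue for the requester
    }

    type dataInterface struct {
        cells []cell // single buffer, filled serially in request order
    }

    func (d *dataInterface) request(id int, attr string) {
        d.cells = append(d.cells, cell{id, attr})
    }

    // respond searches the buffer for the matching source ID, selectively
    // removes that cell, and uses its attribute to route the response.
    func (d *dataInterface) respond(id int, msg string) {
        for i, c := range d.cells {
            if c.sourceID == id {
                d.cells = append(d.cells[:i], d.cells[i+1:]...)
                fmt.Printf("route %q to source %d via %s\n", msg, id, c.attr)
                return
            }
        }
        fmt.Println("no pending request for source", id)
    }

    func main() {
        d := &dataInterface{}
        d.request(1, "portA")
        d.request(2, "portB")
        d.respond(2, "resp-2") // responses may arrive out of request order
        d.respond(1, "resp-1")
    }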

QUEUE MANAGEMENT METHOD AND APPARATUS

A queue management method and apparatus are disclosed. The queue management method includes: storing a first packet in a first buffer cell included in a first macrocell, where the first macrocell is enqueued to a first entity queue, the first macrocell includes N consecutive buffer cells, and the first buffer cell belongs to the N buffer cells; correcting, based on a packet length of the first packet, an average packet length in the first macrocell that is obtained before the first packet is stored, to obtain a current average packet length in the first macrocell; and generating, based on the first macrocell and the first entity queue, queue information corresponding to the first macrocell, where the queue information includes a position of the first macrocell in the first entity queue, a head pointer in the first macrocell, a tail pointer in the first macrocell, and the current average packet length in the first macrocell.
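
As a sketch of the bookkeeping, assuming the running average is corrected incrementally as avg' = avg + (len - avg) / count when a packet lands in the next buffer cell; the field names and macrocell size here are illustrative.

    package main

    import "fmt"

    type macrocell struct {
        cells  [][]byte // N consecutive buffer cells
        count  int      // packets stored so far
        avgLen float64  // current average packet length
        head   int      // head pointer in the macrocell
        tail   int      // tail pointer in the macrocell
    }

    // store places the packet in the next buffer cell and corrects the
    // average packet length obtained before the packet was stored.
    func (m *macrocell) store(pkt []byte) {
        m.cells[m.tail] = pkt
        m.tail++
        m.count++
        m.avgLen += (float64(len(pkt)) - m.avgLen) / float64(m.count)
    }

    func main() {
        m := &macrocell{cells: make([][]byte, 4)} // N = 4
        m.store(make([]byte, 64))
        m.store(make([]byte, 1500))
        // Queue information for this macrocell within its entity queue.
        fmt.Printf("head=%d tail=%d avgLen=%.1f\n", m.head, m.tail, m.avgLen)
    }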

Audio data processing
10212094 · 2019-02-19 ·

An apparatus and method are provided. A first buffer is configured to store a first packet stream, the first buffer comprising a first read pointer pointing to a first position in the first packet stream. A second buffer is configured to store a second packet stream. The second packet stream corresponds to the first packet stream and the second buffer comprises a second read pointer. A controller is configured to determine a second position in the second packet stream that corresponds to the first position in the first packet stream and adjust the second read pointer to point to the second position.
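
A minimal sketch of the pointer adjustment, assuming corresponding packets in the two streams share a timestamp so the controller can map the first pointer's position onto the second stream; the packet/align names are assumptions, not the disclosed design.

    package main

    import "fmt"

    type packet struct {
        ts   uint32 // timestamp shared by corresponding packets in both streams
        data string
    }

    // align returns the index in stream2 whose timestamp matches (or is the
    // first at or after) the timestamp at stream1[pos1].
    func align(stream1, stream2 []packet, pos1 int) int {
        want := stream1[pos1].ts
        for i, p := range stream2 {
            if p.ts >= want {
                return i
            }
        }
        return len(stream2) - 1
    }

    func main() {
        // Two buffers holding corresponding packet streams (e.g. audio feeds).
        s1 := []packet{{100, "a1"}, {200, "a2"}, {300, "a3"}}
        s2 := []packet{{100, "b1"}, {150, "b1.5"}, {200, "b2"}, {300, "b3"}}
        read1 := 2                    // first read pointer's position
        read2 := align(s1, s2, read1) // adjust the second read pointer to match
        fmt.Printf("pointer1 at ts=%d, pointer2 moved to index %d (ts=%d)\n",
            s1[read1].ts, read2, s2[read2].ts)
    }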

TECHNOLOGIES FOR BUFFERING RECEIVED NETWORK PACKET DATA

Technologies for buffering received network packet data include a compute device with a network interface controller (NIC) configured to determine a packet size of a network packet received by the NIC and identify a preferred buffer size between a small buffer and a large buffer. The NIC is further configured to select, from a descriptor associated with the network packet, a buffer pointer based on the preferred buffer size, wherein the buffer pointer comprises one of a small buffer pointer corresponding to a first physical address in memory allocated to the small buffer or a large buffer pointer corresponding to a second physical address in memory allocated to the large buffer. Additionally, the NIC is configured to store at least a portion of the network packet in the memory based on the selected buffer pointer. Other embodiments are described herein.
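
A sketch of the selection step, assuming each descriptor carries both a small-buffer and a large-buffer pointer and the NIC compares the packet size against the small-buffer size; the threshold and names are illustrative, not the disclosed descriptor format.

    package main

    import "fmt"

    const smallBufSize = 256 // illustrative small-buffer size in bytes

    type descriptor struct {
        smallPtr uintptr // physical address of a pre-allocated small buffer
        largePtr uintptr // physical address of a pre-allocated large buffer
    }

    // selectBuffer returns the pointer the packet data should be written to.
    func selectBuffer(d descriptor, pktLen int) uintptr {
        if pktLen <= smallBufSize {
            return d.smallPtr // small packets avoid wasting a large buffer
        }
        return d.largePtr
    }

    func main() {
        d := descriptor{smallPtr: 0x1000, largePtr: 0x2000}
        fmt.Printf("64B packet -> %#x\n", selectBuffer(d, 64))
        fmt.Printf("1500B packet -> %#x\n", selectBuffer(d, 1500))
    }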
