H04L49/9026

DETERMINISTIC AND EFFICIENT MESSAGE PACKET MANAGEMENT

Methods, devices, and systems for facilitation of efficient processing of a plurality of electronic message packets communicated to an application via a network from a plurality of message sources. The facilitation involves receiving each of the plurality of electronic message packets from the network, and storing, upon receipt thereof, each of the received electronic message packets in a single buffer irrespective of which message source of the plurality of message sources each of the received electronic message packets originated from, the single buffer being accessible by the application.
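The single-buffer idea described above can be sketched as follows; the class and method names are illustrative assumptions, not terms from the patent.

```python
from collections import deque

class SingleBufferReceiver:
    """Store every received packet in one buffer, regardless of source.

    A minimal sketch of the single-buffer scheme in the abstract.
    """

    def __init__(self):
        self._buffer = deque()  # one buffer shared by all message sources

    def on_receive(self, source_id, payload):
        # Packets are appended in arrival order, irrespective of source.
        self._buffer.append((source_id, payload))

    def read_next(self):
        # The application drains packets from the same single buffer.
        return self._buffer.popleft() if self._buffer else None

rx = SingleBufferReceiver()
rx.on_receive("A", b"hello")
rx.on_receive("B", b"world")
print(rx.read_next())  # ('A', b'hello') — arrival order, not per-source order
```

Because there is only one buffer, the application sees a deterministic, arrival-ordered stream rather than polling one queue per source.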

Queue management method and apparatus

A queue management method and apparatus are disclosed. The queue management method includes: storing a first packet to a first buffer cell included in a first macrocell, where the first macrocell is enqueued to a first entity queue, the first macrocell includes N consecutive buffer cells, and the first buffer cell belongs to the N buffer cells; correcting, based on a packet length of the first packet, an average packet length in the first macrocell that is obtained before the first packet is stored, to obtain a current average packet length in the first macrocell; and generating, based on the first macrocell and the first entity queue, queue information corresponding to the first entity queue, where the queue information includes a position of the first macrocell in the first entity queue, a head pointer in the first macrocell, a tail pointer in the first macrocell, and the current average packet length in the first macrocell.
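The average-packet-length correction step can be illustrated with a running incremental mean; the incremental formula is an assumption about one way the correction could be implemented, not the patent's exact method.

```python
class Macrocell:
    """Track a running average packet length for a macrocell of N buffer cells.

    Sketch of the 'correct the average on each stored packet' step.
    """

    def __init__(self, n_cells):
        self.n_cells = n_cells  # N consecutive buffer cells
        self.count = 0          # packets stored so far
        self.avg_len = 0.0      # current average packet length

    def store_packet(self, packet_len):
        if self.count >= self.n_cells:
            raise RuntimeError("macrocell full")
        # Correct the previous average using the new packet's length.
        self.count += 1
        self.avg_len += (packet_len - self.avg_len) / self.count

mc = Macrocell(n_cells=4)
for length in (64, 128, 192):
    mc.store_packet(length)
print(mc.avg_len)  # 128.0
```

Carrying the average in the queue information lets the scheduler estimate queue occupancy in bytes without walking every stored packet.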

Reception apparatus and communication apparatus
10897426 · 2021-01-19

The present invention provides a reception apparatus constituting an information-control central apparatus, that is, a communication apparatus that performs one-to-many communication with devices serving as a plurality of counterpart communication apparatuses. The reception apparatus includes a packet processor that processes a received packet, and a buffer that, in response to input of a packet received from a counterpart communication apparatus, adjusts the interval between packets sequentially input to the packet processor to at least a specified value and outputs the input packet to the packet processor, thereby avoiding packet loss.
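The interval-adjusting buffer can be sketched by computing release times that keep consecutive packets at least a minimum interval apart; the function below is an illustrative simplification (real hardware would release packets on a clock rather than precompute times).

```python
def pace_packets(arrival_times, min_interval):
    """Compute release times so consecutive packets reach the processor
    at least `min_interval` apart."""
    release_times = []
    next_allowed = 0.0
    for t in arrival_times:
        release = max(t, next_allowed)  # hold the packet if it came too soon
        release_times.append(release)
        next_allowed = release + min_interval
    return release_times

# Three packets arrive in a burst; the buffer spreads them out.
print(pace_packets([0.0, 0.1, 0.2], min_interval=1.0))  # [0.0, 1.0, 2.0]
```

Pacing input this way bounds the instantaneous load on the packet processor, which is what prevents the loss the abstract describes.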
Highly parallel programmable packet editing engine with a scalable control infrastructure

A highly parallel programmable packet editing engine with a scalable control infrastructure includes receiving an ingress packet having one or more headers; assigning, by one or more processors, the one or more headers of the ingress packet to a number of zones, wherein each zone is a grouping of adjacent headers that are closely related to one another by information content or processing type; performing, by the one or more processors, offset computations for the one or more headers in a zone concurrently with offset computations of headers assigned to other zones; performing, by the one or more processors, different header operations on the one or more headers concurrently with respective ones of a plurality of editing engines; combining, by the one or more processors, the edited one or more headers at the computed offsets to generate a modified egress packet; and providing, for transmission, the modified egress packet.
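The zone-based offset computation can be sketched in two passes: a short prefix sum over zone sizes, after which each zone's internal offsets depend only on its own headers and could be computed concurrently. The zone layout and header lengths below are illustrative assumptions.

```python
def compute_zone_offsets(zones):
    """Compute header offsets zone by zone.

    `zones` is a list of zones, each a list of header lengths in bytes.
    """
    # Pass 1: base offset of each zone (a short prefix sum of zone sizes).
    bases, base = [], 0
    for headers in zones:
        bases.append(base)
        base += sum(headers)
    # Pass 2: offsets within each zone. Zones are independent in this pass,
    # so hardware could run it for all zones concurrently.
    result = []
    for zbase, headers in zip(bases, zones):
        offsets, off = [], zbase
        for length in headers:
            offsets.append(off)
            off += length
        result.append(offsets)
    return result

# Zone 0: Ethernet (14 B); zone 1: IPv4 (20 B) + TCP (20 B)
print(compute_zone_offsets([[14], [20, 20]]))  # [[0], [14, 34]]
```

The editing engines would then operate on each zone's headers at these offsets before the edited headers are recombined into the egress packet.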

Adjusting buffer size for network interface controller
10608963 · 2020-03-31

Systems and methods for adjusting the receive buffer size for network interface controllers. An example method may comprise: maintaining, by a computer system, a moving window referencing a pre-defined number of incoming data packets; responsive to receiving a new data packet, shifting the moving window to include the new data packet while excluding a least recently received data packet; calculating a weighted average value of sizes of the incoming data packets referenced by the moving window, wherein a most recently received data packet is associated with a first weight that is higher than a second weight associated with a least recently received data packet; and adjusting, using the weighted average value, a size of a buffer allocated for incoming data packets.
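A sketch of the moving-window weighted average: newer packets get linearly higher weights, which is an assumption — the abstract only requires that newer packets weigh more than older ones.

```python
from collections import deque

class BufferSizer:
    """Size the receive buffer from a weighted moving window of packet sizes."""

    def __init__(self, window_size):
        # maxlen makes append() shift out the least recently received packet.
        self.window = deque(maxlen=window_size)

    def observe(self, packet_size):
        self.window.append(packet_size)

    def weighted_average(self):
        # Weight 1 for the oldest packet up to n for the newest.
        weights = range(1, len(self.window) + 1)
        total_w = sum(weights)
        return sum(w * s for w, s in zip(weights, self.window)) / total_w

sizer = BufferSizer(window_size=3)
for size in (100, 200, 600):
    sizer.observe(size)
print(sizer.weighted_average())  # (1*100 + 2*200 + 3*600) / 6 ≈ 383.33
```

Biasing toward recent packets lets the buffer size track a shift in traffic (e.g. from small control packets to large payloads) faster than a plain mean would.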

Technologies for buffering received network packet data

Technologies for buffering received network packet data include a compute device with a network interface controller (NIC) configured to determine a packet size of a network packet received by the NIC and identify a preferred buffer size between a small buffer and a large buffer. The NIC is further configured to select, from a descriptor, a buffer pointer based on the preferred buffer size, wherein the buffer pointer comprises one of a small buffer pointer corresponding to a first physical address in memory allocated to the small buffer or a large buffer pointer corresponding to a second physical address in memory allocated to the large buffer. Additionally, the NIC is configured to store at least a portion of the network packet in the memory based on the selected buffer pointer. Other embodiments are described herein.
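The pointer selection can be sketched as below; the selection rule (use the small buffer whenever the packet fits) and the sizes and addresses are illustrative assumptions.

```python
def pick_buffer(packet_size, small_buf_size, small_ptr, large_ptr):
    """Pick a buffer pointer from a descriptor's (small, large) pointer pair."""
    if packet_size <= small_buf_size:
        return small_ptr   # small buffer is sufficient
    return large_ptr       # fall back to the large buffer

# A descriptor carries both pointers; values here are illustrative.
print(pick_buffer(128, small_buf_size=256, small_ptr=0x1000, large_ptr=0x2000))   # 4096
print(pick_buffer(1500, small_buf_size=256, small_ptr=0x1000, large_ptr=0x2000))  # 8192
```

Carrying both pointers in the descriptor lets the NIC avoid wasting a large buffer on a small packet without a second trip to the driver.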

Buffer assignment balancing in a network device
10587536 · 2020-03-10

Techniques for improved handling of queues of data units are described, such as queues of buffered data units of differing types and/or sources within a switch or other network device. When the size of a queue surpasses the state entry threshold for a certain state, the queue is said to be in the certain state. While in the certain state, data units assigned to the queue may be handled differently in some respect, such as being marked or being dropped without further processing. The queue remains in this certain state until its size falls below the state release threshold for the state. The state release threshold is adjusted over time in, for example, a random or pseudo-random manner. Among other aspects, in some embodiments, this adjustment of the state release threshold addresses fairness issues that may arise with respect to the treatment of different types or sources of data units.
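The hysteresis between the fixed entry threshold and the randomly adjusted release threshold can be sketched as follows; the threshold values and the uniform re-draw are illustrative assumptions.

```python
import random

class QueueState:
    """A queue state with a fixed entry threshold and a release threshold
    that is re-randomized each time the queue leaves the state."""

    def __init__(self, entry, release_low, release_high, seed=None):
        self.entry = entry
        self.release_low, self.release_high = release_low, release_high
        self.rng = random.Random(seed)
        self.release = self.rng.uniform(release_low, release_high)
        self.in_state = False

    def update(self, queue_size):
        if not self.in_state and queue_size > self.entry:
            self.in_state = True   # e.g. start marking or dropping data units
        elif self.in_state and queue_size < self.release:
            self.in_state = False
            # Re-randomize the release threshold for fairness.
            self.release = self.rng.uniform(self.release_low, self.release_high)
        return self.in_state

qs = QueueState(entry=100, release_low=40, release_high=60, seed=1)
print(qs.update(150))  # True  — crossed the entry threshold
print(qs.update(80))   # True  — still above any possible release threshold
print(qs.update(30))   # False — fell below the release threshold; re-drawn
```

Randomizing the release point prevents one traffic source from repeatedly timing its bursts to exit the state just ahead of others, which is the fairness issue the abstract notes.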

Packet processing of streaming content in a communications network

Aspects of present disclosure include devices within a transmission path of streamed content forwarding received data packets of the stream to the next device or hop in the path prior to buffering the data packet at the device. In this method, typical buffering of the data stream may therefore occur at the destination device for presentation at a consuming device, while the devices along the transmission path may transmit a received packet before buffering. Further, devices along the path may also buffer the content stream after forwarding to fill subsequent requests for dropped data packets of the content stream. Also, in response to receiving the request for the content stream, a device may first transmit a portion of the contents of the gateway buffer to the requesting device to fill a respective buffer at the receiving device.
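The forward-before-buffer behavior can be sketched with a bounded retransmit buffer; the buffer limit and function names are illustrative assumptions.

```python
def handle_stream_packet(packet, forward, retransmit_buffer, max_buffered=100):
    """Forward a received stream packet downstream before buffering it."""
    forward(packet)                   # send to the next hop immediately
    retransmit_buffer.append(packet)  # then keep a copy for loss recovery
    if len(retransmit_buffer) > max_buffered:
        retransmit_buffer.pop(0)      # evict the oldest buffered packet

sent, buf = [], []
for seq in range(3):
    handle_stream_packet({"seq": seq}, sent.append, buf)
print(sent == buf)  # True — every packet was both forwarded and buffered
```

Forwarding first keeps per-hop latency minimal, while the retained copies let the device answer a later retransmission request for a dropped packet.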

Integrated network switch operation

A system, method, and computer program product for implementing network state processing is provided. The method includes detecting operational states for ports of a server Internet protocol (IP) data plane component of an integrated switching device. Each operational state is analyzed and matching and action rules associated with the operational states are generated with respect to data packets arriving at the ports. Data describing each operational state is stored within a port cache structure of a port. An incoming data packet is detected at a first port and the matching and action rules are distributed between port engines of the ports. The matching and action rules are executed with respect to the incoming data packet and the incoming data packet is transmitted to a destination port. Operational functionality of the integrated switching device is enabled with respect to execution of the incoming data packet at the destination port.
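The generation of matching and action rules from per-port operational states can be sketched as below; the state names and actions are illustrative assumptions, not the patent's rule format.

```python
def build_rules(port_states):
    """Generate simple match/action rules from per-port operational states."""
    rules = {}
    for port, state in port_states.items():
        # Match on destination port; the action depends on the port's state.
        rules[port] = "forward" if state == "up" else "drop"
    return rules

def apply_rule(rules, packet):
    """Execute the rule matching the packet's destination port."""
    return rules.get(packet["dst_port"], "drop")

rules = build_rules({1: "up", 2: "down"})
print(apply_rule(rules, {"dst_port": 1}))  # forward
print(apply_rule(rules, {"dst_port": 2}))  # drop
```

Caching the per-port state alongside its rule lets each port engine act on an incoming packet without consulting a central controller.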