H04L49/9084

QUEUE PROTECTION USING A SHARED GLOBAL MEMORY RESERVE
20230145162 · 2023-05-11

The subject technology relates to the management of a shared buffer memory in a network switch. Systems, methods, and machine-readable media are provided for receiving a data packet at a first network queue from among a plurality of network queues, determining whether a fill level of the first network queue in a shared buffer of the network switch exceeds a dynamic queue threshold, and, in the event that the dynamic queue threshold is exceeded, determining whether the fill level of the first network queue is less than a static queue minimum threshold.
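
The two-stage admission check described in the abstract can be sketched as follows. The function names, the alpha-scaling of the dynamic threshold by remaining shared-buffer space, and the parameter set are assumptions of this sketch, not details taken from the patent.

```python
def admit_packet(queue_fill, free_shared, alpha, static_min):
    """Two-stage queue-protection check (illustrative).

    queue_fill  -- current fill level of the arriving packet's queue
    free_shared -- remaining space in the shared buffer
    alpha       -- scaling factor for the dynamic threshold (assumed)
    static_min  -- static queue minimum backed by a global reserve
    """
    # Dynamic threshold scales with the remaining shared-buffer space,
    # so all queues shrink their allowance as the buffer fills.
    dynamic_threshold = alpha * free_shared
    if queue_fill < dynamic_threshold:
        return True
    # Over the dynamic threshold: a small queue may still be admitted
    # up to the static minimum, drawing on the shared global reserve.
    return queue_fill < static_min
```

A nearly empty queue is thus protected even when the shared buffer is congested, while a long queue over its dynamic allowance is dropped.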

Egress packet processing using a modified packet header separate from a stored payload

A network device includes a packet processor that: determines at least one egress port via which a received packet is to be transmitted by the network device; modifies one or more fields in a header of the packet to generate a modified header; determines, based at least in part on the modified header, whether the packet a) is to be transmitted or b) is to be discarded; and stores the modified header in a packet memory. In response to the determination that the packet is to be transmitted, a transmit processor of the network device: retrieves a payload of the packet from the packet memory; retrieves the modified header from the packet memory; generates a transmit packet at least by combining the payload of the packet with the modified header; and transmits the transmit packet via the determined at least one egress port of the network device.
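
The store-then-recombine flow above can be modeled in a few lines. The header fields (TTL, egress port), the discard rule, and the dictionary-based packet memory are illustrative assumptions; the point is that the modified header, not the original, is stored alongside the payload and recombined at transmit time.

```python
def modify_header(header: dict, egress_port: int) -> dict:
    # Illustrative rewrite: decrement TTL and record the egress port.
    new = dict(header)
    new["ttl"] = header.get("ttl", 64) - 1
    new["egress_port"] = egress_port
    return new

def process_and_transmit(packet, egress_port, packet_memory, tx_queue):
    """Store payload and modified header separately in packet memory,
    then recombine them on the transmit path."""
    hdr = modify_header(packet["header"], egress_port)
    if hdr["ttl"] <= 0:
        return False                       # discard decision made from the modified header
    packet_memory["payload"] = packet["payload"]
    packet_memory["header"] = hdr          # modified header stored, original discarded
    # Transmit-processor path: retrieve both parts and combine.
    tx_queue.append({"header": packet_memory["header"],
                     "payload": packet_memory["payload"]})
    return True
```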

Packet Processing Method and Apparatus, Communications Device, and Switching Circuit
20220329544 · 2022-10-13

A packet processing method includes: a first device receives a packet from a second device; the first device determines a first queue buffer used to store the packet, and determines a first upper limit value of the first queue buffer based on an available value of a first port buffer and an available value of a global buffer, where the global buffer includes at least one port buffer, the first port buffer is one of the at least one port buffer, the first port buffer includes at least one queue buffer, and the first queue buffer is one of the at least one queue buffer. The first device processes the packet based on the first upper limit value of the first queue buffer, an occupation value of the first queue buffer, and a size of the packet.
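
One plausible reading of the upper-limit computation is that the queue's cap is bounded by both the available space in its port buffer and the available space in the global buffer. The `min` combination and the scaling factors below are assumptions of this sketch, not the patent's formula.

```python
def queue_upper_limit(avail_port, avail_global, alpha_port=1.0, alpha_global=1.0):
    """First upper limit of a queue buffer, bounded by both the port
    buffer's and the global buffer's available space (assumed model)."""
    return min(alpha_port * avail_port, alpha_global * avail_global)

def process_packet(pkt_size, occupied, avail_port, avail_global):
    """Admit or drop based on the upper limit, the queue's current
    occupation value, and the packet size."""
    limit = queue_upper_limit(avail_port, avail_global)
    if occupied + pkt_size <= limit:
        return "enqueue"
    return "drop"
```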

Application and network aware adaptive compression for better QoE of latency sensitive applications

This disclosure is directed to embodiments of systems and methods for performing compression of data in a queue. A device intermediary between a client and a server may determine that the length of time to move existing data maintained in a queue out of the queue exceeds a predefined threshold. The device may identify, responsive to the determination, a first quantity of the existing data to undergo compression, and a second quantity of the existing data according to a compression ratio of the compression. The device may reserve, according to the second quantity, a first portion of the queue that maintained the first quantity of the existing data, to place compressed data obtained from applying the compression to the first quantity of the existing data. The device may place incoming data into the queue beyond the reserved first portion of the queue.
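
A minimal sketch of the reserve-and-compress step, assuming the queue is a byte buffer and using `zlib` as a stand-in compressor (the patent does not name one). The helper name and the use of the ratio to size the reserved region are this sketch's assumptions.

```python
import zlib

def compress_queue_head(queue: bytearray, first_quantity: int, ratio: float) -> int:
    """Compress the oldest `first_quantity` bytes in place and return the
    size of the region reserved for the compressed result."""
    # Second quantity: expected size of the compressed data, per the ratio.
    second_quantity = int(first_quantity * ratio)
    head = bytes(queue[:first_quantity])
    compressed = zlib.compress(head)
    # The first portion of the queue is reserved for the compressed data;
    # incoming data is placed beyond it.
    del queue[:first_quantity]
    queue[:0] = compressed
    return second_quantity
```

Latency-sensitive traffic benefits because the head of the queue shrinks, so newly arriving data waits behind fewer bytes.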

Packet fragment processing method and apparatus and system

This application provides a packet fragment processing method and apparatus and a system, to reduce occupancy of a storage resource of a network device. The method includes: receiving, by a network device, a first packet fragment set from first user equipment, where the first packet fragment set includes a plurality of packet fragments; and sending, by the network device, the first packet fragment set to a server.

Remote memory management
11379404 · 2022-07-05

Remote memory management of the memory of a consumer computer by a producer computer is described. A system is described that can include a first computer, and a second computer communicatively coupled to the first computer via a remote direct memory access enabled communication network. The first computer can include a first operating system. The second computer can include a second operating system and a second memory. The second memory can include a plurality of buffers. The first computer can remotely manage the plurality of buffers of the second memory of the second computer without involving either the first operating system or the second operating system. The managing can further include the first computer identifying available buffers amongst the plurality of buffers. Related methods, apparatuses, articles, non-transitory computer program products, and non-transitory computer-readable media are also within the scope of this disclosure.
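
A toy in-process model of the idea: the consumer exposes a buffer-state table that the producer reads and writes directly, standing in for one-sided RDMA reads and writes that bypass both operating systems. All class and field names here are illustrative.

```python
FREE, IN_USE = 0, 1

class ConsumerMemory:
    """The second computer's memory: data buffers plus a state table
    that the producer accesses 'remotely' (direct access here stands in
    for one-sided RDMA operations)."""
    def __init__(self, n_buffers):
        self.state = [FREE] * n_buffers
        self.buffers = [None] * n_buffers

class Producer:
    """The first computer, managing the consumer's buffers without
    either OS mediating the individual operations."""
    def identify_available(self, mem):
        # RDMA-read of the state table in a real system.
        return [i for i, s in enumerate(mem.state) if s == FREE]

    def claim(self, mem, data):
        # RDMA-write of the buffer and its state entry in a real system.
        for i in self.identify_available(mem):
            mem.state[i] = IN_USE
            mem.buffers[i] = data
            return i
        return None
```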

POWER THROTTLE FOR NETWORK SWITCHES

The disclosed systems and methods provide power throttling adapted for high-performance network switches. A method includes determining, for each of a plurality of measurement periods within a thermal average period, an energy usage estimate for a packet processing block configured to process ingress packets at a power-gated clock rate. The method includes determining, for each of the plurality of measurement periods, a target clock rate for the packet processing block based on the determined energy usage estimates to meet a target energy value that is averaged over the thermal average period. The method includes adjusting, for each of the plurality of measurement periods, the power-gated clock rate toward the target clock rate, wherein the adjusting causes the packet processing block to process the ingress packets at the adjusted power-gated clock rate.
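
The control loop above can be sketched in two steps: derive the clock rate the remaining periods must run at to hit the thermal-period average, then step the gated clock toward it. The linear energy-per-cycle model and the bounded adjustment step are assumptions of this sketch.

```python
def target_clock_rate(estimates, target_avg, energy_per_cycle, periods_total):
    """Rate the remaining measurement periods must run at so that energy
    averaged over the whole thermal period meets target_avg
    (assumes energy scales linearly with clock rate)."""
    remaining = periods_total - len(estimates)
    if remaining <= 0:
        return 0.0
    budget_left = target_avg * periods_total - sum(estimates)
    return max(budget_left / remaining / energy_per_cycle, 0.0)

def adjust_clock(current, target, step=0.1):
    """Move the power-gated clock rate toward the target, bounded per
    period so throughput changes gradually."""
    if current > target:
        return max(current - step, target)
    return min(current + step, target)
```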

Electronic control unit, abnormality determination program, and abnormality determination method
11444891 · 2022-09-13

An electronic control unit includes a receiver that receives a data frame transmitted at given transmission periods from a transmission source electronic control unit connected via a communication network, a buffer capable of storing the data frame, a writer that writes the data frame received by the receiver into the buffer, and an abnormality determiner that determines that the data frame is abnormal when the number of data frames written into the buffer exceeds a given threshold or when the data frame is written in excess of a capacity of the buffer.
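
The two abnormality conditions, too many frames in a period or a buffer-capacity overrun, can be sketched as below. The byte-based capacity accounting and the class layout are illustrative assumptions, not details from the patent.

```python
class FrameBuffer:
    """Toy model of the writer/abnormality-determiner pair: writes
    received frames and flags an abnormality when either threshold
    is exceeded."""
    def __init__(self, capacity_bytes, count_threshold):
        self.capacity = capacity_bytes
        self.count_threshold = count_threshold
        self.frames = []
        self.used = 0

    def write(self, frame: bytes) -> bool:
        """Write one received data frame; return True if the write
        makes the stream abnormal."""
        self.frames.append(frame)
        self.used += len(frame)
        too_many = len(self.frames) > self.count_threshold   # count check
        overflow = self.used > self.capacity                 # capacity check
        return too_many or overflow
```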

Hierarchical statistically multiplexed counters and a method thereof

Embodiments of the present invention relate to an architecture that uses hierarchical statistically multiplexed counters to extend counter life by orders of magnitude. Each level includes statistically multiplexed counters comprising P base counters and S subcounters, where the S subcounters are dynamically concatenated with the P base counters. When a row overflow occurs in a level, counters in the next level up are used to extend counter life. The hierarchical statistically multiplexed counters can also be used with an overflow FIFO to further extend counter life.
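
One level of the scheme can be sketched as follows: a subcounter from a shared pool is concatenated, as the high-order bits, onto whichever base counter overflows first, and only when a counter exhausts its extension does the increment carry into the next level up. The bit widths and first-come allocation policy are assumptions of this sketch.

```python
BASE_BITS, SUB_BITS = 4, 4   # assumed widths for illustration

class SMCLevel:
    """One level of statistically multiplexed counters: P base counters
    sharing a pool of S subcounters."""
    def __init__(self, p, s):
        self.base = [0] * p
        self.sub = {}          # base index -> dynamically attached subcounter
        self.free_subs = s

    def increment(self, i):
        """Increment counter i; return True if the increment overflows
        this level (i.e. must carry into the next level up)."""
        self.base[i] += 1
        if self.base[i] < (1 << BASE_BITS):
            return False
        self.base[i] = 0
        # Base counter wrapped: concatenate a subcounter if one is free.
        if i not in self.sub and self.free_subs > 0:
            self.sub[i] = 0
            self.free_subs -= 1
        if i in self.sub:
            self.sub[i] += 1
            if self.sub[i] < (1 << SUB_BITS):
                return False
            self.sub[i] = 0
        return True            # row overflow: next level extends the count

    def value(self, i):
        # Subcounter holds the high-order bits of the concatenated count.
        return (self.sub.get(i, 0) << BASE_BITS) | self.base[i]
```

Stacking levels, where a `True` return triggers an `increment` on the level above, gives the hierarchy the abstract describes.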