H04L49/9047

Allocation of shared reserve memory
20240195754 · 2024-06-13

A device includes ports, a packet processor, and a memory management circuit. The ports communicate packets over a network, and the packet processor processes the packets using queues. The memory management circuit maintains a shared buffer in a memory and adaptively allocates memory resources from the shared buffer to the queues. In addition to the shared buffer, it maintains in the memory a shared-reserve memory pool for use by the queues. It identifies, among the queues, a queue that requires additional memory resources and whose occupancy is (i) above a current value of a dynamic threshold, rendering the queue ineligible for additional allocation from the shared buffer, and (ii) no more than a defined margin above that value, rendering the queue eligible for allocation from the shared-reserve memory pool. It then allocates memory resources to the identified queue from the shared-reserve memory pool.
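
The allocation rule lends itself to a compact sketch. The Python snippet below is a minimal illustration of the dynamic-threshold-plus-margin eligibility test described in the abstract; the class and attribute names, the byte units, and the concrete threshold and margin values are assumptions for illustration, not taken from the patent.

```python
# Minimal sketch of the eligibility rule: a queue above the dynamic threshold
# is denied shared-buffer space, but within a margin above the threshold it
# may still draw from the shared-reserve pool. All names/values are assumed.

class SharedBufferAllocator:
    def __init__(self, dynamic_threshold: int, margin: int, shared_reserve: int):
        self.dynamic_threshold = dynamic_threshold  # current adaptive per-queue limit
        self.margin = margin                        # eligibility band above the threshold
        self.shared_reserve = shared_reserve        # bytes remaining in the reserve pool

    def allocate(self, queue_occupancy: int, request: int) -> str:
        """Decide which pool (if any) serves an allocation request."""
        if queue_occupancy <= self.dynamic_threshold:
            # Below the dynamic threshold: normal shared-buffer allocation.
            return "shared-buffer"
        if queue_occupancy <= self.dynamic_threshold + self.margin:
            # Above the threshold but within the margin: eligible for the
            # shared-reserve pool, subject to its remaining capacity.
            if request <= self.shared_reserve:
                self.shared_reserve -= request
                return "shared-reserve"
        return "denied"

allocator = SharedBufferAllocator(dynamic_threshold=1000, margin=200, shared_reserve=4096)
print(allocator.allocate(queue_occupancy=1100, request=64))  # -> "shared-reserve"
```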

Packet processing device and packet processing method

The packet processing apparatus includes a packet memory; a transmission processing unit that writes a plurality of packets to be transmitted into the packet memory, generating a combination packet in which the plurality of packets are concatenated; a line handling unit that sends packets to a communication line; and a combination packet transfer unit that DMA-transfers the combination packet from the packet memory to the line handling unit. The transmission processing unit writes, to a descriptor, the address in the packet memory of the beginning data of each individual packet within the combination packet. The line handling unit separates the DMA-transferred combination packet back into the plurality of packets and sends them to the communication line.
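
A short sketch may help clarify the descriptor mechanism: the combination packet is a simple concatenation, and the descriptor records each constituent packet's start address so the receiving side can split the transfer back apart. The function names and offset-based descriptor layout below are illustrative assumptions, not the patent's wire format.

```python
# Illustrative sketch: build a combination packet plus a descriptor of
# per-packet start offsets, then separate it back into individual packets.

def combine(packets: list[bytes]) -> tuple[bytes, list[int]]:
    """Concatenate packets; record each packet's start offset in a descriptor."""
    descriptor, offset, chunks = [], 0, []
    for pkt in packets:
        descriptor.append(offset)   # address of the packet's beginning data
        chunks.append(pkt)
        offset += len(pkt)
    return b"".join(chunks), descriptor

def separate(combo: bytes, descriptor: list[int]) -> list[bytes]:
    """Recover the individual packets from the offsets in the descriptor."""
    bounds = descriptor + [len(combo)]
    return [combo[bounds[i]:bounds[i + 1]] for i in range(len(descriptor))]

combo, desc = combine([b"hello", b"world!", b"x"])
assert separate(combo, desc) == [b"hello", b"world!", b"x"]
```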

Systems and methods for on the fly routing in the presence of errors

Systems and methods are provided for on the fly routing of data transmissions in the presence of errors. Switches can establish flow channels corresponding to flows in the network. In response to encountering a critical error on a network link along a transmission path, a switch can generate an error acknowledgement and transmit it, via the flow channels, to ingress ports upstream from the network link. The error acknowledgement indicates to those upstream ingress ports that the link where the critical error was encountered is a failed link. Each ingress port upstream from the failed link can then dynamically update the paths of its flows so that they are routed in a manner that avoids the failed link.
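
The rerouting step can be illustrated with a toy model: once a link is marked failed, each upstream ingress recomputes paths for the flows that traverse it, skipping the failed link. The BFS routing, topology, and names below are assumptions for illustration; the patent's switches operate via flow channels rather than a global graph view.

```python
# Toy sketch: flows whose current path traverses the failed link are
# re-routed with a BFS that skips that link.

from collections import deque

def bfs_path(adj, src, dst, failed_link):
    """Shortest hop path from src to dst avoiding the failed (undirected) link."""
    bad = {failed_link, failed_link[::-1]}
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in adj[node]:
            if (node, nxt) in bad or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
flows = {"flow1": ["A", "B", "D"]}   # current path uses link B-D
failed = ("B", "D")                  # critical error reported on this link

# Each upstream ingress re-routes flows that traverse the failed link.
for name, path in flows.items():
    hops = {(path[i], path[i + 1]) for i in range(len(path) - 1)}
    if failed in hops or failed[::-1] in hops:
        flows[name] = bfs_path(adj, path[0], path[-1], failed)

print(flows)  # {'flow1': ['A', 'C', 'D']}
```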

Traffic and load aware dynamic queue management

Some embodiments provide a queue management system that efficiently and dynamically manages multiple queues that process traffic to and from multiple virtual machines (VMs) executing on a host. This system manages the queues by (1) breaking up the queues into different priority pools, with the higher-priority pools reserved for particular types of traffic or VMs (e.g., traffic for VMs that need low latency), (2) dynamically adjusting the number of queues in each pool (i.e., dynamically adjusting the size of the pools), and (3) dynamically reassigning a VM to a new queue based on one or more optimization criteria (e.g., criteria relating to the underutilization or overutilization of the queue).
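
A minimal sketch of criterion (3), reassigning a VM off an overutilized queue, may make the idea concrete. The utilization metric, the 90% overload threshold, and all names are illustrative assumptions, not taken from the patent.

```python
# Sketch: queues grouped into a priority pool; a VM is moved to the
# least-loaded queue when its current queue exceeds the overload criterion.

class PriorityPool:
    """One pool of queues; higher-priority pools serve latency-sensitive VMs."""
    def __init__(self, queue_ids):
        self.util = {q: 0.0 for q in queue_ids}  # queue id -> utilization [0..1]

    def least_loaded(self):
        return min(self.util, key=self.util.get)

def rebalance(pool, assignments, overload=0.90):
    """Move a VM off any queue whose utilization exceeds the overload criterion."""
    for vm, q in list(assignments.items()):
        if pool.util[q] > overload:
            assignments[vm] = pool.least_loaded()

low_latency = PriorityPool(["q0", "q1"])   # a reserved, higher-priority pool
low_latency.util.update({"q0": 0.95, "q1": 0.20})
vms = {"vm-a": "q0", "vm-b": "q1"}
rebalance(low_latency, vms)
print(vms)  # {'vm-a': 'q1', 'vm-b': 'q1'}
```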

Network interface
10284672 · 2019-05-07

A low-latency network interface and complementary data management protocols are disclosed in this specification. The data management protocols reduce dedicated control exchanges between the network interface and a corresponding host computing system by consolidating control data with network data. The network interface may also facilitate port forwarding and data logging without an external network switch.
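
One way to picture the consolidation of control data with network data is a frame header that carries control fields alongside the payload, so no dedicated control exchange is needed. The struct layout below is purely an assumed example, not the disclosed wire format.

```python
# Sketch of piggy-backing control data on network data in one frame.
# The field layout (flags, length, sequence number) is an assumption.

import struct

HDR = struct.Struct("!HHI")  # control flags, payload length, sequence number

def pack(flags: int, seq: int, payload: bytes) -> bytes:
    return HDR.pack(flags, len(payload), seq) + payload

def unpack(frame: bytes):
    flags, length, seq = HDR.unpack_from(frame)
    return flags, seq, frame[HDR.size:HDR.size + length]

frame = pack(flags=0x1, seq=42, payload=b"data")
print(unpack(frame))  # (1, 42, b'data')
```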

Technologies for multi-core wireless network data transmission

Technologies for multi-core wireless data transmission include a computing device having a processor with multiple cores and a wireless network interface controller (NIC). The computing device establishes multiple transmission queues that are each associated with a processor core. A driver receives a packet for transmission from an application in the execution context of the application, determines a current processor core of the execution context, adds metadata to the packet indicative of the current core, and enqueues the packet in the transmission queue associated with the current core. The wireless NIC merges the packet with packet data from the other transmission queues, adds a sequence number to each packet, and transmits each packet. The wireless NIC may determine the current processor core based on the metadata of the packet and raise an interrupt to the current processor core in response to transmitting the packet. Other embodiments are described and claimed.
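
The per-core queueing flow can be sketched as a software toy model: the driver tags each packet with the submitting core and enqueues it on that core's queue, and the NIC merges the queues and stamps a global sequence number. All names below are assumptions; the real mechanism lives in a driver and the wireless NIC hardware.

```python
# Toy model: one transmit queue per core; the NIC merges queues and
# assigns each packet a sequence number before transmission.

import itertools
from collections import deque

NUM_CORES = 4
tx_queues = [deque() for _ in range(NUM_CORES)]
seq = itertools.count()

def driver_send(packet: bytes, current_core: int):
    """Enqueue on the queue of the core the application is executing on."""
    tx_queues[current_core].append({"core": current_core, "data": packet})

def nic_transmit():
    """Merge all per-core queues, add a sequence number, 'transmit' each packet."""
    for q in tx_queues:
        while q:
            pkt = q.popleft()
            pkt["seq"] = next(seq)
            # A real NIC would now raise an interrupt to core pkt["core"].
            print(pkt)

driver_send(b"payload-0", current_core=1)
driver_send(b"payload-1", current_core=3)
nic_transmit()
```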

High performance network I/O in a virtualized environment
10270715 · 2019-04-23

From received data packets intended for a target virtual machine of a virtualization system, a destination network address of the target virtual machine is determined, and a current write buffer pointer is identified that points to a buffer associated with the target virtual machine corresponding to the destination network address. If the write buffer pointer indicates that the associated buffer has sufficient available space to accept the data packets, the data packets are placed in the buffer at locations given by a newly calculated write buffer pointer value, and a wakeup byte data message is sent to a designated socket of the target virtual machine. The target virtual machine detects the wakeup byte data message at the designated socket and, in response, retrieves the data packets from the buffer in accordance with the new write buffer pointer value.
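
A minimal sketch of the buffer-and-wakeup handoff follows, assuming a flat per-VM buffer with a simple write pointer and using a socketpair to stand in for the designated socket (both assumptions for illustration, not the patented design).

```python
# Sketch: write packets at the buffer's write pointer, advance the pointer,
# then send a single "wakeup byte" to the target VM's designated socket.

import socket

BUF_SIZE = 4096
buffers = {"vm-10.0.0.5": bytearray(BUF_SIZE)}
write_ptr = {"vm-10.0.0.5": 0}
host_sock, vm_sock = socket.socketpair()  # stands in for the designated socket

def deliver(dest: str, packet: bytes) -> bool:
    wp = write_ptr[dest]
    if wp + len(packet) > BUF_SIZE:       # sufficient-space check
        return False
    buffers[dest][wp:wp + len(packet)] = packet
    write_ptr[dest] = wp + len(packet)    # new write buffer pointer value
    host_sock.send(b"\x01")               # wakeup byte to the target VM
    return True

deliver("vm-10.0.0.5", b"hello-vm")
vm_sock.recv(1)                           # VM detects the wakeup byte...
print(bytes(buffers["vm-10.0.0.5"][:write_ptr["vm-10.0.0.5"]]))  # ...and reads the data
```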

Buffer assignment balancing in a network device

Techniques are disclosed for better balancing operations across a set of buffers, such as when buffering packets in a network device or in other contexts. The techniques use an ordered list of buffers from which the next available buffer is selected for each operation, as needed. The buffers are first prioritized based on the state(s) of the relevant buffers and/or other factors. The resulting ordered list is then processed by re-ordering logic, which may, for example, randomly or pseudo-randomly swap the positions of various sets of buffers within the prioritized list. Among other effects, the re-ordering logic thus reduces buffer-skew problems arising from delayed propagation of buffer state information and other issues. In an embodiment, the re-ordering logic is divided into multiple levels of processing, with each level separately passing through the list, and each level may utilize differently configured re-ordering logic.
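
The two-stage selection can be sketched briefly: prioritize the buffers, then apply one or more levels of pseudo-random re-ordering. The fill-level priority metric, the adjacent-pair swaps, and the choice of two levels below are illustrative assumptions, not the disclosed logic.

```python
# Sketch: order buffers by a priority metric, then run multiple levels of
# pseudo-random re-ordering, each making a separate pass over the list.

import random

def prioritize(fill_levels: dict) -> list:
    """Least-full buffers first (a stand-in for the real state-based priority)."""
    return sorted(fill_levels, key=fill_levels.get)

def reorder_level(order: list, rng: random.Random) -> list:
    """One level of re-ordering: pseudo-randomly swap some adjacent pairs."""
    order = list(order)
    for i in range(0, len(order) - 1, 2):
        if rng.random() < 0.5:
            order[i], order[i + 1] = order[i + 1], order[i]
    return order

rng = random.Random(0)
order = prioritize({"b0": 70, "b1": 10, "b2": 40, "b3": 40})
for _ in range(2):            # two levels, each separately passing through the list
    order = reorder_level(order, rng)
print(order)                  # the next operation takes the first available buffer
```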