Patent classifications
H04L49/9047
Buffer allocation for parallel processing of data by message passing interface (MPI)
Examples described herein relate to receiving, at a network interface, an allocation of a first group of one or more buffers to store data to be processed by a Message Passing Interface (MPI). Based on a received packet including an indicator that permits the network interface to select a buffer for the received packet and store the received packet in the selected buffer, the network interface stores a portion of the received packet in a buffer of the first group of one or more buffers. The indicator can permit the network interface to select a buffer for the received packet and store the received packet in the selected buffer irrespective of a tag and sender associated with the received packet. In some examples, based on a second received packet including an indicator that does not permit storage in a buffer irrespective of a tag and source associated with the second received packet, the network interface stores a portion of the second received packet in a buffer of a second group of one or more buffers, wherein that buffer corresponds to the tag and source associated with the second received packet.
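The two storage paths in this abstract can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the names (`Packet`, `NetworkInterface`, `any_buffer_ok`) are assumptions, and real NIC buffer selection happens in hardware rather than Python lists.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    tag: int
    source: int
    any_buffer_ok: bool   # the "indicator" permitting NIC buffer selection
    payload: bytes

class NetworkInterface:
    def __init__(self):
        self.first_group = []     # buffers usable for any permitted packet
        self.second_group = {}    # (tag, source) -> buffers matched to that pair

    def allocate_first_group(self, buffers):
        self.first_group.extend(buffers)

    def allocate_second_group(self, tag, source, buffers):
        self.second_group[(tag, source)] = list(buffers)

    def store(self, pkt: Packet):
        if pkt.any_buffer_ok and self.first_group:
            # Indicator set: NIC may pick any buffer, irrespective of tag/source.
            buf = self.first_group.pop(0)
        else:
            # Indicator not set: buffer must match the packet's tag and source.
            buf = self.second_group[(pkt.tag, pkt.source)].pop(0)
        buf.append(pkt.payload)
        return buf
```

A packet carrying the indicator lands in whichever first-group buffer the interface chooses; one without it must go to the buffer registered for its exact tag/source pair.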
Allocation of shared reserve memory
A device includes ports, a packet processor, and a memory management circuit. The ports communicate packets over a network. The packet processor processes the packets using queues. The memory management circuit maintains a shared buffer in a memory and adaptively allocates memory resources from the shared buffer to the queues. In addition to the shared buffer, it maintains in the memory a shared-reserve memory pool for use by the queues. It identifies, among the queues, a queue that requires additional memory resources and whose occupancy is (i) above a current value of a dynamic threshold, rendering the queue ineligible for additional allocation from the shared buffer, and (ii) no more than a defined margin above that current value, rendering the queue eligible for allocation from the shared-reserve memory pool, and it allocates memory resources to the identified queue from the shared-reserve memory pool.
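The eligibility rule in this abstract reduces to a simple interval test on queue occupancy. A sketch, assuming illustrative names and a fixed grant size (the real device computes the dynamic threshold adaptively):

```python
def eligible_for_shared_reserve(occupancy, dynamic_threshold, margin):
    """A queue may draw from the shared-reserve pool only when its occupancy
    is above the dynamic threshold (so the shared buffer refuses it) but
    no more than `margin` above it."""
    return dynamic_threshold < occupancy <= dynamic_threshold + margin

def allocate(queues, shared_reserve, dynamic_threshold, margin, amount):
    """Grant `amount` units from the shared-reserve pool to each eligible queue,
    as long as the pool holds enough. `queues` maps queue name -> occupancy."""
    granted = {}
    for name, occupancy in queues.items():
        if (eligible_for_shared_reserve(occupancy, dynamic_threshold, margin)
                and shared_reserve >= amount):
            shared_reserve -= amount
            granted[name] = amount
    return granted, shared_reserve
```

A queue far below the threshold still gets shared-buffer allocations, and one far above it gets nothing; only the narrow band just over the threshold taps the reserve.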
NETWORK INTERFACE AND BUFFER CONTROL METHOD THEREOF
A network interface includes a processor, memory, and a cache between the processor and the memory. The processor secures a plurality of buffers for storing transfer data in the memory, and manages an allocation order of available buffers of the plurality of buffers. The processor returns a buffer that is released after data transfer to a position ahead of a predetermined position in the allocation order.
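The allocation-order policy above can be sketched as a free list where a released buffer rejoins near the head instead of the tail; a recently released buffer is likely still cache-resident, so handing it out again soon improves hit rates. The insertion position and names here are assumptions for illustration.

```python
from collections import deque

class BufferPool:
    INSERT_POS = 2   # released buffers rejoin ahead of this position (assumed value)

    def __init__(self, buffer_ids):
        self.order = deque(buffer_ids)   # allocation order of available buffers

    def allocate(self):
        # Hand out the buffer at the front of the allocation order.
        return self.order.popleft()

    def release(self, buf_id):
        # Reinsert near the head rather than appending at the tail, so a
        # recently used (likely cache-hot) buffer is reallocated soon.
        pos = min(self.INSERT_POS, len(self.order))
        self.order.insert(pos, buf_id)
```

Appending at the tail instead would cycle through every buffer before reuse, maximizing the chance the buffer's lines have been evicted from the cache.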
STORAGE SYSTEM AND METHOD FOR TRANSFERRING DATA THEREOF
A storage system includes a plurality of storage controllers. Each storage controller includes a processor, a memory, and a transfer device that processes control data for controlling an internal operation of the storage system, the control data being transmitted and received between the plurality of storage controllers. The processor accumulates the control data in the memory when a transfer request for the control data is generated, generates a write request for transmitting a plurality of the control data stored in the memory, and transmits the write request to another storage controller. Upon receiving the write request, the transfer device writes the plurality of the control data included in the write request to the memory.
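The accumulate-then-batch transfer described above can be sketched as follows; the batch size, message shape, and callback names are assumptions, not the patented protocol.

```python
class StorageController:
    def __init__(self, batch_size, send):
        self.batch_size = batch_size
        self.pending = []          # control data accumulated in memory
        self.send = send           # callable that transmits a write request

    def transfer_request(self, control_data):
        # Accumulate first; one write request later carries several items.
        self.pending.append(control_data)
        if len(self.pending) >= self.batch_size:
            self.send({"type": "write", "items": list(self.pending)})
            self.pending.clear()

def on_write_request(memory, request):
    # Peer's transfer device writes every control-data item in the
    # received write request into its memory.
    memory.extend(request["items"])
```

Batching several control-data items per write request amortizes the per-transfer overhead between controllers.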
PACKET PROCESSING METHOD AND NETWORK DEVICE
A packet processing method includes: allocating a portion of storage space in a memory circuit as a storage pool including first storage blocks; storing a packet in one of the first storage blocks when a data size of the packet is less than or equal to a predetermined value, and releasing the one of the first storage blocks to the storage pool after the packet is processed; requesting an increase in a number of the first storage blocks from a kernel when a number of remaining storage blocks in the first storage blocks that do not store data is less than a threshold value; and requesting a second storage block from the kernel to increase a data capacity of the storage pool to store the packet when the data size is greater than the predetermined value, and releasing the second storage block to the kernel after the packet is processed.
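The pool policy above can be sketched as follows. The block size, low-water mark, growth step, and `kernel_alloc` callback are illustrative assumptions standing in for real kernel allocation calls.

```python
class StoragePool:
    def __init__(self, kernel_alloc, n_blocks, small_max, low_water, grow_by):
        self.kernel_alloc = kernel_alloc      # requests a block from the kernel
        self.free_blocks = [kernel_alloc() for _ in range(n_blocks)]
        self.small_max = small_max            # the "predetermined value"
        self.low_water = low_water            # threshold of remaining free blocks
        self.grow_by = grow_by                # blocks requested when growing

    def store(self, packet: bytes):
        if len(packet) <= self.small_max:
            # Small packet: take a first storage block from the pool.
            block = self.free_blocks.pop()
            if len(self.free_blocks) < self.low_water:
                # Too few empty blocks remain: ask the kernel for more.
                self.free_blocks += [self.kernel_alloc() for _ in range(self.grow_by)]
            return block, True    # True: release back to the pool afterwards
        # Large packet: a dedicated second storage block straight from the
        # kernel, released back to the kernel after processing.
        return self.kernel_alloc(), False

    def release(self, block, to_pool):
        if to_pool:
            self.free_blocks.append(block)
        # else: the block is returned to the kernel (freed)
```

Small packets thus recycle pool blocks without kernel round trips, while oversized packets borrow kernel memory only transiently.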
SYSTEM AND METHOD FOR FACILITATING EFFICIENT LOAD BALANCING IN A NETWORK INTERFACE CONTROLLER (NIC)
A network interface controller (NIC) capable of efficient load balancing among the hardware engines is provided. The NIC can be equipped with a plurality of ordering control units (OCUs), a queue, a selection logic block, and an allocation logic block. The selection logic block can determine, from the plurality of OCUs, an OCU for a command from the queue, which can store one or more commands. The allocation logic block can then determine a selection setting for the OCU, select an egress queue for the command based on the selection setting, and send the command to the egress queue.
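The selection and allocation flow above can be sketched as two stages: pick an ordering control unit (OCU) for a queued command, then apply that OCU's selection setting to pick an egress queue. The concrete policies here (least-loaded OCU, round-robin egress) are assumptions for illustration; the abstract does not specify them.

```python
class LoadBalancer:
    def __init__(self, n_ocus, n_egress_queues):
        self.ocu_load = [0] * n_ocus                   # outstanding work per OCU
        self.egress = [[] for _ in range(n_egress_queues)]
        self.rr = 0                                    # round-robin cursor

    def dispatch(self, command):
        # Selection logic block: choose an OCU for the command
        # (least-loaded policy, assumed).
        ocu = min(range(len(self.ocu_load)), key=self.ocu_load.__getitem__)
        self.ocu_load[ocu] += 1
        # Allocation logic block: the selection setting picks an egress
        # queue (round-robin, assumed), then the command is enqueued there.
        q = self.rr % len(self.egress)
        self.rr += 1
        self.egress[q].append((ocu, command))
        return ocu, q
```

Separating OCU choice from egress-queue choice lets each stage balance a different resource: engine load on one side, queue depth on the other.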