Patent classifications
H04L12/883
High-performance garbage collection in a network device
Efficient garbage collection techniques for network packets and other units of data are described. Constituent portions of a data unit are stored in buffer entries spread out across multiple distinct banks. Linking data is generated and stored on a per-bank basis. The linking data defines, for each bank in which data for the data unit is stored, a chain of all entries in that bank that store data for the data unit. When the data unit is dropped or otherwise disposed of, a chain's head entry address may be placed in a garbage collection list for the corresponding bank. A garbage collector uses the linking data to gradually follow the chain of entries for the given bank, and frees each entry in the chain along the way. Optionally, certain addresses in the chain, including each chain's tail address, are immediately freed for the corresponding bank, without waiting to follow the chain.
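As a rough illustration of the per-bank chaining described above, the sketch below keeps one linking table and one garbage list per bank, frees each chain's tail immediately, and frees the remaining entries one step at a time. It is a minimal Python sketch; the names (Bank, drop_chain, collect_step) are illustrative, not from the patent.

    class Bank:
        def __init__(self, size):
            self.free = list(range(size))     # free entry addresses in this bank
            self.next_entry = [None] * size   # per-bank linking data
            self.garbage = []                 # (head, tail) chains awaiting freeing

        def link(self, prev_addr, addr):
            # Linking data, written as each new entry for a data unit is stored.
            self.next_entry[prev_addr] = addr

        def drop_chain(self, head, tail):
            # The chain's tail is freed immediately, without following the chain.
            self.free.append(tail)
            if head != tail:
                self.garbage.append((head, tail))

        def collect_step(self):
            # Free one entry per step, gradually following the chain.
            if not self.garbage:
                return
            head, tail = self.garbage.pop()
            nxt = self.next_entry[head]
            self.next_entry[head] = None
            self.free.append(head)
            if nxt is not None and nxt != tail:
                self.garbage.append((nxt, tail))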
GUARANTEED DELIVERY IN RECEIVER SIDE OVERCOMMITTED COMMUNICATION ADAPTERS
Aspects of the invention include receiving an input/output (I/O) request that includes a data stream from a host processor. The request is received at a network adapter of a storage controller that manages storage for the host processor. The storage controller includes a storage buffer, which stores data received from the host processor before migrating it to the storage, and a data cache. It is determined whether the storage buffer has enough free space to store the received data stream. Based at least in part on determining that the storage buffer has enough free space, the received data stream is stored by the network adapter in the storage buffer. Based at least in part on determining that the storage buffer does not have enough free space, the received data stream is stored in the data cache.
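A minimal sketch of that admission decision, assuming a fixed-capacity staging buffer and a fallback data cache; the class and method names are illustrative, not from the patent.

    class StorageController:
        def __init__(self, buffer_capacity):
            self.buffer_capacity = buffer_capacity
            self.storage_buffer = []   # data staged before migration to storage
            self.data_cache = []       # fallback when the buffer is overcommitted

        def receive(self, data_stream):
            used = sum(len(d) for d in self.storage_buffer)
            if used + len(data_stream) <= self.buffer_capacity:
                self.storage_buffer.append(data_stream)   # normal path
            else:
                self.data_cache.append(data_stream)       # overflow path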
Technologies for jitter-adaptive low-latency, low power data streaming between device components
Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item; in a local buffer producer mode it adds the data item to a local buffer, and in a remote buffer producer mode it adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer falls below a predetermined low threshold, the producer switches back to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
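The mode switching might look like the sketch below, assuming deque-backed buffers and illustrative threshold values (LOCAL_CAPACITY, LOW, HIGH); the catch-up policy shown is one plausible reading, not the claimed one.

    from collections import deque

    LOCAL_CAPACITY, LOW, HIGH = 8, 2, 6   # illustrative thresholds

    class Producer:
        def __init__(self, local, remote):
            self.local, self.remote = local, remote
            self.mode = "local"

        def produce(self, item):
            if self.mode == "local" and len(self.local) >= LOCAL_CAPACITY:
                self.mode = "remote"      # local buffer full: spill remotely
            elif self.mode == "remote" and len(self.remote) < LOW:
                self.mode = "local"       # remote drained below LOW
            (self.local if self.mode == "local" else self.remote).append(item)

    class Consumer:
        def __init__(self, local, remote):
            self.local, self.remote = local, remote
            self.mode = "local"

        def consume(self):
            if len(self.local) > HIGH:
                self.mode = "catch-up"    # drain the local backlog aggressively
            buf = self.remote if self.mode == "remote" else self.local
            if not buf and self.remote:
                self.mode, buf = "remote", self.remote   # assumption: follow data
            elif not buf and self.local:
                self.mode, buf = "local", self.local
            return buf.popleft() if buf else None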
Packet deadlines in a queue to control the age of information
Systems and methods are provided for controlling the age of information, or information freshness, for remote status updating through the use of a packet deadline. In an embodiment, a packet deadline determines how long a packet is allowed to wait in a queue at the source before being transmitted; if the deadline expires, the packet is dropped from the system and never transmitted. This mechanism controls the flow of packets into the system and can be used to control and optimize the age of information, thus providing a fresh status at a monitor.
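A minimal sketch of a source queue with packet deadlines, assuming each packet is timestamped on arrival and a single deadline applies to all packets; the function names are illustrative.

    import time
    from collections import deque

    def enqueue(queue, packet):
        queue.append((time.monotonic(), packet))   # record arrival at the source

    def transmit_next(queue, deadline_s):
        # Drop packets whose deadline has expired; transmit the first fresh one.
        now = time.monotonic()
        while queue:
            enqueued_at, packet = queue.popleft()
            if now - enqueued_at <= deadline_s:
                return packet              # still fresh: transmit
            # deadline expired: dropped from the system, never transmitted
        return None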
Packet Processing Method and Apparatus
A packet processing method includes: obtaining, at a Medium Access Control (MAC) layer, a first fragmented data frame included in a first data frame; buffering the first fragmented data frame into a first queue; obtaining, at the MAC layer, a second fragmented data frame included in a second data frame; buffering the second fragmented data frame into a second queue; sending the first fragmented data frame to a forwarding processing module; obtaining first forwarding information using the forwarding processing module; sending the second fragmented data frame to the forwarding processing module after sending the first fragmented data frame to the forwarding processing module; and obtaining second forwarding information.
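A minimal sketch of the two-queue pipeline, with lookup_forwarding_info standing in for the forwarding processing module; all names and the placeholder lookup are illustrative, not from the patent.

    from collections import deque

    queue1, queue2 = deque(), deque()   # one queue per data frame

    def lookup_forwarding_info(fragment):
        return {"egress_port": hash(fragment) % 4}   # placeholder lookup

    def process(frag_of_frame1, frag_of_frame2):
        queue1.append(frag_of_frame1)   # buffer the first frame's fragment
        queue2.append(frag_of_frame2)   # buffer the second frame's fragment
        first_info = lookup_forwarding_info(queue1.popleft())
        # The second fragment is sent only after the first one.
        second_info = lookup_forwarding_info(queue2.popleft())
        return first_info, second_info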
APPARATUS AND BUFFER CONTROL METHOD THEREOF IN WIRELESS COMMUNICATION SYSTEM
A 5G or pre-5G communication system for supporting a higher data rate than that of a 4G communication system, such as long term evolution (LTE), is provided. A method by an apparatus for controlling buffers in a wireless communication system comprises storing information related to a packet in at least one of a first buffer or a second buffer, transmitting data generated based on the packet, and, when an acknowledgement signal is received for the data, discarding the information.
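A minimal sketch of the acknowledgement-driven discard, assuming the packet-related information is keyed by sequence number and held in two buffers until the transmitted data is acknowledged; all names are illustrative.

    class BufferController:
        def __init__(self):
            self.first_buffer = {}    # packet-related information, by sequence no.
            self.second_buffer = {}

        def store(self, seq, info):
            # Information related to the packet may go in either or both buffers.
            self.first_buffer[seq] = info
            self.second_buffer[seq] = info

        def on_ack(self, seq):
            # Acknowledged: the buffered information can be discarded.
            self.first_buffer.pop(seq, None)
            self.second_buffer.pop(seq, None)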
Method and apparatus for using multiple linked memory lists
An apparatus and method for queuing data to a memory buffer. The method includes selecting a queue from a plurality of queues; receiving a token of data from the selected queue; and requesting, by a queue module, addresses and pointers from a buffer manager, the addresses being allocated by the buffer manager for storing the token of data. Subsequently, the buffer manager accesses a memory list, which comprises a plurality of linked memory lists for additional address allocation, and generates addresses and pointers to the allocated addresses in that list. The method further includes writing the pointers into the accessed memory list, where the pointers link together the allocated addresses; migrating to other memory lists for additional address allocations upon receipt of subsequent tokens of data from the queue; and generating additional pointers linking together the allocated addresses in the other memory lists.
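A minimal sketch of allocation that migrates across linked memory lists, assuming each list is an array of next-pointers and allocation moves to the next list once one is exhausted; the class and method names are illustrative.

    class LinkedMemoryLists:
        def __init__(self, num_lists, list_size):
            self.next_ptr = [[None] * list_size for _ in range(num_lists)]
            self.free = [list(range(list_size)) for _ in range(num_lists)]
            self.current = 0

        def alloc(self, prev=None):
            # Migrate to the next memory list once the current one is exhausted
            # (assumes total capacity is never exceeded).
            while not self.free[self.current]:
                self.current += 1
            addr = (self.current, self.free[self.current].pop())
            if prev is not None:
                prev_list, prev_entry = prev
                self.next_ptr[prev_list][prev_entry] = addr   # link the chain
            return addr

    lists = LinkedMemoryLists(num_lists=2, list_size=4)
    head = tail = lists.alloc()
    for _ in range(5):                  # spills into the second memory list
        tail = lists.alloc(prev=tail)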
TECHNOLOGIES FOR DYNAMIC MULTI-CORE NETWORK PACKET PROCESSING DISTRIBUTION
Technologies for dynamic multi-core packet processing distribution include a compute device having a distributor core, a direct memory access (DMA) engine, and multiple worker cores. The distributor core writes work data to a distribution buffer. The work data is associated with a packet processing operation. The distributor core may perform a work distribution operation to generate the work data. The work data may be written to a private cache of the distributor core. The distributor core programs the DMA engine to copy the work data from the distribution buffer to a shadow buffer. The DMA engine may copy the work data from one cache line of a shared cache to another cache line of the shared cache. The worker cores access the work data in the shadow buffer. The worker cores may perform the packet processing operation with the work data. Other embodiments are described and claimed.
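A minimal sketch of the distributor-to-worker hand-off, with dma_copy standing in for programming the DMA engine to copy work data from the distribution buffer to the shadow buffer; all names are illustrative, not from the patent.

    distribution_buffer = {}   # written by the distributor core
    shadow_buffer = {}         # read by the worker cores

    def distribute(packet_id, packet):
        # The distributor core performs the work distribution operation.
        distribution_buffer[packet_id] = {"op": "classify", "pkt": packet}

    def dma_copy(packet_id):
        # Models the DMA engine copying work data between the two buffers.
        shadow_buffer[packet_id] = distribution_buffer[packet_id]

    def worker(packet_id):
        # A worker core performs the packet processing operation.
        work = shadow_buffer[packet_id]
        return (work["op"], work["pkt"])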
Hierarchical packet buffer system
A switching device includes a primary memory and a traffic manager. The primary memory buffers packets for temporary storage. The traffic manager monitors consumed resources in the device related to the buffering of packets in the primary memory. When the consumed resources exceed a certain threshold, the traffic manager migrates packets buffered in the primary memory to a secondary memory. The traffic manager also controls dequeuing of the packets from both the primary memory and the secondary memory.
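A minimal sketch of that threshold-driven migration, assuming buffered bytes as the single monitored resource; the names and the FIFO drain order are illustrative.

    from collections import deque

    class TrafficManager:
        def __init__(self, threshold_bytes):
            self.threshold = threshold_bytes
            self.primary = deque()     # fast, smaller memory
            self.secondary = deque()   # larger, slower memory

        def enqueue(self, packet):
            self.primary.append(packet)
            # Migrate when consumed resources exceed the threshold.
            while sum(len(p) for p in self.primary) > self.threshold:
                self.secondary.append(self.primary.popleft())

        def dequeue(self):
            # Oldest packets were migrated first, so drain secondary first.
            if self.secondary:
                return self.secondary.popleft()
            return self.primary.popleft() if self.primary else None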
Multi-processor computing systems
A multi-processor computing system comprises a second processing device to generate outgoing data packets, the second processing device comprising a second network stack to save the outgoing data packets in a second outgoing packet buffer of the second processing device. A second network driver saves an outgoing buffer pointer, corresponding to the second outgoing packet buffer, in a second transmission ring of the second processing device. A first processing device comprises a first network driver to move the outgoing buffer pointer from the second transmission ring to a send ring in the first processing device. A network interface controller (NIC) obtains the outgoing buffer pointer from the send ring, copies the outgoing data packets from the second outgoing packet buffer to a transmission queue of the NIC, and transmits the outgoing data packets to another computing system over a communication network.
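A minimal sketch of the pointer hand-off between the two processing devices and the NIC, using deques as the rings and dictionaries as the packet buffers; all names are illustrative.

    from collections import deque

    second_outgoing_buffer = {}          # buffer id -> packet bytes
    second_transmission_ring = deque()   # written by the second network driver
    send_ring = deque()                  # owned by the first processing device
    nic_tx_queue = deque()

    def second_stack_send(buf_id, packet):
        second_outgoing_buffer[buf_id] = packet   # network stack saves the packet
        second_transmission_ring.append(buf_id)   # driver saves the pointer

    def first_driver_poll():
        # The first network driver moves pointers to the send ring.
        while second_transmission_ring:
            send_ring.append(second_transmission_ring.popleft())

    def nic_transmit():
        # The NIC copies packets to its transmission queue and sends them.
        sent = []
        while send_ring:
            buf_id = send_ring.popleft()
            nic_tx_queue.append(second_outgoing_buffer[buf_id])
            sent.append(nic_tx_queue.popleft())   # "transmit" over the network
        return sent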