Patent classifications
H04L49/9047
System and method for facilitating efficient address translation in a network interface controller (NIC)
A network interface controller (NIC) capable of facilitating efficient memory address translation is provided. The NIC can be equipped with a host interface, a cache, and an address translation unit (ATU). During operation, the ATU can determine an operating mode. The operating mode can indicate whether the ATU is to perform a memory address translation at the NIC. Upon receiving a memory access request, the ATU can determine whether a memory address indicated in the request is available in the cache. If the memory address is not available in the cache, the ATU can perform an operation on the memory address based on the operating mode.
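The lookup path described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the class, mode names, and the dictionary-based page table are all assumptions.

```python
# Hypothetical sketch of the ATU lookup path: check the translation cache first,
# and on a miss either translate locally or defer, depending on the operating mode.
from enum import Enum

class Mode(Enum):
    TRANSLATE_AT_NIC = 1   # ATU performs the translation locally
    FORWARD_TO_HOST = 2    # ATU defers translation to the host

class AddressTranslationUnit:
    def __init__(self, mode, page_table):
        self.mode = mode
        self.cache = {}               # translation cache: virtual -> physical
        self.page_table = page_table  # stand-in for the real translation structure

    def translate(self, vaddr):
        if vaddr in self.cache:             # cache hit: return the stored translation
            return self.cache[vaddr]
        if self.mode is Mode.TRANSLATE_AT_NIC:
            paddr = self.page_table[vaddr]  # miss: translate at the NIC
            self.cache[vaddr] = paddr       # fill the cache for future requests
            return paddr
        return None                         # miss in forwarding mode: defer to host

atu = AddressTranslationUnit(Mode.TRANSLATE_AT_NIC, {0x1000: 0x9000})
paddr = atu.translate(0x1000)  # miss, translated and cached
```

A second `translate(0x1000)` call would be served from the cache without consulting the page table.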
STREAMING PLATFORM FLOW AND ARCHITECTURE
A system includes a host system and an integrated circuit coupled to the host system through a communication interface. The integrated circuit is configured for hardware acceleration. The integrated circuit includes a direct memory access circuit coupled to the communication interface, a kernel circuit, and a stream traffic manager circuit coupled to the direct memory access circuit and the kernel circuit. The stream traffic manager circuit is configured to control data streams exchanged between the host system and the kernel circuit.
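The steering role of the stream traffic manager can be sketched in software as below. This is only a behavioral analogy for a hardware circuit; the class name, route identifiers, and callback-based kernels are illustrative assumptions.

```python
# Behavioral sketch: a traffic manager sits between the DMA engine (host side)
# and kernel circuits, steering each stream to its bound kernel.
class StreamTrafficManager:
    def __init__(self):
        self.kernels = {}              # route id -> kernel callback

    def bind(self, route, kernel_fn):
        # associate a stream route with a kernel circuit
        self.kernels[route] = kernel_fn

    def on_dma_data(self, route, payload):
        # steer host-to-kernel traffic to the bound kernel
        return self.kernels[route](payload)

stm = StreamTrafficManager()
stm.bind(0, lambda data: data.upper())   # stand-in for a hardware kernel
result = stm.on_dma_data(0, "abc")       # routed through the bound kernel
```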
De-duplicating remote procedure calls
A method, computer program product, and a computing system are provided for de-duplicating remote procedure calls at a client. In an implementation, the method may include generating a plurality of local pending remote procedure calls. The method may also include identifying a set of duplicate remote procedure calls among the plurality of remote procedure calls. The method may also include associating each remote procedure call within the set of duplicate remote procedure calls with one another. The method may also include executing a remote procedure call of the set of duplicate remote procedure calls. The method may further include providing the response for the executed remote procedure call to the other remote procedure calls of the set of duplicate remote procedure calls.
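The de-duplication flow can be sketched as below: group pending calls by a duplicate key, execute one call per group, and fan the single response out to every duplicate. The grouping key (method plus arguments) and all names are assumptions, not terms from the patent.

```python
# Illustrative client-side RPC de-duplication: one execution per duplicate group,
# with the response shared across all duplicates.
from collections import defaultdict

def dedupe_and_execute(pending_calls, execute):
    groups = defaultdict(list)
    for call in pending_calls:      # associate duplicates with one another
        key = (call["method"], tuple(sorted(call["args"].items())))
        groups[key].append(call)
    responses = {}
    for dupes in groups.values():
        result = execute(dupes[0])  # execute a single representative call
        for call in dupes:          # provide the same response to every duplicate
            responses[call["id"]] = result
    return responses

calls = [
    {"id": 1, "method": "stat", "args": {"path": "/a"}},
    {"id": 2, "method": "stat", "args": {"path": "/a"}},  # duplicate of id 1
    {"id": 3, "method": "stat", "args": {"path": "/b"}},
]
out = dedupe_and_execute(calls, lambda c: "stat(" + c["args"]["path"] + ")")
```

With these inputs, calls 1 and 2 share one execution; call 3 executes separately.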
RECEIVE BUFFER MANAGEMENT
Examples described herein can be used to allocate replacement receive buffers for use by a network interface, switch, or accelerator. Multiple refill queues can be used to receive identifications of available receive buffers. A refill processor can select one or more identifications from a refill queue and allocate the identifications to a buffer queue. None of the refill queues is locked from receiving identifications of available receive buffers, but only one of the refill queues is accessed at a time to provide identifications of available receive buffers. Identifications of available receive buffers from the buffer queue are provided to the network interface, switch, or accelerator to store content of received packets.
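A minimal sketch of this refill scheme is shown below: producers may post buffer IDs to any refill queue at any time, while the refill processor drains only one refill queue per pass into the single buffer queue. The round-robin selection and all names are assumptions for illustration.

```python
# Sketch: multiple unlocked refill queues feed one buffer queue; the refill
# processor accesses a single refill queue at a time (round-robin here).
from collections import deque
import itertools

class RefillProcessor:
    def __init__(self, n_refill_queues):
        self.refill_queues = [deque() for _ in range(n_refill_queues)]
        self.buffer_queue = deque()   # buffer IDs handed to the NIC/switch/accelerator
        self._rr = itertools.cycle(range(n_refill_queues))

    def post_buffer(self, queue_idx, buffer_id):
        # any producer may append at any time; no refill queue is locked
        self.refill_queues[queue_idx].append(buffer_id)

    def refill(self, batch=4):
        q = self.refill_queues[next(self._rr)]     # access one refill queue at a time
        for _ in range(min(batch, len(q))):
            self.buffer_queue.append(q.popleft())  # allocate IDs to the buffer queue

rp = RefillProcessor(2)
rp.post_buffer(0, 101); rp.post_buffer(0, 102); rp.post_buffer(1, 201)
rp.refill()   # drains refill queue 0 into the buffer queue
```

A subsequent `refill()` call would drain refill queue 1, moving ID 201 into the buffer queue.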
Packet processing system, method and device having reduced static power consumption
A buffer logic unit of a packet processing device includes a power gate controller. The buffer logic unit organizes and/or allocates available pages to packets for storing the packet data based on which of a plurality of separately accessible physical memories the pages are associated with. As a result, the power gate controller is able to more efficiently cut off power from one or more of the physical memories.
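One way to realize this memory-aware allocation is sketched below: pages are preferentially allocated from physical memories that are already powered, so the remaining memories can stay power-gated. The class, the powered/free bookkeeping, and the preference heuristic are all illustrative assumptions.

```python
# Sketch: allocate pages from as few physical memories as possible so that
# unused memories can remain power-gated.
class BufferLogicUnit:
    def __init__(self, n_memories, pages_per_memory):
        # free pages grouped by the physical memory they belong to
        self.free = {m: list(range(pages_per_memory)) for m in range(n_memories)}
        self.powered = {m: False for m in range(n_memories)}

    def allocate_page(self):
        # prefer memories that are already powered, keeping the rest gated off
        for m in sorted(self.free, key=lambda m: not self.powered[m]):
            if self.free[m]:
                self.powered[m] = True   # ensure this memory is powered on
                return (m, self.free[m].pop())
        return None   # no free pages in any memory

blu = BufferLogicUnit(n_memories=2, pages_per_memory=2)
blu.allocate_page(); blu.allocate_page()   # both pages come from memory 0
```

After both allocations, memory 1 has never been powered on, so the power gate controller can keep cutting its power.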
Disaster recovery of mobile data center via location-aware cloud caching
A method for copying first data stored at a primary data center to a secondary data center is provided. The method includes initiating a first replication task to copy the first data from the primary data center to the secondary data center. The method also includes receiving a first portion of the first data from the primary data center via a first access point, wherein a first bandwidth between the primary data center and the first access point is greater than a second bandwidth between the primary data center and the secondary data center. The method further includes storing the first portion of data in a first cache associated with the first access point. The method also includes transmitting the first portion of data from the first cache to the secondary data center. A system and non-transitory computer-readable medium are also provided.
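The two-hop flow above can be sketched simply: chunks travel over the faster link from the primary to the access-point cache, and the cache then forwards them to the secondary data center. The function shape and list-based stand-ins for the cache and data centers are assumptions.

```python
# Sketch of the location-aware replication flow: primary -> access-point cache
# (high-bandwidth hop), then cache -> secondary data center.
def replicate(chunks, ap_cache, secondary):
    for chunk in chunks:
        ap_cache.append(chunk)             # hop 1: store in the access-point cache
    while ap_cache:
        secondary.append(ap_cache.pop(0))  # hop 2: transmit cache -> secondary
    return secondary

secondary_dc = replicate(["blk0", "blk1"], ap_cache=[], secondary=[])
```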
Techniques for handling message queues
Techniques are disclosed relating to handling queues. A server-based platform, in some embodiments, accesses queue information that includes performance attributes for a plurality of queues storing one or more messages corresponding to one or more applications. In some embodiments, the platform assigns, based on the performance attributes, a corresponding set of the plurality of queues to each of a plurality of processing nodes of the platform. In some embodiments, the assigning of a corresponding set of queues to a given one of the plurality of processing nodes causes instantiation of: a first set of one or more dequeuing threads and a second set of one or more processing threads. The dequeuing threads may be executable to dequeue one or more messages stored in the corresponding set of queues. The processing threads may be executable to perform one or more tasks specified in the dequeued one or more messages.
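The assignment step can be sketched as below. This covers only the attribute-based partitioning of queues across processing nodes, not the dequeuing and processing threads; the use of queue depth as the performance attribute and the greedy balancing heuristic are assumptions.

```python
# Sketch: partition queues across processing nodes by a performance attribute
# (here, queue depth), balancing total load greedily.
def assign_queues(queues, n_nodes):
    nodes = [{"queues": [], "load": 0} for _ in range(n_nodes)]
    # place the deepest queues first, each on the least-loaded node so far
    for q in sorted(queues, key=lambda q: -q["depth"]):
        target = min(nodes, key=lambda n: n["load"])
        target["queues"].append(q["name"])
        target["load"] += q["depth"]
    return nodes

queues = [
    {"name": "q1", "depth": 10},
    {"name": "q2", "depth": 7},
    {"name": "q3", "depth": 3},
    {"name": "q4", "depth": 2},
]
nodes = assign_queues(queues, n_nodes=2)   # loads balance to 12 and 10
```

Each node would then instantiate its dequeuing threads against its assigned set of queues and hand dequeued messages to its processing threads.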
Multiplexing method for scheduled frames in an ethernet switch
The method comprises the steps of: a) providing a plurality of memory buffers, associated with respective priority indexes, each buffer comprising one queue of frames having the same priority index; b) sorting the received frames into a chosen buffer according to their priority index; c) in each buffer, sorting the frames according to their respective timestamps, ordering the queue of frames in each buffer from the earliest received frame at the top of the queue to the latest received frame at the bottom of the queue; and d) feeding the transmitting ports with each frame or block of frames to transmit, in an order determined by the priority index of the frame and by the position of the frame or block of frames in the queue associated with that priority index.
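Steps a) through d) can be sketched as follows. The field names and the convention that a lower index means higher priority are assumptions for illustration.

```python
# Sketch of the multiplexing method: per-priority buffers (step a/b), each
# ordered by timestamp (step c), drained strictly in priority order (step d).
from collections import defaultdict

def multiplex(frames):
    buffers = defaultdict(list)
    for f in frames:                  # step b: sort frames into buffers by priority
        buffers[f["priority"]].append(f)
    for prio in buffers:              # step c: order each queue by timestamp
        buffers[prio].sort(key=lambda f: f["timestamp"])
    out = []
    for prio in sorted(buffers):      # step d: feed ports in priority order
        out.extend(buffers[prio])
    return out

frames = [
    {"id": "A", "priority": 1, "timestamp": 5},
    {"id": "B", "priority": 0, "timestamp": 9},
    {"id": "C", "priority": 0, "timestamp": 2},
]
order = [f["id"] for f in multiplex(frames)]
```

Here the priority-0 frames go out first, earliest timestamp first, followed by the priority-1 frame.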
Methods and apparatus for memory resource management in a network device
Packets that are to be transmitted via a plurality of egress interfaces of a network device are stored in a memory of the network device. The packets are stored in a plurality of queues that respectively correspond to the egress interfaces. The network device determines a set of queues, from among the plurality of queues, for which packet dropping is enabled. The network device determines whether a utilization level of the memory meets a threshold. In response to determining that the utilization level of the memory meets the threshold: the network device randomly or pseudorandomly selects a first queue from the set of queues for which packet dropping is enabled, dequeues a first packet from the selected first queue, and deletes the first packet that was dequeued from the selected first queue.
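The drop policy can be sketched as below: once memory utilization meets the threshold, a drop-enabled queue is selected (pseudo)randomly and its head packet is dequeued and deleted. The function shape, the utilization model, and the names are assumptions.

```python
# Sketch: under memory pressure, randomly pick a drop-enabled queue and delete
# one packet from it.
import random
from collections import deque

def maybe_drop(queues, drop_enabled, used, capacity, threshold=0.9, rng=random):
    if used / capacity < threshold:
        return None                   # utilization below threshold: nothing to drop
    candidates = [q for q in drop_enabled if queues[q]]  # drop-enabled, non-empty
    if not candidates:
        return None
    victim = rng.choice(candidates)   # random/pseudorandom queue selection
    return queues[victim].popleft()   # dequeue and delete the packet

queues = {"eth0": deque(["p1", "p2"]), "eth1": deque(["p3"])}
dropped = maybe_drop(queues, drop_enabled={"eth0"}, used=95, capacity=100)
```

With only `eth0` drop-enabled, its head packet is deleted while `eth1`, which has dropping disabled, is never a candidate.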