Patent classifications
H04L12/879
Segmentation and reassembly of network packets for switched fabric networks
Reassembly of member cells into a packet comprises: receiving an incoming member cell of a packet from a switching fabric, wherein each member cell comprises a segment of the packet and a header; generating a reassembly key using selected information from the incoming member cell header, wherein the selected information is the same for all member cells of the packet; checking a reassembly table in a content addressable memory to find an entry whose logic key matches the reassembly key; and using a content index in the found entry, together with the sequence number of the incoming member cell within the packet, to determine a location offset in a reassembly buffer area, where the incoming member cell is stored for reassembly of the packet.
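The reassembly flow above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: a plain dictionary stands in for the content-addressable memory, and the header fields, fixed segment size, and per-packet cell count are all assumptions.

```python
SEGMENT_SIZE = 64  # assumed fixed size of each cell's packet segment

class Reassembler:
    def __init__(self):
        self.table = {}        # stands in for the CAM: reassembly key -> context index
        self.buffers = {}      # context index -> reassembly buffer area
        self.next_context = 0

    def receive_cell(self, header, payload, seq_num, total_cells):
        # Generate the reassembly key from header fields that are
        # identical for every member cell of the same packet.
        key = (header["source"], header["packet_id"])
        if key not in self.table:
            # No matching entry: allocate a context index and a buffer area.
            self.table[key] = self.next_context
            self.buffers[self.next_context] = bytearray(SEGMENT_SIZE * total_cells)
            self.next_context += 1
        ctx = self.table[key]
        # The context index plus the cell's sequence number determine
        # the location offset within the reassembly buffer area.
        offset = seq_num * SEGMENT_SIZE
        buf = self.buffers[ctx]
        buf[offset:offset + len(payload)] = payload
        return ctx, offset
```

Because the offset is derived from the sequence number rather than arrival order, cells may arrive from the fabric out of order and still land in the right place.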
Parallel data streaming between cloud-based applications and massively parallel systems
Embodiments relate to parallel data streaming between a first computer system and a second computer system. Aspects include transmitting a request to establish an authenticated connection between a processing job on the first computer system and a process on the second computer system, and transmitting a query to the process on the second computer system over the authenticated connection. Aspects further include creating one or more tasks on the first computer system configured to receive data from the second computer system in parallel, and reading, by the processing job on the first computer system, the data received by the one or more tasks.
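The task-fan-out aspect can be sketched with a thread pool: the processing job spawns one task per partition, the tasks receive their partitions in parallel, and the job then reads the results. This is an illustrative sketch only; the `source` dictionary and per-partition layout are assumptions standing in for the remote system and query results.

```python
from concurrent.futures import ThreadPoolExecutor

def receive_partition(source, partition_id):
    # Stand-in for one task pulling its partition of the query result
    # from the second computer system over the authenticated connection.
    return source[partition_id]

def parallel_read(source, num_tasks):
    # The processing job creates one task per partition; the tasks
    # receive data in parallel and the job reads what they received.
    with ThreadPoolExecutor(max_workers=num_tasks) as pool:
        futures = [pool.submit(receive_partition, source, i)
                   for i in range(num_tasks)]
        return [f.result() for f in futures]
```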
SYSTEMS AND METHODS FOR VIRTIO BASED OPTIMIZATION OF DATA PACKET PATHS BETWEEN A VIRTUAL MACHINE AND A NETWORK DEVICE FOR LIVE VIRTUAL MACHINE MIGRATION
A new approach is proposed that contemplates systems and methods to support virtio-based data packet path optimization for live virtual machine (VM) migration on Linux. Specifically, the data packet receiving (Rx) path and the data packet transmitting (Tx) path between a VM running on a host and a virtual function (VF) driver, which is configured to interact with a physical network device of the host to receive and transmit communications dedicated to the VM, are both optimized with a zero-copy solution to reduce packet-processing overhead. Under the proposed approach, the data packet Tx path utilizes a zero-copy mechanism provided by the Linux kernel to avoid copying from the virtio memory rings/Tx vrings in the memory of the VM. The data packet Rx path also implements a zero-copy solution, which allows a virtio device of the VM to communicate directly with the VF driver of the network device while bypassing the macvtap driver entirely on the data packet Rx path.
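The copy-versus-zero-copy distinction at the heart of this abstract can be illustrated in miniature with Python's buffer protocol. This is only an analogy, not the kernel mechanism: a `bytearray` stands in for a VM's vring memory, and a `memoryview` stands in for handing the device a reference to that memory instead of a duplicate.

```python
def tx_with_copy(vring_buf):
    # Conventional path: the payload is copied out of the vring,
    # so later changes to the vring are not visible in the copy.
    return bytes(vring_buf)

def tx_zero_copy(vring_buf):
    # Zero-copy path: hand out a view of the same underlying memory
    # (analogous to avoiding the copy out of the VM's Tx vrings).
    return memoryview(vring_buf)
```

Mutating the backing buffer after the fact shows the difference: the copy is frozen, while the view tracks the underlying memory.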
Low latency device interconnect using remote memory access with segmented queues
A writing application on a computing device can reference a tail pointer to write messages to message buffers that a peer-to-peer data link replicates in memory of another computing device. The message buffers are divided into at least two queue segments, where each segment has several buffers. Messages are read from the buffers by a reading application on one of the computing devices using an advancing head pointer by reading a message from a next message buffer when determining that the next message buffer has been newly written. The tail pointer is advanced from one message buffer to another within a same queue segment after writing messages. The tail pointer is advanced from a message buffer of a current queue segment to a message buffer of a next queue segment when determining that the head pointer does not indicate any of the buffers of the next queue segment.
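The segmented head/tail discipline described above can be sketched as a single-process ring buffer. This is a simplified model under stated assumptions: the replication over the peer-to-peer link is elided, `None` marks an unwritten buffer, and two segments of four buffers are arbitrary sizes.

```python
class SegmentedQueue:
    def __init__(self, num_segments=2, bufs_per_segment=4):
        self.n = num_segments * bufs_per_segment
        self.seg = bufs_per_segment
        self.bufs = [None] * self.n
        self.head = 0   # reader's advancing head pointer
        self.tail = 0   # writer's tail pointer

    def _segment(self, idx):
        return (idx % self.n) // self.seg

    def try_write(self, msg):
        nxt = (self.tail + 1) % self.n
        # Within a segment the tail advances freely; it crosses into the
        # next segment only when the head points at none of its buffers.
        if self._segment(nxt) != self._segment(self.tail):
            if self._segment(self.head) == self._segment(nxt):
                return False
        self.bufs[self.tail] = msg
        self.tail = nxt
        return True

    def try_read(self):
        # Read only when the next message buffer has been newly written.
        if self.bufs[self.head] is None:
            return None
        msg = self.bufs[self.head]
        self.bufs[self.head] = None
        self.head = (self.head + 1) % self.n
        return msg
```

Checking the head pointer only at segment boundaries, rather than on every write, is what keeps the writer's fast path cheap.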
Data transfer, synchronising applications, and low latency networks
Data transfer, synchronizing applications, and low latency networks are disclosed. An example method includes maintaining a first buffer in a first computing device, the first buffer to receive discrete units of data from a second computing device; maintaining a second buffer in the first computing device, the second buffer to store size data identifying a size of respective ones of the discrete units of data received from the second computing device; and reading from the first buffer according to a first value of a first pointer and a corresponding one of the sizes stored in the second buffer.
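The two-buffer arrangement can be sketched as follows; this is a minimal single-device model (the actual transfer from the second computing device is elided), with the byte layout an illustrative assumption.

```python
class DualBuffer:
    """First buffer holds the raw bytes of the discrete data units back
    to back; second buffer holds the size of each unit in arrival order."""

    def __init__(self):
        self.data = bytearray()   # first buffer: concatenated units
        self.sizes = []           # second buffer: one size per unit
        self.read_ptr = 0         # first pointer: byte offset into data
        self.size_idx = 0         # index of the next size entry

    def write(self, unit: bytes):
        self.data += unit
        self.sizes.append(len(unit))

    def read(self):
        # Read from the first buffer according to the pointer's value
        # and the corresponding size stored in the second buffer.
        if self.size_idx >= len(self.sizes):
            return None
        size = self.sizes[self.size_idx]
        unit = bytes(self.data[self.read_ptr:self.read_ptr + size])
        self.read_ptr += size
        self.size_idx += 1
        return unit
```

Keeping the sizes in a separate buffer lets the reader recover unit boundaries from a plain byte stream without inline framing.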
Packet buffer comprising a data section and a data description section
The present invention relates to a data buffer memory (104) and a method for storing data in a data communications network, and to a data buffer system (100) comprising such a data buffer memory. The data buffer memory comprises a data section (104a′) comprising a number of memory pages (104a), and a package descriptor section (104b′) comprising a number of package descriptors (104b), wherein at least one queue (103) of packets is stored in the data section (104a′) as an ordered set of packages, and wherein a package is an ordered set of packets.
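The split between a paged data section and a descriptor section can be sketched like this; the page size, page-allocation policy, and one-packet-per-page layout are simplifying assumptions for illustration.

```python
PAGE_SIZE = 128  # assumed size of one memory page

class PacketBuffer:
    def __init__(self, num_pages=8):
        # Data section: a number of fixed-size memory pages.
        self.pages = [bytearray(PAGE_SIZE) for _ in range(num_pages)]
        self.free_pages = list(range(num_pages))
        # Descriptor section: one descriptor per stored package,
        # recording which pages hold its packets and in what order.
        self.descriptors = []

    def store_package(self, packets):
        """Store an ordered set of packets as one package."""
        used = []
        for pkt in packets:
            page = self.free_pages.pop(0)
            self.pages[page][:len(pkt)] = pkt
            used.append((page, len(pkt)))
        self.descriptors.append(used)
        return len(self.descriptors) - 1

    def load_package(self, desc_id):
        # Walk the descriptor to recover the packets in order.
        return [bytes(self.pages[p][:n]) for p, n in self.descriptors[desc_id]]
```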
DISTRIBUTED HIERARCHICAL CACHE MANAGEMENT SYSTEM AND METHOD
A method for managing a cache memory network according to a distributed hierarchical cache model is disclosed. The distributed hierarchical cache model includes a plurality of cache levels each corresponding to a delivery rank for delivering data content to users of the cache memory network. The method includes dividing at least one cache memory of the cache memory network into a plurality of cache segments, mapping each of the cache segments to a cache level of the distributed hierarchical cache model, and performing cache management operations over the cache memory network according to the distributed hierarchical cache model.
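The segment-to-level mapping can be sketched in Python. This is a toy model under stated assumptions: one cache memory, equal-sized segments, one segment per level, FIFO eviction within a segment, and lookup from the highest delivery rank downward.

```python
class HierarchicalCache:
    def __init__(self, capacity, levels):
        # Divide the cache memory into equal segments and map each
        # segment to one cache level (delivery rank) of the model.
        self.seg_size = capacity // levels
        self.segments = [dict() for _ in range(levels)]  # level -> segment

    def put(self, key, value, level):
        seg = self.segments[level]
        if len(seg) >= self.seg_size:
            seg.pop(next(iter(seg)))  # evict the oldest entry in that segment
        seg[key] = value

    def get(self, key):
        # Search from the highest delivery rank (level 0) downward.
        for level, seg in enumerate(self.segments):
            if key in seg:
                return seg[key], level
        return None, None
```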
Reception according to a data transfer protocol of data directed to any of a plurality of destination entities
A data processing system arranged for receiving over a network, according to a data transfer protocol, data directed to any of a plurality of destination identities, the data processing system comprising: data storage for storing data received over the network; a first processing arrangement for performing processing in accordance with the data transfer protocol on received data in the data storage, for making the received data available to respective destination identities; and a response former arranged for receiving a message requesting a response indicating the availability of received data to each of a group of destination identities, and forming such a response in dependence on receiving the said message.
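The response former's job, reduced to its essentials, resembles a poll-style availability query over a group of identities. The sketch below is an assumption-laden illustration: `available_data` maps each destination identity to its received-but-unread data in the data storage.

```python
def form_response(available_data, group):
    """Form a response indicating, for each destination identity in the
    requested group, whether received data is available to it."""
    return {dest: bool(available_data.get(dest))
            for dest in group}
```

Answering for the whole group with one message avoids a round trip per destination identity.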
METHOD AND APPARATUS FOR PROCESSING DATA PACKETS, DEVICE, AND STORAGE MEDIUM
Provided are a method and an apparatus for processing data packets, a device, and a storage medium that relate to the field of communications. The method includes: receiving multiple data packets of an identical service transmitted in multiple frequency bands, where each of the data packets carries arrangement indication information; and sorting the data packets based on the arrangement indication information carried in each of the data packets.
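The sorting step reduces to ordering packets by the arrangement indication each one carries, regardless of which frequency band delivered them. The dictionary field names below are illustrative assumptions.

```python
def reorder_packets(packets):
    """Sort data packets of one service, received over multiple
    frequency bands, by the arrangement indication each one carries."""
    return [p["payload"]
            for p in sorted(packets, key=lambda p: p["arrangement"])]
```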
AUGMENTING DATA PLANE FUNCTIONALITY WITH FIELD PROGRAMMABLE INTEGRATED CIRCUITS
Some embodiments use one or more FPGAs and external memories associated with the FPGAs to implement large, hash-addressable tables for a data plane (DP) circuit. These embodiments configure at least one message processing stage of the DP circuit to store (1) a first plurality of records for matching with a set of data messages received by the DP circuit, and (2) a redirection record redirecting data messages that do not match the first plurality of records to a DP egress port associated with the memory circuit. These embodiments configure an external memory circuit to store a larger, second set of records for matching with redirected data messages received through the DP egress port associated with the memory circuit. This external memory circuit is a hash-addressable memory in some embodiments. To determine whether a redirected data message matches a record in the second set of records, the method of some embodiments configures an FPGA associated with the hash-addressable external memory to use a collision-free hash process to generate a collision-free hash address value from a set of attributes of the data message. This hash address value specifies the address in the external memory of the record in the second set of records to compare with the redirected data message.
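The two-tier lookup can be sketched as follows. This is a software analogy, not the FPGA design: a dict plays the DP stage's first record set, a salted trial search stands in for the collision-free hash construction, and a plain list plays the hash-addressable external memory.

```python
def find_collision_free_salt(keys, table_size):
    # Search for a salt that makes the hash collision-free over the
    # given key set (a stand-in for the FPGA's perfect-hash process).
    for salt in range(10000):
        addrs = {hash((salt, k)) % table_size for k in keys}
        if len(addrs) == len(keys):
            return salt
    raise RuntimeError("no collision-free salt found")

class TwoTierTable:
    def __init__(self, fast_records, slow_records, slow_size=64):
        self.fast = dict(fast_records)        # first, smaller record set
        self.salt = find_collision_free_salt(slow_records.keys(), slow_size)
        self.slow = [None] * slow_size        # external hash-addressable memory
        for k, v in slow_records.items():
            self.slow[hash((self.salt, k)) % slow_size] = (k, v)

    def lookup(self, key):
        if key in self.fast:                  # match in the DP stage
            return self.fast[key]
        # Redirection record: send the message to the external memory,
        # address it by the collision-free hash, then compare the stored
        # record's key against the message before accepting the match.
        entry = self.slow[hash((self.salt, key)) % len(self.slow)]
        if entry and entry[0] == key:
            return entry[1]
        return None
```

Because the hash is collision-free over the stored key set, each stored record has a unique address; the final key comparison only guards against messages whose key is absent from both tiers.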