Reducing interrupts using buffering for data processing
Techniques are described herein that are capable of reducing interrupts using buffering for data processing. In a first example, information is received at an operating system from an application. The information indicates multiple buffers, including a triggering buffer that triggers an interrupt from hardware. Portions of the data to be processed are stored in the respective buffers. A schedule for processing the buffers is provided to the hardware. The schedule indicates that the interrupt is to be delayed until the triggering buffer is processed by the hardware. In a second example, a network interface controller is configured to provide one interrupt to the operating system for each of multiple subsets of network packets that it processes. Each subset includes more than one network packet. The network packets are associated with a common network socket.
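As a concrete illustration of the first example, the Python sketch below models a hardware processing loop that raises a single interrupt only when the designated triggering buffer is reached. The names (BufferSchedule, simulated_hardware) are hypothetical and not drawn from the patent.

class BufferSchedule:
    """Hypothetical schedule: an ordered list of buffers plus the index of the triggering buffer."""
    def __init__(self, buffers, triggering_index):
        self.buffers = buffers
        self.triggering_index = triggering_index

def simulated_hardware(schedule, raise_interrupt):
    # Process every buffer, but delay the interrupt until the triggering buffer.
    for i, buf in enumerate(schedule.buffers):
        _ = buf.upper()  # stand-in for real per-buffer processing
        if i == schedule.triggering_index:
            raise_interrupt(i)

interrupts = []
schedule = BufferSchedule(["chunk-a", "chunk-b", "chunk-c"], triggering_index=2)
simulated_hardware(schedule, interrupts.append)
assert interrupts == [2]  # one interrupt for three buffers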
FAT TREE ADAPTIVE ROUTING
Systems and methods are provided for efficiently routing data through a network having a plurality of switches configured in a fat-tree topology, including: receiving a data transmission comprising a plurality of packets at an edge port of the network, and routing the data transmission through the network with routing decisions based upon a routing table, wherein the routing table includes entries to effect routing decisions based upon a destination-based hash function.
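A minimal sketch of destination-based hash routing, assuming a set of equal-cost fat-tree uplinks per switch; the hash choice and function names are illustrative, not the patent's routing-table format.

import hashlib

def destination_hash(destination):
    # Stable digest of the destination (Python's built-in hash() is randomized per run).
    return int.from_bytes(hashlib.sha256(destination.encode()).digest()[:4], "big")

def select_uplink(destination, uplinks):
    # Every packet addressed to the same destination picks the same uplink.
    return uplinks[destination_hash(destination) % len(uplinks)]

uplinks = ["spine-0", "spine-1", "spine-2", "spine-3"]
chosen = {select_uplink("node-17", uplinks) for _ in range(100)}
assert len(chosen) == 1  # the whole transmission follows one path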
DRAGONFLY ROUTING WITH INCOMPLETE GROUP CONNECTIVITY
Systems and methods are provided for managing a data communication within a multi-level network having a plurality of switches organized as groups, with each group coupled to all other groups via global links, including: at each switch within the network, maintaining a global fault table identifying the links which lead only to faulty global paths; and, when the data communication is received at a port of a switch, determining a destination for the data communication and routing the communication across the network using the global fault table to avoid selecting a port within the switch that would result in the communication arriving at a point in the network where its only path forward is across a faulty global link; wherein the global fault table is used for both a global minimal routing methodology and a global non-minimal routing methodology.
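The routing decision can be sketched as filtering candidate output ports against the per-switch global fault table, first for minimal candidates and then for non-minimal candidates as the fallback. The table layout and names below are assumptions for illustration only.

def route(dest_group, minimal_ports, nonminimal_ports, global_fault_table):
    # Ports listed in the fault table lead only to faulty global paths for this destination.
    faulty = global_fault_table.get(dest_group, set())
    # The same table is consulted for both the minimal and non-minimal methodologies.
    candidates = [p for p in minimal_ports if p not in faulty]
    if not candidates:
        candidates = [p for p in nonminimal_ports if p not in faulty]
    return candidates[0] if candidates else None

fault_table = {"group-3": {"port-1"}}  # port-1 would strand traffic on a faulty global link
assert route("group-3", ["port-1", "port-2"], ["port-5"], fault_table) == "port-2"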
Low-latency processing in a network node
A method in a network node that includes a host and an accelerator includes holding a work queue that stores work elements, a notifications queue that stores notifications of the work elements, and control indices for adding and removing the work elements and the notifications to and from the work queue and the notifications queue, respectively. The notifications queue resides on the accelerator, and at least some of the control indices reside on the host. Messages are exchanged between a network and the network node using the work queue, the notifications queue, and the control indices.
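A toy model of the described split, with the work queue and control indices held on the host and the notifications queue held on the accelerator; the dataclass and function names are invented for this sketch.

from dataclasses import dataclass, field

@dataclass
class HostSide:
    work_queue: list = field(default_factory=list)   # stores work elements
    producer_index: int = 0                          # control indices reside on the host
    consumer_index: int = 0

@dataclass
class AcceleratorSide:
    notifications_queue: list = field(default_factory=list)  # resides on the accelerator

def post_work(host, accel, element):
    host.work_queue.append(element)
    host.producer_index += 1
    accel.notifications_queue.append(("posted", element))  # notification of the work element

def consume_work(host, accel):
    if host.consumer_index < host.producer_index:
        element = host.work_queue[host.consumer_index]
        host.consumer_index += 1
        return element

host, accel = HostSide(), AcceleratorSide()
post_work(host, accel, "send-message-0")
assert consume_work(host, accel) == "send-message-0"
assert accel.notifications_queue == [("posted", "send-message-0")]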
METHODS AND ARRANGEMENTS TO ACCELERATE ARRAY SEARCHES
Logic may store at least a portion of a first incoming packet at a memory location in a host device in response to a communication from the host device. Logic may compare the first incoming packet to a digest in an entry of a primary array. When the first incoming packet matches the digest, logic may retrieve a full entry from a secondary array and compare the full entry with the first incoming packet. When the full entry matches the first incoming packet, logic may store at least a portion of the first incoming packet at the memory location. In the absence of a match between the first incoming packet and the digest or the full entry, logic may compare the first incoming packet to subsequent entries in the primary array to identify a full entry in the secondary array that matches the first incoming packet.
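The two-level search can be sketched as a scan over small digests that falls through to a full comparison only on a digest hit; the one-byte digest and the helper names are assumptions made for this illustration.

import hashlib

def digest_of(key):
    return hashlib.sha256(key).digest()[0]  # a small, cheap-to-compare digest

def lookup(key, primary, secondary):
    d = digest_of(key)
    for i, entry_digest in enumerate(primary):   # walk subsequent entries on a miss
        if entry_digest == d:                    # cheap digest filter
            full_key, location = secondary[i]    # retrieve the full entry
            if full_key == key:                  # confirm, ruling out digest collisions
                return location
    return None

keys = [b"flow-a", b"flow-b", b"flow-c"]
primary = [digest_of(k) for k in keys]                        # digests only
secondary = [(k, f"buffer-{i}") for i, k in enumerate(keys)]  # full entries
assert lookup(b"flow-b", primary, secondary) == "buffer-1"
assert lookup(b"flow-x", primary, secondary) is None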
System and method for implementing virtualized network functions with a shared memory pool
A method and system for implementing virtualized network functions (VNFs) in a network. Physical resources of the network are abstracted into virtual resource pools and shared by virtual network entities. A virtual channel is set up for communicating data between a first VNF and a second VNF. A memory pool is allocated for the virtual channel from a set of memory pools. New interfaces are provided for communication between VNFs; these interfaces may allow payloads or data units to be pushed and pulled from one VNF to another. The data may be stored in a queue in the pooled memory allocated for the VNFs/services. Certain processing may be performed before the data is stored in the memory pool.
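A hypothetical push/pull interface over a pooled-memory virtual channel is sketched below; the class and method names (MemoryPoolSet, VirtualChannel) are illustrative, not from the patent.

from collections import deque

class MemoryPoolSet:
    # Hypothetical set of memory pools; each pool is modeled as a shared queue.
    def __init__(self, names):
        self.pools = {name: deque() for name in names}
    def allocate(self, name):
        return self.pools[name]

class VirtualChannel:
    # Push/pull interface between two VNFs over a queue in the allocated pool.
    def __init__(self, pool, preprocess=None):
        self.queue = pool
        self.preprocess = preprocess
    def push(self, payload):
        if self.preprocess:                 # processing performed before storing
            payload = self.preprocess(payload)
        self.queue.append(payload)
    def pull(self):
        return self.queue.popleft() if self.queue else None

pools = MemoryPoolSet(["pool-small", "pool-large"])
channel = VirtualChannel(pools.allocate("pool-small"), preprocess=lambda p: p.upper())
channel.push(b"payload-1")                  # first VNF pushes a data unit
assert channel.pull() == b"PAYLOAD-1"       # second VNF pulls it from pooled memory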
Method and apparatus for accelerating VM-to-VM network traffic using CPU cache
Methods and apparatus for accelerating VM-to-VM network traffic using CPU cache. A virtual queue manager (VQM) manages data that is to be kept in VM-VM shared data buffers in CPU cache. The VQM stores a list of VM-VM allow entries identifying data transfers between VMs that may use VM-VM cache fast-path forwarding. Packets are sent from VMs to the VQM for forwarding to destination VMs. Indicia in the packets (e.g., in a tag or header) are inspected to determine whether a packet is to be forwarded via a VM-VM cache fast path or via a virtual switch. The VQM tracks which VM data is already in the CPU cache domain while concurrently coordinating data movement to and from the external shared memory, and also ensures coherency between data kept in cache and data kept in shared memory.
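The forwarding decision can be sketched as tag inspection combined with an allow-list check; the tag value and field names below are assumptions for illustration.

allow_entries = {("vm-1", "vm-2")}   # VM pairs permitted to use the cache fast path
cache_fast_path, virtual_switch = [], []

def forward(packet):
    # Inspect the packet's tag and the allow entries to choose a forwarding path.
    pair = (packet["src_vm"], packet["dst_vm"])
    if packet["tag"] == "fastpath" and pair in allow_entries:
        cache_fast_path.append(packet)    # kept within the CPU cache domain
    else:
        virtual_switch.append(packet)     # ordinary virtual-switch forwarding

forward({"src_vm": "vm-1", "dst_vm": "vm-2", "tag": "fastpath"})
forward({"src_vm": "vm-1", "dst_vm": "vm-3", "tag": "fastpath"})  # not in the allow list
assert len(cache_fast_path) == 1 and len(virtual_switch) == 1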