Patent classifications
H04L12/863
NIC PRIORITY QUEUE STEERING AND PROCESSOR UNIT FREQUENCY TUNING BASED ON PACKET FLOW ANALYTICS
In one embodiment, a system includes a network interface controller comprising circuitry to determine per-flow analytics information for a plurality of packet flows, and to facilitate differential rate processing of a plurality of packet queues for the plurality of packet flows based on the per-flow analytics information.
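The queue-steering idea in the abstract can be illustrated with a minimal Python sketch. The class name, the byte-rate metric, and the "bulk"/"latency" queue labels are illustrative assumptions, not terms from the patent; the NIC circuitry would of course implement this in hardware.

```python
from collections import defaultdict

class FlowAnalytics:
    """Hypothetical per-flow analytics: accumulate byte counts per flow
    and steer heavy flows to a bulk queue, light flows to a
    latency-sensitive queue (queue names are assumed for illustration)."""

    def __init__(self, high_rate_threshold):
        # Bytes per measurement window above which a flow counts as heavy
        # (an assumed metric; the patent speaks only of "analytics information").
        self.high_rate_threshold = high_rate_threshold
        self.bytes_seen = defaultdict(int)

    def observe(self, flow_id, packet_len):
        # Accumulate per-flow byte counts for the current window.
        self.bytes_seen[flow_id] += packet_len

    def queue_for(self, flow_id):
        # Differential processing: heavy flows get the bulk queue,
        # everything else the latency-sensitive queue.
        if self.bytes_seen[flow_id] >= self.high_rate_threshold:
            return "bulk"
        return "latency"
```

For example, after observing a 1500-byte packet on one flow and a 200-byte packet on another with a 1000-byte threshold, the first flow is steered to the bulk queue and the second to the latency queue.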
Low latency compact Clos network controller
Many network protocols, including certain Ethernet protocols, include specifications for multiplexing using virtual lanes. Due to skews and/or other uncertainties associated with the process, packets from virtual lanes may arrive at the receiver out of order. The present disclosure discusses implementations of receivers that may use multiplexer-based crossbars, such as Clos networks, to reorder the lanes. State-based controllers for the Clos networks and state-based methods to assign routes in them are also discussed.
Methods and apparatus for sharing and arbitration of host stack information with user space communication stacks
Methods and apparatus for efficient data transfer within a user space network stack. Unlike prior art monolithic networking stacks, the exemplary networking stack architecture described hereinafter includes various components that span multiple domains (both in-kernel, and non-kernel). For example, unlike traditional “socket” based communication, disclosed embodiments can transfer data directly between the kernel and user space domains. Direct transfer reduces the per-byte and per-packet costs relative to socket based communication. A user space networking stack is disclosed that enables extensible, cross-platform-capable, user space control of the networking protocol stack functionality. The user space networking stack facilitates tighter integration between the protocol layers (including TLS) and the application or daemon. Exemplary systems can support multiple networking protocol stack instances (including an in-kernel traditional network stack).
Optimization of data queue priority for reducing network data load speeds
There are provided systems and methods for optimization of data queue priority for reducing network data load speeds. A user may utilize a communication device to access an online resource and request data, such as server data from an online server. The online resource may determine a user profile associated with the user and/or device, which may include previous online actions and completion information for electronic transaction processing with one or more online entities. Using the profile, the server may optimize a data queue for data delivery to multiple devices depending on the devices' data requests and priority. The server may deliver data to the devices based on the data queue. The server may also update the user profile based on additional device actions with the server. These techniques may be particularly useful for prioritizing requests during peak server resource usage.
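The profile-driven queue described above can be sketched as a simple priority queue. The scoring convention (lower score served first, score derived from a profile) is an assumption for illustration; the patent leaves the priority derivation to the server's profile logic.

```python
import heapq

def build_delivery_queue(requests):
    """Order device data requests by a profile-derived priority score.
    requests: list of (device_id, priority_score) pairs, where a lower
    score (assumed convention) means the request is served earlier."""
    heap = [(score, device) for device, score in requests]
    heapq.heapify(heap)
    order = []
    while heap:
        score, device = heapq.heappop(heap)
        order.append(device)
    return order
```

During peak usage the server would rebuild or re-score this queue as new requests arrive and profiles are updated.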
Relay device
A relay device includes: multiple ports; a queue for each port storing a transmission scheduled frame and having a variable storage capacity with a minimum guarantee value; a shared storage area having a predetermined storage capacity for each queue; and a storage controller controlling storage of the transmission scheduled frame in each queue. The storage controller stores the transmission scheduled frame in a storage destination queue when the usage storage capacity of the storage destination queue does not exceed the minimum guarantee value. When the usage storage capacity of the storage destination queue exceeds the minimum guarantee value and the shared storage area has free space for the transmission scheduled frame, the storage controller uses that free area as an extension of the storage destination queue and stores the frame there.
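The storage controller's two rules can be sketched as a small decision function. This is one interpretation of the abstract, assuming the frame length counts against the guarantee; the function name and the drop fallback when the shared area is full are assumptions.

```python
def store_frame(queue_usage, min_guarantee, shared_free, frame_len):
    """Decide where a transmission-scheduled frame goes.
    Returns (destination, remaining shared free space), where destination
    is 'queue', 'shared', or 'drop' (the drop case is an assumed fallback)."""
    if queue_usage + frame_len <= min_guarantee:
        # Rule 1: the frame fits within the queue's guaranteed minimum.
        return "queue", shared_free
    if shared_free >= frame_len:
        # Rule 2: guarantee exceeded, but the shared area has room;
        # borrow shared space as an extension of the queue.
        return "shared", shared_free - frame_len
    return "drop", shared_free
```

The effect is that each port's queue always keeps its guaranteed minimum, while bursts beyond it compete for the shared area.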
Congestion flow identification method and network device
The present disclosure relates to congestion flow identification methods. One example method includes obtaining, by a network device, a queue length of a non-congestion flow queue, where the non-congestion flow queue includes a data packet or description information of the data packet, determining, by the network device, a target output port of a target data packet when the length of the non-congestion flow queue is greater than or equal to a first threshold, where the target data packet is a data packet waiting to enter the non-congestion flow queue or a next data packet waiting to be output from the non-congestion flow queue, and when utilization of the target output port is greater than or equal to a second threshold, determining, by the network device, that a flow corresponding to the target data packet is a congestion flow.
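The two-threshold check in the abstract reduces to a short predicate: the output port is examined only when the non-congestion queue is long enough, and the flow is flagged only when that port is also busy. The function name and parameter names below are assumptions.

```python
def is_congestion_flow(nc_queue_len, queue_threshold,
                       port_utilization, util_threshold):
    """Two-stage congestion check:
    1) the non-congestion flow queue must reach the first threshold;
    2) the target data packet's output port utilization must reach
       the second threshold. Only then is the flow marked congested."""
    if nc_queue_len < queue_threshold:
        # Queue still short: no need to inspect the output port.
        return False
    return port_utilization >= util_threshold
```

Requiring both conditions avoids misclassifying a flow as congested when the queue is long but its output port still has spare capacity.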
Protocol layer tunneling for a data processing system
The present disclosure advantageously provides a system and method for protocol layer tunneling for a data processing system. A system includes an interconnect, a request node coupled to the interconnect, and a home node coupled to the interconnect. The request node includes a request node processor, and the home node includes a home node processor. The request node processor is configured to send, to the home node, a sequence of dynamic requests, receive a sequence of retry requests associated with the sequence of dynamic requests, and send a sequence of static requests associated with the sequence of dynamic requests in response to receiving credit grants from the home node. The home node processor is configured to send the sequence of retry requests in response to receiving the sequence of dynamic requests, determine the credit grants, and send the credit grants.
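The dynamic/retry/credit/static handshake can be modeled with a toy home node. The class, method names, and tuple-shaped responses are assumptions for illustration; the real protocol runs over the interconnect between hardware nodes.

```python
class HomeNode:
    """Minimal model of the retry/credit handshake: every dynamic
    request is answered with a retry; credits are granted later in
    order; a static request is accepted only against a granted credit."""

    def __init__(self):
        self.pending = []    # dynamic requests awaiting a credit grant
        self.credits = set() # granted, not-yet-consumed credits

    def receive_dynamic(self, req_id):
        # The home node cannot serve the request now: record it and retry.
        self.pending.append(req_id)
        return ("retry", req_id)

    def grant_next_credit(self):
        # When resources free up, grant a credit to the oldest retry.
        req_id = self.pending.pop(0)
        self.credits.add(req_id)
        return ("credit", req_id)

    def receive_static(self, req_id):
        # A static request consumes exactly one matching credit.
        if req_id in self.credits:
            self.credits.remove(req_id)
            return ("accept", req_id)
        return ("reject", req_id)
```

The request node's side mirrors this: it resends each retried request as a static request only after the corresponding credit grant arrives.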
REDUCED-COMPLEXITY INTEGRATED GUARANTEED-RATE OPTICAL PACKET SWITCH
A reduced-complexity optical packet switch which can provide a deterministic guaranteed rate of service to individual traffic flows is described. The switch contains N input ports, M output ports and N*M Virtual Output Queues (VOQs). Packets are associated with a flow f; they arrive at an input port and depart on an output port, according to a predetermined routing for the flow. These packets are buffered in a VOQ. The switch can be configured to store several deterministic periodic schedules, which can be managed by an SDN control-plane. A scheduling frame is defined as a set of F consecutive time-slots, where data can be transmitted over connections between input ports and output ports in each time-slot. Each input port can be assigned a first deterministic periodic transmission schedule, which determines which VOQ is selected to transmit, for every time-slot in the scheduling frame. Each input port can be assigned a second deterministic periodic schedule, which determines which traffic flow within a VOQ is selected to transmit. Each input port can be assigned a third deterministic periodic schedule, which specifies to which VOQ an arriving packet (if any) is destined, for each time-slot in a scheduling frame. Each input port can be assigned a fourth deterministic periodic schedule, which specifies to which Flow-VOQ within a VOQ an arriving packet (if any) is destined. In this manner, each traffic flow can receive a deterministic guaranteed rate of transmission through the switch.
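A deterministic periodic schedule of the kind described is just a fixed table of length F consulted modulo the frame length, and the guaranteed rate of a VOQ falls out as its share of slots. The two helper functions below are an illustrative sketch, not the patent's mechanism; `None` marks an assumed idle slot.

```python
def transmit_schedule(schedule, t, frame_len):
    """schedule: list of length F mapping each time-slot to a VOQ index
    (or None for an idle slot). The table repeats every frame_len slots,
    so the choice at any time t is fully deterministic."""
    return schedule[t % frame_len]

def guaranteed_rate(schedule, voq):
    """Fraction of slots in the scheduling frame granted to a VOQ,
    i.e. its deterministic guaranteed rate through the switch."""
    return schedule.count(voq) / len(schedule)
```

With a frame [0, 1, 0, None], VOQ 0 transmits in half the slots of every frame, regardless of traffic elsewhere; the second and fourth schedules in the abstract apply the same table-lookup idea one level down, at the Flow-VOQ granularity.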
PACKET TRANSFER APPARATUS, METHOD, AND PROGRAM
Provided is a packet transfer apparatus configured to perform packet exchange processing for exchanging multiple continuous packets with low delay while maintaining fairness between communication flows of the same priority level. A packet transfer apparatus 100 includes: a packet classification unit 120; queues 130 that hold the classified packets for each classification; and a dequeue processing unit 140 that extracts packets from the queues 130. The dequeue processing unit 140 includes a scheduling unit 141 that controls the packet extraction amount extracted from the queue 130 for a specific communication flow, based on information on the amount of data that is requested by the communication flow and is to be continuously transmitted in packets.
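The scheduling unit's behavior can be sketched as a per-round dequeue plan: flows that declared a burst size get to drain that many bytes contiguously, others receive a fair default quota. The function, its inputs, and the "declared burst" signal are all assumptions about how the requested-amount information is conveyed.

```python
def dequeue_plan(queues, burst_requests, default_quota):
    """queues: dict flow -> list of queued packet lengths (bytes).
    burst_requests: dict flow -> bytes the flow asked to transmit
    contiguously (assumed signal). Returns, per flow, the packets
    dequeued this round: a declared burst drains whole so the
    continuous packets stay together; other flows get default_quota."""
    plan = {}
    for flow, pkts in queues.items():
        quota = burst_requests.get(flow, default_quota)
        taken, total = [], 0
        for p in pkts:
            # Allow at least one packet, then stop once the quota is met.
            if total + p > quota and taken:
                break
            taken.append(p)
            total += p
        plan[flow] = taken
    return plan
```

A flow with a declared 300-byte burst thus dequeues three 100-byte packets in one round while an ordinary flow dequeues one, which is the low-delay-with-fairness trade-off the abstract describes.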
VIRTUAL NETWORK DEVICE
A virtual network device increases the effective number of local physical ports by converting each of the local physical ports into a plurality of virtual local physical ports, and the effective number of network physical ports by converting each of the network physical ports into a plurality of virtual network physical ports.