Patent classifications
H04L49/9047
SYSTEM AND METHOD FOR FACILITATING DATA-DRIVEN INTELLIGENT NETWORK WITH INGRESS PORT INJECTION LIMITS
Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic while applying injection limits to different traffic classes at an ingress edge port. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. Furthermore, an edge switch can dynamically allocate the ingress port bandwidth among the traffic classes that are active at a given moment.
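The dynamic bandwidth-allocation behavior described above can be sketched as a toy model in Python (the class and method names are illustrative, not taken from the patent; the equal-share policy is an assumption, since the abstract does not specify how bandwidth is divided among active classes):

```python
class IngressPortArbiter:
    """Toy model of an edge switch dividing ingress port bandwidth
    among the traffic classes that are active at a given moment."""

    def __init__(self, port_bandwidth):
        self.port_bandwidth = port_bandwidth  # total ingress port bandwidth
        self.active = set()                   # currently active traffic classes

    def activate(self, traffic_class):
        self.active.add(traffic_class)

    def deactivate(self, traffic_class):
        self.active.discard(traffic_class)

    def injection_limit(self, traffic_class):
        """Injection limit for a class: an equal share of the port
        bandwidth among active classes, zero for inactive classes."""
        if traffic_class not in self.active:
            return 0.0
        return self.port_bandwidth / len(self.active)
```

As classes become active or idle, each call to `injection_limit` reflects the current division, which is the "dynamic allocation" aspect of the abstract.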
NON-POSTED WRITE TRANSACTIONS FOR A COMPUTER BUS
Systems and devices can include a controller and a command queue to buffer incoming write requests into the device. The controller can receive, from a client across a link, a non-posted write request (e.g., a deferred memory write (DMWr) request) in a transaction layer packet (TLP) to the command queue; determine that the command queue can accept the DMWr request; identify, from the TLP, a successful completion (SC) message that indicates that the DMWr request was accepted into the command queue; and transmit, to the client across the link, the SC message that indicates that the DMWr request was accepted into the command queue. The controller can receive a second DMWr request in a second TLP; determine that the command queue is full; and transmit a memory request retry status (MRS) message to be transmitted to the client in response to the command queue being full.
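The accept-or-retry handshake described above can be sketched as a minimal model in Python (the class name and the string tokens `"SC"`/`"MRS"` are illustrative stand-ins for the completion messages; real DMWr handling operates on PCIe transaction layer packets):

```python
from collections import deque

class DeviceCommandQueue:
    """Toy controller: buffers incoming DMWr requests, answering each with
    SC (successful completion) while space remains, or MRS (memory request
    retry status) when the command queue is full."""

    def __init__(self, depth):
        self.depth = depth      # maximum number of buffered requests
        self.queue = deque()    # the command queue

    def receive_dmwr(self, tlp):
        if len(self.queue) < self.depth:
            self.queue.append(tlp)  # request accepted into the command queue
            return "SC"
        return "MRS"                # full: ask the client to retry later

    def drain_one(self):
        """The device consumes one buffered request, freeing a slot."""
        return self.queue.popleft()
```

A client that receives `"MRS"` would retry the same write later, which is the non-posted (completion-carrying) behavior that distinguishes DMWr from an ordinary posted memory write.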
System and method for adaptive generic receive offload
An adaptive generic receive offload (A-GRO) system and method are disclosed. In some embodiments, the system comprises a host including a host protocol stack and a host memory, and a network interface card (NIC) that is communicatively connectable to the host. The A-GRO system is configured to: receive a packet from a network, parse the packet into a header and a payload, classify and map the packet into a particular flow based on the header and contexts associated with a plurality of flows, and move the header and the payload to separate queues associated with the particular flow in the host memory, without holding or stalling the packet in hardware of the NIC. By maintaining packet coherence information, including header chains, the A-GRO system allows the host to skip processing the packets between the first and last headers in a GRO aggregation. The A-GRO system also improves the handling of mis-ordered packets.
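The classify-and-split step described above can be sketched as a toy model in Python (the flow key and the `(header, payload)` packet representation are simplifying assumptions; a real NIC would match on parsed protocol fields and DMA into host queues):

```python
def classify(header, flow_table):
    """Map a packet header to a flow context, creating one on first sight.
    The 3-tuple key is an illustrative stand-in for a real flow classifier."""
    key = (header["src"], header["dst"], header["port"])
    if key not in flow_table:
        flow_table[key] = {"headers": [], "payloads": []}
    return flow_table[key]

def receive(packet, flow_table):
    """Parse a packet into header and payload, then move each into a
    separate per-flow queue, without holding the packet back."""
    header, payload = packet
    flow = classify(header, flow_table)
    flow["headers"].append(header)    # per-flow header queue
    flow["payloads"].append(payload)  # per-flow payload queue
```

Because headers and payloads land in separate per-flow queues, the host stack can walk only the header chain of an aggregation rather than touching every packet individually.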
Methods for distributing software-determined global load information
Systems and methods are provided for performing routing in a switch network or fabric. Switches can be configured in a hierarchical topology having a plurality of groups, where switches in a group are connected to one another, and groups are connected to other groups. Routing can be performed by maintaining per-group group load information. A packet can be routed between at least two groups using the per-group group load information to effect a set of routing decisions. The set of routing decisions can be biased towards or away from one or more paths.
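The load-informed, biasable routing decision described above can be sketched in Python (a minimal model; representing a path by its next group, and adding a per-path bias term to the group load, are illustrative assumptions):

```python
def route(candidate_groups, group_load, bias=None):
    """Pick the next group for a packet: the candidate with the lowest
    effective load, where a positive bias steers decisions away from a
    path and a negative bias steers decisions towards it."""
    bias = bias or {}
    return min(candidate_groups,
               key=lambda g: group_load.get(g, 0.0) + bias.get(g, 0.0))
```

Software distributing global load information would update `group_load` (and optionally `bias`) across switches, and each forwarding decision then consults the per-group values.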
System and method for facilitating data request management in a network interface controller (NIC)
A network interface controller (NIC) capable of facilitating efficient data request management is provided. The NIC can be equipped with a command queue, a message chopping unit (MCU), and a traffic management logic block. During operation, the command queue can store a command issued via a host interface. The MCU can then determine a type of the command and a length of a response of the command. If the command is a data request, the traffic management logic block can determine whether the length of the response is within a threshold. If the length exceeds the threshold, the traffic management logic block can pace the command such that the response is within the threshold.
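The pacing decision described above can be sketched as a toy model in Python (splitting an over-threshold data request into threshold-sized sub-requests is one plausible reading of "pace the command"; the dictionary fields are illustrative, not from the patent):

```python
def handle_command(cmd, threshold):
    """Toy MCU/traffic-management step: pass through commands whose
    response fits the threshold, and chop a data request whose response
    would exceed it into sub-requests, each within the threshold."""
    if cmd["type"] != "data_request" or cmd["response_len"] <= threshold:
        return [cmd]
    sub_requests = []
    offset = 0
    remaining = cmd["response_len"]
    while remaining > 0:
        size = min(threshold, remaining)
        sub_requests.append(
            {"type": "data_request", "offset": offset, "response_len": size}
        )
        offset += size
        remaining -= size
    return sub_requests
```

Issuing the sub-requests one at a time (rather than all at once) is what keeps the in-flight response volume bounded by the threshold.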
NON-DISRUPTIVE TRADING OF BUFFERS BETWEEN PORTS OR PORT VIRTUAL LANES OF A CREDITED NETWORK
Techniques for moving buffers between ports, or virtual lanes of a port, of a networking device of a credited network while maintaining the ports in an active state without dropping any frames. The techniques may include determining that a number of buffers are to be reallocated from a first port of a networking device to a second port of the networking device. The techniques may also include causing a peer port connected to the first port to decrement, by the number, a transmit credit counter associated with the peer port. Based at least in part on determining that the peer port decremented the transmit credit counter, the first port may release the number of the buffers from a buffer pool associated with the first port, and the number of the buffers may be reallocated to the second port.
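The credit-first ordering described above can be sketched as a minimal model in Python (the class and counter names are illustrative; in a real credited fabric the credit decrement would be negotiated with the peer over the link before any buffer is released):

```python
class CreditedPort:
    """Toy port on a credited link: it owns a buffer pool, and its peer
    holds one transmit credit per buffer it may send into."""

    def __init__(self, name, buffers):
        self.name = name
        self.buffers = buffers           # buffers in this port's pool
        self.peer_tx_credits = buffers   # credits held by the peer port

def trade_buffers(src, dst, n):
    """Reallocate n buffers from src to dst without dropping frames:
    the peer's transmit credits are decremented first, so no frame can
    be in flight toward a buffer that is about to be released."""
    assert n <= src.buffers, "cannot release more buffers than the pool holds"
    src.peer_tx_credits -= n   # step 1: peer stops sending into these buffers
    src.buffers -= n           # step 2: src releases them from its pool
    dst.buffers += n           # step 3: dst gains the buffers...
    dst.peer_tx_credits += n   # ...and its peer gains matching credits
```

Doing the credit decrement before the release is the key ordering: both ports stay active throughout, and the peer can never transmit into a buffer that no longer exists.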