Patent classifications
H04L12/935
Using completion queues for RDMA event detection
Systems and methods for using completion queues for Remote Direct Memory Access (RDMA) event detection. An example method may comprise: receiving a request to create a queue pair for processing Remote Direct Memory Access (RDMA) requests using an RDMA-enabled network interface controller (RNIC), the queue pair comprising a send queue and a receive queue; associating the queue pair with a completion queue associated with the RNIC, the completion queue employed to store a plurality of completion queue elements associated with completed work requests; receiving a notification of an interrupt associated with the RNIC; and responsive to determining that at least one of a number of send queues associated with the completion queue or a number of receive queues associated with the completion queue exceeds zero, identifying at least one of: a first application registered to be notified of RDMA send events or a second application registered to be notified of RDMA receive events.
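The interrupt-handling step above can be sketched in Python. This is a minimal illustration of the described check, not an actual RDMA verbs API; the names `CompletionQueue`, `on_rnic_interrupt`, and the application handles are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CompletionQueue:
    # Queues of the queue pairs associated with this completion queue.
    send_queues: list = field(default_factory=list)
    receive_queues: list = field(default_factory=list)

def on_rnic_interrupt(cq, send_app=None, recv_app=None):
    """On an RNIC interrupt, return the applications to notify:
    the send-event application if any send queue is associated with
    the completion queue, and the receive-event application if any
    receive queue is associated with it."""
    notified = []
    if len(cq.send_queues) > 0 and send_app is not None:
        notified.append(send_app)   # registered for RDMA send events
    if len(cq.receive_queues) > 0 and recv_app is not None:
        notified.append(recv_app)   # registered for RDMA receive events
    return notified
```

A completion queue with only send queues attached would thus notify only the send-event application.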
SUPPLEMENTAL CONNECTION FABRIC FOR CHASSIS-BASED NETWORK DEVICE
A system may receive, by a switching component of the system, network traffic to be provided to an I/O component of the system. The system may route, by the switching component, the network traffic to the I/O component based on whether the I/O component is connected to the switching component via first connections and/or via second connections. The first connections may be connections via a chassis of the system. The second connections may be connections via a connector component that is removable from the switching component. The network traffic may be routed via the first connections and the second connections when the I/O component is connected via both. The network traffic may be routed via the first connections and not via the second connections when the I/O component is connected via the first connections only.
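The path-selection rule can be summarized as a small Python sketch. Only the two cases stated in the abstract are specified; the behavior when the I/O component is reachable solely through the removable connector is an assumption made here for completeness.

```python
def route_paths(via_chassis: bool, via_connector: bool):
    """Select the fabric paths for traffic to the I/O component
    per the described rule (function name is illustrative)."""
    if via_chassis and via_connector:
        return ["chassis", "connector"]   # both connection types present
    if via_chassis:
        return ["chassis"]                # chassis connections only
    if via_connector:
        return ["connector"]              # assumption: not stated in the abstract
    return []                             # I/O component not connected
```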
POWER AWARE PACKET DISTRIBUTION
Disclosed herein is a computing device configured to implement power aware packet distribution. The computing device includes a central processing unit (CPU) comprising a plurality of cores and an interface controller communicatively coupled to the CPU. The interface controller is configured to receive a data packet to be sent to a targeted core of the plurality of cores and identify a power state of the targeted core. The interface controller is configured to redirect the data packet to an alternate core based on the power state of the targeted core.
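A minimal sketch of the redirection decision follows. The power-state labels and the "first active alternate" policy are assumptions for illustration; the patent does not specify how the alternate core is chosen.

```python
def select_core(targeted_core, power_states, alternates):
    """Return the core that should receive the packet: the targeted
    core if it is in an active power state, otherwise an alternate
    core that is awake (hypothetical policy)."""
    if power_states.get(targeted_core) == "active":
        return targeted_core
    for core in alternates:
        if power_states.get(core) == "active":
            return core                  # redirect instead of waking the target
    return targeted_core                 # no active alternate: fall back to target
```

Delivering to an already-active core avoids the latency and energy cost of waking a sleeping one.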
ROUTING SYSTEM WITH LEARNING FUNCTIONS AND ROUTING METHOD THEREOF
The present disclosure illustrates a routing system with learning functions and a routing method thereof. The system detects a raw packet's packet header and the entry port on which the raw packet is received, and queries a path table for a routing message using the packet header and the entry port. When no routing message exists, the raw packet is routed by a kernel, and the routing result is recorded in the path table as the routing message. When the routing message exists, the packet header of the raw packet is replaced with a modified packet header recorded in the routing message to form a modified packet, and the modified packet is transmitted from the transmission port recorded in the routing message, thereby improving routing performance for subsequent packets having the same packet header and entry port.
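The learn-on-miss path table maps to a straightforward cache keyed by (header, entry port). The sketch below is illustrative; `kernel_route` stands in for the slow-path kernel routing the abstract describes.

```python
class LearningRouter:
    """Path table keyed by (packet header, entry port); misses go to
    the kernel slow path and the result is learned for reuse."""

    def __init__(self, kernel_route):
        self.path_table = {}            # (header, port) -> (mod_header, tx_port)
        self.kernel_route = kernel_route

    def route(self, header, entry_port):
        key = (header, entry_port)
        if key not in self.path_table:
            # Slow path: let the kernel route it, then record the result.
            self.path_table[key] = self.kernel_route(header, entry_port)
        mod_header, tx_port = self.path_table[key]
        return mod_header, tx_port      # fast path for repeat packets
```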
Method and apparatus for assigning data to split bearers in dual connectivity
A method and an apparatus for assigning data to split bearers in dual connectivity are provided. The apparatus includes a master evolved Node B (MeNB) of a user equipment (UE) configured to receive available-buffer information determined and transmitted by a secondary eNB (SeNB) through an X2 interface between the MeNB and the SeNB, determine whether the information concerns the available buffer for a UE or for an evolved radio access bearer (E-RAB) established on the SeNB based on an indicator in the information or the bearer that transported the information, and adjust the amount of data assigned to the SeNB according to the available-buffer information. The apparatus can accommodate eNBs implemented in various manners, make full use of the bandwidth of data bearers, and reduce delay in data transmission.
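The core adjustment, capping the data assigned to the SeNB at its reported available buffer, can be sketched as follows. This is a simplified illustration; the actual assignment logic in the patent may weigh additional factors.

```python
def assign_to_senb(pending_bytes, senb_available_buffer):
    """Split pending data between the SeNB and MeNB legs of a split
    bearer: never assign more to the SeNB than it reports it can
    buffer; the remainder stays on the MeNB leg."""
    to_senb = min(pending_bytes, senb_available_buffer)
    to_menb = pending_bytes - to_senb
    return to_senb, to_menb
```

Respecting the reported buffer avoids overflowing the SeNB while still using its bandwidth when capacity is available.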
System and method for supporting credit management for output ports in a networking device
A system and method can support efficient packet switching in a network environment. A networking device, such as a network switch, which includes a crossbar fabric, can be associated with a plurality of input ports and a plurality of output ports. Furthermore, the networking device operates to detect a link state change at an output port on the networking device. The output port can provide one or more credits to an output scheduler, and the output scheduler allows one or more packets targeting the output port to be dequeued from one or more virtual output queues, based on the one or more credits.
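A minimal sketch of the credit mechanism: the output port grants credits to the scheduler (for example, when its link comes up), and the scheduler dequeues from a virtual output queue only while credits remain. Class and method names are illustrative.

```python
class OutputScheduler:
    """Credit-gated dequeue from virtual output queues toward one
    output port (hypothetical simplification of the patent's scheme)."""

    def __init__(self):
        self.credits = 0

    def grant(self, n):
        self.credits += n               # output port provides credits

    def dequeue(self, voq):
        """Dequeue as many packets as credits allow, one credit each."""
        sent = []
        while self.credits > 0 and voq:
            sent.append(voq.pop(0))     # packet leaves the VOQ
            self.credits -= 1
        return sent
```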
System and method for supporting efficient virtual output queue (VOQ) resource utilization in a networking device
A system and method can support packet switching in a network environment. A networking device, such as a network switch, which includes a crossbar fabric, can be associated with a plurality of input ports and a plurality of output ports. Furthermore, the networking device can detect a link state change at an output port that is associated with the networking device. Then, the networking device can notify one or more input ports, via the output port, of the link state change at the output port.
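The fan-out of a link state change to the input ports can be sketched as below. All names are illustrative; the abstract does not specify how the notification is encoded.

```python
class InputPort:
    def __init__(self):
        self.events = []

    def notify(self, output_port, state):
        self.events.append((output_port, state))   # record the notification

def on_link_state_change(output_port, state, input_ports):
    """Notify every input port of a link state change detected at an
    output port, e.g. so their VOQ resources can be adjusted."""
    for ip in input_ports:
        ip.notify(output_port, state)
```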
Reverse Forwarding Information Base Enforcement
In exemplary embodiments of the present invention, a router determines whether or not to establish a stateful routing session based on the suitability of one or more candidate return path interfaces. This determination is typically made at the time a first packet for a new session arrives at the router on a given ingress interface. In some cases, the router may be configured to require that the ingress interface be used for the return path of the session, in which case the router may evaluate whether the ingress interface is suitable for the return path and may drop the session if the ingress interface is deemed by the router to be unsuitable for the return path. In other cases, the router may be configured to not require that the ingress interface be used for the return path, in which case the router may evaluate whether at least one interface is suitable for the return path and drop the session if no interface is deemed by the router to be suitable for the return path.
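The admit/drop decision at first-packet time reduces to two cases, sketched below. `is_suitable` stands in for whatever return-path suitability test the router applies; all names are illustrative.

```python
def admit_session(ingress_iface, candidate_ifaces, is_suitable,
                  require_ingress_return):
    """Decide whether to establish the stateful routing session for a
    new flow's first packet; a False result means the session is dropped."""
    if require_ingress_return:
        # Return path must use the ingress interface itself.
        return is_suitable(ingress_iface)
    # Otherwise any suitable candidate return-path interface will do.
    return any(is_suitable(i) for i in candidate_ifaces)
```

Dropping unroutable sessions up front avoids holding state for flows whose return traffic could never be delivered.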
Failover in response to failure of a port
A failure at a first port of a controller node is detected, where the first port is initially assigned a first port identifier and is associated with a logical path through a communications fabric between the first port and a port at a host device. In response to detecting the failure, the first port identifier is assigned to a second port to cause the logical path to be associated with the second port. In response to detecting resolution of the failure, a probe identifier is assigned to the first port. Using the probe identifier, the health of the network infrastructure between the first port and the host device is checked. In response to the checking, the first port identifier is assigned to the first port to cause failback of the logical path to the first port.
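The failover/failback sequence is a small state machine: move the port identifier on failure, assign a probe identifier when the failure resolves, and restore the original identifier once the health check passes. The sketch below is illustrative; the identifier values and class names are hypothetical.

```python
class FailoverManager:
    """Tracks which physical port currently holds the logical path's
    port identifier (simplified model of the described method)."""

    PROBE_ID = "probe-0"   # hypothetical probe identifier

    def __init__(self, primary, secondary, port_id):
        self.primary, self.secondary, self.port_id = primary, secondary, port_id
        self.ids = {primary: port_id, secondary: None}

    def on_failure(self):
        self.ids[self.primary] = None
        self.ids[self.secondary] = self.port_id   # logical path fails over

    def on_failure_resolved(self):
        self.ids[self.primary] = self.PROBE_ID    # probe used for health check

    def on_health_ok(self):
        self.ids[self.secondary] = None
        self.ids[self.primary] = self.port_id     # failback to the first port
```

Probing with a separate identifier lets the infrastructure be verified without disturbing the logical path still running on the second port.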
System and method for supporting efficient virtual output queue (VOQ) packet flushing scheme in a networking device
A system and method can support packet switching in a network environment. The system can include an ingress buffer on a networking device, wherein the ingress buffer, which includes one or more virtual output queues, operates to store one or more incoming packets that are received at an input port on the networking device. Furthermore, the system can include a packet flush engine associated with the ingress buffer, wherein the packet flush engine operates to flush a packet that is stored in a virtual output queue in the ingress buffer and to notify one or more output schedulers that the packet is flushed, wherein each output scheduler is associated with an output port.
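The flush-and-notify step can be sketched as below. Names are illustrative; the notification is shown as a callback, and a real scheduler might use it, for example, to reclaim the credit it had granted for the flushed packet.

```python
class Scheduler:
    def __init__(self):
        self.flushed = []

    def on_flushed(self, pkt):
        self.flushed.append(pkt)       # e.g. reconcile credits for pkt

def flush_packet(voq, pkt, output_schedulers):
    """Remove a stored packet from a virtual output queue and notify
    every output scheduler that it was flushed, keeping their view of
    the queue consistent."""
    voq.remove(pkt)
    for s in output_schedulers:
        s.on_flushed(pkt)
```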