Patent classifications
H04L49/1569
SWITCHING AND LOAD BALANCING TECHNIQUES IN A COMMUNICATION NETWORK
A source access network device multicasts copies of a packet to multiple core switches for switching to the same target access network device. The core switches are selected for the multicast based on a load balancing algorithm managed by a central controller. The target access network device receives at least one of the copies of the packet, records at least one metric indicative of the level of traffic congestion at the core switches, and feeds back information regarding the recorded metric(s) to the controller. The controller adjusts the load balancing algorithm based on the fed-back information when selecting core switches for a subsequent data flow.
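The feedback loop described above can be illustrated with a minimal sketch. The class and method names below (`Controller`, `feedback`, `select_switches`) are hypothetical, not from the patent; the sketch simply assumes the controller keeps a selection weight per core switch and lowers a switch's weight as its reported congestion grows.

```python
import random

class Controller:
    """Hypothetical central controller that weights core switches by
    reported congestion (lower congestion -> higher selection weight)."""

    def __init__(self, core_switches):
        # Start with equal weights for every core switch.
        self.weights = {sw: 1.0 for sw in core_switches}

    def feedback(self, switch, congestion):
        # Congestion metric fed back by the target access device;
        # reduce the switch's weight as congestion grows.
        self.weights[switch] = 1.0 / (1.0 + congestion)

    def select_switches(self, k):
        # Pick k distinct core switches for the next multicast,
        # biased toward the least congested ones.
        switches = list(self.weights)
        w = [self.weights[s] for s in switches]
        chosen = set()
        while len(chosen) < min(k, len(switches)):
            chosen.add(random.choices(switches, weights=w)[0])
        return sorted(chosen)

ctrl = Controller(["core1", "core2", "core3"])
ctrl.feedback("core2", congestion=9.0)  # core2 reported heavy congestion
picks = ctrl.select_switches(2)
```

With `core2` weighted down to 0.1, subsequent flows are far more likely to be multicast via `core1` and `core3`.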
OFFLOAD OF STORAGE NODE SCALE-OUT MANAGEMENT TO A SMART NETWORK INTERFACE CONTROLLER
Examples described herein relate to a network interface that includes an initiator device to determine the storage node associated with an access command, based on an association between an address in the command and a storage node. The network interface can include a redirector to update the association based on messages from one or more remote storage nodes. The association can be based on a look-up table associating a namespace identifier with a prefix string and an object size. In some examples, the access command is compatible with NVMe over Fabrics. The initiator device can determine a remote direct memory access (RDMA) queue-pair (QP) lookup for use in performing the access command.
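The look-up scheme above can be sketched as follows. The `Redirector` class and its field names are hypothetical illustrations, not the patent's implementation: a namespace ID maps to a (prefix, object size) rule, the offset in an access command selects an object, and the object key resolves to the owning storage node.

```python
class Redirector:
    """Hypothetical look-up table mapping a namespace ID to a
    (prefix, object_size) rule; the initiator resolves a command's
    offset to the storage node owning that object."""

    def __init__(self):
        self.table = {}   # namespace_id -> (prefix, object_size)
        self.owners = {}  # object key -> storage node

    def update(self, namespace_id, prefix, object_size, node_map):
        # Applied when a remote storage node announces new placement.
        self.table[namespace_id] = (prefix, object_size)
        self.owners.update(node_map)

    def resolve(self, namespace_id, offset):
        # Object index = offset // object_size; prefix + index is the key.
        prefix, object_size = self.table[namespace_id]
        key = f"{prefix}{offset // object_size}"
        return self.owners[key]

r = Redirector()
r.update("ns1", "vol0.", 4096, {"vol0.0": "nodeA", "vol0.1": "nodeB"})
node = r.resolve("ns1", 5000)  # offset 5000 falls in object vol0.1
```

When a remote node migrates an object, a single `update` message rewrites the owner map, so the initiator's next `resolve` already targets the new node.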
Methods and apparatuses for non-blocking IP multicast delivery of media data in a multi-spine network
In one illustrative example, an IP network media data router includes a spine-and-leaf switch architecture operative to provide IP multicast delivery of media data from source devices to receiver devices without the overhead of communication with a controller. The architecture can include K spine switches, K sets of L leaf switches, M data links between each spine switch and each leaf switch, and a plurality of bidirectional data ports connected to each leaf switch for guaranteed non-blocking IP multicast delivery of data. A deterministic hash function is used on both the first-hop router and the last-hop router to ensure the same spine node is selected for flow stitching. Accordingly, without extra communication with a centralized controller, the right spine for establishing a multicast flow can be chosen using the deterministic hash function and the distributed resource information stored on each node.
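The key property of the hash-based spine selection above is that any router evaluating the same flow identifier picks the same spine index without coordination. A minimal sketch, assuming the flow is identified by its source and multicast group (the function name and key format are illustrative, not from the patent):

```python
import hashlib

def select_spine(flow_src, flow_group, num_spines):
    """Deterministic flow hash: any node computing this for the same
    (source, multicast group) pair picks the same spine index."""
    key = f"{flow_src}|{flow_group}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_spines

# First-hop and last-hop routers agree with no controller round trip:
fhr_choice = select_spine("10.0.0.5", "239.1.1.1", num_spines=4)
lhr_choice = select_spine("10.0.0.5", "239.1.1.1", num_spines=4)
```

Because both hops compute the same index, the multicast flow is "stitched" through one spine; only the hash inputs and spine count must be consistent cluster-wide.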
PHYSICAL NETWORK ORCHESTRATION FOR DATA CENTERS
A method is provided in one example embodiment and includes creating a segment organization, which includes a configuration profile. The method also includes attaching the configuration profile to a server in the segment organization. The method further includes sending the attached configuration profile to a database in a physical network.
Clos network load balancing method and apparatus
Embodiments of the present disclosure disclose a Clos network load balancing method and apparatus. In certain embodiments, the method includes receiving, by a first switch, a first packet and determining, by the first switch, a third switch, where the third switch is a switch in a second group of switches. The method includes performing, by the first switch, tunnel encapsulation on the first packet, where the destination Internet Protocol (IP) address in the tunnel-encapsulated IP header is an IP address of a second switch. The method further includes performing, by the first switch, IP-in-IP encapsulation on the tunnel-encapsulated first packet, and sending, by the first switch, the IP-in-IP encapsulated first packet.
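The nested encapsulation described above can be sketched with hypothetical dict-based headers (real implementations build binary IP headers; the field names here are illustrative only): the inner tunnel header steers the packet to the second switch, and the outer IP-in-IP header steers it via the third switch.

```python
def tunnel_encapsulate(packet, second_switch_ip):
    """Inner tunnel layer: destination IP is the second switch."""
    return {"outer_dst": second_switch_ip, "payload": packet}

def ipip_encapsulate(tunneled, third_switch_ip):
    """Outer IP-in-IP layer: steers the tunneled packet via the
    third switch in the second group."""
    return {"ipip_dst": third_switch_ip, "payload": tunneled}

packet = {"dst": "10.9.9.9", "data": b"hello"}
t = tunnel_encapsulate(packet, "192.0.2.2")  # second switch
p = ipip_encapsulate(t, "192.0.2.3")         # third switch
```

Each hop peels one layer: the third switch strips the IP-in-IP header and forwards on the inner tunnel destination, which delivers the original packet to the second switch.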
ADDRESS TRANSLATION FOR EXTERNAL NETWORK APPLIANCE
Systems, methods, and computer-readable media relate to providing a network management service. A system is configured to request first network information from a first component of a network using a public IP address for the first component, where the first network information includes a private IP address for a second component in the network, and to translate, based on mapping information from a private IP address space to a public IP address space, the private IP address for the second component to a public IP address for the second component. The system is further configured to request second network information from the second component using that public IP address and to provide a network management service for the network based on the second network information.
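The private-to-public translation step can be sketched with Python's standard `ipaddress` module. The prefix-to-prefix mapping scheme below (preserving the host part across equal-length prefixes) is an illustrative assumption, not the patent's mechanism, and the example prefixes are documentation ranges.

```python
import ipaddress

def translate(private_ip, mapping):
    """Map a private address to its public counterpart using a
    hypothetical private-prefix -> public-prefix table; the host
    part is preserved across prefixes of equal length."""
    addr = ipaddress.ip_address(private_ip)
    for priv_net, pub_net in mapping.items():
        priv = ipaddress.ip_network(priv_net)
        pub = ipaddress.ip_network(pub_net)
        if addr in priv:
            host = int(addr) - int(priv.network_address)
            return str(ipaddress.ip_address(int(pub.network_address) + host))
    raise LookupError(f"no mapping for {private_ip}")

mapping = {"10.1.2.0/24": "203.0.113.0/24"}  # illustrative prefixes
public = translate("10.1.2.7", mapping)
```

With such a table, the management system can learn a component's private address from a peer, translate it, and reach the component through its externally routable address.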
Modular switch and a method for scaling switches
A modular switch, and a method for scaling switches, in which the switch includes (a) first tier switching elements that comprise input/output (IO) ports, and (b) second tier switching elements that are coupled to the first tier switching elements in a non-blocking manner. The first tier switching elements are configured to perform traffic management and substantially all egress and ingress processing of the traffic, where the traffic management comprises load balancing, traffic shaping, and flow-based reordering. The second tier switching elements are configured to (a) provide a shared memory space to the first tier switching elements, (b) perform substantially all of the queuing of traffic, and (c) send, to the first tier switching elements, status information related to the status of the shared memory resources. The first tier switching elements perform the traffic management based, at least in part, on the status information.
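One way to picture the status-driven load balancing above: second-tier elements report free shared-memory, and a first-tier element steers new traffic toward the element with the most headroom. This tiny sketch is an assumption about how the status information might be used, not the patent's algorithm.

```python
def pick_second_tier(status):
    """First-tier element chooses the second-tier element reporting
    the most free shared memory (status is {element: free_bytes})."""
    return max(status, key=status.get)

# Status info periodically sent up by the second tier:
status = {"st0": 4096, "st1": 16384, "st2": 1024}
target = pick_second_tier(status)
```

Richer policies (weighted spreading, hysteresis to avoid flapping) fit the same interface: the first tier only ever consumes the reported status.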
PFC STORM DETECTION AND PROCESSING METHOD
The present disclosure relates to priority flow control (PFC) storm detection and processing methods. In one example method, a first network node performs PFC detection on a first port queue of a first port and determines that a first preset condition is met. The first preset condition is that detection is performed over N consecutive first time segments, and in each first time segment the quantity of first PFC frames sent by the first port queue to a second network node is greater than a first threshold while the quantity of data packets received by the first port queue from the second network node is less than a second threshold. The first PFC frame instructs the second network node to suspend sending all data flows in the first port queue, and N is a positive integer.
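The first preset condition reduces to a check over N consecutive per-segment counter pairs. A minimal sketch, with hypothetical function and parameter names (the patent does not fix concrete threshold values):

```python
def pfc_storm_detected(segments, pfc_threshold, data_threshold):
    """First preset condition: in EVERY one of the N consecutive time
    segments, PFC frames sent exceed pfc_threshold while data packets
    received stay below data_threshold."""
    return all(
        pfc_sent > pfc_threshold and data_rcvd < data_threshold
        for pfc_sent, data_rcvd in segments
    )

# N = 3 segments of (pfc_frames_sent, data_packets_received):
segments = [(120, 2), (150, 0), (130, 1)]
storm = pfc_storm_detected(segments, pfc_threshold=100, data_threshold=5)
```

Requiring both conditions in every segment distinguishes a genuine storm (pause frames keep flowing while the peer has effectively stopped sending data) from a brief congestion burst.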