Patent classifications
H04L49/1515
Data center network with multiplexed communication of data packets across servers
A network system for a data center is described in which a switch fabric provides interconnectivity such that any servers may communicate packet data to any other of the servers using any of a number of parallel data paths. Moreover, according to the techniques described herein, edge-positioned access nodes, permutation devices and core switches of the switch fabric may be configured and arranged in a way such that the parallel data paths provide single L2/L3 hop, full mesh interconnections between any pairwise combination of the access nodes, even in massive data centers having tens of thousands of servers. The access nodes may be arranged within access node groups, and permutation devices may be used within the access node groups to spray packets across the access node groups prior to injection within the switch fabric, thereby increasing the fanout and scalability of the network system.
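The "spraying" of packets across parallel paths described above can be illustrated with a minimal sketch. This is not the patented implementation; it simply shows round-robin assignment of packets to parallel core paths, the way a permutation device might distribute load before injection into the switch fabric (all names here are hypothetical):

```python
from itertools import cycle

def spray_packets(packets, core_paths):
    """Round-robin 'spray' of packets across parallel core paths,
    increasing fanout by using every path in turn."""
    path_cycle = cycle(core_paths)
    return [(pkt, next(path_cycle)) for pkt in packets]

assignments = spray_packets(["p0", "p1", "p2", "p3"], ["path-A", "path-B"])
# p0 and p2 go to path-A, p1 and p3 to path-B
```

A real fabric would also reorder packets at the destination, since spraying a single flow across paths can deliver packets out of order.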
Managed midlay layers on a routed network
Techniques for providing a non-blocking fabric in a network are described. A network controller determines the network requirement for various network traffic types on the network and determines the allocation of resources across the network needed to establish a midlay, including midlay components on the network. The network controller then establishes the midlay on the network according to the determined allocation. At least one of the midlay components is a virtually non-blocking fabric for high-priority traffic or fully non-blocking fabric for deterministic traffic.
METHOD, NODE, AND SYSTEM FOR TRAFFIC TRANSMISSION
A method is applied to a ring link, where the ring link includes a first node, a second node, a third node, and a fourth node in sequence. According to the method, the first node receives first traffic, where the first node is a source node that sends the first traffic on the ring link; and the first node sends the first traffic to the third node, where two reachable paths with equal hop counts are included from the first node to the third node, the first node sends the first traffic to the third node on a preset first transmission path, the first transmission path passes through the second node, and the first transmission path is one of the two reachable paths with equal hop counts. This method can reduce computing load of nodes while implementing non-blocking switching of traffic between the nodes.
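On a four-node ring, the two directions around the ring between opposite nodes have equal hop counts; the abstract's method avoids per-packet path computation by always using a preset one. A small illustrative sketch (node names and helper are assumptions, not from the patent):

```python
def equal_hop_paths(ring, src, dst):
    """Return the clockwise and counter-clockwise paths around a ring.
    Between opposite nodes of a 4-node ring, both have equal hop counts."""
    i, j = ring.index(src), ring.index(dst)
    n = len(ring)
    cw = [ring[(i + k) % n] for k in range((j - i) % n + 1)]
    ccw = [ring[(i - k) % n] for k in range((i - j) % n + 1)]
    return cw, ccw

ring = ["N1", "N2", "N3", "N4"]
cw, ccw = equal_hop_paths(ring, "N1", "N3")
# cw passes through N2, ccw through N4; both are 2 hops
preset_path = cw  # the source node always sends on the preset path via N2
```

Because the path is preset rather than computed per packet, nodes avoid the computing load of dynamic path selection while traffic still switches without blocking.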
Method, Device, and Network System for Load Balancing
A method for implementing load balancing is applied to a 4-node network structure. Every two nodes in the 4-node network structure are interconnected, and the nodes are, e.g., dies. The 4-node network structure includes a source node (SN) and a destination node (DN). According to the method, when the bandwidth occupied by ingress traffic flowing into the SN and destined for the DN is greater than the bandwidth of the fabric side link (FSL) between the SN and the DN, the SN selects at least two transmission paths to send the ingress traffic to the DN; and when the bandwidth occupied by the ingress traffic is less than or equal to the bandwidth of the FSL, the SN transmits the ingress traffic on the direct link between the SN and the DN.
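The decision rule in the abstract reduces to a simple bandwidth comparison. The sketch below is a hedged illustration (the path names and the split strategy are assumptions; the patent does not specify how traffic is divided across detour paths):

```python
def select_paths(ingress_bw, fsl_bw, direct_link, detour_paths):
    """If the ingress bandwidth fits on the direct fabric-side link,
    use only that link; otherwise spread across direct + detour paths."""
    if ingress_bw <= fsl_bw:
        return [direct_link]
    return [direct_link] + detour_paths

# Hypothetical 4-node full mesh: SN reaches DN directly, or via N2 / N3.
paths = select_paths(ingress_bw=150, fsl_bw=100,
                     direct_link="SN-DN",
                     detour_paths=["SN-N2-DN", "SN-N3-DN"])
# 150 > 100, so traffic is sent on the direct link plus both detours
```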
INTERIOR GATEWAY PROTOCOL FLOODING OPTIMIZATION METHOD AND DEVICE, AND STORAGE MEDIUM
Provided are an interior gateway protocol flooding optimization method and device, and a storage medium. The method for optimizing flooding of an interior gateway protocol comprises: flooding, by a first node, a first packet carrying link state data and first record information to at least one neighboring node, wherein the first record information comprises indication information of the nodes that the link state data has passed through. By carrying indication information of the nodes that the link state data has passed through, a node can use that information to reduce redundant sending of link state information, thereby accelerating convergence.
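The suppression idea can be sketched in a few lines: forward the link-state data only to neighbors not already listed in the record carried in the packet. This is an illustrative simplification (the topology and function names are assumptions, not the patent's encoding):

```python
def flood(node, lsa, record, neighbors):
    """Forward link-state data only to neighbors it has not already
    passed through, per the record information carried in the packet."""
    targets = [n for n in neighbors[node] if n not in record]
    # each outgoing copy carries an updated record of visited nodes
    return [(t, lsa, record | {node, t}) for t in targets]

neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
sends = flood("B", "lsa-1", {"A", "B"}, neighbors)
# B forwards only to C: the record shows the LSA already visited A
```

Without the record, B would echo the LSA back toward A, which is exactly the redundant transmission the method avoids.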
Data communication method and apparatus
The invention provides a data communication method, including: sending, by a first electrical node, request information to a second electrical node, where the request information is used to request an expected data volume quota of a first VOQ, and the first VOQ stores at least one first data packet to be sent to the second electrical node; receiving response information, where the response information includes a target data volume quota; and sending the at least one first data packet to the second electrical node via at least one optical node based on the target data volume quota.
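The request/grant handshake around a virtual output queue (VOQ) can be sketched as follows. This is a minimal assumed model, not the patented protocol: the requested quota is simply the queued byte count, and the grant caps how many bytes may be drained:

```python
from collections import deque

class VOQ:
    """Virtual output queue holding packets destined for one node."""
    def __init__(self):
        self.queue = deque()

def request_quota(voq):
    # Expected data volume: total bytes currently queued in the VOQ.
    return sum(len(p) for p in voq.queue)

def send_with_quota(voq, granted):
    """Drain whole packets until the granted data-volume quota is used up."""
    sent, used = [], 0
    while voq.queue and used + len(voq.queue[0]) <= granted:
        pkt = voq.queue.popleft()
        used += len(pkt)
        sent.append(pkt)
    return sent

voq = VOQ()
voq.queue.extend([b"x" * 40, b"y" * 40, b"z" * 40])
granted = min(request_quota(voq), 100)  # destination grants at most 100 bytes
burst = send_with_quota(voq, granted)   # only the first two 40-byte packets fit
```

Granting quotas per VOQ lets the destination pace senders so the optical fabric is not oversubscribed.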
Methods and apparatus related to virtualization of data center resources
In one embodiment, an apparatus includes a switch core that has a multi-stage switch fabric. A first set of peripheral processing devices is coupled to the multi-stage switch fabric by a set of connections that have a protocol. Each peripheral processing device from the first set of peripheral processing devices is a storage node that has virtualized resources. The virtualized resources of the first set of peripheral processing devices collectively define a virtual storage resource interconnected by the switch core. A second set of peripheral processing devices is coupled to the multi-stage switch fabric by a set of connections that have the protocol. Each peripheral processing device from the second set of peripheral processing devices is a compute node that has virtualized resources. The virtualized resources of the second set of peripheral processing devices collectively define a virtual compute resource interconnected by the switch core.
Automatic network assembly
Some examples provide a method for automatic network assembly in a modular infrastructure. The method may be implemented by instructions to automatically connect a management port to a management network, instructions to automatically connect link ports to form a scalable ring, and instructions to automatically connect each modular infrastructure management device to a bay management network port.
Host routed overlay with deterministic host learning and localized integrated routing and bridging
Systems, methods, and devices for improved routing operations in a network computing environment. A system includes a virtual customer edge router and a host routed overlay comprising a plurality of host virtual machines. The system includes a routed uplink from the virtual customer edge router to one or more of a plurality of leaf nodes. The system is such that the virtual customer edge router is configured to provide localized integrated routing and bridging (IRB) service for the plurality of host virtual machines of the host routed overlay.