Network interconnect as a switch
11671330 · 2023-06-06
Assignee
Inventors
CPC classification
H04L12/4625
ELECTRICITY
H04L49/1507
ELECTRICITY
H04L45/122
ELECTRICITY
H04Q11/0071
ELECTRICITY
International classification
H04L45/122
ELECTRICITY
Abstract
An interconnect as a switch module (“ICAS” module) comprising n port groups, each port group comprising n-1 interfaces, and an interconnecting network implementing a full mesh topology in which each interface of each port group connects to an interface of one of the other port groups, respectively. The ICAS module may be optically or electrically implemented. According to the embodiments, the ICAS module may be used to construct a stackable switching device and a multi-unit switching device, to replace a data center fabric switch, and to build a new, highly efficient, and cost-effective data center.
Claims
1. A data center network, comprising: a plurality of network pods, a plurality of first spine planes, and a plurality of second spine planes; wherein each of the plurality of network pods comprises a first ICAS module and a plurality of first layer switches; wherein either the plurality of first spine planes or the plurality of second spine planes is deployed as a data center design option; wherein the first ICAS module comprises: n port groups, each port group comprising n-1 interfaces, wherein n is an integer equal to or larger than 3; an interconnecting network implementing a full mesh topology, wherein each of the n port groups connects one of the n-1 interfaces to another of the n port groups statically, respectively; wherein the n port groups are indexed with an integer from 0 to n-1; wherein the n-1 interfaces of each of the n port groups are labeled with the same indexes as those of the connected port groups; wherein an interface with index j of one of the n port groups with index i is connected to an interface with index i of one of the n port groups with index j, where i is in the range of 0 to n-1, j is in the range of 0 to n-1, wherein i does not equal j, and wherein the interconnecting network comprises all connections between the n port groups; wherein interfaces of each of the plurality of first layer switches are configured to be grouped into n port groups, each being one of an intralink and an interlink, respectively; wherein a number of the plurality of first layer switches is n, and wherein the plurality of first layer switches is indexed with an integer from 0 to n-1; wherein the plurality of first spine planes and the network pods are interconnected through interlinks; wherein interlinks k of the network pods each connect to one of the interlinks of the kth one of the plurality of first spine planes, respectively, and wherein the interlinks p of the plurality of first spine planes each connect to one of the interlinks of the pth network pod, respectively; a plurality of downlinks to receive and transmit data signals to and from a plurality of servers; wherein the plurality of second spine planes and the plurality of network pods are interconnected through interlinks; wherein interlinks k of the network pods each connect to one of the interlinks of the kth second spine plane, respectively, and wherein the interlinks p of the second spine planes each connect to one of the interlinks of the pth network pod, respectively; and a plurality of downlinks to receive and transmit data signals to and from a plurality of servers.
2. The data center network of claim 1, wherein the network pod further comprises: a plurality of first layer switches whose interfaces are divided into downlink interfaces, interlink interfaces and intralink interfaces, wherein the downlink interfaces are configured to receive and transmit data signals to and from a plurality of servers, wherein the interlink interfaces of each of the plurality of first layer switches are configured into n port groups of interlinks, and wherein the intralink interfaces of each of the plurality of first layer switches are configured into n port groups of intralinks.
3. The data center network of claim 1, wherein interfaces of second layer devices of the first ICAS module are divided into intralink interfaces and uplink interfaces, wherein the intralink interfaces of the first ICAS module are grouped into the n port groups to connect to the intralink interfaces of the corresponding n port groups of the plurality of first layer switches, and wherein the uplink interfaces are configured to connect to an external network.
4. The data center network of claim 1, wherein each of the plurality of the first spine planes comprises a fanout cable transpose rack, wherein the fanout cable transpose rack comprises: k groups of first fiber adapters, each adapter of the k groups of the first fiber adapters comprising m interfaces, wherein the k groups of the first fiber adapters connect to corresponding ones of k switches through k groups of first fiber cables, wherein the k groups of the first fiber adapters also connect to a fiber adapter mounting panel by k groups of first fanout fiber cables, wherein each of a group of ┌p/m┐ first fiber adapters connects to a corresponding group of ┌p/m┐ fiber adapters of each switch by ┌p/m┐ first fiber cables, wherein each of the group of ┌p/m┐ first fiber adapters connects to the fiber adapter mounting panel by a group of ┌p/m┐ first fanout fiber cables, wherein ┌┐ is a ceiling function; and p groups of second fiber adapters, each adapter of the p groups of the second fiber adapters comprising m interfaces, wherein the p groups of the second fiber adapters connect to p groups of second fiber cables to form p groups of interlinks, wherein the p groups of the second fiber adapters also connect to the fiber adapter mounting panel by p groups of second fanout fiber cables, wherein each of a group of ┌k/m┐ second fiber adapters connects to a group of ┌k/m┐ second fiber cables to form an interlink, wherein each of the groups of the ┌k/m┐ second fiber adapters connects to the fiber adapter mounting panel by a group of ┌k/m┐ second fanout fiber cables, wherein ┌┐ is a ceiling function; wherein the k groups of the first fanout fiber cables and the p groups of the second fanout fiber cables are cross-connected on the fiber adapter mounting panel, wherein, through the cross-connection, connections from the k switches are grouped into p interlinks, each interlink containing one connection from each of the k switches, with a total of k connections per interlink; and a plurality of third layer switches.
5. The data center network of claim 4, wherein the fanout cable transpose rack connects to the plurality of third layer switching devices through a plurality of fiber cables; and wherein the fanout cable transpose rack comprises a plurality of interlinks in the fanout cable transpose rack to connect to a plurality of network pods; wherein connections from the plurality of third layer switching devices are grouped into the plurality of interlinks in the fanout cable transpose rack through the fanout cable transpose rack.
6. The data center network of claim 5, wherein each of the plurality of interlinks in the fanout cable transpose rack contains one connection from each one of the third layer switching devices, with each interlink in the fanout cable transpose rack having a number of connections, wherein the number equals a number of the third layer switching devices.
7. The data center network of claim 4, wherein the kth spine plane interconnects the plurality of first layer switches of each of the network pods through the kth interlinks, working under a full mesh connection, and has the characteristics of a network with an (n-1, n) bipartite graph, wherein the network with the (n-1, n) bipartite graph comprises the predetermined number n of the plurality of first layer switches and the predetermined number minus one (n-1) of third layer spine switches, and wherein the plurality of first layer switches and the third layer spine switches interconnect in a Clos topology.
8. The data center network of claim 1, wherein each of the plurality of second spine planes further comprises a second ICAS module; wherein the second ICAS module comprises: n port groups, each port group comprising n-1 interfaces, wherein n is an integer equal to or larger than 3; an interconnecting network implementing a full mesh topology, wherein each of the n port groups connects one of the n-1 interfaces to another of the n port groups statically, respectively; wherein the n port groups are indexed with an integer from 0 to n-1; and wherein the n-1 interfaces of each of the n port groups are labeled with the same indexes as those of the connected port groups; wherein an interface with index j of one of the n port groups with index i is connected to an interface with index i of one of the n port groups with index j, where i is in the range of 0 to n-1, j is in the range of 0 to n-1, wherein i does not equal j, and wherein the interconnecting network comprises all connections between the n port groups.
9. The data center network of claim 8, wherein the second ICAS module comprises interfaces divided into interlink interfaces and uplink interfaces, wherein the interlink interfaces of the second ICAS module are grouped into a plurality of n port groups, wherein each of the plurality of n port groups of the second ICAS module connects to the corresponding one of the n port groups of the first layer switches of each of the plurality of the network pods; and wherein the uplink interfaces are configured to connect to an external network.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(27) To facilitate cross-referencing among the figures and to simplify the detailed description, like elements are assigned like reference numerals.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(28) The present invention simplifies the network architecture by eliminating the switches in the fabric layer based on a new fabric topology, referred to herein as the “interconnect-as-a-switch” (ICAS) topology. The ICAS topology of the present invention is based on the “full mesh” topology. In a full mesh topology, each node is connected to all other nodes. An example of a 9-node full mesh topology is illustrated in
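For orientation, the link count of a full mesh follows directly from this definition. The figures below are a worked illustration for the 9-node example, not a limitation of the embodiments:

$$
L = \binom{n}{2} = \frac{n(n-1)}{2}, \qquad n = 9 \;\Rightarrow\; L = \frac{9 \times 8}{2} = 36 \text{ links},
$$

with each node terminating n-1 = 8 links.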
(29) As discussed in further detail below, the ICAS topology enables a data center network that is far superior to a network of the fat tree topology used in prior art data center networks. Unlike other network topologies, the ICAS topology imposes a structure on the network which reduces congestion to a large extent. According to one embodiment, the present invention provides an ICAS module as a component for interconnecting communicating devices.
(32) The internal interconnection between the port groups of the ICAS module can be realized via an optical media to achieve a full mesh structure. The optical media may be an optical fiber and/or 3D MEMS. The 3D MEMS uses a controllable micro-mirror to create an optical path to achieve a full mesh structure. In both of these implementations MPO connectors are used. Alternatively, the ICAS module may also be electrically implemented using circuits. In this manner, the port groups of the ICAS module are soldered or crimped onto a PCB using connectors that support high-speed differential signals and impedance matching. The interconnection between the port groups is implemented using a copper differential pair on the PCB. Since signal losses significantly vary between different grades of high-speed differential connectors and between copper differential pairs on different grades of PCBs, an active chip is usually added at the back end of the connector to restore and enhance the signal to increase the signal transmission distance on the PCB. Housing the ICAS module in a 1U to multi-U rackmount chassis will form a 1U to multi-U interconnection device. The ICAS-based interconnection devices are then interconnected with switching devices to form a full mesh non-blocking network. This novel network will be explained in detail hereunder in a plurality of embodiments. When the ICAS module of the 1U to multi-U interconnection device is optically implemented (based on optical fiber and 3D MEMS), MPO-MPO cables are used to connect the ICAS-based interconnection devices and the switching devices. When the ICAS module of the 1U to multi-U interconnection device is electrically implemented as circuits (based on PCB+chip), DAC direct connection cables or AOC active optical cables are used to connect the ICAS-based interconnection devices and the switching devices.
(33) As switching in ICAS module 400 is achieved passively by its connectivity, no power is dissipated in performing the switching function. Typical port group-to-port group delay through an ICAS passive switch is around 10 ns (e.g., 5 ns/meter, for an optical fiber), making it very desirable for a data center application, or for big data, AI and HPC environments.
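As a rough worked example (assuming the quoted propagation figure and an internal optical path on the order of two meters, both purely illustrative):

$$
t_{\text{port group-to-port group}} \approx 5\ \text{ns/m} \times 2\ \text{m} = 10\ \text{ns}.
$$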
(34) The indexing scheme of external-to-internal connectivity in ICAS module 400 of
(35) TABLE 2

                     Index of External Interface
ICAS Port Group     0    1    2    3    4    5    6    7
0                   1    2    3    4    5    6    7    8
1                   0    2    3    4    5    6    7    8
2                   0    1    3    4    5    6    7    8
3                   0    1    2    4    5    6    7    8
4                   0    1    2    3    5    6    7    8
5                   0    1    2    3    4    6    7    8
6                   0    1    2    3    4    5    7    8
7                   0    1    2    3    4    5    6    8
8                   0    1    2    3    4    5    6    7
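The mapping in Table 2 is mechanical, so a short sketch can reproduce it. The following Python fragment is illustrative only (the function names are not part of the specification): external interface c of port group i is simply the c-th entry of the remaining port groups listed in ascending order, and internally that strand is the interface labeled j on port group i and the interface labeled i on port group j.

```python
def icas_wiring(n=9):
    """Full-mesh wiring of an n-port-group ICAS module (reproduces Table 2).

    Returns a dict mapping (port_group, external_interface) -> peer port group.
    External interfaces of each port group are numbered 0..n-2; the peers are
    all other port groups in ascending order.
    """
    wiring = {}
    for i in range(n):
        peers = [j for j in range(n) if j != i]       # row i of Table 2
        for c, j in enumerate(peers):
            wiring[(i, c)] = j
    return wiring


def internal_labels(i, j):
    """Internal labeling rule: interface j of port group i meets interface i of port group j."""
    return (i, j), (j, i)


if __name__ == "__main__":
    w = icas_wiring(9)
    print([w[(0, c)] for c in range(8)])   # row 0 of Table 2: [1, 2, 3, 4, 5, 6, 7, 8]
    print([w[(5, c)] for c in range(8)])   # row 5 of Table 2: [0, 1, 2, 3, 4, 6, 7, 8]
```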
(37) As illustrated in
(38) In full mesh topology network 500, the interfaces of each TOR switch are regrouped into port groups, such that each port group contains 8 interfaces. To illustrate this arrangement, port group 2 of each TOR switch connects to ICAS module 510. As each TOR switch has a dedicated path through ICAS module 510 to each of the other TOR switches, no congestion can result from two or more flows from different source switches being routed to the same port of the destination switch (the “Single-Destination-Multiple-Source Traffic Aggregation” case). In that case, for example, when TOR switches 51-1 to 51-8 each have a 10-G data flow destined for TOR switch 51-0, all the flows are routed on paths through their respective interfaces. Table 3 summarizes the separate designated paths:
(39) TABLE 3

Source         ICAS Source Internal    ICAS Destination Internal    Destination
T1.p2.c0   →   ICAS2.p1.c0         →   ICAS2.p0.c1              →   T0.p2.c0
T2.p2.c0   →   ICAS2.p2.c0         →   ICAS2.p0.c2              →   T0.p2.c1
T3.p2.c0   →   ICAS2.p3.c0         →   ICAS2.p0.c3              →   T0.p2.c2
T4.p2.c0   →   ICAS2.p4.c0         →   ICAS2.p0.c4              →   T0.p2.c3
T5.p2.c0   →   ICAS2.p5.c0         →   ICAS2.p0.c5              →   T0.p2.c4
T6.p2.c0   →   ICAS2.p6.c0         →   ICAS2.p0.c6              →   T0.p2.c5
T7.p2.c0   →   ICAS2.p7.c0         →   ICAS2.p0.c7              →   T0.p2.c6
T8.p2.c0   →   ICAS2.p8.c0         →   ICAS2.p0.c8              →   T0.p2.c7
(40) In other words, in Table 3, the single-connection data between first layer switch i connected to the port group with index i and first layer switch j connected to the port group with index j is directly transmitted through the interface with index j of the port group with index i and the interface with index i of the port group with index j.
(41) In Table 3 (as well as in all Tables herein), the switch source and the switch destination are each specified by 3 values: Ti.pj.ck, where Ti is the TOR switch with index i, pj is the port group with index j and ck is the interface with index k. Likewise, the source interface and destination interface in ICAS module 500 are also each specified by 3 values: ICASj.pi.ck, where ICASj is the ICAS module with index j, pi is the port group with index i and ck is the internal or external interface with index k.
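The dedicated paths of Table 3 can likewise be generated from the notation just described. The sketch below is illustrative (helper names such as direct_path are hypothetical, not from the specification); it assumes the TOR-side c index is positional while the ICAS-side c index is the internal label of the connected port group, as in the tables.

```python
def peers(i, n=9):
    """Port groups other than i, ascending; the position equals the TOR-side interface number."""
    return [j for j in range(n) if j != i]


def direct_path(src, dst, g=2, n=9):
    """Dedicated one-hop path from TOR src to TOR dst through ICAS module g
    (the module attached to port group g of every TOR), in table notation."""
    return (f"T{src}.p{g}.c{peers(src, n).index(dst)}",
            f"ICAS{g}.p{src}.c{dst}",
            f"ICAS{g}.p{dst}.c{src}",
            f"T{dst}.p{g}.c{peers(dst, n).index(src)}")


if __name__ == "__main__":
    for s in range(1, 9):                 # reproduces the eight rows of Table 3
        print(" -> ".join(direct_path(s, 0)))
```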
(42) By convention, the ICAS module whose port groups connect to port group i of all the TOR switches is labeled ICASi, i.e., with index i.
(43) Congestion can also be avoided in full mesh topology network 500 with a suitable routing method, even when a source switch receives a large burst of aggregated data (e.g., 80 Gbits per second) from all its connected servers to be routed to the same destination switch (the “Port-to-Port Traffic Aggregation” case). In this case, it is helpful to imagine the TOR switches as consisting of two groups: the source switch i and the rest of the switches 0 to i−1 and i+1 to 8. The rest of the switches are herein collectively referred to as the “fabric group”. Suppose TOR switch 51-1 receives 80 Gbits per second (e.g., 8 10G flows) from all its connected servers, all destined for TOR switch 51-0. The routing method for the Port-to-Port Traffic Aggregation case allocates the aggregated traffic across the 8 10G interfaces of port group 2 of TOR switch 51-1, as in
(44) TABLE 4A

Source         ICAS Source Internal    ICAS Destination Internal    Destination
T1.p2.c0   →   ICAS2.p1.c0         →   ICAS2.p0.c1              →   T0.p2.c0
T1.p2.c1   →   ICAS2.p1.c2         →   ICAS2.p2.c1              →   T2.p2.c1
T1.p2.c2   →   ICAS2.p1.c3         →   ICAS2.p3.c1              →   T3.p2.c1
T1.p2.c3   →   ICAS2.p1.c4         →   ICAS2.p4.c1              →   T4.p2.c1
T1.p2.c4   →   ICAS2.p1.c5         →   ICAS2.p5.c1              →   T5.p2.c1
T1.p2.c5   →   ICAS2.p1.c6         →   ICAS2.p6.c1              →   T6.p2.c1
T1.p2.c6   →   ICAS2.p1.c7         →   ICAS2.p7.c1              →   T7.p2.c1
T1.p2.c7   →   ICAS2.p1.c8         →   ICAS2.p8.c1              →   T8.p2.c1
(45) Note that the data routed to TOR switch 51-0 has arrived at its destination and therefore would not be routed further. Each TOR switch in the fabric group, other than TOR switch 51-0, then allocates its interface 0 for forwarding its received data to TOR switch 51-0 (Table 4B):
(46) TABLE 4B

Source         ICAS Source Internal    ICAS Destination Internal    Destination
—          →   —                   →   —                        →   —
T2.p2.c0   →   ICAS2.p2.c0         →   ICAS2.p0.c2              →   T0.p2.c1
T3.p2.c0   →   ICAS2.p3.c0         →   ICAS2.p0.c3              →   T0.p2.c2
T4.p2.c0   →   ICAS2.p4.c0         →   ICAS2.p0.c4              →   T0.p2.c3
T5.p2.c0   →   ICAS2.p5.c0         →   ICAS2.p0.c5              →   T0.p2.c4
T6.p2.c0   →   ICAS2.p6.c0         →   ICAS2.p0.c6              →   T0.p2.c5
T7.p2.c0   →   ICAS2.p7.c0         →   ICAS2.p0.c7              →   T0.p2.c6
T8.p2.c0   →   ICAS2.p8.c0         →   ICAS2.p0.c8              →   T0.p2.c7
(47) In other words, multi-connection data between first layer switch i, connected to the port group with index i, and first layer switch j, connected to the port group with index j, is transmitted through first layer switches connected to at least one port group other than the port group with the source index. Multi-connection data arriving at the destination switch ceases to be routed further.
(48) To put it more precisely, the multi-connection data transmission between first layer switch i, connected to the port group with index i, and first layer switch j, connected to the port group with index j, comprises two stages: as in Table 4A, first layer switch i transmits over the interfaces of its port group to the plurality of first layer switches with the corresponding indexes; as in Table 4B, those first layer switches then forward the data, via the interfaces with index j of their port groups, to the corresponding interfaces of the port group with index j of first layer switch j. Transmissions that have already arrived at the destination switch stop being routed.
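A minimal sketch of the two-stage spreading described above (again with hypothetical helper names; flow control and load measurement are omitted). Stage 1 reproduces Table 4A and stage 2 reproduces Table 4B:

```python
def peers(i, n=9):
    return [j for j in range(n) if j != i]


def direct_path(src, dst, g=2, n=9):
    # One-hop path src -> dst through ICAS module g, as in the earlier sketch.
    return (f"T{src}.p{g}.c{peers(src, n).index(dst)}",
            f"ICAS{g}.p{src}.c{dst}",
            f"ICAS{g}.p{dst}.c{src}",
            f"T{dst}.p{g}.c{peers(dst, n).index(src)}")


def spread_routes(src, dst, g=2, n=9):
    """Port-to-Port Traffic Aggregation routing.

    Stage 1 (Table 4A): the source fans one flow out to every other TOR
    (the "fabric group") over the n-1 interfaces of its port group.
    Stage 2 (Table 4B): every fabric-group TOR other than the destination
    forwards its received flow to the destination; the flow that already
    reached the destination in stage 1 is not routed further.
    """
    stage1 = [direct_path(src, p, g, n) for p in peers(src, n)]
    stage2 = [direct_path(p, dst, g, n) for p in peers(src, n) if p != dst]
    return stage1, stage2


if __name__ == "__main__":
    table_4a, table_4b = spread_routes(src=1, dst=0)
    for hop in table_4a + table_4b:
        print(" -> ".join(hop))
```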
(49) Thus, the full mesh topology network of the present invention provides performance that is in stark contrast to prior art network topologies (e.g., fat tree), in which congestion in the fabric switches cannot be avoided under the Single-Destination-Multiple-Source Traffic Aggregation and Port-to-Port Traffic Aggregation cases.
(50) Also, as discussed above, when TOR switches 51-0 to 51-8 abide by the rule m≥2n-2, where m is the number of network-side interfaces (e.g., the interfaces within a port group facing ICAS module 500) and n is the number of the TOR switch's input interfaces (e.g., interfaces to the servers within the data center), a strict blocking condition is avoided. In other words, a static path is available between any pair of input interfaces under any traffic condition. Avoiding such a blocking condition is essential in a circuit-switched network, but is not necessarily significant in a flow-based switched network.
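As a numerical illustration of the rule (the server-side count of 8 is illustrative only, not taken from the specification): a TOR switch with n = 8 server-facing interfaces satisfies the condition when

$$
m \ge 2n - 2 = 2 \times 8 - 2 = 14
$$

network-side interfaces are provided.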
(51) In the full mesh topology network 500 of
(52) In full mesh topology network 500, uniform traffic may be spread out to the fabric group and then forwarded to its destination. In network 620 of
(53) The inventor of the present invention investigated in detail the similarities and the differences between the full mesh topology of the present invention and other network topologies, such as the fat tree topology in the data center network of
(54) The inventor discovered that an n-node full mesh graph is embedded in a fabric-leaf network represented by a bipartite graph with (n-1, n) nodes (i.e., a network with n-1 fabric nodes and n TOR switch leaves).
(55) This discovery leads to the following rather profound results: (a) An n-node full mesh graph is embedded in an (n-1, n)-bipartite graph; and the (n-1, n) bipartite graph and the data center Fabric/TOR topology have similar connectivity characteristics; (b) A network in the (n-1, n) Fabric/TOR topology (i.e., with n-1 fabric switches and n TOR switches) can operate with the same connectivity characteristics as a network with a full mesh topology (e.g., network 500 of
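The connectivity equivalence in result (b) can be checked mechanically for small cases. The sketch below is illustrative (the names are not from the specification): it builds the (n-1, n) fabric/leaf network and enumerates, for every pair of TOR switches, the n-1 candidate two-hop paths, matching the 36 node pairs of the 9-node full mesh.

```python
from itertools import combinations


def fabric_tor_paths(n=9):
    """(n-1, n) bipartite fabric/leaf network: fabric switches F0..F(n-2) and
    TOR switches T0..T(n-1), with every fabric switch connected to every TOR.
    Returns, for each TOR pair, the list of two-hop paths Ti -> Fk -> Tj."""
    fabrics = [f"F{k}" for k in range(n - 1)]
    return {(i, j): [(f"T{i}", f, f"T{j}") for f in fabrics]
            for i, j in combinations(range(n), 2)}


if __name__ == "__main__":
    paths = fabric_tor_paths(9)
    assert len(paths) == 9 * 8 // 2                   # 36 pairs, as in a 9-node full mesh
    assert all(len(p) == 8 for p in paths.values())   # n-1 = 8 fabric paths per pair
    print(f"{len(paths)} TOR pairs, {len(next(iter(paths.values())))} two-hop paths each")
```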
(56) In the following, a data center network that incorporates ICAS modules in place of fabric switches may be referred to as an “ICAS-based” data center network. An ICAS-based data center network has the following advantages: (a) less costly, as fabric switches are not used; (b) lower power consumption, as ICAS modules are passive; (c) less congestion; (d) lower latency; (e) effectively fewer network layers (2 hops less for inter-pod traffic; 1 hop less for intra-pod traffic); (f) greater scalability as a data center network.
(57) These results may be advantageously used to improve typical state-of-the-art data center networks.
(59) Details of a spine plane of
(60) Details of a server pod of
(61) The data traffic through the fabric switches is primarily limited to intra-pod traffic. The TOR switches now route both the intra-pod traffic and the inter-pod traffic and are more complex. The independent link types achieve massive scalability in data center network implementations. (Additional independent links, provided by a higher-radix switching ASIC, may be created to achieve larger-scale connectivity objectives.) Additionally, data center network 800 incorporates the full mesh topology concept (without physically incorporating an ICAS module) to remove redundant network devices and allow the use of innovative switching methods, in order to achieve a “lean and mean” data center fabric with improved data traffic characteristics.
(62) As shown in
(63) As the network in the intra-pod region of each server pod can operate with the same connectivity characteristics as a full mesh topology network, all the 20 fabric switches of the server pod may be replaced by an ICAS module. ICAS-based data center network 900, resulting from substituting fabric switches 83-0 to 83-19 of data center network 800, is shown in
(65) Details of a spine plane of
(66) That is, on one side of the fanout cable transpose rack 921 are k first port groups 923, each first port group having ┌p/m┐ first MPO adapters, where ┌┐ is a ceiling function; each port group connects to a corresponding port group of a spine switch through ┌p/m┐ first MPO-MPO cables. On the other side of the fanout cable transpose rack 921 are p second port groups 924, each second port group having ┌k/m┐ second MPO adapters, where ┌┐ is a ceiling function; each port group connects to 5 second MPO-MPO cables to form an interlink to the ICAS pod.
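To make the ceiling-function bookkeeping concrete, the sketch below uses illustrative numbers (k = 20 spine switches, m = 4 fiber pairs per MPO adapter, p = 47 pod-facing interlinks; none of these values are mandated by the specification, although m = 4 and ┌k/m┐ = 5 are consistent with the five MPO-MPO cables per interlink mentioned above). It also shows the essential cross-connection: the rack regroups the k × p connections so that each pod-facing interlink carries exactly one connection from every spine switch.

```python
import math


def transpose_rack(k=20, p=47, m=4):
    """Fanout cable transpose rack bookkeeping (illustrative values only).

    Switch side: k port groups, each with ceil(p/m) MPO adapters.
    Pod side: p interlinks, each with ceil(k/m) MPO adapters.
    The panel cross-connects connection (switch s, pod q) into interlink q,
    so each interlink carries one connection from every one of the k switches.
    """
    adapters_per_switch_group = math.ceil(p / m)
    adapters_per_interlink = math.ceil(k / m)
    interlinks = {q: [(s, q) for s in range(k)] for q in range(p)}
    return adapters_per_switch_group, adapters_per_interlink, interlinks


if __name__ == "__main__":
    a_sw, a_il, interlinks = transpose_rack()
    print(a_sw, "first MPO adapters per switch-side port group")   # ceil(47/4) = 12
    print(a_il, "second MPO adapters (cables) per interlink")      # ceil(20/4) = 5
    assert all(len(conns) == 20 for conns in interlinks.values())  # one per spine switch
```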
(67) As pointed out earlier in this detailed description, the state-of-the-art data centers and switch silicon are designed with 4 interfaces (TX, RX) at 10 Gb/s or 25 Gb/s each per port in mind. Switching devices are interconnected at the connection level in an ICAS-based data center. In such a configuration, a QSFP cable coming out of a QSFP transceiver is separated into 4 interfaces, and 4 interfaces from different QSFP transceivers are combined in a QSFP cable for connecting to another QSFP transceiver. Also, a spine plane may interconnect a large and varying number of ICAS pods (e.g., in the hundreds) because of the scalability of an ICAS-based data center network. Such a cabling scheme is more suitably organized in a fanout cable transpose rack (e.g., fanout cable transpose rack 921), which may consist of one or multiple racks and be integrated into the spine planes. Specifically, the spine switches and the TOR switches may each connect to the fanout cable transpose rack with QSFP straight cables. Such an arrangement simplifies the cabling in a data center.
(68) In the embodiment shown in
(69) Details of an ICAS pod of
(70) The data traffic through the ICAS module is primarily limited to intra-pod traffic. The TOR switches now perform routing for the intra-pod traffic as well as the inter-pod traffic and are more complex. The independent link types achieve massive scalability in data center network implementations. (Additional independent links, provided by a higher-radix switching ASIC, may be created to achieve larger-scale connectivity objectives.)
(71) As shown in
(72) Together, the ICAS pods and the spine planes form a modular network topology capable of accommodating hundreds of thousands of 10G-connected servers, scaling to multi-petabit bisection bandwidth, and providing a data center with improved congestion characteristics and non-oversubscribed rack-to-rack performance.
(73) According to one embodiment of the present invention, a spine switch can be implemented using a high-radix (e.g., 240×10G) single chip switching device, as shown in
(74) To overcome the limitation on the port count of the silicon chip, one or more 1U to multi-U rackmount chassis each packaged with one or more ICAS modules, and a plurality of 1U rackmount chassis each packaged with one or more switching devices, can be stacked up in one or more racks, interconnected, to form a higher-radix (i.e. high network port count) stackable spine switching device (e.g., ICAS-based stackable switching device). Each ICAS module is connected to the plurality of switching devices, such that the ICAS module interconnects at least some interfaces of at least some port groups of different switching devices to form a full mesh non-blocking interconnection. The interfaces of the rest of the at least some port groups for interconnecting different switching devices are configured as an uplink. When the ICAS-module-based 1U to multi-U rackmount chassis are optically implemented (based on optical fiber and 3D MEMS), MPO-MPO cables may be used to connect the ICAS-based interconnection devices and the switching devices. When the ICAS-module-based 1U to multi-U rackmount chassis are electrically implemented as circuits (based on PCB+chip), DAC direct connection cables or AOC active optical cables may be used to connect the ICAS-based interconnection devices and the switching devices.
(75) Details of an ICAS-based stackable switching device 950 are shown in
(76) An ICAS-based stackable switching device offers improved network congestion and greater cost, power consumption and space savings than the switching devices implemented in state-of-the-art data centers. As shown in the “ICAS + Stackable Chassis” column of Table 5, a data center with ICAS modules and ICAS-based stackable switching devices performs remarkably, reducing the total switching ASIC count by 53.5%, total power consumption by 26.0% and total rack space by 25.6%, with much improved network congestion; however, total QSFP transceiver usage increases by 2.3%.
(77) The above stackable switching device is provided for illustrative purposes. A person skilled in the art can easily expand the scalability of the stackable switching device, which should not be limited to the illustration.
(78) The stackable switching device addresses the insufficiency in the number of ports of a network switching chip, thus making possible a flexible network configuration. However, a considerable number of connecting cables and conversion modules have to be used to interconnect the ICAS-based interconnection devices and the switching devices. To further reduce the use of cables and conversion modules, ICAS modules and switch chips can be electronically interconnected using a PCB and connectors, which is exactly how the multi-unit switching device is structured. Specifically, the ICAS module of the ICAS-based multi-unit switching device is electrically implemented as circuits, and the port groups of the ICAS module are soldered or crimped onto a PCB using connectors that support high-speed differential signals and impedance matching. The interconnection between the internal port groups is realized using copper differential pairs on the PCB. Since signal losses vary significantly between different grades of high-speed differential connectors and between copper differential pairs on different grades of PCBs, an active chip can be added at the back end of the connector to restore and enhance the signal to increase the signal transmission distance on the PCB. The ICAS module of the ICAS-based multi-unit switching device may be implemented on a PCB called a fabric card, or on a PCB called a backplane. The copper differential pairs on the PCB interconnect the high-speed differential connectors on the PCB to form full mesh connectivity in the ICAS architecture. The switch chips and related circuits are soldered onto a PCB called a line card, which is equipped with a high-speed differential connector docking to the adapter on the fabric card. A multi-U chassis of the ICAS-based multi-unit switching device includes a plurality of ICAS fabric cards, a plurality of line cards, one or two MCU- or CPU-based control cards, one or more power modules, and cooling fan modules. “Rack unit” (“RU” or “U” for short) measures the height of a data center chassis and is equal to 1.75 inches. A complete rack is 48U (48 rack units) in height.
(79) One embodiment of the present invention also provides a chassis-based multi-unit (rack unit) switching device. A multi-unit chassis switching device groups multiple switch ICs onto multiple line cards. Chassis-based multi-unit switching equipment interconnects its line cards, control cards, and CPU cards via PCB-based fabric cards or backplanes, which saves the cost of the transceivers, fiber optic cables and rack space required for interconnection.
(80) Details of an ICAS-based multi-unit chassis switching device 970 are shown in
(81) A multi-unit chassis-based switching device with ICAS-based full mesh fabric cards offers improved network congestion and greater cost and power consumption savings than an implementation with ASIC-based fabric cards in a fat tree topology. As shown in the “ICAS + Multi-unit Chassis” column of Table 5, a data center with ICAS modules and ICAS-based multi-unit chassis switching devices performs remarkably, reducing the total QSFP transceiver count by 12.6%, total switching ASIC count by 53.5%, total power consumption by 32.7% and total rack space by 29.95%, with much improved network congestion.
(82) The above multi-unit chassis switching device is provided for illustrative purposes. A person skilled in the art can easily expand the scalability of the multi-unit chassis switching device, which should not be limited to the illustration.
(83) The multi-unit chassis-based switching device has the disadvantages of a much longer development time and a higher manufacturing cost due to its system complexity, and is also limited overall by the form factor of the multi-unit chassis. The multi-unit chassis-based switching device, though, provides a much larger port count than the single-chip switching device. Although the stackable switching device requires more transceivers and cables than the multi-unit chassis-based approach, the stackable switching device approach has the advantages of greater manageability of the internal network interconnection, virtually unlimited scalability, and significantly less time for assembling a much larger switching device.
(84) The material required for (i) the data center networks of
(85) TABLE 5

                                           Fat tree +       ICAS +            ICAS +
                                           Multi-unit       Multi-unit        Stackable
                                           Chassis          Chassis           Chassis
Intralink (within Pod)                     N/A              5                 5
Interlink (Across Pod)                     4                5                 5
Downlink (to Server)                       12               15                15
Total                                      16               25                25
D:U ratio                                  3                3                 3
D:I ratio                                  N/A              3                 3
Number of 10G Interfaces (for comparison)  96               184.3             184.3
QSFP XCVR Module (Watt)                    4                4                 4
TOR Switch (Watt)                          150              200               200
Multi-unit Chassis (Watt)                  1660             0                 0
Spine-side Interlink QSFP XCVR             18432            18800             38000
TOR-side Interlink QSFP XCVR               18432            18800             18800
Fabric/TOR-side Intralink QSFP XCVR        36864            18800             18800
Server-side QSFP XCVR                      55296            56400             56400
Total QSFP XCVR                            129024           112800 (12.6%)    132000 (−2.3%)
ASIC in Spine Switch                       2304             1600              1600
ASIC in Fabric Switch                      4608             0                 0
ASIC in TOR Switch                         4608             3760              3760
Total Switching ASIC                       11520            5360 (53.5%)      5360 (53.5%)
Spine Switch (KW)                          392.448          327.2             472.0
Fabric Switch (KW)                         784.896          0                 0
TOR Switch (KW)                            986.112          1128.0            1128.0
Total Power Consumption (KW)               2163.456         1455.2 (32.7%)    1600 (26.0%)
96 × QSFP Spine Switch (8U)                1536             0                 0
96 × QSFP Fabric Switch (8U)               3072             0                 0
48 × QSFP Spine Switch (4U)                0                1600              1600
TOR Switch (1U)                            4608             3760              3760
ICAS1X5TRIPLE (1U)                         0                0                 400
ICAS5X21 (2U)                              0                376               376
Transpose Rack (36U)                       0                720               720
ICAS2X9 (1U)                               0                0                 0
ICAS8X33 (4U)                              0                0                 0
ICAS10X41 (6U)                             0                0                 0
ICAS16X65 (16U)                            0                0                 0
Total Rack Unit (U)                        9216             6456 (29.95%)     6856 (25.6%)
Pod Interlink Bandwidth (Tbps)             7.7              4.0               4.0
Pod Intralink Bandwidth (Tbps)             7.7              4.0               4.0
Total Data Link Bandwidth (Pbps)           2.2              2.2               2.2
Per Plane Uplink Bandwidth (Tbps)          7.7/plane        0                 0
Total Spine Uplink Bandwidth (Tbps)        0                150.4             601.6
Total ICAS Uplink Bandwidth (Tbps)         0                37.6              37.6
Spine-side Interlink QSFP Cable            18432            18800             18800
QSFP Fanout Cable (Transpose Rack)         0                37600             37600
QSFP Fanout Cable (ICAS5X21)               0                19740             19740
TOR-side Interlink QSFP Cable              0                18800             18800
TOR-side Intralink QSFP Cable              18432            18800             18800
Spine Switch QSFP Cable                    0                0                 19200
QSFP Fanout Cable (ICAS1X5TRIPLE)          0                0                 19200
Total QSFP Cable                           36864            56400             75600
Total QSFP Fanout Cable                    0                57340             76540
(86) As shown in Table 5, the ICAS-based systems require significantly less power dissipation, fewer ASICs and less space, resulting in reduced material cost and energy consumption.
(87) The above detailed description is provided to illustrate specific embodiments of the present invention and is not intended to be limiting. Numerous modifications and variations within the scope of the present invention are possible. The present invention is set forth in the accompanying claims.