Patent classifications
H04L49/1507
Fast optical switch
A fast optical switch and networks comprising fast optical switches are disclosed herein. In an example embodiment, a fast optical switch includes two or more fabric switches; a first selector switch; and a second selector switch. The first selector switch may selectively pass a signal to one of the two or more fabric switches. The one of the two or more fabric switches may act on the received signal to provide a switched signal and the second selector switch may selectively receive the switched signal provided by the one of the two or more fabric switches. A slot of the fast optical switch comprises a transmission window of one of the two or more fabric switches that occurs in parallel with at least a portion of a reconfiguration window of the other of the two or more fabric switches.
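The slot structure described above amounts to a ping-pong schedule: while one fabric switch carries traffic inside its transmission window, the other reconfigures in parallel, so reconfiguration latency is hidden. A minimal sketch of that schedule, with all names (`slot_schedule`, the `fabric0`/`fabric1` labels) being illustrative rather than from the patent:

```python
# Hypothetical sketch of the slot schedule: in each slot one fabric switch
# transmits while the other reconfigures, and the selector switches steer
# traffic to whichever fabric is currently transmitting.

def slot_schedule(num_slots):
    """Return, per slot, which fabric transmits and which reconfigures."""
    schedule = []
    for slot in range(num_slots):
        active = slot % 2          # fabric selected by the selector switches
        standby = 1 - active       # fabric reconfiguring in parallel
        schedule.append({"slot": slot,
                         "transmitting": f"fabric{active}",
                         "reconfiguring": f"fabric{standby}"})
    return schedule
```

The point of the alternation is that no slot is lost to reconfiguration: at every slot exactly one fabric is available to switch traffic.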
SELECTION OF MEMBER PORTS IN A LINK AGGREGATION GROUP
This disclosure describes techniques that include selecting a member port of an aggregation bundle by evaluating utilization of paths, within a router, to member ports of the aggregation bundle. In one example, this disclosure describes a method that includes receiving, at a line card, network data to be output through an aggregation bundle having a plurality of member ports; identifying local member ports; identifying non-local member ports, each of the non-local member ports being reachable from the receiving line card over a path through the switch fabric to a different one of a plurality of line cards; identifying available non-local member ports by determining, for each non-local member port, whether the path through the switch fabric has low utilization; and selecting a member port by applying a hashing algorithm to a group that includes each of the identified available non-local member ports.
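The selection steps above can be sketched as a short function: build a candidate group from the local member ports plus those non-local member ports whose fabric path shows low utilization, then hash the flow onto one member of the group. The utilization threshold, field names, and choice of CRC32 as the hash are assumptions for illustration, not details from the patent.

```python
# Illustrative member-port selection: filter non-local ports by fabric-path
# utilization, then hash the flow key over the surviving candidates so that
# a given flow always maps to the same member port.
import zlib

def select_member_port(flow_key, members, utilization, threshold=0.8):
    """members: list of dicts with 'port' and 'local' (bool);
    utilization: fabric-path utilization per non-local port (0.0-1.0)."""
    candidates = [m["port"] for m in members
                  if m["local"] or utilization.get(m["port"], 0.0) < threshold]
    if not candidates:
        candidates = [m["port"] for m in members]  # fall back to all members
    idx = zlib.crc32(flow_key.encode()) % len(candidates)
    return candidates[idx]
```

Hashing over only the low-utilization group keeps per-flow packet ordering (same key, same port) while steering new load away from congested fabric paths.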
VLSI layouts of fully connected generalized and pyramid networks with locality exploitation
VLSI layouts of generalized multi-stage and pyramid networks for broadcast, unicast and multicast connections are presented using only horizontal and vertical links with spatial locality exploitation. The VLSI layouts employ shuffle exchange links where outlet links of cross links from switches in a stage in one sub-integrated circuit block are connected to inlet links of switches in the succeeding stage in another sub-integrated circuit block so that said cross links are either vertical or horizontal links. Furthermore, the shuffle exchange links are employed between different sub-integrated circuit blocks so that spatially nearer sub-integrated circuit blocks are connected with shorter links compared to the shuffle exchange links between spatially farther sub-integrated circuit blocks. In one embodiment the sub-integrated circuit blocks are arranged in a hypercube arrangement in a two-dimensional plane. The VLSI layouts exploit the benefits of significantly fewer cross points, lower signal latency, lower power and full connectivity with significantly faster compilation. The VLSI layouts with spatial locality exploitation presented are applicable to generalized multi-stage and pyramid networks, generalized folded multi-stage and pyramid networks, generalized butterfly fat tree and pyramid networks, generalized multi-link multi-stage and pyramid networks, generalized folded multi-link multi-stage and pyramid networks, generalized multi-link butterfly fat tree and pyramid networks, generalized hypercube networks, and generalized cube connected cycles networks for speedup of s≥1. The embodiments of VLSI layouts are useful in wide target applications such as FPGAs, CPLDs, pSoCs, ASIC placement and route tools, networking applications, parallel & distributed computing, and reconfigurable computing.
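The shuffle exchange links between stages of a multi-stage network are conventionally modeled as the perfect-shuffle permutation, a one-bit left rotation of the switch's binary address. The sketch below shows that generic model only; it is not the patent's specific layout or locality scheme.

```python
# Generic perfect-shuffle model of inter-stage shuffle exchange links:
# the switch at address i in one stage feeds the switch at the
# left-rotated address in the succeeding stage.

def perfect_shuffle(i, bits):
    """Left-rotate the bits-wide address i by one position."""
    msb = (i >> (bits - 1)) & 1
    return ((i << 1) | msb) & ((1 << bits) - 1)
```

Because the rotation is a bijection on addresses, every switch in one stage connects to exactly one switch in the next, which is what lets the layout place each link as a single horizontal or vertical segment.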
AUTOMATED MULTI-FABRIC LINK AGGREGATION SYSTEM
An automated multi-fabric link aggregation system includes leaf switch devices that have leaf switch device downlink ports, that are included in a first network fabric, and that are aggregated to provide a first aggregation fabric. Each leaf switch device generates discovery communications including a first network fabric identifier for the first network fabric, and a first aggregation fabric identifier for the first aggregation fabric. The leaf switch devices then transmit the discovery communications via the leaf switch device downlink ports. I/O modules that have I/O module uplink ports are included in a second network fabric and are aggregated to provide a second aggregation fabric. The I/O modules receive the discovery communications via each of the I/O module uplink ports, determine that each received discovery communication includes the first network fabric identifier and the first aggregation fabric identifier and, in response, automatically configure the I/O module uplink ports in a LAG.
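The I/O-module side of the discovery check can be sketched in a few lines: aggregate the uplink ports into a LAG only when every received discovery message carries the same network fabric identifier and aggregation fabric identifier. The message field names and the return convention are illustrative assumptions.

```python
# Hedged sketch of the auto-LAG decision: uplink ports join one LAG only
# when all of their received discovery messages agree on both the network
# fabric ID and the aggregation fabric ID.

def auto_configure_lag(discoveries):
    """discoveries: {uplink_port: {"fabric_id": ..., "agg_id": ...}}.
    Return the sorted list of ports to place in a LAG, or None."""
    ids = {(d["fabric_id"], d["agg_id"]) for d in discoveries.values()}
    if len(ids) == 1:
        return sorted(discoveries)   # all uplinks face one fabric: form the LAG
    return None                      # mixed identifiers: do not aggregate
```

Requiring both identifiers to match guards against accidentally bundling uplinks that happen to reach the same fabric through differently aggregated leaf switches.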
SERVER, SERVER SYSTEM, AND METHOD OF INCREASING NETWORK BANDWIDTH OF SERVER
[Problem] To increase the available network bandwidth without restricting the processing of applications.
[Solution] A server 20A includes a normal NIC 11 as an NIC having an expansion function, and a virtual patch panel 21, implemented in software, having a transfer function of transferring packets between the normal NIC 11 and an accelerator utilization type NIC 15. The server 20A is configured such that, when a packet is transferred between the normal NIC 11 and the accelerator utilization type NIC 15 via the virtual patch panel 21, a target function 16 transfers the packet to and from the APLs 12a to 12c.
NETWORK INTERCONNECT AS A SWITCH
An interconnect as a switch module (“ICAS” module) comprising n port groups, each port group comprising n−1 interfaces, and an interconnecting network implementing a full mesh topology in which each interface of a port group connects to an interface of a different one of the other port groups. The ICAS module may be optically or electrically implemented. According to the embodiments, the ICAS module may be used to construct a stackable switching device and a multi-unit switching device, to replace a data center fabric switch, and to build a new, highly efficient, and cost-effective data center.
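The full-mesh wiring the abstract describes — n port groups, each with n−1 interfaces, and exactly one link between every pair of groups — can be enumerated directly. The particular interface-numbering convention below (each group numbers its peers in order, skipping itself) is an assumption for illustration.

```python
# Minimal sketch of ICAS-style full-mesh wiring: one link per pair of
# port groups. Group g numbers its n-1 interfaces by peer group, skipping
# its own index, so group a reaches group b (a < b) on interface b-1 and
# group b reaches group a on interface a.

def full_mesh_links(n):
    """Return one (group_a, iface_a, group_b, iface_b) tuple per link."""
    links = []
    for a in range(n):
        for b in range(a + 1, n):
            links.append((a, b - 1, b, a))
    return links
```

The enumeration yields n(n−1)/2 links, with every group appearing on exactly n−1 of them — one dedicated interface per peer, which is what removes the need for a switching stage between the groups.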
Multicast network and memory transfer optimizations for neural network hardware acceleration
Neural network specific hardware acceleration optimizations are disclosed, including an optimized multicast network and an optimized DRAM transfer unit that perform in constant or linear time. The multicast network is a set of switch nodes organized into layers and configured to operate as a Beneš network. Configuration data may be accessed by all switch nodes in the network. Each layer is configured to perform a Beneš network transformation of the previous layer within a computer instruction. Since the computer instructions are pipelined, the entire network of switch nodes may be configured in constant or linear time. Similarly, a DRAM transfer unit configured to access memory in strides organizes memory into banks indexed by prime or relatively prime number amounts. The index value is selected so as not to cause memory address collisions. Upon receiving a memory specification, the DRAM transfer unit may calculate the strides, thereby accessing an entire tile of a tensor in constant or linear time.
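The bank-indexing idea rests on a standard number-theoretic fact: when the number of banks is relatively prime to the access stride, a strided sweep visits every bank before repeating, so consecutive accesses never collide on one bank. A small sketch under example bank counts and strides (the specific values are not from the patent):

```python
# Demonstration of collision-free strided banking: with gcd(stride,
# num_banks) == 1 the first num_banks strided addresses land in
# num_banks distinct banks; with a shared factor they cycle through
# only a subset, causing bank conflicts.
from math import gcd

def conflict_free(stride, num_banks):
    """True when a strided sweep touches every bank before repeating."""
    return gcd(stride, num_banks) == 1

def bank_sequence(stride, num_banks, accesses):
    """Bank hit by each of the first `accesses` strided addresses."""
    return [(i * stride) % num_banks for i in range(accesses)]
```

This is why the abstract specifies a prime or relatively prime bank count: it makes `conflict_free` hold for the tile strides the transfer unit computes, regardless of the tensor's row length.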
TRANSFER DEVICE, TRANSFER SYSTEM, TRANSFER METHOD, AND PROGRAM
[Problem] To prevent the connection between a centralized control apparatus and a group of transfer apparatuses from being a single point of failure, to distribute traffic among a plurality of paths, and to select a bypass path when a failure occurs in a switch cluster.
[Solution] Transfer apparatuses 61a to 61d perform communications for path control with a centralized control apparatus 73, which performs centralized control from outside a switch cluster including the group of transfer apparatuses, through a path similar to the D-plane (main signal). A packet flow controller 87, serving as a separation unit, separates packets destined for the inside of the cluster 61 from packets destined for the outside of the cluster transmitted through the similar path, and an internal route engine 85 performs path control to obtain a path that can freely traverse a plurality of paths in the cluster. When a failure prevents communication of a separated path control packet for the inside of the cluster, the engine 85 performs path control that generates a path bypassing the failed path.
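The bypass behavior described above — recomputing a route that avoids a failed link inside the cluster — can be sketched with a plain breadth-first search over an adjacency map. All names here are illustrative assumptions, not identifiers from the patent.

```python
# Illustrative bypass-path computation: shortest path from src to dst
# that avoids one failed link, in either direction.
from collections import deque

def bypass_path(adj, src, dst, failed_link):
    """adj: {node: [neighbors]}; failed_link: (node_a, node_b)."""
    bad = {failed_link, failed_link[::-1]}
    queue, prev = deque([src]), {src: None}
    while queue:
        u = queue.popleft()
        if u == dst:                      # reconstruct the path found
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if (u, v) not in bad and v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # no bypass exists
```

In a cluster with more than one internal path between any two transfer apparatuses, such a search always finds a bypass for any single-link failure, which is the redundancy the abstract relies on.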