Patent classifications
H04L49/10
Method and Apparatus to Optimize Multi-Destination Traffic Over Etherchannel in Stackwise Virtual Topology
Methods and systems are disclosed. The method comprises: designating a first plurality of links from a first stack segment to a second stack segment as a first etherchannel link; designating a second plurality of links from the first stack segment to a third stack segment as a second etherchannel link, where the second stack segment and the third stack segment are in communication with a fourth stack segment; designating the first etherchannel link and the second etherchannel link as members of a hierarchical etherchannel link; and sending a packet from the first stack segment to the fourth stack segment using the hierarchical etherchannel link.
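The grouping the abstract describes can be sketched as two link bundles made members of a parent bundle, with a flow hash first picking a member EtherChannel and then a physical link inside it. This is a minimal illustration only; the class names, link names, and hash scheme are assumptions, not taken from the patent.

```python
# Illustrative sketch of a hierarchical EtherChannel: two member bundles
# (segment 1 -> segment 2, segment 1 -> segment 3) grouped under one parent.
# Names and the hash-based selection are assumed, not from the patent.

import hashlib

class Etherchannel:
    """A bundle of physical links between two stack segments."""
    def __init__(self, name, links):
        self.name = name
        self.links = links  # e.g. ["Te1/0/1", "Te1/0/2"]

class HierarchicalEtherchannel:
    """Groups member EtherChannels; a per-flow hash selects one member
    bundle, then one physical link inside it."""
    def __init__(self, members):
        self.members = members

    def pick_link(self, flow_key):
        h = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16)
        member = self.members[h % len(self.members)]
        return member.links[(h // len(self.members)) % len(member.links)]

# First plurality of links and second plurality of links from the abstract:
ec1 = Etherchannel("Po1", ["Te1/0/1", "Te1/0/2"])
ec2 = Etherchannel("Po2", ["Te2/0/1", "Te2/0/2"])
hec = HierarchicalEtherchannel([ec1, ec2])
print(hec.pick_link("src=10.0.0.1,dst=239.1.1.1"))
```

Hashing the flow key keeps all packets of one flow on the same physical link, which is the usual reason EtherChannel load balancing is hash-based rather than round-robin.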
Computing system with hardware reconfiguration mechanism and method of operation thereof
A method of operation of a computing system includes: providing a first cluster having a first kernel unit for managing a first reconfigurable hardware device; analyzing an application descriptor associated with an application; generating a first bitstream based on the application descriptor for loading the first reconfigurable hardware device, the first bitstream for implementing at least a first portion of the application; and implementing a first fragment with the first bitstream in the first cluster.

EVPN multicast ingress forwarder election using source-active route
The techniques describe example network systems providing core-facing designated forwarder (DF) election to forward multicast traffic into an EVPN of a core network. For example, a first PE device of a plurality of PE devices participating in an EVPN comprises one or more processors operably coupled to a memory, wherein the one or more processors are configured to: determine that a first multicast traffic flow has started for the first PE device; in response, send a source-active (SA) route to indicate the first multicast traffic flow has started for the first PE device; receive, from a second PE device, a second SA route that indicates that a second multicast traffic flow has started for the second PE device; and perform an election of a core-facing DF from among the first PE device and second PE device, wherein the core-facing DF is configured to forward the multicast traffic into the EVPN.
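The election step can be sketched as follows: each PE that starts receiving a multicast flow advertises a source-active route, and the DF is chosen deterministically from the set of PEs that have advertised one. The lowest-router-ID tiebreak here is an assumption for illustration; the abstract only states that an election is performed.

```python
# Hedged sketch of a core-facing designated-forwarder (DF) election among
# PE devices that have sent source-active (SA) routes. The lowest-router-ID
# tiebreak is an assumption; the patent only specifies that an election occurs.

def elect_core_facing_df(sa_routes):
    """sa_routes maps PE router-ID -> True once that PE has advertised an
    SA route (its multicast flow has started). Returns the elected DF,
    or None if no PE has an active flow."""
    candidates = [pe for pe, active in sa_routes.items() if active]
    if not candidates:
        return None
    # Every PE sees the same SA routes, so every PE computes the same winner.
    return min(candidates)

sa = {"192.0.2.2": True, "192.0.2.1": True, "192.0.2.3": False}
print(elect_core_facing_df(sa))  # "192.0.2.1"
```

Because each PE runs the same deterministic rule over the same advertised SA routes, exactly one PE concludes it is the core-facing DF, so only one copy of the multicast traffic is forwarded into the EVPN.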
Automated access to racks in a colocation data center
Top-of-rack (TOR) switches are connected to a network fabric of a data center. Each TOR switch corresponds to a rack of the data center, and is configured to provide access to the network fabric for computing devices mounted in the rack. In one method, a TOR switch is mounted in a rack. The TOR switch is connected to a network fabric of a data center. A lock is used to control physical access to the rack. A request to physically access the rack is received from a computing device (e.g., a badge implementing a security token, or a mobile device). The request includes authentication credentials. The computing device is then authenticated. In response to authenticating the computing device, the lock is configured to provide physical access to the rack.
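The authenticate-then-unlock flow can be sketched as a challenge-response check followed by configuring the lock. The shared-secret HMAC scheme below is an illustrative assumption; the abstract does not specify the credential format.

```python
# Sketch of the request -> authenticate -> unlock flow from the abstract.
# The HMAC challenge-response and the shared secret are assumed details,
# not taken from the patent.

import hmac
import hashlib

SECRET = b"rack-42-shared-secret"  # provisioned to authorized badges (assumed)

def authenticate(credential, challenge):
    """The badge or mobile device answers a challenge by computing an HMAC
    over it with the provisioned secret."""
    expected = hmac.new(SECRET, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential)

class RackLock:
    """Controls physical access to one rack."""
    def __init__(self):
        self.open = False

    def handle_request(self, credential, challenge):
        if authenticate(credential, challenge):
            # In response to authenticating the device, configure the lock
            # to provide physical access to the rack.
            self.open = True
        return self.open

challenge = b"nonce-1234"
good = hmac.new(SECRET, challenge, hashlib.sha256).hexdigest()
lock = RackLock()
print(lock.handle_request(good, challenge))  # True
```

Using `hmac.compare_digest` rather than `==` avoids a timing side channel when comparing the presented credential against the expected value.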
Connecting processors using twisted torus configurations
Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for connecting processors using twisted torus configurations. In some implementations, a cluster of processing nodes is coupled using a reconfigurable interconnect fabric. The system determines a number of processing nodes to allocate as a network within the cluster and a topology for the network. The system selects an interconnection scheme for the network, where the interconnection scheme is selected from a group that includes at least a torus interconnection scheme and a twisted torus interconnection scheme. The system allocates the determined number of processing nodes of the cluster in the determined topology, sets the reconfigurable interconnect fabric to provide the selected interconnection scheme for the processing nodes in the network, and provides access to the network for performing a computing task.
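The difference between the two interconnection schemes the system selects from can be sketched by comparing neighbor computation: in a standard 2-D torus the x-wraparound link returns to the same row, while in a twisted torus the wrap shifts the row by a fixed twist. The 2-D case and the twist of 1 are illustrative assumptions; the abstract does not fix the dimensionality or twist.

```python
# Sketch contrasting torus and twisted-torus neighbor links for a 2-D grid
# of processing nodes. Dimensionality and twist value are assumed.

def torus_neighbors(x, y, nx, ny):
    """Standard torus: wraparound in x stays in the same row y."""
    return [((x + 1) % nx, y), ((x - 1) % nx, y),
            (x, (y + 1) % ny), (x, (y - 1) % ny)]

def twisted_torus_neighbors(x, y, nx, ny, twist=1):
    """Twisted torus: wrapping in +x or -x shifts the row by `twist`;
    interior links are unchanged."""
    right = ((x + 1) % nx, (y + twist) % ny) if x == nx - 1 else (x + 1, y)
    left = ((x - 1) % nx, (y - twist) % ny) if x == 0 else (x - 1, y)
    return [right, left, (x, (y + 1) % ny), (x, (y - 1) % ny)]

# At the right edge of a 4x4 grid, the torus wraps back to the same row,
# while the twisted torus wraps into the next row.
print(torus_neighbors(3, 0, 4, 4))
print(twisted_torus_neighbors(3, 0, 4, 4))
```

A reconfigurable interconnect fabric can realize either scheme for the same set of allocated nodes simply by reprogramming which ports are cross-connected; the twisted variant is typically chosen because the shifted wraparound shortens the average hop distance for some network shapes.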