Patent classifications
H04L45/583
FPGA device for implementing expansion of transmission bandwidth of network-on-chip
The present disclosure provides an FPGA device that implements a network-on-chip (NoC) transmission-bandwidth expansion function, and relates to the technical field of FPGAs. When a predefined functional module with built-in hard-core IP nodes is integrated in an FPGA bare die, soft-core IP nodes are configured from logic-resource modules in the FPGA bare die and connected to the hard-core IP nodes to form an NoC network structure, thereby adding nodes and expanding the transmission bandwidth of the predefined functional module. In addition, the soft-core IP nodes can be connected to input and output signals of the predefined functional module, which also expands its transmission bandwidth.
Stacking switch unit and method used in stacking switch unit
A method used in a stacking/stackable switch unit includes: providing a plurality of signal ports of the stacking/stackable switch unit, the signal ports having at least one master/slave control port corresponding to at least one operation function of the stacking/stackable switch unit; during a boot-up procedure, automatically determining whether the stacking/stackable switch unit is a master or a slave according to at least one signal level of the at least one master/slave control port and/or content of at least one bit obtained from the at least one master/slave control port.
NETWORK SERVICE INTEGRATION INTO A NETWORK FABRIC OF A DATA CENTER
Top-of-rack (TOR) switches are connected to a network fabric of a data center. Each TOR switch corresponds to a rack of the data center, and is configured to provide access to the network fabric for computing devices mounted in the rack. In one method, a client device of a user is used to select various network service options. The service options correspond to services that can be provided to computing equipment of the user that is mounted in various racks of the data center. In response to receiving the selection of one or more service options, the network fabric of the data center is configured to connect the computing equipment to the selected services. In one approach, the network fabric is configured by creating and/or configuring one or more virtual networks to provide the connection to the services.
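One way to picture the fabric-configuration step is as a mapping from selected service options to virtual networks attaching the user's rack ports. This sketch assumes a simple VLAN-per-service model; all names (`configure_fabric`, `service_vlans`, `rack_ports`) are illustrative, not from the patent.

```python
def configure_fabric(selected_services: list[str],
                     service_vlans: dict[str, int],
                     rack_ports: list[str]) -> dict[int, list[str]]:
    """Return a VLAN -> member-port map realizing the user's selections.

    Each selected service option is backed by one virtual network
    (here, a VLAN); the user's rack-facing ports are attached to it so
    the mounted equipment can reach the service.
    """
    fabric_config: dict[int, list[str]] = {}
    for service in selected_services:
        vlan = service_vlans[service]           # virtual network for the service
        fabric_config[vlan] = list(rack_ports)  # attach the user's rack ports
    return fabric_config
```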
Apparatus, system, and method for steering traffic over network slices
A disclosed method may include (1) receiving, at a network node within a network, a packet from another network node within the network, (2) identifying, within the packet, a slice label that indicates a network slice that has been logically partitioned on the network, (3) determining a QoS policy that corresponds to the network slice indicated by the slice label, (4) applying the QoS policy to the packet, and then upon applying the QoS policy to the packet, (5) forwarding the packet to an additional network node within the network. Various other apparatuses, systems, and methods are also disclosed.
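Steps (2) through (5) can be sketched as a small lookup-and-mark routine. This is an illustrative reduction under assumed names (`Packet`, `steer`, a DSCP field standing in for the QoS policy), not the patented method itself.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    slice_label: int  # (2) identifies the logically partitioned network slice
    payload: bytes
    dscp: int = 0     # QoS marking applied before forwarding

def steer(packet: Packet, qos_by_slice: dict[int, int],
          forwarded: list["Packet"]) -> None:
    """Apply the slice's QoS policy to the packet, then forward it."""
    policy_dscp = qos_by_slice[packet.slice_label]  # (3) policy for the slice
    packet.dscp = policy_dscp                       # (4) apply the QoS policy
    forwarded.append(packet)                        # (5) forward to next node
```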
Ordered stack formation with reduced manual intervention
A member switch of multiple connected switches receives a stack-discovery packet from a first coupled switch and, in response, generates and transmits a stack-discovery-response packet to the first coupled switch to allow the member switch to be discovered. The member switch receives stack-configuration information from a stack-control node and forwards the stack-discovery packet to a second coupled switch to facilitate discovery of the second coupled switch. The first coupled switch, the member switch, and the second coupled switch are coupled to each other according to a predetermined order, thereby facilitating an ordered discovery of the multiple connected switches. In response to receiving, from the stack-control node, a control packet, the member switch reboots based on the received stack-configuration information. The stack-configuration information comprises a stack-member identifier allocated, based on the predetermined order, by the stack-control node to the member switch, thereby facilitating formation of an ordered stack.
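The hop-by-hop discovery and ID allocation above can be simulated in a few lines. This is a hedged sketch; the chain propagation stands in for the discovery packet being forwarded from each member to the next coupled switch, and all names are assumptions.

```python
def discover_in_order(chain: list[str]) -> list[str]:
    """Simulate ordered discovery along a chain of coupled switches.

    `chain` lists switches in their physical coupling order. Because the
    stack-discovery packet propagates hop by hop and each member answers
    with a discovery-response, the stack-control node discovers members
    in exactly that order.
    """
    discovered = []
    for switch in chain:           # discovery packet reaches the next member
        discovered.append(switch)  # member sends a stack-discovery-response
    return discovered

def allocate_member_ids(discovered: list[str]) -> dict[str, int]:
    """Allocate stack-member identifiers following the discovery order."""
    return {name: i + 1 for i, name in enumerate(discovered)}
```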
Expandable network device
Methods, apparatus, and systems for incorporating a dynamic interface into an expandable network device. A section of memory of the expandable network device is partitioned for the dynamic interface and the dynamic interface is loaded into the partitioned section of the memory. A hardware interface of the expandable network device is configured to communicate with the dynamic interface under control of the dynamic interface; and a communication channel is established between a network interface of the expandable network device and the hardware interface of the expandable network device via the dynamic interface.
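The three steps above (partition, load, establish a channel) can be sketched as follows. This is an illustrative model only; the class and method names are assumptions, and a `bytearray` stands in for device memory.

```python
class ExpandableDevice:
    """Toy model of an expandable network device's dynamic interface."""

    def __init__(self, memory_size: int):
        self.memory = bytearray(memory_size)
        self.partition = None   # (start, end) reserved for the dynamic interface
        self.channel_up = False

    def partition_memory(self, size: int) -> None:
        # Reserve a section of device memory for the dynamic interface.
        self.partition = (0, size)

    def load_dynamic_interface(self, image: bytes) -> None:
        # Load the dynamic interface into the partitioned section.
        start, end = self.partition
        assert len(image) <= end - start, "image exceeds partition"
        self.memory[start:start + len(image)] = image

    def establish_channel(self) -> None:
        # Bridge the network interface to the hardware interface
        # via the loaded dynamic interface.
        self.channel_up = self.partition is not None
```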
Dynamically managing encryption for virtual routing and forwarding (VRF) using route targets and unique VRF identifiers
Described herein are systems, methods, and software to manage virtual routing and forwarding (VRF) in a computing environment. In one example, a management service identifies a registration or import of a route target (RT) to communicate in a VRF and identifies a first unique identifier associated with the RT. The management service further identifies a second unique identifier associated with the VRF and compares the first unique identifier to the second unique identifier. When the unique identifiers match, the management service determines that intra-VRF encryption is required for the communication. In contrast, when the unique identifiers do not match, the management service determines that inter-VRF encryption is required for the communication.
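The comparison step reduces to a simple predicate. A minimal sketch, assuming the hypothetical names `select_encryption`, `rt_uid`, and `vrf_uid`:

```python
def select_encryption(rt_uid: str, vrf_uid: str) -> str:
    """Pick the encryption mode for an RT/VRF pair.

    Matching unique identifiers mean the communication stays within one
    VRF (intra-VRF encryption); differing identifiers mean it crosses
    VRF boundaries (inter-VRF encryption).
    """
    return "intra-vrf" if rt_uid == vrf_uid else "inter-vrf"
```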
INVALIDATING CACHED FLOW INFORMATION IN A CLOUD INFRASTRUCTURE
Techniques for managing the distribution of configuration information that supports the flow of packets in a cloud environment are described. In an example, a virtual network interface card (VNIC) hosted on a network virtualization device (NVD) receives a first packet from a compute instance associated with the VNIC. The VNIC determines that flow information to send the first packet on a virtual network is unavailable from a memory of the NVD. The VNIC sends, via the NVD, the first packet to a network interface service, where the network interface service maintains configuration information to send packets on the substrate network and is configured to send the first packet on the substrate network based on the configuration information. The NVD receives the flow information from the network interface service, where the flow information is a subset of the configuration information. The NVD stores the flow information in the memory.
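The cache-miss path described above can be sketched as a punt-then-cache loop: on a miss, the packet goes to the network interface service, and the flow-specific subset of the service's configuration comes back for the NVD to cache. The class and field names here are illustrative assumptions.

```python
class Vnic:
    """Toy model of a VNIC hosted on an NVD with a flow cache."""

    def __init__(self, service_config: dict[str, dict]):
        self.cache: dict[str, dict] = {}      # NVD memory (cached flow info)
        self.service_config = service_config  # full config held by the service

    def send(self, flow_key: str, packet: bytes) -> str:
        if flow_key in self.cache:
            return "fast-path"                # flow info available on the NVD
        # Miss: punt the packet to the network interface service, which
        # sends it on the substrate network using its configuration...
        flow_info = self.service_config[flow_key]
        # ...and returns the flow-specific subset, which the NVD caches.
        self.cache[flow_key] = flow_info
        return "punted-to-service"
```

Subsequent packets of the same flow hit the cached subset, so only the first packet of a flow pays the round trip to the service.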
PACKET FLOW IN A CLOUD INFRASTRUCTURE BASED ON CACHED AND NON-CACHED CONFIGURATION INFORMATION
Techniques for managing the distribution of configuration information that supports the flow of packets in a cloud environment are described. In an example, a virtual network interface card (VNIC) hosted on a network virtualization device (NVD) receives a first packet from a compute instance associated with the VNIC. The VNIC determines that flow information to send the first packet on a virtual network is unavailable from a memory of the NVD. The VNIC sends, via the NVD, the first packet to a network interface service, where the network interface service maintains configuration information to send packets on the substrate network and is configured to send the first packet on the substrate network based on the configuration information. The NVD receives the flow information from the network interface service, where the flow information is a subset of the configuration information. The NVD stores the flow information in the memory.