Patent classifications
H04L49/1523
COMMUNICATION CONTROL METHOD AND INFORMATION PROCESSING APPARATUS
In a system including a plurality of nodes, a plurality of first relay devices, and a plurality of second relay devices, where each first relay device is connected to two or more second relay devices, the nodes are classified into a plurality of groups such that nodes connected to first relay devices with different sets of attached second relay devices are placed in different groups. A representative node is selected from each group. The communication order of a first broadcast operation between the representative nodes is determined so that the number of source nodes transmitting data in parallel increases. The communication order of a second broadcast operation, performed within each group after the first broadcast operation, is determined so that the representative node of the group acts as the first source node and the number of source nodes transmitting the data in parallel likewise increases.
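The ordering described above, where the count of parallel senders grows each round, can be sketched as a minimal Python example (the node lists, group layout, and function names are illustrative assumptions, not the patented method):

```python
def binomial_broadcast_schedule(nodes):
    """Schedule a broadcast from nodes[0]: every node already holding the
    data sends to a new node each round, so the number of parallel
    senders grows round by round."""
    schedule = []              # schedule[r] = list of (src, dst) pairs in round r
    holders = [nodes[0]]       # nodes that already hold the data
    waiting = list(nodes[1:])
    while waiting:
        round_pairs = []
        for src in list(holders):   # snapshot: new receivers send next round
            if not waiting:
                break
            dst = waiting.pop(0)
            round_pairs.append((src, dst))
            holders.append(dst)
        schedule.append(round_pairs)
    return schedule


def two_phase_schedule(groups):
    """groups[i][0] is the representative of group i.  Phase 1 broadcasts
    among representatives only; phase 2 broadcasts inside each group with
    the representative as the first source."""
    representatives = [g[0] for g in groups]
    phase1 = binomial_broadcast_schedule(representatives)
    phase2 = [binomial_broadcast_schedule(g) for g in groups if len(g) > 1]
    return phase1, phase2
```

For eight nodes the rounds carry 1, 2, then 4 parallel transfers, matching the doubling behaviour the abstract aims for.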
PACKET FLOW CLASSIFICATION IN SPINE-LEAF NETWORKS USING MACHINE LEARNING BASED OVERLAY DISTRIBUTED DECISION TREES
Techniques for generating a multi-layer network topology on a managed network are described herein. An example method includes receiving, from an internetworking device in a network, one or more encrypted packets in a flow; generating a classification decision corresponding to the flow by traversing one or more decision trees; and providing the classification decision to a controller of the network.
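Classifying a flow by traversing a decision tree can be sketched as follows; the feature names and thresholds are hypothetical, chosen only because they are observable from encrypted traffic without decrypting payloads:

```python
def classify_flow(tree, features):
    """Traverse a decision tree: each internal node tests one flow
    feature against a threshold, each leaf carries the class label."""
    node = tree
    while "label" not in node:
        if features[node["feature"]] <= node["threshold"]:
            node = node["left"]
        else:
            node = node["right"]
    return node["label"]


# Hypothetical tree over features observable without decrypting payloads.
TREE = {
    "feature": "mean_pkt_size", "threshold": 900,
    "left": {"label": "interactive"},
    "right": {
        "feature": "duration_s", "threshold": 60,
        "left": {"label": "download"},
        "right": {"label": "streaming"},
    },
}
```

The resulting label is what would be handed to the network controller as the classification decision.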
Traffic manager resource sharing
A traffic manager is shared among two or more egress blocks of a network device, allowing traffic management resources to be shared between the egress blocks. Among other benefits, this may reduce power demands and make a larger amount of buffer memory available to an egress block that is experiencing high traffic loads. Optionally, the shared traffic manager may also be leveraged to reduce the resources required to handle data units on ingress. Rather than buffering the entire data unit in the ingress buffers, an arbiter may be configured to buffer only the control portion of the data unit. The payload of the data unit, by contrast, is forwarded directly to the shared traffic manager, where it is placed in the egress buffers. Because the payload is not buffered on ingress, the ingress buffer memory may be greatly reduced.
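The split between a control-only ingress buffer and payload storage in the shared traffic manager can be sketched minimally as below; the class and attribute names are assumptions for illustration, not the device's actual design:

```python
class SharedTrafficManager:
    """One traffic manager serving several egress blocks; payloads live
    in its egress buffers."""
    def __init__(self):
        self.egress_buffer = {}            # packet id -> payload

    def accept_payload(self, pkt_id, payload):
        self.egress_buffer[pkt_id] = payload


class IngressArbiter:
    """Buffers only the control portion of each data unit; the payload is
    forwarded directly to the shared traffic manager."""
    def __init__(self, traffic_manager):
        self.tm = traffic_manager
        self.control_buffer = {}           # packet id -> header only

    def receive(self, pkt_id, header, payload):
        self.control_buffer[pkt_id] = header       # small: control portion only
        self.tm.accept_payload(pkt_id, payload)    # payload bypasses ingress buffers
```

Since only headers land in `control_buffer`, the ingress memory requirement scales with header size rather than full data-unit size.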
Communication system and communication method
A communication system includes a plurality of leaf switches connected to a plurality of spine switches in a Latin square Fat-Tree topology, and a plurality of information processing apparatuses. Each information processing apparatus performs a first all-reduce. Each first information processing apparatus then performs a second all-reduce, based on the result of the first, with other first information processing apparatuses connected to leaf switches that share a first spine switch corresponding to a first direction in an area. Finally, each first information processing apparatus performs a third all-reduce, based on the result of the second, with other first information processing apparatuses connected to leaf switches that share a second spine switch corresponding to a second direction in the area.
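The two directional all-reduce steps can be illustrated with a toy grid model, where rows stand in for the first direction and columns for the second; the grid layout and sum reduction are assumptions made for the sketch:

```python
def allreduce(values):
    """All-reduce with sum: every participant ends with the same total."""
    total = sum(values)
    return [total] * len(values)


def grid_allreduce(grid):
    """Hierarchical all-reduce over a grid of nodes: reduce along the
    first direction (rows), then along the second direction (columns)."""
    rows = [allreduce(row) for row in grid]              # first-direction step
    cols = [allreduce(list(col)) for col in zip(*rows)]  # second-direction step
    return [list(row) for row in zip(*cols)]
```

After both steps every node holds the global sum, even though each step only exchanged data along one direction of the topology.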
Packet flow classification in spine-leaf networks using machine learning based overlay distributed decision trees
Techniques for generating a multi-layer network topology on a managed network are described herein. In an embodiment, data collected from a plurality of network devices within a managed network is received and analyzed by a multi-layered plurality of decision trees. The decision trees comprise a plurality of nodes, one overlay decision tree, and at least one underlay decision tree. The nodes include a set of logic nodes on the overlay tree, and each underlay tree is communicatively coupled to one of these logic nodes. The received data is then classified by the multi-layered decision trees.
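The overlay/underlay coupling can be sketched as a traversal in which a logic node on the overlay tree hands control to a named underlay tree; the tree contents and feature names here are hypothetical:

```python
def classify_layered(overlay, features, underlays):
    """Traverse the overlay tree; a logic node hands the traversal off to
    the named underlay tree, whose leaves carry the final label."""
    node = overlay
    while True:
        if "label" in node:
            return node["label"]
        if "underlay" in node:                     # logic node: couple to underlay
            node = underlays[node["underlay"]]
        elif features[node["feature"]] <= node["threshold"]:
            node = node["left"]
        else:
            node = node["right"]


# Hypothetical overlay tree with one logic node delegating to an underlay.
OVERLAY = {
    "feature": "is_tcp", "threshold": 0,
    "left": {"label": "udp_flow"},
    "right": {"underlay": "tcp_tree"},
}
UNDERLAYS = {
    "tcp_tree": {
        "feature": "dst_port", "threshold": 1023,
        "left": {"label": "well_known_service"},
        "right": {"label": "ephemeral"},
    }
}
```

The overlay tree thus makes coarse decisions while underlay trees refine them, which is one way the layering described above could be distributed.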
Communication network hopping architecture
Communication network systems are disclosed. In one or more implementations, the communication network system includes a plurality of network devices, each incorporating one or more multi-port switches. Each multi-port switch has a connection to the network device that incorporates it and a connection to at least one port of a multi-port switch incorporated by another of the network devices.
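The described architecture can be modeled as a link list in which every switch has one host connection and at least one hop link to another device's switch; the ring-shaped hop pattern below is an assumption for illustration only:

```python
def build_fabric(num_devices):
    """Hypothetical model: each device hosts one switch with a port bound
    to its host device and a hop link to the next device's switch."""
    links = []
    for d in range(num_devices):
        links.append((f"switch{d}", f"device{d}"))                      # host connection
        links.append((f"switch{d}", f"switch{(d + 1) % num_devices}"))  # hop link
    return links
```

Each switch thus satisfies both connection requirements from the abstract: one to its incorporating device and one to another device's switch.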
Parallel data switch
An interconnect apparatus enables improved signal integrity even at high clock rates, increased bandwidth, and lower latency. The apparatus can comprise a plurality of logic units and a plurality of buses coupling the logic units in a selected configuration arranged in triplets of logic units LA, LC, and LD. The logic units LA and LC are positioned to send data to the logic unit LD, and the logic unit LC has priority over the logic unit LA to send data to the logic unit LD. For a packet PKT divided into subpackets, with a subpacket of the packet PKT at the logic unit LA and the packet specifying a target, exactly one of the following occurs: (A) the logic unit LC sends a subpacket of the packet PKT to the logic unit LD and the logic unit LA does not; (B) the logic unit LC does not send a subpacket to the logic unit LD and the logic unit LA sends a subpacket of the packet PKT to the logic unit LD; or (C) neither the logic unit LC nor the logic unit LA sends a subpacket to the logic unit LD.
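The three cases reduce to fixed-priority arbitration on the shared link into LD, which can be sketched in a few lines (function and return names are illustrative assumptions):

```python
def arbitrate(lc_has_subpacket, la_has_subpacket):
    """Fixed-priority arbitration for the link into LD: LC wins whenever
    it has a subpacket to send; LA sends only when LC is idle."""
    if lc_has_subpacket:
        return "LC"        # case (A): LC sends, LA waits
    if la_has_subpacket:
        return "LA"        # case (B): LC idle, so LA sends
    return None            # case (C): neither unit sends
```

Because at most one unit transmits per decision, the bus into LD never carries two subpackets at once, which is the property the priority rule enforces.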
Parallel Computing System
A parallel computing system is provided, including input ports, a first switching network, a computing array, a second switching network, and output ports. The first switching network receives input data from the input ports, sequences the input data according to the different computing modes of the computing array, and outputs the sequenced input data; the computing array performs parallel computation on the sequenced input data and outputs intermediate data; and the second switching network sequences the intermediate data according to the different output modes and outputs the sequenced intermediate data through the output ports. By applying switching networks to the parallel computing system, any required ordering of the input or output data can be produced for the different computing and output modes, allowing the computing array to complete various arithmetic operations once the input data reach it.
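Mode-dependent sequencing amounts to applying a permutation chosen per computing mode; a minimal sketch follows, with the mode names and permutations being hypothetical examples for a 2x2 array:

```python
def sequence(data, permutation):
    """Reorder data according to a mode-specific permutation, as a
    switching network would before feeding the computing array."""
    return [data[i] for i in permutation]


# Hypothetical modes for a 2x2 computing array.
MODES = {
    "row_major": [0, 1, 2, 3],     # feed operands in arrival order
    "transpose": [0, 2, 1, 3],     # swap the inner pair for column-wise ops
}
```

The same mechanism applied after the computing array covers the second switching network's output-mode sequencing.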
Packet forwarding method and related apparatus
A packet forwarding method and related apparatus improve packet forwarding efficiency. The method includes generating, by a controller, addressing information for a switching node; recording, by the controller, a correspondence between the switching node and the addressing information; and sending, by the controller, the addressing information to the switching node, so that the switching node forwards packets according to the addressing information.
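The controller-to-switch flow can be sketched as follows; the class names, the dictionary-based table, and the port labels are assumptions made for illustration:

```python
class Switch:
    def __init__(self, name):
        self.name = name
        self.table = {}                     # destination -> output port

    def install(self, addressing_info):
        self.table.update(addressing_info)  # apply info sent by the controller

    def forward(self, destination):
        return self.table.get(destination)  # look up the port for forwarding


class Controller:
    def __init__(self):
        self.correspondence = {}            # switch name -> addressing info

    def assign(self, switch, addressing_info):
        self.correspondence[switch.name] = addressing_info  # record correspondence
        switch.install(addressing_info)                     # send to the switch
```

Once the addressing information is installed, the switch forwards by a local lookup rather than consulting the controller per packet, which is where the efficiency gain would come from.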