METHOD FOR IMPLEMENTING A LINE SPEED INTERCONNECT STRUCTURE
A method and apparatus including a cache controller coupled to a cache memory. The cache controller receives a plurality of cache access requests and performs a pre-sorting of the plurality of cache access requests in a first stage of the cache controller to order them, wherein the first stage functions by performing a pre-sorting and pre-clustering process on the plurality of cache access requests in parallel to map them from a first position to a second position corresponding to ports or banks of the cache memory. A second stage of the cache controller performs the combining and splitting of the plurality of cache access requests, and the plurality of cache access requests are applied to the cache memory at line speed.
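The two-stage flow described above can be sketched in software. This is a minimal illustrative model, not the patented hardware: the bank-mapping function, `NUM_BANKS`, and the per-cycle `port_width` are all assumptions made for the sketch.

```python
NUM_BANKS = 4  # assumed number of cache banks for this sketch

def bank_of(address):
    """Stage-1 mapping: route a request to a bank by its address bits."""
    return address % NUM_BANKS

def presort(requests):
    """Stage 1: pre-sort and pre-cluster requests by destination bank,
    mapping each request from its arrival position to a bank position."""
    buckets = {b: [] for b in range(NUM_BANKS)}
    for addr in requests:
        buckets[bank_of(addr)].append(addr)
    return buckets

def combine_and_split(buckets, port_width=2):
    """Stage 2: combine requests headed to the same bank, and split
    clusters larger than the per-cycle port width across cycles."""
    cycles = []
    while any(buckets.values()):
        cycle = {}
        for b, reqs in buckets.items():
            if reqs:
                cycle[b] = reqs[:port_width]
                buckets[b] = reqs[port_width:]
        cycles.append(cycle)
    return cycles
```

Because each cycle dispatches at most `port_width` requests per bank, every emitted cycle can be applied to the banks in parallel, which is the sense in which requests are serviced "at line speed".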
Authentication and data lane control
An authentication method, network switch, and network device are provided. In one example, a method is described that includes receiving a first signal indicative of a data lane being activated and configured to carry data to or within the network switch, receiving a second signal indicative of an authentication lane being established in the network switch or a device connected to the network switch, where the authentication lane is different from the data lane, and enabling data transmission across the data lane only in response to receiving the second signal indicative of the authentication lane being established.
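The gating behavior described in the abstract can be modeled in a few lines. This is an illustrative sketch only; the class and signal names are assumptions, not the patent's interfaces.

```python
class SwitchPort:
    """Model of the described gating: data transmission is enabled only
    after BOTH the data-lane signal and the separate authentication-lane
    signal have been received."""

    def __init__(self):
        self.data_lane_active = False
        self.auth_lane_established = False

    def on_data_lane_signal(self):
        """First signal: the data lane is activated and configured."""
        self.data_lane_active = True

    def on_auth_lane_signal(self):
        """Second signal: the distinct authentication lane is established."""
        self.auth_lane_established = True

    def can_transmit(self):
        """Data flows across the data lane only once both signals arrived."""
        return self.data_lane_active and self.auth_lane_established
```

The key property is that activating the data lane alone never enables transmission; the authentication lane, being separate, must be established first.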
Methods and systems for classifying traffic flows based on packet processing metadata
Methods and systems for directing traffic flows to a fast data path or a slow data path are disclosed. Parsers can produce packet header vectors (PHVs) for use in match-action units. The PHVs are also used to generate feature vectors for the traffic flows. A flow training engine produces a classification model. Feature vectors input to the classification model result in output predictions predicting whether a traffic flow will be long lived or short lived. The classification models are used by network appliances to install traffic flows into the fast data paths or the slow data paths based on the predictions.
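The PHV-to-path pipeline can be sketched as follows. The feature fields and the stand-in prediction rule are assumptions for illustration; the patent's trained model would replace `predict_long_lived`.

```python
def features_from_phv(phv):
    """Derive a feature vector from a packet header vector (here a dict).
    The chosen fields are illustrative assumptions."""
    return (phv["proto"], phv["dst_port"], phv["pkt_len"])

def predict_long_lived(feat):
    """Stand-in for the trained classification model: as an assumed rule,
    TCP flows to long-session service ports are predicted long-lived."""
    proto, dst_port, pkt_len = feat
    return proto == "tcp" and dst_port in (22, 443)

def install_flow(phv):
    """Install a flow on the fast data path only when the model predicts
    it will be long lived; short-lived flows stay on the slow path."""
    return "fast" if predict_long_lived(features_from_phv(phv)) else "slow"
```

The design rationale is that fast-path table entries are a scarce resource, so they are spent only on flows predicted to live long enough to amortize the installation cost.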
Controller and system
According to one embodiment, a controller may include a user interface that is operable to receive input from a user to control an electronic system to which the controller may be coupled either directly or indirectly. The user interface may comprise an interface housing to which the user interface is coupled, the interface housing having a front portion and a rear portion, the front portion of which may contain the user interface. A controller housing may be coupled to the rear portion of the interface housing, the controller housing having a smaller perimeter than the interface housing. The controller housing may be comprised of at least one sidewall and a rear wall. At least one magnet may be coupled to the controller housing. The magnet(s) may be operable to hold the controller in position using magnetic force when the controller housing is inserted into a mounting receptacle.
AUTOMATED MULTI-FABRIC LINK AGGREGATION SYSTEM
An automated multi-fabric link aggregation system includes leaf switch devices that have leaf switch device downlink ports, that are included in a first network fabric, and that are aggregated to provide a first aggregation fabric. Each leaf switch device generates discovery communications including a first network fabric identifier for the first network fabric, and a first aggregation fabric identifier for the first aggregation fabric. The leaf switch devices then transmit the discovery communications via the leaf switch device downlink ports. I/O modules that have I/O module uplink ports are included in a second network fabric and are aggregated to provide a second aggregation fabric. The I/O modules receive the discovery communications via each of the I/O module uplink ports, determine that each received discovery communication includes the first network fabric identifier and the first aggregation fabric identifier and, in response, automatically configure the I/O module uplink ports in a LAG.
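The I/O module's decision rule can be sketched as a single predicate. The dictionary keys are illustrative assumptions about the discovery message format.

```python
def should_join_lag(discoveries, fabric_id, aggregation_id):
    """An I/O module configures its uplink ports into a LAG only when a
    discovery was received on each uplink port and every one carries the
    same network-fabric and aggregation-fabric identifiers."""
    return bool(discoveries) and all(
        d.get("fabric") == fabric_id and d.get("agg") == aggregation_id
        for d in discoveries
    )
```

Requiring both identifiers to match on every port prevents accidentally aggregating links that reach different fabrics or different aggregation domains.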
Automatic network assembly
Some examples provide a method for automatic network assembly. The following instructions may be used to implement automatic network assembly in a modular infrastructure: instructions to automatically connect a management port to a management network; instructions to automatically connect link ports to form a scalable ring; and instructions to automatically connect each modular infrastructure management device to a bay management network port.
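The three assembly steps can be sketched as one sequence. The data layout and names are assumptions for the sketch; in particular, the ring is formed here by chaining each link port to the next and wrapping the last back to the first.

```python
def assemble_network(infrastructure):
    """Apply the three assembly steps in order and return the list of
    connections made. All field names are illustrative assumptions."""
    # Step 1: connect the management port to the management network.
    connections = [("mgmt_port", "management_network")]
    # Step 2: chain link ports into a scalable ring (last wraps to first).
    ports = infrastructure["link_ports"]
    for i, port in enumerate(ports):
        connections.append((port, ports[(i + 1) % len(ports)]))
    # Step 3: connect each management device to a bay management network port.
    for device in infrastructure["mgmt_devices"]:
        connections.append((device, "bay_mgmt_network_port"))
    return connections
```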
DATA CENTER NETWORK HAVING OPTICAL PERMUTORS
A network system for a data center is described in which a switch fabric may provide full mesh interconnectivity such that any of the servers may communicate packet data to any other of the servers using any of a number of parallel data paths. Moreover, according to the techniques described herein, edge-positioned access nodes, optical permutation devices and core switches of the switch fabric may be configured and arranged in a way such that the parallel data paths provide single L2/L3 hop, full mesh interconnections between any pairwise combination of the access nodes, even in massive data centers having tens of thousands of servers. The plurality of optical permutation devices permute communications across the optical ports based on wavelength so as to provide, in some cases, full-mesh optical connectivity between edge-facing ports and core-facing ports.
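One simple wavelength-based permutation with the stated full-mesh property can be written down directly. This particular mapping is an assumed example, not necessarily the permutation the patent uses.

```python
def permute(edge_port, wavelength, num_ports):
    """Illustrative optical permutor: a signal entering edge-facing port p
    on wavelength w exits core-facing port (p + w) mod N. The N wavelengths
    of any one edge port therefore fan out to all N core ports, and for a
    fixed wavelength the mapping is a bijection (no two edge ports collide)."""
    return (edge_port + wavelength) % num_ports
```

This is what gives full-mesh optical connectivity between edge-facing and core-facing ports: every (edge port, core port) pair is reachable on exactly one wavelength.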
Communication control method and information processing apparatus
In a system including a plurality of nodes, a plurality of first relay devices, and a plurality of second relay devices, where each first relay device is connected to two or more second relay devices, the nodes are classified into a plurality of groups such that different nodes individually connected to different first relay devices having different sets of second relay devices connected thereto are classified into different groups. A representative node is selected from each group. Communication order of a first broadcast operation performed between the representative nodes is determined such that the number of source nodes transmitting data in parallel increases. Communication order of a second broadcast operation performed for each group after the first broadcast operation is determined such that the representative node of the group acts as a first source node and the number of source nodes transmitting the data in parallel increases.
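The ordering property for the first broadcast operation, that the number of parallel sources increases each step, matches a binomial-tree schedule. The sketch below is one such schedule under that assumption, not necessarily the exact ordering the patent computes.

```python
def broadcast_schedule(nodes):
    """Binomial-tree broadcast among representative nodes: in each round,
    every node that already holds the data sends to one node that does not,
    so the number of parallel sources doubles round over round."""
    have = [nodes[0]]          # nodes[0] is the initial source
    rounds = []
    i = 1
    while i < len(nodes):
        sends = []
        for src in list(have):  # snapshot: only nodes with data this round send
            if i >= len(nodes):
                break
            dst = nodes[i]
            sends.append((src, dst))
            have.append(dst)
            i += 1
        rounds.append(sends)
    return rounds
```

For 8 representatives this yields rounds with 1, 2, and 4 parallel transfers, finishing in ceil(log2(n)) rounds; the same schedule can then be reused within each group, with the group's representative as the first source.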
SPRAYING FOR UNEQUAL LINK CONNECTIONS IN AN INTERNAL SWITCH FABRIC
In general, techniques are described for facilitating balanced cell handling by fabric cores of a fabric plane for an internal device switch fabric. In some examples, a routing system includes a plurality of fabric endpoints and a switching fabric comprising a fabric plane to switch cells among the fabric endpoints. The fabric plane includes two fabric cores and one or more inter-core links connecting the fabric cores. Each fabric core selects an output port of the fabric core to which to route a received cell of the cells based on (i) an input port of the fabric core on which the received cell was received and (ii) a destination fabric endpoint for the received cell, at least a portion of the selected output ports being connected to the inter-core links, and switches the received cell to the selected output port.
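The balanced handling of unequal links can be sketched as weighted spraying. The weight scheme below is an assumed illustration of the idea, not the patent's port-selection table.

```python
import itertools

class Sprayer:
    """Illustrative unequal-link spraying: cells are interleaved across
    links in proportion to assigned weights, so a link with twice the
    capacity receives two of every three cells."""

    def __init__(self, links):
        # links: {link_name: integer weight proportional to capacity}
        sequence = [name for name, w in links.items() for _ in range(w)]
        self._cycle = itertools.cycle(sequence)

    def next_link(self):
        """Pick the link for the next cell."""
        return next(self._cycle)
```

A fabric core selecting an output port from (input port, destination endpoint) can use such a weighted sequence per destination, keeping inter-core links loaded in proportion to their capacity.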
Enhanced handling of multicast data streams within a broadband access network of a telecommunications network
A method for handling of multicast data streams within a broadband access network of a telecommunications network includes: a specific service edge node receiving a join request message from or through a specific customer premises equipment; an activation request message being transmitted, by the specific service edge node, to a multicast controller node within the broadband access network; and the multicast controller node generating and/or replicating multicast data stream-related data packets and injecting these multicast data stream-related data packets, using a corresponding session identifier information, into a point-to-point-protocol connection or tunneling protocol connection between the specific service edge node and the specific customer premises equipment.
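The controller's replicate-and-inject step can be modeled minimally. The class shape and session bookkeeping are assumptions made for the sketch.

```python
class MulticastController:
    """Illustrative multicast controller: on an activation request it
    registers the subscriber's session, then replicates each multicast
    packet and injects a copy, tagged with the session identifier, into
    every active per-subscriber PPP/tunnel connection."""

    def __init__(self):
        self.sessions = {}  # session_id -> queue of injected packets

    def activate(self, session_id):
        """Handle an activation request from a service edge node."""
        self.sessions.setdefault(session_id, [])

    def inject(self, stream_packet):
        """Replicate one stream packet into every active session."""
        for session_id, queue in self.sessions.items():
            queue.append({"session": session_id, "data": stream_packet})
```

Replicating at the controller, rather than at the edge node, is what lets the edge node forward only a join request and an activation message per subscriber.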