H04L49/102

Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies

A method is disclosed for autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies. One-way latencies between a plurality of nodes in a pulse group are automatically measured. A first sending bucket of nodes is automatically selected from the pulse group for a first node based on the one-way latencies. A first receiving bucket of nodes is automatically selected from the pulse group for a second node based on the one-way latencies. In response to a command to transfer data from the first node to the second node, a relay node that is in both the first sending bucket and the first receiving bucket is automatically selected, wherein data is automatically routed from the first node to the second node via the relay node.
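The bucket-and-intersection idea can be sketched in a few lines. This is an illustrative reading only: the node names, bucket size `k`, and the min-total-latency tiebreak are assumptions, not details from the abstract.

```python
# Hedged sketch of relay selection from one-way latencies.
# lat[a][b] = measured one-way latency from node a to node b (ms).
lat = {
    "A": {"B": 40, "R1": 10, "R2": 30},
    "R1": {"A": 11, "B": 12},
    "R2": {"A": 29, "B": 35},
}

def sending_bucket(src, k=2):
    """Nodes with the lowest one-way latency *from* the source."""
    return set(sorted(lat[src], key=lambda n: lat[src][n])[:k])

def receiving_bucket(dst, k=2):
    """Nodes with the lowest one-way latency *to* the destination."""
    candidates = {n: lat[n][dst] for n in lat if dst in lat.get(n, {})}
    return set(sorted(candidates, key=candidates.get)[:k])

def pick_relay(src, dst):
    """Pick a relay in the intersection of both buckets, minimizing
    total latency src -> relay -> dst."""
    common = sending_bucket(src) & receiving_bucket(dst)
    if not common:
        return None  # fall back to the direct path
    return min(common, key=lambda r: lat[src][r] + lat[r][dst])

best = pick_relay("A", "B")  # "R1": 10 + 12 = 22 ms beats direct 40 ms
```

With the sample latencies above, relaying via R1 (22 ms total) beats the 40 ms direct path, which is the payoff of routing through a pre-selected relay.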

HYPERSCALAR PACKET PROCESSING
20210203597 · 2021-07-01

The disclosed systems and methods provide hyperscalar packet processing. A method includes receiving a plurality of network packets from a plurality of data paths. The method also includes arbitrating, based at least in part on an arbitration policy, the plurality of network packets to a plurality of packet processing blocks comprising one or more full processing blocks and one or more limited processing blocks. The method also includes processing, in parallel, the plurality of network packets via the plurality of packet processing blocks, wherein each of the one or more full processing blocks processes a first quantity of network packets during a clock cycle, and wherein each of the one or more limited processing blocks processes a second quantity of network packets during the clock cycle that is greater than the first quantity of network packets. The method also includes sending the processed network packets through data buses.
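One plausible reading of the arbitration step, sketched below: packets needing full-featured processing go to full blocks (fewer per cycle), the rest go to limited blocks (more per cycle). The per-cycle quantities and the classify-by-need policy are assumptions for illustration, not the patented policy.

```python
# "First quantity" and "second quantity" per clock cycle (assumed values;
# the abstract only requires the second to be greater than the first).
FULL_PER_CYCLE = 1
LIMITED_PER_CYCLE = 4

def arbitrate(packets):
    """packets: list of (packet_id, needs_full_processing) tuples.
    Returns per-cycle work lists for full and limited block types."""
    full = [p for p, needs_full in packets if needs_full]
    limited = [p for p, needs_full in packets if not needs_full]
    full_cycles = [full[i:i + FULL_PER_CYCLE]
                   for i in range(0, len(full), FULL_PER_CYCLE)]
    limited_cycles = [limited[i:i + LIMITED_PER_CYCLE]
                      for i in range(0, len(limited), LIMITED_PER_CYCLE)]
    return full_cycles, limited_cycles

batch = [(1, True), (2, False), (3, False), (4, False), (5, False), (6, True)]
full_c, lim_c = arbitrate(batch)
# A full block needs 2 cycles for its 2 packets; one limited block
# clears all 4 simple packets in a single cycle.
```

This captures why mixing block types raises aggregate throughput: simple packets do not occupy a full-featured pipeline slot.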

Fast scheduling and optimization of multi-stage hierarchical networks
10992597 · 2021-04-27

Significantly optimized multi-stage networks, including scheduling methods for faster scheduling of connections, useful in a wide range of target applications, are disclosed, with VLSI layouts using only horizontal and vertical wires to route large-scale partial multi-stage hierarchical networks having inlet and outlet links, laid out in an integrated circuit device in a two-dimensional grid arrangement of blocks. The optimized multi-stage networks in each block employ one or more slices of rings of stages of switches, with inlet and outlet links of partial multi-stage hierarchical networks connecting to rings from either the left-hand side or the right-hand side; and employ hop wires or multi-drop hop wires, wherein the hop wires or multi-drop hop wires are connected from switches of stages of rings of slices of a first partial multi-stage hierarchical network to switches of stages of rings of slices of the first or a second partial multi-stage hierarchical network.

Router fabric for switching broadcast signals in a media processing network
11848873 · 2023-12-19

A router fabric for switching real-time broadcast video signals in a media processing network includes a logic device configured to route multiple channels of packetized video signals to another network device, a crossbar switch configured to be coupled to a plurality of input/output components and to switch video data of the multiple channels between the logic device and the plurality of input/output components in response to a control instruction, and a controller configured to map routing addresses for each video signal relative to a system clock, and to send the control instruction with the mapping to the crossbar switch and the logic device.
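The controller-to-crossbar interaction can be sketched as a mapping that the control instruction carries and the crossbar then applies per channel. The class name, port counts, and mapping format below are invented for illustration; only the map-then-switch flow comes from the abstract.

```python
# Minimal sketch of a crossbar driven by a controller's mapping.
class Crossbar:
    def __init__(self):
        self.routing = {}  # output channel -> input channel

    def apply_control(self, mapping):
        """Controller sends a control instruction carrying the mapping."""
        self.routing = dict(mapping)

    def switch(self, inputs):
        """Route one unit of video data per channel according to the map."""
        return {out: inputs[inp] for out, inp in self.routing.items()}

xbar = Crossbar()
xbar.apply_control({0: 2, 1: 0})   # out0 <- in2, out1 <- in0
frame = xbar.switch({0: "camA", 1: "camB", 2: "camC"})
# frame routes camC to output 0 and camA to output 1
```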

Ethernet switch and remote transmission method thereof

The present application discloses a long-distance transmission method for an Ethernet switch including a network switching module, an MCU module and a dial code module. The MCU module is connected to the network switching module and the dial code module. The dial code module is configured to provide two configuration inputs for user equipment: a normal mode and a long-distance mode. The MCU module monitors the configuration input state of the dial code module in real time. When detecting that the dial code module is in the configuration input state for the normal mode, the MCU module configures a network port of the network switching module to be in an auto-negotiation mode. When detecting that the dial code module is in the configuration input state for the long-distance mode, the MCU module configures the network port of the network switching module to be in a 10 Mbps full-duplex mode and increases the amplitude of the output voltage of a network signal of the network switching module. According to the configuration made by the MCU module in the long-distance mode, the network switching module negotiates a 10 Mbps full-duplex network link with the user equipment for long-distance data transmission. The embodiments of the present application are applied to long-distance data transmission.
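The MCU's mode-selection logic reduces to a small decision table. The sketch below keeps only the settings named in the abstract (auto-negotiation vs. forced 10 Mbps full duplex with boosted output amplitude); the dictionary keys and the `"boosted"` flag are assumptions for illustration.

```python
# Hedged sketch of the MCU's port-configuration decision.
NORMAL, LONG_DISTANCE = "normal", "long-distance"

def configure_port(dial_input):
    """Return the port configuration the MCU would apply for a given
    dial-module input state."""
    if dial_input == LONG_DISTANCE:
        return {
            "negotiation": "forced",   # no auto-negotiation
            "speed_mbps": 10,          # lower rate tolerates more cable loss
            "duplex": "full",
            "tx_amplitude": "boosted", # raised output-voltage amplitude
        }
    return {"negotiation": "auto"}     # normal auto-negotiation mode

cfg = configure_port(LONG_DISTANCE)
```

Dropping to 10 Mbps and raising transmit amplitude is a common way to trade bandwidth for reach over attenuating copper runs, which is the design intent the abstract describes.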

Automatic multi-stage fabric generation for FPGAs
10965618 · 2021-03-30

Systems and methods to automatically or manually generate various multi-stage pyramid-network-based fabrics, either partially connected or fully connected, are disclosed by changing different parameters of the multi-stage pyramid network, including the number of slices, number of rings, number of stages, number of switches, number of multiplexers, the size of the multiplexers in any switch, connections between stages of rings either between the same-numbered stages (same-level stages) or different-numbered stages, single or multi-drop hop wires, hop wires of different hop lengths, hop wires outgoing to different directions, hop wires incoming from different directions, and the number of hop wires based on the number and type of inlet and outlet links of large-scale sub-integrated-circuit blocks. One or more parameters are changed in each iteration so that, at the end of the iterations, optimized fabrics are generated to route a given set of benchmarks or designs having specific connection requirements.
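The iterate-parameters-until-routable loop can be sketched as a parameter sweep with a feasibility check. The parameter names mirror the abstract, but the feasibility rule, the cost proxy, and the search ranges below are all invented for illustration.

```python
# Illustrative parameter-sweep sketch for fabric generation.
import itertools

def routes_all_benchmarks(params, benchmarks):
    # Hypothetical feasibility check: here a fabric routes a benchmark
    # if it has at least as many rings as the benchmark demands.
    return all(params["rings"] >= b for b in benchmarks)

def generate_fabric(benchmarks):
    """Iterate over fabric parameters; return the cheapest feasible set."""
    best = None
    for slices, rings, stages in itertools.product([1, 2], [2, 4, 8], [3, 4]):
        params = {"slices": slices, "rings": rings, "stages": stages}
        cost = slices * rings * stages  # crude proxy for area/wiring cost
        if routes_all_benchmarks(params, benchmarks) and (
                best is None or cost < best[0]):
            best = (cost, params)
    return best[1] if best else None

fabric = generate_fabric(benchmarks=[3, 4])
# Cheapest feasible fabric: 1 slice, 4 rings, 3 stages
```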

AUTOMATED LINK AGGREGATION GROUP CONFIGURATION SYSTEM

An automated Link Aggregation Group (LAG) configuration system includes a plurality of slave switch devices that are each coupled to an endhost device by at least one respective link. Each of the plurality of slave switch devices receives a Link Aggregation Group (LAG) communication from the endhost device, and forwards endhost device information in that LAG communication to a master switch device. The master switch device receives endhost device information from each of the plurality of slave switch devices and determines that each of the plurality of slave switch devices are coupled to the endhost device. In response, the master switch device sends a LAG instruction to each of the plurality of slave switch devices that causes the at least one respective link that couples each of the plurality of slave switch devices to the endhost device to be configured in a LAG.
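The master switch's decision amounts to grouping forwarded reports by endhost and instructing a LAG wherever more than one slave switch sees the same endhost. Device IDs and the report format below are invented for illustration; the grouping logic follows the abstract.

```python
# Hedged sketch of the master switch's LAG decision.
def lag_instructions(reports):
    """reports: list of (slave_switch_id, endhost_id) pairs forwarded
    from LAG communications. Returns {endhost_id: [slave switch ids]}
    for every endhost reachable via more than one slave switch."""
    seen = {}
    for slave, endhost in reports:
        seen.setdefault(endhost, set()).add(slave)
    return {h: sorted(s) for h, s in seen.items() if len(s) > 1}

reports = [("sw1", "hostA"), ("sw2", "hostA"), ("sw3", "hostB")]
lags = lag_instructions(reports)
# hostA is seen by sw1 and sw2, so those links are configured in a LAG;
# hostB is seen by only one switch, so no LAG instruction is sent.
```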

Hardware acceleration for uploading/downloading databases

A network element includes one or more ports for communicating over a network, a processor, and packet processing hardware. The packet processing hardware is configured to transfer packets to and from the ports, and further includes data-transfer circuitry for data transfer with the processor. The processor and the data-transfer circuitry are configured to transfer between one another (i) one or more communication packets for transferal between the ports and the processor and (ii) one or more databases for transferal between the packet processing hardware and the processor, by (a) translating, by the processor, the transferal of both the communication packets and the databases into work elements and posting the work elements on one or more work queues in a memory of the processor, and (b) executing the work elements using the data-transfer circuitry so as to transfer both the communication packets and the databases.
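The unifying idea is that packet transfers and database uploads/downloads become interchangeable work elements on the same queue. The sketch below illustrates only that shape; the class, field names, and payloads are assumptions, not the patent's work-element format.

```python
# Illustrative sketch of a unified work queue for heterogeneous transfers.
from collections import deque

class WorkQueue:
    def __init__(self):
        self.queue = deque()
        self.completed = []

    def post(self, kind, payload):
        """Processor side: translate a transfer into a work element."""
        self.queue.append({"kind": kind, "payload": payload})

    def execute_all(self):
        """Data-transfer circuitry side: drain and execute work elements,
        regardless of whether each carries a packet or a database chunk."""
        while self.queue:
            self.completed.append(self.queue.popleft())

wq = WorkQueue()
wq.post("packet", b"\x45\x00")                  # a communication packet
wq.post("database", "forwarding-table-chunk")   # a database transfer
wq.execute_all()
# Both element kinds flow through the same execution path.
```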
