Patent classifications
H04L49/3018
Converged network interface card, message coding method and message transmission method thereof
The invention provides a converged network interface card, together with a message coding method and a message transmission method for it. The converged network interface card comprises a PCIE host interface processing module, high speed network card core logic, a crossbar switch (XBAR), Ethernet network card core logic, an Ethernet message slicing module, a physical layer, a high speed network/Ethernet message conversion module (EoH), and a network port configurable for either the high speed network or Ethernet. The invention supports a customized high speed interconnection interface and a standard Ethernet interface on a single set of network hardware, offers three working modes on that hardware (high speed network mode, Ethernet mode, and an EoH mode that transmits Ethernet messages over the high speed network), achieves seamless compatibility between the high speed network and Ethernet, and flexibly supports multimode applications such as scientific computing and cloud computing.
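As a rough behavioral sketch of the three working modes, the Python below models only the EoH case: an Ethernet frame is wrapped in a high speed network header for transit and unwrapped at the far end. The header layout (HSN_HDR) and the mode names are illustrative assumptions, not the patented coding method.

```python
import struct
from enum import Enum

class PortMode(Enum):
    HIGH_SPEED = 1   # native high speed interconnect messages
    ETHERNET = 2     # standard Ethernet frames
    EOH = 3          # Ethernet frames carried over the high speed network

# Hypothetical high speed network header: dest node, src node, payload length.
HSN_HDR = struct.Struct("!HHI")

def eoh_encapsulate(eth_frame: bytes, dst: int, src: int) -> bytes:
    """Wrap an Ethernet frame in a high speed network header (EoH mode)."""
    return HSN_HDR.pack(dst, src, len(eth_frame)) + eth_frame

def eoh_decapsulate(hsn_message: bytes) -> bytes:
    """Recover the original Ethernet frame at the receiving port."""
    _dst, _src, length = HSN_HDR.unpack_from(hsn_message)
    return hsn_message[HSN_HDR.size:HSN_HDR.size + length]

frame = b"\xff" * 14 + b"payload"
assert eoh_decapsulate(eoh_encapsulate(frame, dst=2, src=1)) == frame
```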
METHODS AND SYSTEMS FOR LINE RATE PACKET CLASSIFIERS FOR PRESORTING NETWORK PACKETS ONTO INGRESS QUEUES
A network appliance can have an input port that receives network packets at line rate, two or more ingress queues, a line rate classification circuit that places the network packets on the ingress queues at line rate, a packet buffer that stores the network packets, and a sub line rate packet processing circuit that processes the network packets stored in the packet buffer. The line rate classification circuit can place a network packet on one of the ingress queues based on the network packet's contents. A buffer scheduler can select network packets for processing by the sub line rate packet processing circuit based on the priority levels of the ingress queues.
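A minimal sketch of the presorting idea follows; the classification rule (a DSCP-like priority byte) and the strict-priority scheduler are assumptions standing in for whatever rule the line rate classification circuit actually implements.

```python
from collections import deque

NUM_QUEUES = 4
ingress_queues = [deque() for _ in range(NUM_QUEUES)]

def classify(packet: bytes) -> int:
    """Map packet contents to a queue index (0 = highest priority)."""
    tos = packet[1] if len(packet) > 1 else 0  # assumed ToS/DSCP-like byte
    return (NUM_QUEUES - 1) - min(tos >> 6, NUM_QUEUES - 1)

def enqueue_at_line_rate(packet: bytes) -> None:
    """The line rate classifier presorts each arriving packet."""
    ingress_queues[classify(packet)].append(packet)

def schedule_next() -> bytes | None:
    """Buffer scheduler: serve the highest-priority non-empty queue."""
    for q in ingress_queues:
        if q:
            return q.popleft()
    return None

enqueue_at_line_rate(bytes([0x45, 0x00]))  # best effort
enqueue_at_line_rate(bytes([0x45, 0xC0]))  # high priority
assert schedule_next() == bytes([0x45, 0xC0])  # served first despite arriving later
```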
PACKET SWITCHES
Switches for performing packet switching and associated methods are provided. An example switch includes an ingress port for receiving a packet. The switch includes a plurality of egress ports for discharging the packet from the switch. The switch includes a plurality of egress queues, each egress queue associated with one of the plurality of egress ports. The switch includes a control plane configured to determine a descriptor associated with the packet, determine a first egress port from which to discharge the packet, and transmit the descriptor to an egress queue associated with the first egress port. The switch includes a descriptor crossbar configured to transmit the descriptor from the egress queue to a second egress port of the plurality of egress ports. The switch includes a packet crossbar configured to transmit the packet from the ingress port to the second egress port.
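The split between a descriptor crossbar and a packet crossbar can be sketched behaviorally as below. The abstract leaves open why a descriptor would move to a second port, so the example simply exposes the redirection as an operation; all names are illustrative.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Descriptor:
    packet_id: int
    length: int

NUM_PORTS = 4
egress_queues = [deque() for _ in range(NUM_PORTS)]
packet_buffer: dict[int, bytes] = {}

def control_plane_admit(packet_id: int, packet: bytes, first_port: int) -> None:
    """Determine the descriptor and the first egress port, then enqueue."""
    packet_buffer[packet_id] = packet
    egress_queues[first_port].append(Descriptor(packet_id, len(packet)))

def descriptor_crossbar(first_port: int, second_port: int) -> Descriptor:
    """Move a descriptor from the first port's queue to a second port."""
    desc = egress_queues[first_port].popleft()
    egress_queues[second_port].append(desc)
    return desc

def packet_crossbar(desc: Descriptor, second_port: int) -> tuple[int, bytes]:
    """Deliver the packet itself from the ingress side to the second port."""
    return second_port, packet_buffer.pop(desc.packet_id)

control_plane_admit(7, b"\xab" * 64, first_port=0)
desc = descriptor_crossbar(first_port=0, second_port=2)
port, data = packet_crossbar(desc, second_port=2)
assert port == 2 and len(data) == 64
```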
Fair arbitration between multiple sources targeting a destination
A hardware module comprises at least a first ingress buffer and a second ingress buffer, where the second ingress buffer holds data packets from a plurality of source components. To ensure fairness between one or more sources providing data to the first ingress buffer and the plurality of sources providing data to the second ingress buffer, processing circuitry examines source identifiers in packets held in the second ingress buffer and selects between the buffers so as to arbitrate between the sources. In some embodiments, the examination of the source identifiers provides statistics for a weighted round robin between the ingress buffers. In other embodiments, the source identifier of whichever packet is currently at the head of the second ingress buffer is used to perform a simple round robin between the sources.
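The weighted variant reduces to counting distinct source identifiers in the shared buffer and granting that many slots per round. A minimal sketch, assuming the source identifier occupies the packet's first byte:

```python
from collections import deque

buf_a: deque[bytes] = deque()   # fed by a single source
buf_b: deque[bytes] = deque()   # shared by a plurality of sources

def source_id(packet: bytes) -> int:
    return packet[0]  # assumed position of the source identifier

def arbitrate_round() -> list[bytes]:
    """One round of the weighted variant: one grant for buf_a's source,
    then one grant per distinct source currently observed in buf_b."""
    grants = []
    if buf_a:
        grants.append(buf_a.popleft())
    weight = len({source_id(p) for p in buf_b})  # the WRR statistics
    for _ in range(min(weight, len(buf_b))):
        grants.append(buf_b.popleft())
    return grants

buf_a.extend(bytes([0, i]) for i in range(2))
buf_b.extend(bytes([s, 0]) for s in (1, 2, 3))
# Three sources share buf_b, so they collectively get three grants per round.
assert [source_id(p) for p in arbitrate_round()] == [0, 1, 2, 3]
```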
DATA VALIDITY BASED NETWORK BUFFER MANAGEMENT SYSTEM
Systems and methods are provided for data scheduling and queuing. A data network node is configured to transmit data in a store-and-forward fashion. The data network node includes a delay and validity determination module that determines and assigns a validity value to each data packet arriving via an input port, based on the data packet's time stamp, a current time value, the expected delay along the data packet's route to its destination, and a packet urgency value. A scheduling module and a queue managing module execute their functions based on the validity value assigned to a data packet in a transmission buffer.
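The abstract lists the inputs to the validity computation but not the formula, so the slack-based formula below is an illustrative assumption; the priority queue stands in for the scheduling and queue managing modules.

```python
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPacket:
    neg_validity: float                      # min-heap, so negated validity
    payload: bytes = field(compare=False)

def validity_value(timestamp: float, now: float,
                   expected_route_delay: float, urgency: float) -> float:
    """Remaining usefulness of a packet; higher values transmit sooner.
    Packets whose data would arrive stale get a low (or negative) value."""
    age_on_arrival = (now - timestamp) + expected_route_delay
    return urgency - age_on_arrival

tx_queue: list[QueuedPacket] = []

def enqueue(payload: bytes, timestamp: float,
            expected_route_delay: float, urgency: float) -> None:
    v = validity_value(timestamp, time.monotonic(), expected_route_delay, urgency)
    heapq.heappush(tx_queue, QueuedPacket(-v, payload))

def dequeue() -> bytes:
    return heapq.heappop(tx_queue).payload

enqueue(b"fresh", timestamp=time.monotonic(), expected_route_delay=0.001, urgency=1.0)
enqueue(b"stale", timestamp=time.monotonic() - 5.0, expected_route_delay=0.001, urgency=1.0)
assert dequeue() == b"fresh"
```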
High Bandwidth Content Addressable Memory (CAM) Based Hardware Architecture For Datacenter Networking
A communication protocol system is provided for the reliable transport of packets. A content addressable memory (CAM) hardware architecture including a reorder engine and a retransmission engine may be utilized for the reliable transport of the packets. The CAM module includes a primary CAM that may be logically partitioned into a plurality of physical sub-CAMs. One or more processors are in communication with the CAM module and receive a set of data packets. A lookup operation is performed by the one or more processors to access data entries stored in each of the sub-CAMs. An update operation is performed by the one or more processors at a sub-CAM selected from the plurality of sub-CAMs.
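A software model of the partitioned CAM is straightforward: each sub-CAM is a small key-to-entry map, a lookup probes every partition (as parallel hardware would in one cycle), and an update targets one selected partition. The hash-based partition selection below is an assumption.

```python
NUM_SUB_CAMS = 4
sub_cams: list[dict[int, dict]] = [{} for _ in range(NUM_SUB_CAMS)]

def lookup(key: int) -> dict | None:
    """Probe all sub-CAMs; the hardware does this in parallel."""
    for cam in sub_cams:
        if key in cam:
            return cam[key]
    return None

def update(key: int, entry: dict) -> None:
    """Install or refresh an entry in the selected sub-CAM."""
    sub_cams[key % NUM_SUB_CAMS][key] = entry

# e.g. reorder/retransmission state keyed by a packet sequence number
update(1001, {"state": "awaiting_ack", "retries": 0})
assert lookup(1001)["state"] == "awaiting_ack"
```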
Distributed artificial intelligence extension modules for network switches
Distributed machine learning systems and other distributed computing systems are improved by compute logic embedded in extension modules coupled directly to network switches. The compute logic performs collective actions, such as reduction operations, on gradients or other compute data processed by the nodes of the system. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the extension modules may take over some or all of the processing of the distributed system during the collective phase. An inline version of the module sits between a switch and the network. Data units carrying compute data are intercepted and processed using the compute logic, while other data units pass through the module transparently to or from the switch. Multiple modules may be connected to the switch, each coupled to a different group of nodes, and sharing intermediate results. A sidecar version is also described.
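The collective reduction itself is simple to sketch; the operator names come from the abstract, while the data layout (per-node gradient vectors already reassembled from data units) is an assumption that omits the interception and chunking details.

```python
import operator
from functools import reduce

def reduce_gradients(node_gradients: list[list[float]],
                     op: str = "sum") -> list[float]:
    """Combine per-node gradient vectors element-wise in the module."""
    if op == "sum":
        return [sum(vals) for vals in zip(*node_gradients)]
    if op == "average":
        n = len(node_gradients)
        return [sum(vals) / n for vals in zip(*node_gradients)]
    if op == "bitwise_or":  # for integer-coded payloads
        return [reduce(operator.or_, vals) for vals in zip(*node_gradients)]
    raise ValueError(f"unsupported reduction: {op}")

# Intermediate results from three nodes in the module's group:
grads = [[1.0, 2.0], [3.0, 4.0], [2.0, 0.0]]
assert reduce_gradients(grads, "average") == [2.0, 2.0]
```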
Quality of service in virtual service networks
A switch in a slice-based network can be used to enforce quality of service (“QoS”). Agents can run in the switches, such as in the core of each switch. The switches can sort ingress packets into slice-specific ingress queues in a slice-based pool. The slices can have different QoS prioritizations. A switch-wide policing algorithm can move the slice-specific packets to egress interfaces. Then, one or more user-defined egress policing algorithms can prioritize which packets are sent out into the network first based on slice classifications.
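A behavioral sketch of the slice-based pipeline: packets are sorted into slice-specific ingress queues, then a user-defined egress policer drains slices in priority order. The slice-id field position, slice names, and strict-priority policy are assumptions.

```python
from collections import deque

slice_priority = {"urllc": 0, "embb": 1, "best_effort": 2}  # 0 = highest
slice_queues = {s: deque() for s in slice_priority}

def classify_slice(packet: bytes) -> str:
    """Assumed: byte 0 encodes the slice classification."""
    return ["urllc", "embb", "best_effort"][packet[0] % 3]

def ingress(packet: bytes) -> None:
    """Agent in the switch core sorts packets into slice-specific queues."""
    slice_queues[classify_slice(packet)].append(packet)

def egress_police() -> bytes | None:
    """User-defined egress policy: strict priority across slices."""
    for s in sorted(slice_queues, key=slice_priority.get):
        if slice_queues[s]:
            return slice_queues[s].popleft()
    return None

ingress(bytes([2, 0xAA]))  # best effort
ingress(bytes([0, 0xBB]))  # urllc
assert egress_police()[0] == 0  # the urllc packet leaves the switch first
```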
METHOD AND SYSTEM FOR FACILITATING LOSSY DROPPING AND ECN MARKING
Methods and systems are provided for performing lossy dropping and ECN marking in a flow-based network. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform per-flow packet dropping and ECN marking.
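Per-flow queueing with ECN marking and lossy dropping can be sketched with two depth thresholds; the threshold values and the ECN byte position are assumptions, while the flow-specific input queue is what the abstract describes.

```python
from collections import defaultdict, deque

ECN_MARK_DEPTH = 8    # mark packets once the flow queue exceeds this depth
DROP_DEPTH = 16       # drop packets once the flow queue exceeds this depth

flow_queues: dict[tuple, deque] = defaultdict(deque)  # per-flow state

def on_packet(flow_id: tuple, packet: bytearray) -> str:
    q = flow_queues[flow_id]
    if len(q) >= DROP_DEPTH:
        return "dropped"          # lossy dropping, decided per flow
    if len(q) >= ECN_MARK_DEPTH:
        packet[1] |= 0x03         # assumed ECN-CE bit position
        q.append(packet)
        return "marked"
    q.append(packet)
    return "queued"

flow = ("10.0.0.1", "10.0.0.2", 6, 5000, 80)
results = [on_packet(flow, bytearray(b"\x45\x00" + b"x" * 20)) for _ in range(18)]
assert results.count("queued") == 8 and results.count("marked") == 8
assert results.count("dropped") == 2
```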
EZ-pass: an energy- and performance-efficient power-gating router architecture for scalable on-chip interconnect architecture
With the advent of manycore architectures, the on-chip interconnect connects a number of cores, caches, memory modules, accelerators, graphics processing units (GPUs), or chiplets in one system. However, the on-chip interconnect consumes a significant portion of a parallel computing chip's total power. Power-gating is an effective technique for reducing power consumption by powering off routers, but it suffers from a large wake-up latency before a router resumes full activity. Recent research attempts to hide this wake-up latency penalty through early wake-up techniques. However, because routers are woken early, these techniques do not exploit the full advantage of power-gating and consequently do not achieve significant power savings. The present invention provides a new router architecture that remedies the large wake-up latency overhead while providing significant power savings. The invention takes advantage of a simple switch to transmit packets without waking up the router, and it hides the wake-up latency by continuing to transmit packets during the wake-up phase.
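The EZ-pass idea reduces to a small state machine: while a router is powered off or still waking, packets traverse a simple bypass switch instead of stalling. The cycle counts and the traffic-triggered wake-up policy below are assumptions.

```python
from enum import Enum

class RouterState(Enum):
    OFF = 0
    WAKING = 1
    ON = 2

WAKEUP_CYCLES = 8  # assumed wake-up latency

class PowerGatedRouter:
    def __init__(self) -> None:
        self.state = RouterState.OFF
        self.wake_timer = 0

    def tick(self) -> None:
        """Advance one cycle of the wake-up sequence."""
        if self.state is RouterState.WAKING:
            self.wake_timer -= 1
            if self.wake_timer == 0:
                self.state = RouterState.ON

    def route(self, packet: bytes) -> str:
        if self.state is RouterState.ON:
            return "full_router"             # normal router pipeline
        if self.state is RouterState.OFF:
            self.state = RouterState.WAKING  # traffic triggers the wake-up
            self.wake_timer = WAKEUP_CYCLES
        return "bypass_switch"               # keep transmitting while waking

r = PowerGatedRouter()
paths = []
for _ in range(10):
    paths.append(r.route(b"pkt"))
    r.tick()
# Packets flow via the bypass during wake-up, then via the full router.
assert paths[0] == "bypass_switch" and paths[-1] == "full_router"
```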