Patent classifications
H04L49/3045
Adaptive buffering in a distributed system with latency / adaptive tail drop
A network device includes a switching system for directing packets between ingress ports and egress ports of the network device. The network device also includes a switching system manager that identifies a state change of a virtual output queue of the switching system and, in response, performs an action set based on the state change to modify the latency of the virtual output queue to meet a predetermined latency.
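A minimal sketch of the latency-driven tail-drop idea this abstract describes, assuming the adaptation is simply a tail-drop threshold recomputed from a latency target and the queue's observed drain rate; the names (VirtualOutputQueue, drain_rate_bps, on_drain_rate_change) are illustrative, not the patent's implementation.

```python
# Sketch: size a virtual output queue's tail-drop threshold so that queued
# packets meet a latency target. A queue drained at R bits/s holds at most
# R * L bits if queuing delay is to stay under L seconds.

class VirtualOutputQueue:
    def __init__(self, target_latency_s: float, drain_rate_bps: float):
        self.target_latency_s = target_latency_s
        self.drain_rate_bps = drain_rate_bps
        self.depth_bits = 0
        self.max_depth_bits = target_latency_s * drain_rate_bps

    def on_drain_rate_change(self, new_rate_bps: float) -> None:
        # The "state change" here is a change in drain rate (e.g. congestion);
        # the "action set" recomputes the threshold to keep latency bounded.
        self.drain_rate_bps = new_rate_bps
        self.max_depth_bits = self.target_latency_s * new_rate_bps

    def enqueue(self, packet_bits: int) -> bool:
        if self.depth_bits + packet_bits > self.max_depth_bits:
            return False  # adaptive tail drop: exceeding the threshold drops the packet
        self.depth_bits += packet_bits
        return True

    def dequeue(self, packet_bits: int) -> None:
        self.depth_bits = max(0, self.depth_bits - packet_bits)


# Example: a 10 Gb/s drain rate with a 100 us latency target allows ~1 Mb in queue.
voq = VirtualOutputQueue(target_latency_s=100e-6, drain_rate_bps=10e9)
print(voq.max_depth_bits)       # 1000000.0
voq.on_drain_rate_change(1e9)   # congestion: drain rate falls, threshold shrinks
print(voq.max_depth_bits)       # 100000.0
```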
Maintaining bandwidth utilization in the presence of packet drops
Examples describe scheduling packet segment fetches at a rate based on one or more of: a packet drop indication, packet drop rate, incast level, operation of queues in store-and-forward (SAF) or virtual cut-through (VCT) mode, or fabric congestion level. Headers of packets can be fetched faster than payload or body portions and processed before all body portions are queued. If a header is identified as droppable, fetching of the associated body portions can be halted and any body portion already queued can be discarded. Fetch overspeed can be applied to packet headers, or to body portions whose headers have been approved for egress.
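A minimal sketch of the header-first fetch behavior described above, assuming one scheduling round fetches a configurable number of body segments scaled by an overspeed factor; the Packet fields, the droppable() test, and fetch_round() are illustrative assumptions rather than the patent's logic.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    header: bytes
    body_segments: list                      # body segments still to be fetched
    queued_body: list = field(default_factory=list)   # body segments already queued

def droppable(header: bytes) -> bool:
    # Stand-in for the real drop decision (drop indication, incast level, congestion, ...).
    return header.startswith(b"DROP")

def fetch_round(pkt: Packet, base_segments: int = 1, overspeed: float = 1.0) -> list:
    """One scheduling round: the header is examined before the body is fully queued."""
    if droppable(pkt.header):
        pkt.queued_body.clear()      # discard any body portion already queued
        pkt.body_segments.clear()    # halt further body fetches
        return []
    # Approved packets may be fetched with overspeed: more segments per round.
    n = max(1, int(base_segments * overspeed))
    fetched, pkt.body_segments = pkt.body_segments[:n], pkt.body_segments[n:]
    pkt.queued_body.extend(fetched)
    return fetched

ok = Packet(header=b"FWD", body_segments=[b"s0", b"s1", b"s2"])
bad = Packet(header=b"DROP", body_segments=[b"s0"], queued_body=[b"old"])
print(fetch_round(ok, overspeed=2.0))   # two segments fetched this round
print(fetch_round(bad))                 # []: body fetch halted, queued body discarded
```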
Combined input and output queue for packet forwarding in network devices
An apparatus for switching network traffic includes an ingress packet forwarding engine and an egress packet forwarding engine. The ingress packet forwarding engine is configured to determine, in response to receiving a network packet, an egress packet forwarding engine for outputting the network packet, and to enqueue the network packet in a virtual output queue. The egress packet forwarding engine is configured to output to the ingress packet forwarding engine, in response to a first scheduling event, information indicating the network packet in the virtual output queue and indicating that the network packet is to be enqueued at an output queue for an output port of the egress packet forwarding engine. The ingress packet forwarding engine is further configured to dequeue, in response to receiving the information, the network packet from the virtual output queue and enqueue it to the output queue.
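A minimal sketch of the ingress/egress handshake described above, using in-memory deques; the class and method names (IngressPFE, EgressPFE, on_grant, scheduling_event) are illustrative assumptions, not the patent's terminology.

```python
from collections import deque

class IngressPFE:
    def __init__(self):
        # Single virtual output queue for simplicity; a real device keeps one
        # VOQ per egress engine/port.
        self.voq = deque()

    def receive(self, packet, egress_pfe):
        # Determine the egress engine for the packet and enqueue it in the VOQ.
        self.voq.append((packet, egress_pfe))

    def on_grant(self, egress_pfe):
        # Information from the egress side: dequeue from the VOQ and move the
        # packet to the output queue of the granting egress engine.
        packet, _ = self.voq.popleft()
        egress_pfe.output_queue.append(packet)

class EgressPFE:
    def __init__(self, ingress_pfe):
        self.ingress = ingress_pfe
        self.output_queue = deque()   # output queue for an output port

    def scheduling_event(self):
        # First scheduling event: tell the ingress engine which VOQ packet
        # should be enqueued at this engine's output queue.
        if self.ingress.voq:
            self.ingress.on_grant(self)

ingress = IngressPFE()
egress = EgressPFE(ingress)
ingress.receive("pkt0", egress)
egress.scheduling_event()
print(list(egress.output_queue))  # ['pkt0']
```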
Fabric vectors for deep learning acceleration
Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes various attributes of the fabric vector: length, microthreading eligibility, number of data elements to receive, transmit, and/or process in parallel, virtual channel and task identification information, whether to terminate upon receiving a control wavelet, and whether to mark an outgoing wavelet as a control wavelet.
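A minimal sketch modeling the fabric-vector descriptor attributes listed above as plain fields; the field names and the register-file layout are illustrative assumptions, not the actual descriptor encoding.

```python
from dataclasses import dataclass

@dataclass
class FabricVectorDescriptor:
    length: int                    # number of data elements in the fabric vector
    microthreading_eligible: bool  # may the instruction yield while waiting on the fabric?
    parallel_elements: int         # elements to receive/transmit/process in parallel
    virtual_channel: int           # fabric virtual channel carrying the wavelets
    task_id: int                   # task identification information
    terminate_on_control: bool     # terminate upon receiving a control wavelet
    mark_outgoing_control: bool    # mark outgoing wavelets as control wavelets

# A data structure register file could map register numbers to descriptors;
# an operand specifier in an instruction then names the register.
dsr_file = {
    3: FabricVectorDescriptor(length=64, microthreading_eligible=True,
                              parallel_elements=4, virtual_channel=7,
                              task_id=12, terminate_on_control=False,
                              mark_outgoing_control=False),
}
print(dsr_file[3].length)  # 64
```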
Congestion Control Measures in Multi-Host Network Adapter
A network adapter includes a host interface for connecting to multiple host processors, a network interface, a memory, and packet processing circuitry. The memory holds a shared buffer and multiple queues allocated to the host processors. The packet processing circuitry is configured to receive from the network interface data packets destined to the host processors, to store payloads of at least some of the data packets in the shared buffer, to distribute headers of at least some of the data packets to the queues, to serve the data packets to the host processors by applying scheduling among the queues, to detect congestion in the data packets destined to a given host processor, and, in response to the detected congestion, to mitigate that congestion while retaining uninterrupted processing of the data packets destined to the other host processors.
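A minimal sketch of the shared-buffer/per-host-queue arrangement described above, assuming congestion is detected as a per-host queue depth threshold and mitigated by ECN-style marking of only that host's packets; the class name, the threshold, and the marking choice are illustrative assumptions.

```python
from collections import deque

class MultiHostAdapter:
    def __init__(self, hosts, queue_limit=64):
        self.shared_buffer = {}                    # packet id -> payload
        self.queues = {h: deque() for h in hosts}  # per-host header queues
        self.queue_limit = queue_limit

    def receive(self, pkt_id, host, header, payload):
        # Payloads go to the shared buffer; headers go to the destination host's queue.
        self.shared_buffer[pkt_id] = payload
        if len(self.queues[host]) >= self.queue_limit:
            # Congestion detected for this host only: mark rather than stall others.
            header = dict(header, ecn_ce=True)
        self.queues[host].append((pkt_id, header))

    def serve_round_robin(self):
        # Scheduling among the queues: one packet per host per round, so a
        # congested host cannot interrupt delivery to the other hosts.
        for host, q in self.queues.items():
            if q:
                pkt_id, header = q.popleft()
                yield host, header, self.shared_buffer.pop(pkt_id)

adapter = MultiHostAdapter(hosts=["h0", "h1"], queue_limit=2)
adapter.receive(1, "h0", {"dst": "h0"}, b"payload-1")
adapter.receive(2, "h1", {"dst": "h1"}, b"payload-2")
for host, header, payload in adapter.serve_round_robin():
    print(host, header, len(payload))
```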
Reprogramming multicast replication using real-time buffer feedback
Methods and systems are described for substituting ingress replication buffering for egress replication buffering after egress buffer errors (such as overflow) are identified for multicast traffic. A network element identifies which ports drop packets by monitoring egress buffers and/or multicast traffic in real time. A hardware forwarding engine provides feedback to a control plane processor of the network element, which can then selectively and temporarily reprogram multicast replication to ingress for certain egress ports that have, e.g., egress buffer errors or are at risk under high network traffic. Using virtual output queues in ingress buffers may reduce the risk of egress port congestion, since egress buffers have more limited resources than ingress buffers; however, relying solely on ingress replication for multicast traffic may hinder unicast traffic, so ingress buffer replication of multicast traffic may be used selectively and temporarily.
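A minimal sketch of the control-plane decision described above, assuming the feedback is a per-port multicast drop count, a fixed drop threshold, and a revert timer; the class name, threshold, and timer are illustrative assumptions rather than the patented mechanism.

```python
import time

class ReplicationController:
    def __init__(self, drop_threshold=3, revert_after_s=5.0):
        self.drop_threshold = drop_threshold
        self.revert_after_s = revert_after_s
        self.ingress_replicated = {}   # egress port -> time it was reprogrammed

    def on_egress_buffer_feedback(self, port, drops_observed):
        # Feedback from the hardware forwarding engine: if a port's egress
        # buffer drops multicast packets, temporarily reprogram that port's
        # replication to ingress, where virtual output queues give deeper buffering.
        if drops_observed >= self.drop_threshold and port not in self.ingress_replicated:
            self.ingress_replicated[port] = time.monotonic()

    def replication_point(self, port):
        # Revert to egress replication once the temporary window expires so
        # ingress replication does not hinder unicast traffic longer than needed.
        started = self.ingress_replicated.get(port)
        if started is not None and time.monotonic() - started > self.revert_after_s:
            del self.ingress_replicated[port]
            started = None
        return "ingress" if started is not None else "egress"

ctrl = ReplicationController()
ctrl.on_egress_buffer_feedback(port=5, drops_observed=4)
print(ctrl.replication_point(5))   # 'ingress' until the revert timer expires
print(ctrl.replication_point(1))   # 'egress'
```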
Method and system for virtual channel remapping
A virtual channel (VC) allocation system is provided. During operation, the system can maintain, at an ingress port of a switch, a set of counters. A respective counter can indicate a number of data units queued at a corresponding egress port for an ingress VC. A data unit can indicate a minimum number of bits needed to form a packet. The system can maintain, at an egress port, an ingress VC indicator indicating that a packet in an egress buffer for an egress VC corresponds to the ingress VC. Upon sending the packet, the system can update a counter based on the ingress VC indicator. The counter can be associated with the egress buffer and the ingress VC. The system can then issue, to a sender device, credits associated with the ingress VC based on a minimum number of available data units indicated by the set of counters.
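A minimal sketch of the counter and credit bookkeeping described above, assuming a fixed number of data units per egress port and a tag stored with each queued packet as the ingress VC indicator; the class name, sizes, and method names are illustrative assumptions rather than the actual switch implementation.

```python
class VcAllocator:
    def __init__(self, egress_ports, ingress_vcs, units_per_port=32):
        self.units_per_port = units_per_port
        # counters[egress_port][ingress_vc] = data units queued at that port
        # for packets that arrived on that ingress VC.
        self.counters = {p: {vc: 0 for vc in ingress_vcs} for p in egress_ports}

    def enqueue(self, egress_port, ingress_vc, units):
        # A packet from ingress_vc is queued (possibly remapped to a different
        # egress VC); the egress buffer keeps an ingress VC indicator with it.
        self.counters[egress_port][ingress_vc] += units
        return {"ingress_vc": ingress_vc, "units": units}

    def on_packet_sent(self, egress_port, indicator):
        # On send, update the counter associated with this egress buffer and
        # the packet's ingress VC.
        self.counters[egress_port][indicator["ingress_vc"]] -= indicator["units"]

    def credits_for(self, ingress_vc):
        # Credits issued to the sender reflect the minimum number of available
        # data units for this ingress VC across the set of counters.
        return min(self.units_per_port - counts[ingress_vc]
                   for counts in self.counters.values())

alloc = VcAllocator(egress_ports=[0, 1], ingress_vcs=[0, 1], units_per_port=32)
tag = alloc.enqueue(egress_port=0, ingress_vc=1, units=4)
print(alloc.credits_for(1))   # 28: port 0 holds 4 units for ingress VC 1
alloc.on_packet_sent(0, tag)
print(alloc.credits_for(1))   # 32
```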