
SYSTEM AND METHOD FOR FACILITATING OPERATION MANAGEMENT IN A NETWORK INTERFACE CONTROLLER (NIC) FOR ACCELERATORS

A network interface controller (NIC) capable of efficient operation management for host accelerators is provided. The NIC can be equipped with a host interface and a triggering logic block. During operation, the host interface can couple the NIC to a host device. The triggering logic block can obtain, via the host interface from the host device, an operation associated with an accelerator of the host device. The triggering logic block can determine whether a triggering condition has been satisfied for the operation based on an indicator received from the accelerator. If the triggering condition has been satisfied, the triggering logic block can obtain, from a memory location, a piece of data generated by the accelerator and execute the operation using the piece of data.
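The trigger-and-fire behavior described above can be illustrated with a minimal Python sketch. All names (`TriggeredOp`, `TriggerLogic`, the counter-style indicator) are hypothetical, chosen only to show one plausible reading of the mechanism: operations are queued with a threshold, the accelerator advances an indicator, and satisfied operations execute using data read from a memory location.

```python
class TriggeredOp:
    """A queued operation waiting on a triggering condition (illustrative)."""
    def __init__(self, threshold, mem, addr, action):
        self.threshold = threshold   # indicator value that satisfies the trigger
        self.mem = mem               # shared "memory" the accelerator writes into
        self.addr = addr             # location of the accelerator-produced data
        self.action = action         # operation to run once triggered

class TriggerLogic:
    """Sketch of the triggering logic block; not the claimed implementation."""
    def __init__(self):
        self.pending = []
        self.indicator = 0           # counter advanced by the accelerator

    def queue(self, op):
        self.pending.append(op)

    def accelerator_signal(self):
        """Accelerator bumps the indicator; fire any satisfied operations."""
        self.indicator += 1
        fired = [op for op in self.pending if self.indicator >= op.threshold]
        self.pending = [op for op in self.pending if self.indicator < op.threshold]
        # Execute each fired operation on the data found at its memory location.
        return [op.action(op.mem[op.addr]) for op in fired]
```

For example, an operation queued with a threshold of 2 stays pending after the first accelerator signal and executes on the second.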

SYSTEMS AND METHODS FOR ADAPTIVE ROUTING IN THE PRESENCE OF PERSISTENT FLOWS
20220200900 · 2022-06-23 ·

Systems and methods are described for providing adaptive routing in the presence of persistent flows. Switches in a fabric have the capability to establish flow channels. Switches can adaptively route flows while monitoring transmission characteristics of the flow channels to identify whether any flows are experiencing congestion towards a destination. In response to detecting congestion, it can further be determined whether the flow is a source of congestion or, alternatively, a victim of congestion. Flows that are a source of congestion have their routing constrained to prevent congestion from propagating. For example, new packets of a flow that is a source of congestion may be forced to take only the path of the data transmission that detected the congestion, preventing the congestion from spreading. Victims of congestion, by contrast, do not have their routing constrained, and their packets can take any path permitted by adaptive routing.
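The source-versus-victim distinction above can be sketched as a simple path-selection rule. This is an illustrative reading, not the patented algorithm: a flow flagged as a congestion source is pinned to the path on which congestion was detected, while a victim flow retains full adaptive-routing freedom.

```python
import random

def choose_path(paths, congested_path=None, is_congestion_source=False):
    """Pick an egress path for a flow's next packet.

    A flow identified as a *source* of congestion is constrained to the
    path that detected the congestion; a *victim* flow may still take any
    path offered by adaptive routing. (Hypothetical sketch.)
    """
    if is_congestion_source and congested_path is not None:
        return congested_path
    return random.choice(paths)
```

Constraining only the sources keeps their packets from fanning congestion out across additional paths, while victims are free to route around the hotspot.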

SYSTEM AND METHOD FOR FACILITATING DATA-DRIVEN INTELLIGENT NETWORK
20220200913 · 2022-06-23 ·

Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic with fast, effective congestion control. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform flow control on a per-flow basis.
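A minimal Python sketch of the per-flow state described above follows. The class and field names are hypothetical: each flow gets its own input queue on arrival, acknowledgements returning along the data path shrink the tracked in-flight count, and flow state is released dynamically once the flow drains.

```python
from collections import defaultdict, deque

class FlowSwitch:
    """Illustrative per-flow state at a switch; not the claimed design."""
    def __init__(self):
        self.queues = defaultdict(deque)   # flow id -> flow-specific input queue
        self.in_flight = defaultdict(int)  # packets not yet acknowledged

    def on_packet(self, flow_id, pkt):
        # Flow state is set up dynamically, driven by the injected data.
        self.queues[flow_id].append(pkt)
        self.in_flight[flow_id] += 1

    def on_ack(self, flow_id):
        # ACKs travel back along the same data path, so every switch on the
        # path can update its per-flow state and apply per-flow control.
        self.in_flight[flow_id] -= 1
        self.queues[flow_id].popleft()
        if self.in_flight[flow_id] == 0:
            del self.queues[flow_id]       # flow state released dynamically
            del self.in_flight[flow_id]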

SYSTEM AND METHOD FOR FACILITATING EFFICIENT PACKET INJECTION INTO AN OUTPUT BUFFER IN A NETWORK INTERFACE CONTROLLER (NIC)

A network interface controller (NIC) capable of efficient packet injection into an output buffer is provided. The NIC can be equipped with an output buffer, a plurality of injectors, a prioritization logic block, and a selection logic block. The plurality of injectors can share the output buffer. The prioritization logic block can determine a priority associated with a respective injector based on a high watermark and a low watermark associated with the injector. The selection logic block can then determine, from the plurality of injectors, a subset of injectors associated with a buffer class and determine whether the subset of injectors includes a high-priority injector. Upon identifying a high-priority injector in the subset of injectors, the selection logic block can select the high-priority injector for injecting a packet in the output buffer.
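The two-watermark prioritization and buffer-class selection can be sketched as follows. This is one plausible interpretation, with illustrative names: an injector whose occupancy is below its low watermark is high priority, one at or above its high watermark is low priority, and the selection logic prefers a high-priority injector within the requested buffer class.

```python
def injector_priority(occupancy, low_watermark, high_watermark):
    """Priority from the two watermarks (assumed semantics, for illustration)."""
    if occupancy < low_watermark:
        return "high"
    if occupancy >= high_watermark:
        return "low"
    return "normal"

def select_injector(injectors, buffer_class):
    """Pick an injector for a buffer class, preferring a high-priority one."""
    subset = [i for i in injectors if i["class"] == buffer_class]
    for inj in subset:
        if injector_priority(inj["occupancy"], inj["low"], inj["high"]) == "high":
            return inj
    return subset[0] if subset else None
```

Filtering by buffer class first, then scanning for a high-priority injector, mirrors the two-step selection the abstract describes.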

DYNAMIC BUFFER MANAGEMENT IN DATA-DRIVEN INTELLIGENT NETWORK

Systems and methods for dynamic buffer management in switches that facilitate a data-driven intelligent networking system are provided. The system can accommodate dynamic traffic with fast, effective congestion control while providing efficient use of internal input buffer space.

SYSTEM AND METHOD FOR PERFORMING ON-THE-FLY REDUCTION IN A NETWORK
20220191128 · 2022-06-16 ·

A switch capable of on-the-fly reduction in a network is provided. The switch is equipped with a reduction engine that can be dynamically configured to perform on-the-fly reduction. As a result, the network can facilitate an efficient and scalable environment for high performance computing.

METHOD AND SYSTEM FOR PROVIDING NETWORK INGRESS FAIRNESS BETWEEN APPLICATIONS
20220191127 · 2022-06-16 ·

Methods and systems are provided to facilitate network ingress fairness between applications. At an ingress port of a network, the applications providing data communications are reviewed so that an arbitration process can be used to fairly allocate bandwidth at that ingress port. In a typical process, bandwidth is allocated based upon the number of flow channels, irrespective of the source and characteristics of those flow channels. At the ingress port, examining the application providing the data communication allows for a more appropriate allocation of input bandwidth.
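The fairness problem above can be made concrete with a small sketch (not the claimed arbitration): splitting bandwidth per flow channel lets an application with many flows crowd out one with few, whereas splitting per application first, then across each application's flow channels, keeps the applications even.

```python
def per_application_shares(flows):
    """Divide ingress bandwidth equally across applications, then equally
    across each application's flow channels. `flows` maps flow id ->
    application id. Illustrative only."""
    apps = {}
    for flow, app in flows.items():
        apps.setdefault(app, []).append(flow)
    share_per_app = 1.0 / len(apps)
    return {flow: share_per_app / len(members)
            for members in apps.values() for flow in members}
```

With two flows from application A and one from application B, per-flow-channel allocation would give A two thirds of the port; the per-application split above gives each application half.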

Packet processing system, method and device having reduced static power consumption
11297012 · 2022-04-05 ·

A buffer logic unit of a packet processing device includes a power gate controller. The buffer logic unit organizes and/or allocates available pages to packets for storing the packet data based on which of a plurality of separately accessible physical memories the pages are associated with. As a result, the power gate controller is able to more efficiently cut off power from one or more of the physical memories.
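One way to read the allocation strategy above is that pages are packed into as few physical memories as possible so that untouched memories can be power-gated. The sketch below is illustrative only; the function name, greedy packing order, and data layout are assumptions, not the claimed method.

```python
def allocate_pages(free_pages, needed):
    """Allocate `needed` pages, concentrating them in as few physical
    memories as possible; memories left untouched become gating candidates.
    `free_pages` maps memory id -> list of free page ids. (Sketch.)"""
    allocation, used = [], set()
    # Greedily favor the memories with the most free pages.
    for mem in sorted(free_pages, key=lambda m: -len(free_pages[m])):
        if needed == 0:
            break
        take = free_pages[mem][:needed]
        if take:
            allocation.extend((mem, page) for page in take)
            used.add(mem)
            needed -= len(take)
    gateable = [m for m in free_pages if m not in used]
    return allocation, gateable
```

Concentrating allocations leaves whole memories idle, which is what lets the power gate controller cut their static power rather than keeping every memory partially active.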

Technologies for jitter-adaptive low-latency, low power data streaming between device components

Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
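The producer's mode switching described above can be sketched in a few lines of Python. The class and threshold names are illustrative: the producer fills a small local buffer, spills to a remote buffer once the local buffer is full, and switches back when the remote buffer drains below its low threshold.

```python
from collections import deque

class JitterAdaptiveProducer:
    """Illustrative producer-side mode switching; not the claimed design."""
    def __init__(self, local_capacity, remote_low_threshold):
        self.local = deque()
        self.remote = deque()
        self.local_capacity = local_capacity
        self.remote_low = remote_low_threshold
        self.mode = "local"

    def produce(self, item):
        if self.mode == "local" and len(self.local) < self.local_capacity:
            self.local.append(item)
        else:
            self.mode = "remote"        # local buffer full: spill remotely
            self.remote.append(item)

    def consume_remote(self):
        """Consumer drains the remote buffer; below the low threshold the
        producer falls back to low-latency local-buffer operation."""
        item = self.remote.popleft()
        if len(self.remote) < self.remote_low:
            self.mode = "local"
        return item
```

The hysteresis between "local full" and "remote below low threshold" is what absorbs jitter: brief bursts spill to the remote buffer without forcing the steady-state path off the low-latency local buffer.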