Packet Processing Method and Apparatus, Communications Device, and Switching Circuit
20220329544 · 2022-10-13

A packet processing method includes: a first device receives a packet from a second device; the first device determines a first queue buffer used to store the packet, and determines a first upper limit value of the first queue buffer based on an available value of a first port buffer and an available value of a global buffer, where the global buffer includes at least one port buffer, the first port buffer is one of the at least one port buffer, the first port buffer includes at least one queue buffer, and the first queue buffer is one of the at least one queue buffer. The first device processes the packet based on the first upper limit value of the first queue buffer, an occupation value of the first queue buffer, and a size of the packet.
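
For illustration, here is a minimal Python sketch of the kind of admission check the abstract describes: a queue's upper limit is derived from the available space in its port buffer and in the shared global buffer, and a packet is enqueued only if the queue's current occupation plus the packet size stays under that limit. The combining function (a weighted minimum) and all names are assumptions, not the claimed implementation.

```python
def queue_upper_limit(port_available: int, global_available: int,
                      alpha: float = 0.5) -> int:
    """Derive a dynamic per-queue cap from port-level and global headroom."""
    # Assumption: the cap scales with whichever resource is scarcer.
    return int(alpha * min(port_available, global_available))

def admit_packet(queue_occupied: int, packet_size: int,
                 port_available: int, global_available: int) -> bool:
    """Enqueue the packet only if it fits under the queue's current upper limit."""
    limit = queue_upper_limit(port_available, global_available)
    return queue_occupied + packet_size <= limit

# A 1500-byte packet arrives while the queue already holds 12,000 bytes.
print(admit_packet(queue_occupied=12_000, packet_size=1_500,
                   port_available=40_000, global_available=30_000))   # -> True
```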

METHODS FOR DISTRIBUTING SOFTWARE-DETERMINED GLOBAL LOAD INFORMATION

Systems and methods are provided for performing routing in a switch network or fabric. Switches can be configured in a hierarchical topology having a plurality of groups, where switches in a group are connected to one another, and groups are connected to other groups. Routing can be performed by maintaining per-group group load information. A packet can be routed between at least two groups using the per-group group load information to effect a set of routing decisions. The set of routing decisions can be biased towards or away from one or more paths.
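
A rough Python sketch of what biasing inter-group routing decisions with per-group load information could look like; the cost rule, the sign convention of the bias, and the data shapes are assumptions rather than the patented method.

```python
from typing import Dict, List

def pick_group_path(candidate_paths: List[List[int]],
                    group_load: Dict[int, float],
                    bias: Dict[int, float]) -> List[int]:
    """Pick the candidate path whose transit groups carry the least (biased) load."""
    def cost(path: List[int]) -> float:
        # Positive bias steers traffic away from a group; negative bias prefers it.
        return sum(group_load.get(g, 0.0) + bias.get(g, 0.0) for g in path)
    return min(candidate_paths, key=cost)

# Two candidate group sequences from group 0 to group 3.
paths = [[0, 1, 3], [0, 2, 3]]
load = {0: 0.1, 1: 0.7, 2: 0.3, 3: 0.2}
print(pick_group_path(paths, load, bias={1: 0.2}))    # -> [0, 2, 3]
```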

Non-disruptive implementation of policy configuration changes
11483206 · 2022-10-25

Techniques for non-disruptive configuration changes are provided. A packet is received at a network device, and the packet is buffered in a common pool shared by a first processing pipeline and a second processing pipeline, where the first processing pipeline corresponds to a first policy and the second processing pipeline corresponds to a second policy. A first copy of a packet descriptor for the packet is queued in a first scheduler based on processing the first copy of the packet descriptor with the first processing pipeline. A second copy of the packet descriptor is queued in a second scheduler based on processing the second copy of the packet descriptor with the second processing pipeline. Upon determining that the first policy is currently active on the network device, the first copy of the packet descriptor is dequeued from the first scheduler.
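
A minimal Python sketch, with assumed names and data structures, of the idea: both schedulers are populated with descriptor copies from a shared packet pool, and only the scheduler belonging to the currently active policy is drained, so switching policies needs no repopulation of buffers.

```python
from collections import deque
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Descriptor:
    packet_id: int
    policy: str

common_pool: Dict[int, bytes] = {}            # packet_id -> buffered packet
schedulers = {"policy_a": deque(), "policy_b": deque()}
active_policy = "policy_a"                    # flipped when a new policy takes effect

def on_packet(packet_id: int, data: bytes) -> None:
    common_pool[packet_id] = data             # buffered once, shared by both pipelines
    for policy, sched in schedulers.items():
        sched.append(Descriptor(packet_id, policy))   # one descriptor copy per pipeline

def dequeue_next() -> Optional[bytes]:
    sched = schedulers[active_policy]         # only the active policy's scheduler drains
    if not sched:
        return None
    desc = sched.popleft()
    return common_pool[desc.packet_id]

on_packet(1, b"payload")
print(dequeue_next())                         # b'payload', served under the active policy
```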

DILATED CONVOLUTION USING SYSTOLIC ARRAY
20220292163 · 2022-09-15 ·

In one example, a non-transitory computer readable medium stores instructions that, when executed by one or more hardware processors, cause the one or more hardware processors to: load a first weight data element of an array of weight data elements from a memory into a systolic array; select a subset of input data elements to load from the memory into the systolic array to perform first computations of a dilated convolution operation, the subset being selected based on a rate of the dilated convolution operation and coordinates of the first weight data element within the array of weight data elements; and control the systolic array to perform the first computations based on the first weight data element and the subset to generate first output data elements of an output data array. An example of a compiler that generates the instructions is also provided.
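
An illustrative Python/NumPy sketch of the selection step: for one weight element at kernel coordinates (wr, wc), the input elements it multiplies in a dilated convolution form a strided window offset by those coordinates scaled by the dilation rate. The memory layout and the weight-at-a-time accumulation loop are assumptions, not the claimed instruction stream.

```python
import numpy as np

def input_subset_for_weight(inp: np.ndarray, wr: int, wc: int,
                            kernel: int, rate: int) -> np.ndarray:
    """Input window offset by the weight's coordinates scaled by the dilation rate."""
    out_h = inp.shape[0] - rate * (kernel - 1)
    out_w = inp.shape[1] - rate * (kernel - 1)
    return inp[wr * rate: wr * rate + out_h, wc * rate: wc * rate + out_w]

def dilated_conv2d(inp: np.ndarray, weights: np.ndarray, rate: int) -> np.ndarray:
    """Accumulate one weight's partial products at a time, as a systolic pass might."""
    k = weights.shape[0]
    out_h = inp.shape[0] - rate * (k - 1)
    out_w = inp.shape[1] - rate * (k - 1)
    out = np.zeros((out_h, out_w))
    for wr in range(k):
        for wc in range(k):
            out += weights[wr, wc] * input_subset_for_weight(inp, wr, wc, k, rate)
    return out

x = np.arange(36.0).reshape(6, 6)
w = np.ones((2, 2))
print(dilated_conv2d(x, w, rate=2).shape)   # (4, 4)
```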

Technologies for latency based service level agreement management in remote direct memory access networks

Technologies for latency based service level agreement (SLA) management in remote direct memory access (RDMA) networks include multiple compute devices in communication via a network switch. A compute device determines a service level objective (SLO) indicative of a guaranteed maximum latency for a percentage of RDMA requests of an RDMA session. The compute device receives latency data indicative of latency of an RDMA request from a host device. The compute device determines a priority associated with the RDMA request as a function of the SLO and the latency data. The compute device schedules the RDMA request based on the priority. The network switch may allocate queue resources to the RDMA request based on the priority, reclaim the queue resources after the RDMA request is scheduled, and then return the queue resources to a free pool. Other embodiments are described and claimed.
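
A hedged Python sketch of mapping an RDMA request's observed latency against its SLO (a guaranteed maximum latency for a percentage of requests) to a scheduling priority. The thresholds and the priority scale are assumptions; the abstract only states that priority is a function of the SLO and the latency data.

```python
from bisect import bisect_right

def request_priority(observed_latency_us: float, slo_max_latency_us: float) -> int:
    """Higher value = more urgent: the closer a request is to its SLO budget."""
    used_fraction = observed_latency_us / slo_max_latency_us
    thresholds = [0.5, 0.9]                         # assumed bands of the SLO budget
    return bisect_right(thresholds, used_fraction)  # 0 comfortable, 1 close, 2 at/over

print(request_priority(observed_latency_us=45.0, slo_max_latency_us=50.0))   # -> 2
```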

MACHINE TO MACHINE COMMUNICATIONS

Broadly speaking, the present techniques relate to a computer-implemented method for establishing a secure communication session between a client device and a server resource.

ALGORITHMS FOR USE OF LOAD INFORMATION FROM NEIGHBORING NODES IN ADAPTIVE ROUTING

Systems and methods are provided for passing data amongst a plurality of switches having a plurality of links attached between the plurality of switches. At a switch, a plurality of load signals are received from a plurality of neighboring switches. Each of the plurality of load signals is made up of a set of values indicative of a load at the neighboring switch providing the load signal. Each value within the set of values indicates, for a corresponding one of the links attached to that switch, whether the link is busy or quiet. Based upon the plurality of load signals, an output link for routing a received packet is selected, and the received packet is routed via the selected output link.
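
A minimal Python sketch, with invented data structures, of choosing an output link from neighbor-advertised load signals in which each value marks one of the neighbor's links as busy (True) or quiet (False). The selection rule here (prefer locally quiet links, break ties by fewest busy links downstream) is an assumption, not the claimed algorithm.

```python
from typing import Dict, List, Optional, Tuple

def select_output_link(candidate_links: List[int],
                       local_busy: Dict[int, bool],
                       neighbor_signals: Dict[int, List[bool]]) -> Optional[int]:
    """candidate_links: local links usable for this packet.
    local_busy: busy/quiet state of each local link.
    neighbor_signals: per candidate link, the neighbor's busy flags for its own links."""
    def score(link: int) -> Tuple[bool, int]:
        downstream_busy = sum(neighbor_signals.get(link, []))
        return (local_busy.get(link, True), downstream_busy)
    quiet_candidates = [l for l in candidate_links if not local_busy.get(l, True)]
    pool = quiet_candidates or candidate_links
    return min(pool, key=score) if pool else None

signals = {1: [True, True, False], 2: [False, False, True]}
print(select_output_link([1, 2], {1: False, 2: False}, signals))   # -> 2
```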