Patent classifications
H04L47/524
MULTI-RADIO DEVICE
One example discloses a multi-radio device, including: a controller configured to be coupled to a radio; wherein the controller is configured to receive, from the radio, a request to communicate a signal with an initial communication priority; wherein the controller includes a priority offset module configured to adjust the initial communication priority by a first offset; and wherein the controller includes a priority escalator module configured to adjust the initial communication priority by a second offset.
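The two-stage adjustment described above can be sketched as follows. This is a minimal illustration, assuming the first offset is a static per-radio bias and the second offset escalates with repeated arbitration denials; the class and method names, and the escalation policy itself, are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch of the priority offset + priority escalator scheme.
# Assumption: the "second offset" grows each time a request is denied, so
# a starved radio eventually wins arbitration. Not the patented method.

class Controller:
    def __init__(self, radio_offset):
        self.radio_offset = radio_offset  # first offset: static, per radio
        self.denials = {}                 # request id -> consecutive denials

    def effective_priority(self, request_id, initial_priority):
        # Apply the first (static) offset, then the second (escalating) one.
        escalation = self.denials.get(request_id, 0)
        return initial_priority + self.radio_offset + escalation

    def record_denial(self, request_id):
        self.denials[request_id] = self.denials.get(request_id, 0) + 1

ctrl = Controller(radio_offset=2)
ctrl.record_denial("tx1")
ctrl.record_denial("tx1")
print(ctrl.effective_priority("tx1", 5))  # 5 + 2 + 2 = 9
```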
Credit loop deadlock detection and recovery in arbitrary topology networks
A credit loop that produces a deadlock is identified in a network of switches that are interconnected for packet traffic flows therethrough. The identification is carried out by periodically transmitting respective credit loop control messages (CLCMs) from the loop-participating switches via their deadlock-suspected egress ports to respective next-hop switches. The CLCMs have switch port-unique identifiers (SPUIDs). The loop is identified when, in one of the next-hop switches, the SPUID of a received CLCM is equal to the SPUID of a transmitted CLCM thereof. A master switch is selected for resolving the deadlock.
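The detection criterion above — a switch seeing its own transmitted SPUID return in a received CLCM — can be simulated as cycle detection over the suspected egress ports. The function below is a hedged sketch under an assumed topology representation (a mapping from each port's SPUID to the next-hop SPUID it forwards to); it is not the patent's message format or protocol.

```python
# Hypothetical simulation of CLCM-based credit-loop detection.
# Assumption: next_hop maps a switch-port SPUID to the SPUID of the
# next-hop port its deadlock-suspected egress traffic is forwarded to.

def detect_credit_loop(next_hop):
    """Return a SPUID whose CLCM returns to its originator, else None."""
    for origin in next_hop:
        seen = set()
        current = origin
        while current in next_hop and current not in seen:
            seen.add(current)
            current = next_hop[current]
        if current == origin:
            return origin  # this port's CLCM came back: loop identified
    return None

# Ports A -> B -> C -> A form a credit loop; D feeds into it but is outside.
print(detect_credit_loop({"A": "B", "B": "C", "C": "A", "D": "A"}))  # "A"
```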
Bandwidth matched scheduler
A computing system uses a memory for storing data, one or more clients for generating network traffic, and a communication fabric with network switches. The network switches include centralized storage structures, rather than separate input and output storage structures. The network switches store particular metadata corresponding to received packets in a single, centralized collapsing queue where the age of the packets corresponds to a queue entry position. The payload data of the packets are stored in a separate memory, so the relatively large amount of data is not shifted during the lifetime of the packet in the network switch. The network switches select sparse queue entries in the collapsing queue, deallocate the selected queue entries, and shift remaining allocated queue entries toward a first end of the queue with a delay proportional to the radix of the network switches.
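The collapsing-queue behavior described above can be sketched as follows: metadata sits in an age-ordered queue (entry position = age), payloads live in a separate store that is never shifted, and deallocating sparse entries collapses the queue toward its first end. All names are illustrative assumptions, and the timing aspect (delay proportional to switch radix) is not modeled.

```python
# Hypothetical sketch of a centralized collapsing queue. Only metadata is
# shifted on deallocation; payloads stay put in a separate memory.

class CollapsingQueue:
    def __init__(self):
        self.entries = []   # (pkt_id, metadata); index 0 = oldest entry
        self.payloads = {}  # pkt_id -> payload, never shifted

    def enqueue(self, pkt_id, meta, payload):
        self.entries.append((pkt_id, meta))
        self.payloads[pkt_id] = payload

    def deallocate(self, pkt_ids):
        # Remove the selected (possibly sparse) entries and collapse the
        # remaining allocated entries toward the first end of the queue.
        ids = set(pkt_ids)
        self.entries = [e for e in self.entries if e[0] not in ids]
        for pid in ids:
            self.payloads.pop(pid, None)

q = CollapsingQueue()
for i in range(4):
    q.enqueue(i, {"dst": i}, b"payload")
q.deallocate([1, 3])
print([pid for pid, _ in q.entries])  # [0, 2]
```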
Fabric-wide bandwidth management
In one embodiment, a method includes measuring a rate of traffic received at a leaf node, marking a plurality of packets in a flow as protected at the leaf node to match the rate of traffic with a configured rate of traffic for the flow at the leaf node, and dropping a plurality of non-protected packets at the leaf node when a queue at the leaf node is congested. A minimum bandwidth is provided for the flow based on the configured rate of traffic at the leaf node. The leaf node comprises an ingress node or an egress node connected to a fabric. An apparatus is also disclosed herein.
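One plausible way to realize the mark-then-drop policy above is a token-bucket meter at the leaf node: packets within the configured rate are marked protected, and under congestion only non-protected packets are dropped, preserving the flow's minimum bandwidth. The bucket parameters and names below are assumptions for illustration, not the patent's mechanism.

```python
# Hypothetical token-bucket marker for a leaf node. Assumption: "matching
# the rate of traffic with a configured rate" is implemented as a meter
# that marks in-profile packets "protected".

class LeafMarker:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps        # configured rate for the flow
        self.tokens = burst_bytes   # start with a full bucket
        self.burst = burst_bytes
        self.last = 0.0

    def mark(self, size_bytes, now):
        # Refill at the configured rate, then mark protected if in profile.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return "protected"
        return "non-protected"

def admit(marking, queue_congested):
    # Only non-protected packets are dropped, and only under congestion.
    return marking == "protected" or not queue_congested

m = LeafMarker(rate_bps=1000, burst_bytes=1500)
print(m.mark(1500, now=0.0))  # "protected" (within burst)
print(m.mark(1500, now=0.1))  # "non-protected" (rate exceeded)
print(admit("non-protected", queue_congested=True))  # False: dropped
```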