Patent classifications
H04L47/527
Method for prioritizing network packets at high bandwidth speeds
The embodiments are directed to methods and appliances for scheduling a packet transmission. The methods and appliances can assign received data packets, or representations of those packets, to one or more connection nodes of a classification tree having a link node and first and second intermediary nodes associated with the link node via one or more semi-sorted queues, wherein the one or more connection nodes correspond to the first intermediary node. The methods and appliances can process the one or more connection nodes using a credit-based round robin queue. The methods and appliances can authorize the sending of the received data packets based on the processing.
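The credit-based round robin described above can be sketched as a deficit-round-robin loop over per-node queues; the function name, the quantum-based credit top-up, and the data shapes below are illustrative assumptions, not the patent's implementation:

```python
from collections import deque

def credit_round_robin(queues, quantum, rounds):
    """Serve packets from each connection-node queue in round-robin order;
    a queue may send a packet only while it holds enough credit.
    `queues` maps a node id to a deque of packet sizes (in bytes)."""
    credits = {q: 0 for q in queues}
    sent = []
    for _ in range(rounds):
        for node, packets in queues.items():
            credits[node] += quantum              # top up this node's credit
            while packets and packets[0] <= credits[node]:
                size = packets.popleft()
                credits[node] -= size             # spend credit on the packet
                sent.append((node, size))
            if not packets:
                credits[node] = 0                 # empty queues carry no credit over
    return sent
```

Resetting the credit of an empty queue is the standard deficit-round-robin rule that keeps idle nodes from hoarding bandwidth.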
Arbitration of multiple-thousands of flows for convergence enhanced ethernet
In one embodiment, a method includes receiving a plurality of flows, each flow comprising packets of data, and assigning a service credit to each of the plurality of flows. In addition, the method includes assigning a weight parameter to each of the plurality of flows, and selecting a flow from a head of a first control queue unless the first control queue is empty or there is an indication that the first control queue should be avoided. A flow is selected from a head of a second control queue in response to a determination that the first control queue is empty or there is an indication that the first control queue should be avoided. Additionally, the method includes providing a number of units of service to the selected flow. Moreover, the method includes decreasing the selected flow's service credit by an amount corresponding to the number of units of service provided thereto.
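The two-control-queue arbitration above can be sketched as follows; the replenish-on-secondary rule and all names are assumptions made for illustration (the patent does not specify when credits are refilled):

```python
from collections import deque

def arbitrate(flows, weights, service_per_turn, turns):
    """Two-queue arbitration sketch: flows with positive service credit wait
    in the primary control queue; flows whose credit is exhausted wait in the
    secondary queue, where their credit is replenished from their weight."""
    credits = {f: weights[f] for f in flows}     # initial credit from weight
    primary, secondary = deque(flows), deque()
    log = []
    for _ in range(turns):
        if primary:
            flow = primary.popleft()             # prefer the primary queue
        elif secondary:
            flow = secondary.popleft()
            credits[flow] = weights[flow]        # replenish exhausted flow
        else:
            break
        units = min(service_per_turn, credits[flow])
        credits[flow] -= units                   # charge for the service given
        log.append((flow, units))
        (primary if credits[flow] > 0 else secondary).append(flow)
    return log
```

With weights 2:1, flow "x" is served twice for each replenishment cycle of flow "y", which is the weighted-fairness effect the abstract describes.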
Systems and methods for distributing unused bandwidth of metered flows in an envelope based on weights
Systems and methods of ingress packet metering include receiving a plurality of flows combined to form an envelope with a specific bandwidth, wherein the envelope is defined such that unused bandwidth from higher rank flows is usable by lower rank flows; admitting packets from the plurality of flows based on committed tokens and excess tokens; determining unused tokens in a time interval; and distributing the unused tokens based on configured weights of the plurality of flows within the envelope. The unused tokens can be provided from a lower rank flow to a higher rank flow. The unused tokens can be determined utilizing Two Rate Three Color Marker (trTCM) metering. The receiving can be at a User-Network Interface (UNI), a Network-Network Interface (NNI), or an External NNI (ENNI) port in a node.
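The weighted redistribution step can be sketched as a per-interval pooling of leftover tokens; the function and its integer-division rounding are illustrative assumptions, not the trTCM metering defined in the patent:

```python
def distribute_unused(allocations, used, weights):
    """Per-interval sketch: pool the tokens each flow left unused and hand
    them back to the envelope's flows in proportion to configured weights.
    Token counts are integers; division remainders are simply dropped here."""
    unused = sum(alloc - min(used[f], alloc) for f, alloc in allocations.items())
    total_weight = sum(weights.values())
    return {f: unused * w // total_weight for f, w in weights.items()}
```

A production meter would carry the remainder tokens into the next interval rather than discarding them.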
Distributed storage system and method for managing storage access bandwidth for multiple clients
System and method for managing storage requests issued from multiple sources in a distributed storage system utilizes different queues at a host computer in the distributed storage system to place different classes of storage requests for access to a virtual storage area network. The storage requests in the queues are processed using a fair scheduling algorithm. For each queue, when the number of storage requests in the queue exceeds a threshold, a backpressure signal is generated and transmitted to at least one source of the class of storage requests queued in the corresponding queue, to delay issuance of new storage requests of that class.
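The per-class queues with a depth threshold and backpressure can be sketched as below; the class names, the one-request-per-class round, and the clear-on-recovery rule are illustrative assumptions:

```python
from collections import deque

class RequestQueues:
    """Sketch: one queue per storage-request class; when a queue's depth
    exceeds a threshold, a backpressure flag is raised for that class so
    sources delay issuing new requests of the class."""
    def __init__(self, classes, threshold):
        self.queues = {c: deque() for c in classes}
        self.threshold = threshold
        self.backpressure = {c: False for c in classes}

    def enqueue(self, cls, request):
        self.queues[cls].append(request)
        if len(self.queues[cls]) > self.threshold:
            self.backpressure[cls] = True        # signal sources to slow down

    def dispatch_round(self):
        """Fair scheduling: serve one request per class per round."""
        served = []
        for cls, q in self.queues.items():
            if q:
                served.append((cls, q.popleft()))
            if len(q) <= self.threshold:
                self.backpressure[cls] = False   # clear once depth recovers
        return served
```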
Scheduling Method And Apparatus
This application provides a scheduling method and an apparatus. The method includes: determining, by an application processor, a type of a to-be-sent data packet, and putting, by the application processor, the to-be-sent data packet into a quality of service (QoS) data flow corresponding to the type of the to-be-sent data packet, where the type of the to-be-sent data packet is a guaranteed bit rate (GBR) type or a non-GBR type; and scheduling, by the application processor, a to-be-sent data packet in a QoS data flow corresponding to the GBR type to send the to-be-sent data packet to a modem in a terminal in which the application processor is located, and, after determining that a data transmission rate requirement of the GBR type is met, scheduling, by the application processor, a to-be-sent data packet in a QoS data flow corresponding to the non-GBR type to send the to-be-sent data packet to the modem.
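The GBR-first ordering can be sketched per scheduling slot; the byte-based rate check, the function name, and the queue shapes are illustrative assumptions, not the claimed method:

```python
def schedule_slot(gbr_queue, non_gbr_queue, gbr_bytes_needed, slot_capacity):
    """Per-slot sketch: serve GBR packets until the slot's GBR byte
    requirement is met, then spend leftover capacity on non-GBR packets.
    Queues are lists of packet sizes in bytes."""
    sent, capacity, gbr_sent = [], slot_capacity, 0
    while gbr_queue and gbr_sent < gbr_bytes_needed and gbr_queue[0] <= capacity:
        pkt = gbr_queue.pop(0)
        gbr_sent += pkt
        capacity -= pkt
        sent.append(("GBR", pkt))                # GBR served until rate is met
    while non_gbr_queue and non_gbr_queue[0] <= capacity:
        pkt = non_gbr_queue.pop(0)
        capacity -= pkt
        sent.append(("non-GBR", pkt))            # leftover capacity for non-GBR
    return sent
```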
SYSTEMS AND METHODS FOR QUEUE CONTROL BASED ON CLIENT-SPECIFIC PROTOCOLS
The present disclosure generally relates to controlling access to resources by selectively processing requests stored in a task queue to prioritize certain requests over others, thereby preventing automated scripts from accessing the resources. More specifically, the present disclosure relates to a normalization and prioritization system for controlling access to resources by queuing resource requests based on a client-defined normalization process that uses one or more data sources.
VOQ-BASED NETWORK SWITCH ARCHITECTURE USING MULTI-STAGE ARBITRATION FABRIC SCHEDULER
A network switch is capable of supporting cut-through switching and interface channelization with enhanced system performance. The network switch includes a plurality of ingress tiles, each tile including a virtual output queue (VOQ) scheduler operable to submit schedule requests to a fabric scheduler. Data is requested in units of a quantum, which may aggregate multiple packets and which reduces scheduling latency. Each request is associated with a start-of-quantum (SoQ) state or a middle-of-quantum (MoQ) state to support cut-through. The fabric scheduler performs a multi-stage scheduling process to progressively narrow the selection of requests, including stages of arbitration at the virtual output port level, virtual output port group level, tile level, egress port level, and port group level. Each tile receives the grants for its requests and accordingly sends the requested data to a switch fabric for transmission to the destination egress ports.
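The progressive narrowing can be sketched as repeated per-level winner selection; only three of the five stages named above are shown, and the age-based tiebreak, field names, and request shape are illustrative assumptions:

```python
def multi_stage_arbitrate(requests):
    """Sketch of progressive narrowing: keep one request per virtual output
    port, then one per port group, then one per ingress tile. Each request
    is a dict with 'tile', 'group', 'port', and 'age' fields; the oldest
    request wins each stage (an assumed tiebreak, not the patent's rule)."""
    def narrow(reqs, level):
        winners = {}
        for r in reqs:
            key = r[level]
            if key not in winners or r["age"] > winners[key]["age"]:
                winners[key] = r                 # oldest request wins this level
        return list(winners.values())
    reqs = narrow(requests, "port")              # virtual output port level
    reqs = narrow(reqs, "group")                 # port group level
    reqs = narrow(reqs, "tile")                  # tile level
    return reqs
```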
Processing packets in an electronic device
A network traffic manager receives, from an ingress port, a cell of a packet destined for an egress port. Upon determining that a number of cells of the packet stored in a buffer queue meets a threshold value, the manager checks whether the ingress port has been assigned a token corresponding to the queue. Upon determining that the ingress port has been assigned the token, the manager determines whether other cells of the packet are stored in the buffer, in response to which the manager stores the received cell in the buffer, and stores linking information for the received cell in a receive context for the packet. When all cells of the packet have been received, the manager copies linking information for the packet cells from the receive context to the buffer queue or a copy generator queue, and releases the token from the ingress port.
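The token check that gates buffering past the threshold can be sketched as a single-token-per-queue rule; the class, the first-come token assignment, and the accept/reject return values are illustrative assumptions (the patent's receive-context and copy-generator handling is omitted):

```python
class TrafficManager:
    """Token-gate sketch: once the buffer queue holds `threshold` cells,
    further cells are accepted only from the ingress port that has been
    assigned the queue's token."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.queue = []              # cells accepted into the buffer queue
        self.token_holder = None     # ingress port currently holding the token

    def receive(self, ingress_port, cell):
        if len(self.queue) < self.threshold:
            self.queue.append(cell)              # below threshold: accept freely
            return True
        if self.token_holder is None:
            self.token_holder = ingress_port     # assign token to this port
        if self.token_holder == ingress_port:
            self.queue.append(cell)              # token holder may keep buffering
            return True
        return False                             # other ports must wait

    def release_token(self):
        """Called when all cells of the packet have been received."""
        self.token_holder = None
```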
Age-based arbitration circuit
This patent application relates generally to an age-based arbitration circuit for use in arbitrating access by a number of data streams to a shared resource managed by a destination (arbiter), in which age-based determinations are performed at the input sources of the data streams in order to designate certain packets as high-priority packets based on packet ages, and the destination expedites processing of the high-priority packets. Among other things, this approach offloads the age-based determinations from the destination, where they otherwise can cause delays in processing packets.
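The source-side age marking followed by priority service at the destination can be sketched as below; the timestamp scheme, the single threshold, and the sort-based arbitration are illustrative assumptions rather than the circuit's actual behavior:

```python
def mark_and_arbitrate(streams, age_threshold, now):
    """Sketch: each input source scans its pending packets and marks those
    older than `age_threshold` as high priority; the arbiter then grants
    high-priority packets before normal ones, oldest first within a class.
    `streams` maps a source id to a list of (packet_id, enqueue_time) pairs."""
    marked = []
    for src, packets in streams.items():
        for pid, t in packets:
            high = (now - t) >= age_threshold    # age check done at the source
            marked.append((not high, t, src, pid))
    marked.sort()                                # high priority first, then oldest
    return [(src, pid) for _, _, src, pid in marked]
```

Because the `high` flag is computed at the sources, the arbiter only sorts pre-marked requests, which is the offloading benefit the abstract describes.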