Patent classifications
H04L47/521
Dynamically generating and managing request queues for processing electronic requests via a shared processing infrastructure
Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing dynamic request queues to process electronic requests in a shared infrastructure environment. The disclosed system dynamically generates a plurality of separate request queues for tenant computing systems that utilize a shared processing infrastructure to issue electronic requests for processing by various recipient processors (e.g., one or more processing threads) by separating a primary request queue into the separate request queues based on the tenant computing systems. The disclosed system also generates a plurality of queue order scores for the request queues based in part on a processing recency of each of the request queues and whether the request queues have pending electronic requests. The disclosed system processes electronic requests in the request queues by selecting a request queue based on the queue order scores and processing a batch of electronic requests utilizing a recipient processor.
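The scoring mechanism described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class name, the scoring weights (`STALENESS_WEIGHT`, `PENDING_BONUS`), and the linear combination of recency and pending status are all assumptions chosen only to show how per-tenant queues and queue order scores could interact.

```python
import time
from collections import deque

class TenantQueueScheduler:
    """Per-tenant request queues selected by a queue order score."""

    # Illustrative weights (assumptions, not values from the abstract):
    STALENESS_WEIGHT = 1.0   # favor queues that were not processed recently
    PENDING_BONUS = 100.0    # strongly favor queues with pending requests

    def __init__(self):
        self.queues = {}          # tenant_id -> deque of pending requests
        self.last_processed = {}  # tenant_id -> time the queue was last served

    def enqueue(self, tenant_id, request):
        # Separating the primary queue: each tenant gets its own queue.
        self.queues.setdefault(tenant_id, deque()).append(request)
        self.last_processed.setdefault(tenant_id, 0.0)

    def order_score(self, tenant_id, now):
        # Score grows with processing staleness and with having pending work.
        staleness = now - self.last_processed[tenant_id]
        pending = self.PENDING_BONUS if self.queues[tenant_id] else 0.0
        return self.STALENESS_WEIGHT * staleness + pending

    def process_batch(self, batch_size, now=None):
        """Select the highest-scoring queue and hand a batch to a processor."""
        now = time.monotonic() if now is None else now
        if not self.queues:
            return []
        tenant = max(self.queues, key=lambda t: self.order_score(t, now))
        q = self.queues[tenant]
        batch = [q.popleft() for _ in range(min(batch_size, len(q)))]
        self.last_processed[tenant] = now
        return batch
```

Under these assumed weights, a queue that was served recently is deprioritized relative to a staler queue with pending requests, which is one way to realize the recency-and-pending scoring the abstract describes.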
SYSTEM AND METHOD FOR A TIME-SENSITIVE NETWORK
A method for a time-sensitive network (TSN) having a network topology is disclosed. The method includes determining a set of data flow permutations corresponding to the network topology, computing a respective full schedule corresponding to each data flow permutation of the set of data flow permutations, determining a respective time to compute the full schedule for each data flow permutation of the set of data flow permutations, and computing a respective partial schedule for each data flow permutation of the set of data flow permutations. The method further includes selecting a data flow permutation of the set of data flow permutations based at least in part on the respective time to compute the full schedule for the selected data flow permutation, and saving the selected data flow permutation to a memory.
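The selection step can be sketched as below. The helper name `select_flow_permutation` and the policy of picking the permutation whose full schedule was fastest to compute are assumptions for illustration; the abstract only says selection is based "at least in part" on the compute time, and `compute_full_schedule` stands in for whatever TSN scheduler is actually used.

```python
import itertools
import time

def select_flow_permutation(flows, compute_full_schedule):
    """Pick a flow ordering based in part on full-schedule compute time.

    compute_full_schedule(perm) is an assumed callable returning a schedule
    (or None if infeasible) for a given ordering of the flows.
    """
    best = None  # (permutation, compute_time_seconds, schedule)
    for perm in itertools.permutations(flows):
        start = time.perf_counter()
        schedule = compute_full_schedule(perm)
        elapsed = time.perf_counter() - start
        # Assumed policy: keep the feasible permutation that was fastest
        # to schedule in full.
        if schedule is not None and (best is None or elapsed < best[1]):
            best = (perm, elapsed, schedule)
    return best
```

Exhaustive permutation enumeration is factorial in the number of flows, which is presumably why the method also computes cheaper partial schedules; this sketch shows only the timing-based selection.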
Method, Apparatus, and System for Implementing Congestion Control
The present application discloses a method for implementing congestion control. The method includes: obtaining congestion control information of a first network device, where the congestion control information includes a total bandwidth of a first egress port of the first network device and a quantity of active flows corresponding to a first queue of the first egress port; and determining a sending rate of a first data flow in the active flows based on the congestion control information, where the sending rate is positively related to the total bandwidth and negatively related to the quantity of active flows, and the sending rate is used by a sending device of the first data flow to send the first data flow.
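The rate relation described above admits many formulas; a plain fair-share division is the simplest one that is positively related to total bandwidth and negatively related to the active-flow count. The division below is an assumption for illustration, not the formula claimed by the patent.

```python
def sending_rate(total_bandwidth_bps, active_flows):
    """Fair-share sketch: rate rises with egress-port bandwidth and falls
    with the number of active flows sharing the first queue.

    The exact relation is an assumption; the abstract only constrains
    the sign of each dependency.
    """
    if active_flows <= 0:
        raise ValueError("need at least one active flow")
    return total_bandwidth_bps / active_flows
```

A sending device given this rate would pace the first data flow at the egress port's bandwidth divided equally among the flows currently active on that queue.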
System and method for a time-sensitive network
A method for a time-sensitive network (TSN) having a network topology is disclosed. The method includes determining a set of flow permutations corresponding to the network topology, computing a respective full schedule corresponding to each flow permutation of the set of flow permutations, determining a respective time to compute the full schedule for each flow permutation of the set of flow permutations, and computing a respective partial schedule for each flow permutation of the set of flow permutations. The method further includes selecting a flow permutation of the set of flow permutations based at least in part on the respective time to compute the full schedule for the selected flow permutation, and saving the selected flow permutation to a memory.
Flow-based management of shared buffer resources
An apparatus for controlling a Shared Buffer (SB), the apparatus including an interface and an SB controller. The interface is configured to access flow-based data counts and admission states. The SB controller is configured to perform flow-based accounting of packets received by a network device coupled to a communication network, for producing flow-based data counts, each flow-based data count associated with one or more respective flows, and to generate admission states based at least on the flow-based data counts, each admission state being generated from one or more respective flow-based data counts.
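The accounting-to-admission flow can be sketched as follows. The per-flow byte threshold and the one-count-per-flow mapping are illustrative assumptions; the abstract allows an admission state to be derived from several flow-based counts, and a hardware SB controller would implement this in silicon rather than software.

```python
class SharedBufferController:
    """Sketch: per-flow byte counts drive per-flow admission states."""

    def __init__(self, flow_threshold_bytes):
        # Assumed policy: a single static threshold per flow.
        self.threshold = flow_threshold_bytes
        self.counts = {}     # flow_id -> bytes currently buffered
        self.admission = {}  # flow_id -> True (admit) / False (deny)

    def on_packet(self, flow_id, size_bytes):
        """Account an arriving packet and update the flow's admission state."""
        count = self.counts.get(flow_id, 0)
        admit = count + size_bytes <= self.threshold
        if admit:
            self.counts[flow_id] = count + size_bytes
        self.admission[flow_id] = admit
        return admit

    def on_dequeue(self, flow_id, size_bytes):
        """Release buffer space when the flow's packets leave the SB."""
        self.counts[flow_id] = max(0, self.counts.get(flow_id, 0) - size_bytes)
        self.admission[flow_id] = self.counts[flow_id] < self.threshold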
Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks
A server system determines, for a group of user sessions assigned to a single modulator, that an aggregate bandwidth for a first frame time exceeds a specified budget for the modulator. The user sessions comprise data in a plurality of classes, each class having a respective priority. In response to a determination that the aggregate bandwidth exceeds a specified budget, the server system allocates a portion of the aggregate bandwidth, including allocating a first portion of the data for a first user session in the group of user sessions and allocating a second portion of the data for a second user session in the group of user sessions, where both the first portion and the second portion are allocated in accordance with the class priorities. The server system transmits the allocated portions of the data for the group of user sessions through the modulator during the first frame time.
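A per-frame allocation in the spirit of the above can be sketched like this. Everything concrete here is assumed: that lower priority numbers are more important, that higher classes are granted in full before lower ones, and that a class which overflows the budget is split proportionally across sessions. The abstract requires only that both sessions' portions respect the class priorities.

```python
def allocate_frame(sessions, budget_bytes):
    """Allocate a frame-time byte budget across user sessions by class.

    sessions: {session_id: {class_priority: bytes_wanted}}, where a lower
    priority number is assumed to mean a more important class.
    """
    alloc = {sid: 0 for sid in sessions}
    priorities = sorted({p for demands in sessions.values() for p in demands})
    remaining = budget_bytes
    for p in priorities:
        wants = {sid: demands.get(p, 0) for sid, demands in sessions.items()}
        total = sum(wants.values())
        if total <= remaining:
            # Whole class fits: grant every session its full demand.
            for sid, w in wants.items():
                alloc[sid] += w
            remaining -= total
        else:
            # Budget exceeded at this class: split what is left
            # proportionally across sessions (assumed tie-break policy).
            for sid, w in wants.items():
                alloc[sid] += remaining * w // total if total else 0
            remaining = 0
            break
    return alloc
```

With this policy, when the aggregate demand exceeds the modulator's budget, high-priority data from every session is carried intact and only the lowest class that overflows is trimmed, matching the class-ordered behavior the abstract describes.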
Packet Buffer Spill-Over in Network Devices
Packets to be transmitted from a network device are buffered in queues in a first packet memory. In response to detecting congestion in a queue in the first packet memory, one or more groups of multiple packets are transferred from the first packet memory to a second packet memory, the second packet memory being configured to buffer a portion of the traffic bandwidth supported by the network device. Prior to transmission of the packets among the one or more groups of multiple packets from the network device, the packets among the one or more groups are transferred from the second packet memory back to the first packet memory. The packets transferred from the second packet memory back to the first packet memory are retrieved from the first packet memory and are forwarded to one or more network ports for transmission from the network device.
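One way to model the spill-over while preserving per-queue FIFO order is a head/tail split of the first memory, with whole groups parked in the second memory in between. The head/tail structure, the congestion threshold, and the fixed group size are all assumptions made for this sketch.

```python
from collections import deque

class SpillOverQueue:
    """Sketch: one egress queue whose head and tail live in the first
    packet memory, with congested middle sections spilled as groups of
    multiple packets to a second memory and pulled back before transmit."""

    def __init__(self, congestion_threshold, group_size):
        self.head = deque()   # first memory: next packets to transmit
        self.tail = deque()   # first memory: newly arrived packets
        self.spill = deque()  # second memory: ordered groups of packets
        self.threshold = congestion_threshold
        self.group_size = group_size

    def enqueue(self, pkt):
        self.tail.append(pkt)
        congested = len(self.head) + len(self.tail) > self.threshold
        if congested and len(self.tail) >= self.group_size:
            # Transfer a group of multiple packets to the second memory.
            group = [self.tail.popleft() for _ in range(self.group_size)]
            self.spill.append(group)

    def dequeue(self):
        if not self.head:
            if self.spill:
                # Bring a spilled group back to first memory before transmit.
                self.head.extend(self.spill.popleft())
            else:
                self.head, self.tail = self.tail, self.head
        return self.head.popleft() if self.head else None
```

Because groups are spilled and restored in arrival order, packets still leave the device in FIFO order even though the middle of the queue temporarily resides in the slower second memory.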
Methods and apparatus for isochronous data delivery within a network
Methods and apparatus for efficiently servicing isochronous streams (such as media data streams) associated with a network. In one embodiment, an Isochronous Cycle Manager (ICM) receives multiple independent streams of packets that include isochronous packets arriving according to different time bases (e.g., where each stream has a different time base). The packets are sorted by the ICM into a buffering mechanism according to their required presentation time. Additionally, the ICM calculates a launch time for each packet. The NIC transmits the packets from the buffering mechanism according to an access scheme, such as a time division multiplexed (TDM) scheme where each of a plurality of cycles is subdivided into time slots. During appropriate time slots, the NIC transmits the packets in chronological order, as read out of the buffering mechanism.
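The sort-then-launch behavior can be sketched with a heap keyed by presentation time. The fixed pipeline latency used to derive each packet's launch time, and the idea that a TDM slot drains every packet whose launch time has arrived, are assumptions for illustration only.

```python
import heapq

class IsochronousCycleManager:
    """Sketch: buffer packets sorted by required presentation time and
    release them at a derived launch time during a TDM slot."""

    def __init__(self, pipeline_latency_us):
        # Assumed: launch time = presentation time minus a fixed latency.
        self.latency = pipeline_latency_us
        self.heap = []   # (presentation_time, seq, launch_time, packet)
        self.seq = 0     # tie-breaker for equal presentation times

    def ingest(self, packet, presentation_time_us):
        launch_time = presentation_time_us - self.latency
        heapq.heappush(
            self.heap, (presentation_time_us, self.seq, launch_time, packet)
        )
        self.seq += 1

    def transmit_slot(self, now_us):
        """During a time slot, read out (in chronological order) every
        packet whose launch time has been reached."""
        out = []
        while self.heap and self.heap[0][2] <= now_us:
            _, _, _, pkt = heapq.heappop(self.heap)
            out.append(pkt)
        return out
```

Because the heap is ordered by presentation time regardless of which stream (and hence which time base) a packet came from, the read-out during each slot is automatically chronological, as the abstract requires.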
Systems and methods for providing lockless bimodal queues for selective packet capture
In a network system, an application receiving packets can consume one or more packets in two or more stages, where the second and later stages can selectively consume some but not all of the packets consumed by the preceding stage. Packets are transferred between two consecutive stages, called the producer and the consumer, via a fixed-size storage. Both the producer and the consumer can access the storage without locking it, and, to facilitate selective consumption of the packets by the consumer, the consumer can transition between awake and sleep modes, where the packets are consumed in the awake mode only. The producer may also switch between awake and sleep modes. Lockless access is made possible because both the producer and the consumer control the operation of the storage according to the mode of the consumer, which is communicated via a shared memory location.
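The mechanism resembles a single-producer/single-consumer ring buffer with a published consumer mode. The sketch below is an assumption-laden illustration: plain attributes stand in for what would be atomic shared-memory words in a real implementation, and the policy of dropping packets while the consumer sleeps is one possible reading of "selective packet capture", not the patented design.

```python
class BimodalSPSCQueue:
    """Sketch: lockless fixed-size queue between a producer stage and a
    consumer stage, gated by the consumer's awake/sleep mode."""

    AWAKE, SLEEP = 0, 1

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # next slot to read; written only by the consumer
        self.tail = 0  # next slot to write; written only by the producer
        self.consumer_mode = self.AWAKE  # the shared memory location

    def produce(self, item):
        if self.consumer_mode == self.SLEEP:
            # Consumer is asleep: skip this packet (selective capture).
            return False
        nxt = (self.tail + 1) % self.capacity
        if nxt == self.head:
            return False  # storage full; no lock needed to detect this
        self.buf[self.tail] = item
        self.tail = nxt   # publish only after the slot is written
        return True

    def consume(self):
        if self.consumer_mode == self.SLEEP or self.head == self.tail:
            return None   # packets are consumed in the awake mode only
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        return item
```

Locklessness comes from the ownership split: only the producer advances `tail`, only the consumer advances `head`, and both consult the single shared `consumer_mode` word rather than a mutex.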
Allocation of virtual queues of a network forwarding element
In a method for allocating physical queues of a network forwarding element, a request is received at the network forwarding element, the network forwarding element including a plurality of physical queues, where each physical queue of the plurality of physical queues has a fixed bandwidth, the request identifying an allocation of a plurality of virtual queues at the network forwarding element. Based at least in part on the request, a configuration of the plurality of physical queues to the plurality of virtual queues is determined. The plurality of physical queues is configured according to the configuration, wherein the configuring includes allocating at least two physical queues to a virtual queue.
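The request-to-configuration step can be sketched as a simple packing of fixed-bandwidth physical queues under each virtual queue. The first-fit, ceiling-division policy and the function name are assumptions; the abstract requires only that the configuration can map at least two physical queues to one virtual queue.

```python
import math

def configure_physical_queues(physical_bw_mbps, num_physical, virtual_requests):
    """Map virtual queues onto fixed-bandwidth physical queues.

    virtual_requests: {vq_id: requested_mbps}. Each physical queue has the
    same fixed bandwidth, so a virtual queue needing more than one queue's
    worth of bandwidth is allocated several (assumed packing policy).
    """
    config, used = {}, 0
    for vq_id, bw in virtual_requests.items():
        need = max(1, math.ceil(bw / physical_bw_mbps))
        if used + need > num_physical:
            raise ValueError(f"not enough physical queues for {vq_id}")
        # Assign the next `need` physical queue indices to this virtual queue.
        config[vq_id] = list(range(used, used + need))
        used += need
    return config
```

For example, with 10 Mbps physical queues, a virtual queue requesting 25 Mbps receives three physical queues, realizing the "at least two physical queues to a virtual queue" configuration from the abstract.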