H04L47/521

SYSTEMS AND METHODS FOR PROVIDING LOCKLESS BIMODAL QUEUES FOR SELECTIVE PACKET CAPTURE

In a network system, an application receiving packets can consume one or more packets in two or more stages, where the second and later stages can selectively consume some, but not all, of the packets consumed by the preceding stage. Packets are transferred between two consecutive stages, called the producer and the consumer, via a fixed-size storage. Both the producer and the consumer can access the storage without locking it and, to facilitate selective consumption of the packets by the consumer, the consumer can transition between awake and sleep modes, where the packets are consumed in the awake mode only. The producer may also switch between awake and sleep modes. Lockless access is made possible by having both the producer and the consumer operate the storage according to the mode of the consumer, which is communicated via a shared memory location.
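A minimal sketch of the idea, not the patented implementation: a single-producer/single-consumer ring buffer where the consumer publishes an awake/asleep mode in a shared location, and both sides consult that mode instead of taking a lock. All names (`BimodalRing`, `AWAKE`, `ASLEEP`) are invented for illustration; a real lockless version would use atomic loads and stores on the indices and the mode word.

```python
AWAKE, ASLEEP = 0, 1

class BimodalRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0          # next slot the consumer reads (consumer-owned)
        self.tail = 0          # next slot the producer writes (producer-owned)
        self.mode = ASLEEP     # shared memory location read by both sides

    def produce(self, pkt):
        """Producer side: if the consumer is asleep, the packet is not queued
        for it (selective capture); otherwise append unless the ring is full."""
        if self.mode == ASLEEP:
            return False                          # consumer not capturing
        if (self.tail + 1) % self.capacity == self.head:
            return False                          # ring full
        self.buf[self.tail] = pkt
        self.tail = (self.tail + 1) % self.capacity
        return True

    def consume(self):
        """Consumer side: dequeues only while in awake mode."""
        if self.mode != AWAKE or self.head == self.tail:
            return None
        pkt = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        return pkt

    def set_mode(self, mode):
        self.mode = mode       # in real code, an atomic store with release order
```

Because each index is written by exactly one side and the mode is a single shared word, neither side ever needs to lock the storage.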

Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks

Switched digital television programming for video-on-demand and other interactive television services is combined utilizing class-based, multi-dimensional decision logic to simultaneously optimize video quality and audio uniformity while minimizing latency during user interactions with the system over managed networks such as cable and satellite television networks. A group of user sessions is assigned to a single modulator. The user sessions include data in a plurality of classes, each class having a respective priority. In response to a determination that an aggregate bandwidth of the group of user sessions for a first frame time exceeds a specified budget, bandwidth is allocated for the group of user sessions during the first frame time in accordance with the class priorities. The group of user sessions is multiplexed onto a channel corresponding to the modulator in accordance with the allocated bandwidth and transmitted over a managed network.
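The priority-ordered allocation step can be sketched as follows; this is an illustrative interpretation, assuming lower priority numbers mean higher priority and that higher-priority classes are simply served first until the frame-time budget runs out.

```python
def allocate_by_class(requests, budget):
    """requests: list of (class_priority, requested_bw) per session, where a
    lower priority number means higher priority. Returns the granted
    bandwidth for each request, in input order, within the budget."""
    granted = [0] * len(requests)
    # Serve classes in priority order until the frame-time budget is exhausted.
    for idx in sorted(range(len(requests)), key=lambda i: requests[i][0]):
        give = min(requests[idx][1], budget)
        granted[idx] = give
        budget -= give
    return granted
```

For example, with a budget of 100 units, a priority-1 request for 70 is fully granted before a priority-2 request for 50 receives the remaining 30, and a priority-3 request gets nothing.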

Rate adaptation across asynchronous frequency and phase clock domains

A rate adaptation system includes a barrel shift slot register and a rate adaptation register. The barrel shift slot register includes a plurality of slots, each holding one of a valid read request or a dummy read request. The rate adaptation register is configured to sequentially cycle through the slots of the barrel shift slot register in response to a first clock, providing valid read requests to a FIFO buffer and skipping provision of valid read requests for cycles of the first clock associated with slots that include dummy read requests. The rate adaptation register may also receive data blocks from the FIFO buffer and provide those data blocks to another FIFO buffer.
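The slot-cycling behavior can be modeled as a generator that advances one slot per clock cycle and wraps around, emitting a FIFO read request only for valid slots. The `VALID`/`DUMMY` markers and the function name are invented for this sketch; in hardware, the dummy-slot density would set the effective read rate across the clock domains.

```python
VALID, DUMMY = "valid", "dummy"

def read_requests(slots):
    """Cycle through the barrel-shift slots, one per first-clock cycle,
    yielding True when a valid read request is issued to the FIFO and
    False when the slot holds a dummy request (request skipped)."""
    i = 0
    while True:
        yield slots[i] == VALID
        i = (i + 1) % len(slots)     # barrel shift: wrap to the first slot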

INTEGRATED GATEWAY PLATFORM FOR FULFILLMENT SERVICES

An integrated gateway system configured to perform: receiving online data transmissions from a user computing device of a user; authenticating that a source of the online data transmissions matches the user computing device; transmitting the online data transmissions to the internal gateway system; authenticating credentials of the user as a pre-authorized user; restricting a number of incoming calls using a rate-limiting throttle system; transmitting the online data transmissions to the communication management system; batching the online data transmissions into one or more micro-batches based on one or more rules; transmitting the one or more micro-batches to one or more respective backend services using an events stream system; receiving respective responses transmitted from the one or more respective backend services in response to each one of the one or more micro-batches; performing each respective task of one or more tasks based on the respective responses from the one or more respective backend services. Other embodiments are disclosed.
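The micro-batching step can be illustrated with one simple batching rule, a maximum batch size; the function name and the rule itself are assumptions for this sketch, since the abstract leaves the rules open-ended.

```python
def micro_batch(transmissions, max_size):
    """Group incoming transmissions into micro-batches of at most
    max_size items, preserving arrival order (one example of a
    batching rule; real rules might also key on tenant or service)."""
    return [transmissions[i:i + max_size]
            for i in range(0, len(transmissions), max_size)]
```

Each micro-batch would then be published to the events stream system, and responses from the backend services matched back to the batch that produced them.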

ENHANCED VIRTUAL CHANNEL SWITCHING
20230262001 · 2023-08-17 ·

A system for facilitating enhanced virtual channel switching in a node of a distributed computing environment is provided. During operation, the system can allocate flow control credits for a first virtual channel to an upstream node in the distributed computing environment. The system can receive, via a message path comprising the upstream node, a message on the first virtual channel based on the allocated flow control credits. The system can then store the message in a queue associated with an input port and determine whether the message is a candidate for changing the first virtual channel at the node based on a mapping rule associated with the input port. If the message is a candidate, the system can associate the message with a second virtual channel indicated in the mapping rule in the queue. Subsequently, the system can send the message from the queue on the second virtual channel.
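The per-port remapping step can be sketched as a lookup against the input port's mapping rule at dequeue time; the dict-based representation below is an assumption made for illustration.

```python
def dequeue_with_vc_switch(queue_entry, mapping_rule):
    """queue_entry: dict with 'msg' and 'vc' (the first virtual channel).
    mapping_rule: per-input-port dict mapping an incoming VC to the VC it
    should leave on. If the message's VC appears in the rule, it is a
    switching candidate and is re-tagged before being sent."""
    vc = queue_entry["vc"]
    if vc in mapping_rule:                  # candidate for a VC change
        queue_entry["vc"] = mapping_rule[vc]
    return queue_entry
```

Flow-control credits would still be returned on the first virtual channel, since that is the channel the upstream node consumed credits on.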

SYSTEM AND METHOD FOR A TIME-SENSITIVE NETWORK
20220141156 · 2022-05-05 ·

A method for a time-sensitive network (TSN) having a network topology is disclosed. The method includes determining a set of flow permutations corresponding to the network topology, computing a respective full schedule corresponding to each flow permutation of the set, determining a respective time to compute the full schedule for each flow permutation, and computing a respective partial schedule for each flow permutation. The method further includes selecting a flow permutation of the set based at least in part on the respective time to compute the full schedule for the selected flow permutation, and saving the selected flow permutation to a memory.
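The selection step might look like the following sketch, where `schedule_cost` stands in for the measured (or modeled) time to compute the full schedule for a given flow ordering; both names are invented here, and minimizing that cost is only one plausible selection criterion consistent with the abstract.

```python
import itertools

def pick_flow_permutation(flows, schedule_cost):
    """Enumerate the permutations of the flows, evaluate the time to
    compute a full schedule for each ordering via schedule_cost, and
    select the cheapest ordering (which would then be saved to memory)."""
    perms = list(itertools.permutations(flows))
    return min(perms, key=schedule_cost)
```

In practice the permutation set would be pruned rather than fully enumerated, since the number of permutations grows factorially with the number of flows.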

Integrated traffic profile for indicating congestion and packet drop for congestion avoidance

A system for facilitating an integrated traffic profile for indicating congestion and packet drop is provided. During operation, the system can determine a first traffic profile indicating whether to drop a packet based on the utilization of a queue. The packets from the queue can be forwarded via an egress port reachable via a fabric. The system can also determine a second traffic profile indicating whether to indicate congestion in the packet based on the utilization. The system can then determine a third traffic profile by combining the first and second traffic profiles. The third traffic profile can indicate acceptance at the queue for a subset of the packets selected for dropping based on the utilization. Subsequently, if the packet is selected for dropping, the system can determine whether to accept the packet at the queue and set a congestion indicator in the packet based on the third traffic profile.
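A toy interpretation of the combined profile, assuming utilization-fraction thresholds and a probabilistic "accept anyway, but mark" decision for packets the drop profile would discard; all thresholds, parameter names, and the coin-flip mechanism are assumptions for this sketch.

```python
import random

def combined_profile(utilization, drop_thresh, mark_thresh,
                     accept_fraction, rng=random.random):
    """Returns (accept, set_congestion_bit).
    First profile:  drop candidates when utilization >= drop_thresh.
    Second profile: mark congestion when utilization >= mark_thresh.
    Third profile:  accept a fraction of the would-be drops, but with
    the congestion indicator set, deferring the actual drop."""
    selected_for_drop = utilization >= drop_thresh      # first profile
    mark = utilization >= mark_thresh                   # second profile
    if selected_for_drop:
        if rng() < accept_fraction:                     # third profile
            return True, True                           # accept but mark
        return False, mark                              # really drop
    return True, mark
```

This mirrors how combining a WRED-style drop curve with an ECN-style marking curve lets the queue convert some drops into marked deliveries.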

DYNAMICALLY GENERATING AND MANAGING REQUEST QUEUES FOR PROCESSING ELECTRONIC REQUESTS VIA A SHARED PROCESSING INFRASTRUCTURE

Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing dynamic request queues to process electronic requests in a shared infrastructure environment. The disclosed system dynamically generates a plurality of separate request queues for tenant computing systems that utilize a shared processing infrastructure to issue electronic requests for processing by various recipient processors (e.g., one or more processing threads) by separating a primary request queue into the separate requests queues based on the tenant computing systems. The disclosed system also generates a plurality of queue order scores for the request queues based in part on a processing recency of each of the request queues and whether the request queues have pending electronic requests. The disclosed system processes electronic requests in the request queues by selecting a request queue based on the queue order scores and processing a batch of electronic requests utilizing a recipient processor.
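The queue-order scoring and selection loop can be sketched as below; the scoring formula (recency as elapsed time, zero for empty queues) and the data layout are assumptions, since the abstract only says the score is "based in part on" recency and pending requests.

```python
def queue_order_score(has_pending, last_processed, now):
    """Empty queues score zero; otherwise, the longer since the queue
    was last processed, the higher its score (weights are illustrative)."""
    if not has_pending:
        return 0.0
    return now - last_processed

def pick_queue(queues, now):
    """queues: dict name -> (pending_requests, last_processed_time).
    Selects the queue with the highest order score; a batch from it
    would then be handed to a recipient processor."""
    return max(queues, key=lambda q: queue_order_score(
        bool(queues[q][0]), queues[q][1], now))
```

Favoring the least-recently-served non-empty queue keeps one busy tenant from starving the others on the shared infrastructure.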

Efficient scheduling using adaptive packing mechanism for network apparatuses
11805066 · 2023-10-31 ·

A scheduler in a network device serves ports with data units from a plurality of queues. The scheduler implements a scheduling algorithm that is normally constrained to releasing data to a port no more frequently than at a default maximum service rate. However, when data units smaller than a certain size are at the heads of one or more data unit queues assigned to a port, the scheduler may temporarily increase the maximum service rate of that port. The increased service rate permits fuller realization of a port's maximum bandwidth when handling smaller data units. In some embodiments, increasing the service rate involves dequeuing more than one small data unit at a time, with the extra data units temporarily stored in a port FIFO. The scheduler adds a pseudo-port to its scheduling sequence to schedule release of data from the port FIFO, with otherwise minimal impact on the scheduling logic.
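The small-packet fast path can be sketched as two routines, one for a normal port visit and one for the pseudo-port that drains the port FIFO; the 64-byte threshold, the burst limit, and both function names are assumptions for this illustration.

```python
SMALL = 64  # bytes; threshold below which extra dequeues are allowed (assumed)

def service_port(queue, port_fifo, burst=2):
    """One scheduling visit to a port: normally release one data unit, but
    when the head-of-line unit is small, dequeue up to `burst` extra small
    units into the port FIFO. A pseudo-port added to the scheduling
    sequence drains that FIFO on its own visits."""
    if not queue:
        return None
    unit = queue.pop(0)
    if unit <= SMALL:
        extra = 0
        while queue and queue[0] <= SMALL and extra < burst:
            port_fifo.append(queue.pop(0))    # temporarily raised service rate
            extra += 1
    return unit

def service_pseudo_port(port_fifo):
    """Pseudo-port visit: release one unit from the port FIFO, if any."""
    return port_fifo.pop(0) if port_fifo else None
```

Because the pseudo-port is just another entry in the scheduling sequence, the core scheduler still enforces its default per-visit rate while small packets effectively drain faster.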