Patent classifications
H04L47/527
SHARED RESOURCES FOR MULTIPLE COMMUNICATION TRAFFICS
Systems, methods, and computer-readable media are disclosed for an apparatus coupled to a communication bus, where the apparatus includes a queue and a controller to manage operations of the queue. The queue includes a first space to store a first information for a first traffic type, with a first flow class, and for a first virtual channel of communication between a first communicating entity and a second communicating entity. The queue further includes a second space to store a second information for a second traffic type, with a second flow class, and for a second virtual channel of communication between a third communicating entity and a fourth communicating entity. The first traffic type is different from the second traffic type, the first flow class is different from the second flow class, or the first virtual channel is different from the second virtual channel. Other embodiments may be described and/or claimed.
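The abstract describes a single queue partitioned into spaces keyed by traffic type, flow class, and virtual channel. A minimal sketch of that idea, assuming Python and with all class and method names hypothetical:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class QueueSpace:
    """One region of the shared queue, dedicated to a single
    (traffic type, flow class, virtual channel) combination."""
    traffic_type: str
    flow_class: str
    virtual_channel: int
    entries: deque = field(default_factory=deque)

class SharedQueue:
    """Controller managing per-space storage inside one shared queue."""
    def __init__(self):
        self._spaces = {}

    def _space(self, traffic_type, flow_class, virtual_channel):
        # Lazily create the space for this combination on first use.
        key = (traffic_type, flow_class, virtual_channel)
        if key not in self._spaces:
            self._spaces[key] = QueueSpace(traffic_type, flow_class,
                                           virtual_channel)
        return self._spaces[key]

    def enqueue(self, info, traffic_type, flow_class, virtual_channel):
        self._space(traffic_type, flow_class, virtual_channel).entries.append(info)

    def dequeue(self, traffic_type, flow_class, virtual_channel):
        return self._space(traffic_type, flow_class, virtual_channel).entries.popleft()
```

Each space is independent, so information for different traffic types, flow classes, or virtual channels never interleaves even though one queue structure holds it all.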
Online task dispatching and scheduling system and method thereof
The present disclosure relates to an online task dispatching and scheduling system. The system includes an end device; an access point (AP) configured to receive a task from the end device; and one or more edge servers configured to receive the task from the AP, the one or more edge servers including a task waiting queue, a processing pool, a task completion queue, and a scheduler. The AP further includes a dispatcher that utilizes Online Learning (OL) to determine a real-time state of network conditions and server loads, and the AP selects a target edge server from the one or more edge servers to which the task is to be dispatched. The scheduler utilizes Deep Reinforcement Learning (DRL) to generate a task scheduling policy for the one or more edge servers.
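The abstract does not specify which online-learning algorithm the dispatcher uses; an epsilon-greedy bandit is one common choice and serves as an illustrative sketch here (the DRL scheduler is not shown). All names are hypothetical:

```python
import random

class OnlineLearningDispatcher:
    """Epsilon-greedy online learner: dispatches to the edge server
    whose observed reward (e.g. negative completion latency) is
    currently best, while occasionally exploring others to track
    changing network conditions and server loads."""
    def __init__(self, servers, epsilon=0.1, seed=0):
        self.servers = list(servers)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {s: 0 for s in self.servers}
        self.values = {s: 0.0 for s in self.servers}

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.servers)        # explore
        return max(self.servers, key=lambda s: self.values[s])  # exploit

    def update(self, server, reward):
        # Incremental mean of observed rewards for this server.
        self.counts[server] += 1
        self.values[server] += (reward - self.values[server]) / self.counts[server]
```

After each completed task the AP would feed the observed latency back via `update`, so the reward estimates follow the real-time state of the system.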
Technologies for pacing network packet transmissions
Technologies for pacing network packet transmissions include a computing device. The computing device includes a compute engine and a network interface controller (NIC). The NIC is to select a first transmit descriptor from a window of transmit descriptors. The first transmit descriptor is associated with a packet stream. The NIC is also to identify a node of a plurality of nodes of a hierarchical scheduler. The node is associated with the selected first transmit descriptor. The NIC is also to determine whether the identified node has a target amount of transmission credits available and transmit, in response to a determination that the identified node has a target amount of transmission credits available, the network packet associated with the first transmit descriptor to a target computing device.
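The credit check described above can be sketched for a single scheduler node; a full hierarchical scheduler would hold many such nodes. This is a simplified model, assuming credits accrue per tick and that the target amount equals the packet length (names are hypothetical):

```python
class SchedulerNode:
    """Node in a hierarchical pacing scheduler. Credits accrue at
    `rate` per tick, capped at `max_credits`; a packet is transmitted
    only when the node holds at least `packet_len` credits."""
    def __init__(self, rate, max_credits):
        self.rate = rate
        self.max_credits = max_credits
        self.credits = 0

    def tick(self):
        # Replenish credits once per pacing interval.
        self.credits = min(self.credits + self.rate, self.max_credits)

    def try_transmit(self, packet_len):
        if self.credits >= packet_len:
            self.credits -= packet_len
            return True
        return False  # descriptor stays in the window until credits accrue
```

Because transmission is gated on accumulated credits, the stream's output rate converges to `rate` per tick regardless of how bursty the transmit descriptors arrive.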
Systems and methods for queue control based on client-specific protocols
The present disclosure generally relates to controlling access to resources by selectively processing requests stored in a task queue to prioritize certain requests over others, thereby preventing automated scripts from accessing the resources. More specifically, the present disclosure relates to a normalization and prioritization system for controlling access to resources by queuing resource requests based on a client-defined normalization process that uses one or more data sources.
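A minimal sketch of a task queue ordered by a client-defined normalization function, assuming lower scores are served first; the scoring rule and field names are purely illustrative:

```python
import heapq

class NormalizingTaskQueue:
    """Priority queue over resource requests. A client-supplied
    `normalize` callable maps request attributes to a score; lower
    scores are dequeued first, so requests that look like automated
    scripts can be pushed behind ordinary requests."""
    def __init__(self, normalize):
        self.normalize = normalize
        self._heap = []
        self._seq = 0  # tie-breaker preserving FIFO among equal scores

    def put(self, request):
        heapq.heappush(self._heap, (self.normalize(request), self._seq, request))
        self._seq += 1

    def get(self):
        return heapq.heappop(self._heap)[2]
```

In practice the normalization function would combine several data sources (request rate, client history, reputation feeds) rather than the single field used here.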
SYSTEM AND METHOD FOR LATENCY CRITICAL QUALITY OF SERVICE USING CONTINUOUS BANDWIDTH CONTROL
A system and method are provided for a bandwidth manager for packetized data designed to arbitrate access from multiple high-bandwidth ingress channels (sources) to one lower-bandwidth egress channel (sink). The system calculates which source to grant access to the sink on a word-by-word basis and intentionally corrupts/cuts packets if a source ever loses priority while sending. Each source is associated with a ranking that is recalculated every data word. When a source buffer sends enough words to have its absolute rank value increase above that of another source buffer waiting to send, the system "cuts" the current packet by forcing the sending buffer to stop mid-packet and selects a new, lower-ranked source buffer to send. When there are multiple requesting source buffers with the same rank, the system employs a weighted priority randomized scheduler for buffer selection.
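The per-word arbitration loop can be sketched as follows, under the simplifying assumption that a source's rank simply increments with each word it sends; the real rank calculation and all names here are hypothetical:

```python
import random

class CutThroughArbiter:
    """Per-word arbiter: each source's rank rises as it sends. If a
    waiting source holds a strictly lower rank than the current
    sender, the sender's packet is cut mid-transfer and the
    lower-ranked source takes over. Rank ties are broken by a
    weighted random draw."""
    def __init__(self, weights, seed=0):
        self.rank = {s: 0 for s in weights}   # recalculated every data word
        self.weights = dict(weights)
        self.rng = random.Random(seed)
        self.current = None

    def grant(self, requesting):
        """Pick the sender for the next word; returns (winner, cut)."""
        best = min(self.rank[s] for s in requesting)
        candidates = [s for s in requesting if self.rank[s] == best]
        if self.current in candidates:
            winner = self.current              # keep sending: no cut
        else:
            # Weighted randomized selection among equally ranked sources.
            total = sum(self.weights[s] for s in candidates)
            r = self.rng.uniform(0, total)
            winner = candidates[-1]
            for s in candidates:
                r -= self.weights[s]
                if r <= 0:
                    winner = s
                    break
        cut = self.current is not None and winner != self.current
        self.current = winner
        self.rank[winner] += 1
        return winner, cut
```

When `cut` is true the egress logic would deliberately corrupt the truncated packet so the receiver discards it, as the abstract describes.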
Logical router comprising disaggregated network elements
A logical router includes disaggregated network elements that function as a single router and that are not coupled to a common backplane. The logical router includes spine elements and leaf elements implementing a network fabric, with front panel ports defined by the leaf elements. Control plane elements program the spine and leaf elements to function as a logical router. The control plane may define operating system interfaces that are mapped to front panel ports of the leaf elements and referenced by tags associated with packets traversing the logical router. Redundancy and checkpoints may be implemented for a route database maintained by the control plane elements. The logical router may include a standalone fabric and may implement label tables that are used to label packets according to egress port and path through the fabric.
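The tag-to-port mapping at the heart of this design can be illustrated with a small sketch; the tag numbering scheme and all names are assumptions, not the patented implementation:

```python
class LogicalRouterControlPlane:
    """Maps operating-system interface names to (leaf, front-panel
    port) pairs and assigns each mapping a tag. Packets traversing the
    fabric carry the tag, so spine and leaf elements can resolve the
    egress port without sharing a common backplane."""
    def __init__(self, first_tag=100):
        self._by_tag = {}
        self._next_tag = first_tag

    def define_interface(self, os_ifname, leaf, port):
        # Allocate a fabric-unique tag for this interface mapping.
        tag = self._next_tag
        self._next_tag += 1
        self._by_tag[tag] = (os_ifname, leaf, port)
        return tag

    def resolve(self, tag):
        # Called by a leaf element to find the egress port for a packet.
        return self._by_tag[tag]
```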
Packet scheduling
Various example embodiments for supporting packet scheduling in packet networks are presented. Various example embodiments for supporting packet scheduling in packet networks may be configured to support scheduling-as-a-service. Various example embodiments for supporting packet scheduling in packet networks based on scheduling-as-a-service may be configured to support a virtualized packet scheduler which may be provided as a service over a general-purpose hardware platform, may be instantiated in customer hardware, or the like, as well as various combinations thereof. Various example embodiments for supporting packet scheduling in packet networks may be configured to support scheduling of packets of packet queues based on association of transmission credits with timeslots of a periodic service sequence used to provide service to the packet queues.
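The last idea, associating transmission credits with timeslots of a periodic service sequence, can be sketched as follows; the data layout and names are illustrative assumptions:

```python
class PeriodicCreditScheduler:
    """Virtualized packet scheduler sketch: a periodic service
    sequence is a fixed cycle of timeslots, each granting transmission
    credits to one queue. On each visit, the slot's queue may send
    packets up to its accumulated credit."""
    def __init__(self, sequence):
        # sequence: list of (queue_name, credits_per_slot), repeated
        # every period.
        self.sequence = sequence
        self.credits = {name: 0 for name, _ in sequence}
        self.slot = 0

    def next_slot(self, queues):
        """Advance one timeslot; `queues` maps name -> list of pending
        packet lengths. Returns (queue_name, packets_served)."""
        name, grant = self.sequence[self.slot % len(self.sequence)]
        self.slot += 1
        self.credits[name] += grant
        served = []
        q = queues[name]
        while q and q[0] <= self.credits[name]:
            pkt = q.pop(0)
            self.credits[name] -= pkt
            served.append(pkt)
        return name, served
```

Unused credit carries over to the queue's next timeslot, so each queue's long-run service rate is fixed by the sequence regardless of packet sizes.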
Client service transmission method and apparatus
This application discloses a client service transmission method and apparatus. The method may include: receiving a client service, where the client service includes a plurality of data blocks, the client service corresponds to a counter, and the counter is used to control an output rate of the client service; and sending the plurality of data blocks over a plurality of sending periods, where, when a count value of the counter reaches a preset threshold in each sending period, at least one data block of the plurality of data blocks is sent. This technology may be applied to a scenario in which a transmission node transmits a client service.
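The counter-gated release of data blocks can be sketched in a few lines, assuming the counter advances by a fixed step each tick and resets after a send (names are hypothetical):

```python
class PacedSender:
    """Counter-controlled output rate: the counter advances by `step`
    on each tick of the sending period; when it reaches `threshold`,
    one data block is released and the counter resets."""
    def __init__(self, threshold, step):
        self.threshold = threshold
        self.step = step
        self.count = 0

    def tick(self, pending):
        """One counter tick; returns the data block sent, or None."""
        self.count += self.step
        if self.count >= self.threshold and pending:
            self.count = 0
            return pending.pop(0)
        return None
```

The ratio `step / threshold` sets the output rate: with `threshold=3` and `step=1`, one block leaves every three ticks.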
Transport protocol and interface for efficient data transfer over RDMA fabric
Described herein is a system and method for utilizing a protocol over an RDMA network fabric between a first computing node and a second computing node. The protocol identifies a first threshold and a second threshold. A transfer request is received, and a data size associated with the transfer request is determined. Based upon the data size associated with the transfer request, one of at least three transfer modes is selected to perform the transfer request in accordance with the first threshold and the second threshold. Each transfer mode utilizes flow control and at least one RDMA operation. The selected transfer mode is utilized to perform the transfer request.
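The two-threshold mode selection reduces to a simple comparison. The abstract does not name the three modes; the eager/RDMA-write/rendezvous split below is a common pattern in RDMA transports and is used here only as an illustrative assumption:

```python
def select_transfer_mode(size, small_threshold, large_threshold):
    """Pick one of three transfer modes by payload size (mode names
    are illustrative, not from the patent): small payloads ride
    inline in a send ('eager'), mid-size payloads use a single RDMA
    write, and large payloads use a rendezvous handshake before the
    bulk RDMA operation."""
    if size <= small_threshold:
        return "eager"
    if size <= large_threshold:
        return "rdma-write"
    return "rendezvous"
```

Small messages avoid the handshake cost, while large ones avoid copying through bounce buffers, which is the usual motivation for size-based mode selection.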