Patent classifications
H04L47/6275
MESSAGE ORDERING BUFFER
The disclosed embodiments, collectively referred to as the “Message Ordering Buffer” or “MOB”, relate to an improved messaging platform, or processing system, which may also be referred to as a message processing architecture or platform, which routes messages from a publisher to a subscriber, ensuring that related messages, e.g., ordered messages, are conveyed to a single recipient, e.g., a processing thread, without unnecessarily committing resources of the architecture to that recipient or otherwise preventing message transmission to other recipients. The disclosed embodiments further include additional features which improve efficiency and facilitate deployment in different application environments. The disclosed embodiments may be deployed as a message-oriented middleware component, directly installed or accessed as a service, and accessed by publishers and subscribers, as described herein, so as to electronically exchange messages therebetween.
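The per-key dispatch idea described above can be sketched as follows, assuming related messages are identified by a message key (the key scheme and class names here are illustrative, not the patent's). Messages sharing a key are drained by one pool worker at a time, preserving their order, while a worker is bound to a key only while that key has pending messages, so no recipient permanently holds a thread.

```python
# Minimal sketch of per-key ordered dispatch over a shared worker pool.
# Hypothetical illustration; names and key scheme are not from the patent.
import threading
from collections import deque
from concurrent.futures import ThreadPoolExecutor

class MessageOrderingBuffer:
    def __init__(self, handler, workers=4):
        self._handler = handler          # subscriber callback
        self._queues = {}                # key -> deque of pending messages
        self._active = set()             # keys currently owned by a worker
        self._lock = threading.Lock()
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def publish(self, key, message):
        with self._lock:
            self._queues.setdefault(key, deque()).append(message)
            if key in self._active:
                return                   # a worker already owns this key
            self._active.add(key)
        self._pool.submit(self._drain, key)

    def _drain(self, key):
        while True:
            with self._lock:
                if not self._queues[key]:
                    self._active.discard(key)  # release key; no thread pinned
                    return
                message = self._queues[key].popleft()
            self._handler(key, message)        # delivered in order per key
```

Messages for different keys are processed concurrently by other pool workers, which matches the stated goal of not blocking transmission to other recipients.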
TS operation for RTA session management
A wireless local area network (WLAN) station and protocol configured to support communicating real-time application (RTA) packets, which are sensitive to communication delays, as well as non-real-time packets over a network supporting traffic stream (TS) operations in which RTA traffic and non-RTA traffic coexist. Stations can request establishing a traffic stream from neighboring stations, which can accept or deny the TS for the RTA stream. Additional information can be passed in requesting the stream, or by the responder in denying the stream.
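The request/accept-or-deny exchange might be modeled as below. This is a hypothetical sketch: the field names (`delay_bound_us`, `mean_data_rate_bps`), status codes, and admission rule are illustrative placeholders, not the 802.11 frame format or the patent's actual parameters.

```python
# Illustrative model of a TS setup exchange for an RTA stream.
from dataclasses import dataclass

ACCEPT, DENY = 0, 1

@dataclass
class TSRequest:
    tsid: int                 # traffic stream identifier
    is_rta: bool              # stream carries real-time application traffic
    delay_bound_us: int       # additional info passed with the request
    mean_data_rate_bps: int

@dataclass
class TSResponse:
    tsid: int
    status: int               # ACCEPT or DENY
    reason: str = ""          # additional info the responder attaches on denial

def respond(request: TSRequest, available_rate_bps: int) -> TSResponse:
    # Responder admits the stream only if it can meet the requested rate;
    # otherwise it denies and attaches a reason (admission rule is made up).
    if request.mean_data_rate_bps <= available_rate_bps:
        return TSResponse(request.tsid, ACCEPT)
    return TSResponse(request.tsid, DENY, reason="insufficient airtime")
```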
DATA PROCESSING METHOD, DATA PROCESSING APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT
Provided in the present disclosure are a data processing method and apparatus, and an electronic device. The method includes: determining a plurality of candidate data pieces, where the candidate data pieces are provided by corresponding data sources; and determining a target data piece based on the priorities, in a current processing cycle, of the data sources corresponding to the plurality of candidate data pieces, wherein a same data source has different priorities in different processing cycles, and the priority sequence numbers of a same data source in different processing cycles satisfy a nonlinear relationship.
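One way to satisfy the stated constraint is a cycle-dependent re-ranking in which a source's priority is a nonlinear function of the cycle number. The quadratic shift below is purely illustrative; the disclosure does not fix a particular formula.

```python
# Illustrative sketch: each processing cycle re-ranks the data sources so
# that a given source's priority varies nonlinearly across cycles.
def priority(source_index: int, cycle: int, num_sources: int) -> int:
    # Lower value = higher priority. (i + cycle**2) mod n is a permutation
    # of 0..n-1 for every cycle, so each cycle has a full priority order,
    # and a given source's rank is a nonlinear function of the cycle.
    return (source_index + cycle * cycle) % num_sources

def select_target(candidates: dict, cycle: int, num_sources: int):
    # candidates: source_index -> data piece available in this cycle.
    # The target is the candidate whose source currently ranks highest.
    best = min(candidates, key=lambda i: priority(i, cycle, num_sources))
    return candidates[best]
```

Because the rotation is a permutation in every cycle, no source is starved, yet no source holds the top priority across consecutive cycles in a simple linear pattern.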
SYSTEM FOR QUEUING FLOWS TO CHANNELS
A system for queuing flows to channels.
Low-Latency Delivery of In-Band Telemetry Data
A network device includes processing circuitry and a plurality of ports. The ports connect to a communication network. The processing circuitry is configured to receive, via an input port, data packets and probe packets that are addressed to a common output port, to store the data packets in a first queue and the probe packets in a second queue, both the first queue and the second queue being served by the output port, to produce telemetry data indicative of a state of the network device, based on a processing path that the data packets traverse within the network device, to schedule transmission of the data packets from the first queue at a first priority, and schedule transmission of the probe packets from the second queue at a second priority higher than the first priority, and to modify the scheduled probe packets so as to carry the telemetry data.
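The two-queue, strict-priority mechanism can be sketched as follows, assuming a simple dict-based packet model and queue occupancy as the telemetry metric (both are illustrative stand-ins, not the device's actual data structures or metrics).

```python
# Sketch: probes queue separately, transmit at strictly higher priority,
# and are stamped with telemetry about the data path just before sending.
from collections import deque

class OutputPort:
    def __init__(self):
        self.data_q = deque()    # first queue: data packets, lower priority
        self.probe_q = deque()   # second queue: probe packets, higher priority

    def enqueue(self, pkt):
        (self.probe_q if pkt.get("probe") else self.data_q).append(pkt)

    def telemetry(self):
        # State of the processing path the data packets traverse; queue
        # depth is used here as one illustrative metric.
        return {"data_queue_depth": len(self.data_q)}

    def transmit_next(self):
        # Strict priority: drain probes first so telemetry leaves the
        # device with low latency, regardless of the data backlog.
        if self.probe_q:
            pkt = self.probe_q.popleft()
            pkt["telemetry"] = self.telemetry()  # modify probe to carry data
            return pkt
        return self.data_q.popleft() if self.data_q else None
```

Serving the probe queue first is what gives the "low-latency delivery": the telemetry is not stuck behind the very congestion it reports on.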
Throttling queue for a request scheduling and processing system
Various methods and systems for implementing request scheduling and processing in a multi-tenant distributed computing environment are provided. Requests to utilize system resources in the distributed computing environment are stored in account queues corresponding to tenant accounts. If storing a request in an account queue would exceed a throttling threshold, such as a limit on the number of requests stored per account, the request is dropped to a throttling queue. A scheduler prioritizes processing requests stored in the throttling queue before processing requests stored in the account queues. The account queues can be drained using dominant resource scheduling. In some embodiments, a request is not picked up from an account queue if processing the request would exceed a predefined hard limit on system resource utilization for the corresponding tenant account. In some embodiments, the hard limit is defined as a percentage of the threads the system has available to process requests.
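The spill-and-prioritize behavior can be sketched as below, assuming a fixed per-account request limit and round-robin over account queues (both simplifications; the patent's account queues are drained with dominant resource scheduling, which is omitted here).

```python
# Sketch of a throttling queue: requests beyond an account's limit spill
# into a shared queue that the scheduler drains before the account queues.
from collections import deque

class RequestScheduler:
    def __init__(self, per_account_limit=3):
        self.limit = per_account_limit
        self.account_qs = {}            # tenant account -> deque of requests
        self.throttling_q = deque()

    def submit(self, account, request):
        q = self.account_qs.setdefault(account, deque())
        if len(q) >= self.limit:
            # Exceeds the throttling threshold: spill rather than reject.
            self.throttling_q.append((account, request))
        else:
            q.append(request)

    def next_request(self):
        # Throttling queue is served at higher priority, so previously
        # throttled work is not starved by fresh per-account traffic.
        if self.throttling_q:
            return self.throttling_q.popleft()
        for account, q in self.account_qs.items():
            if q:
                return (account, q.popleft())
        return None
```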