Patent classifications
H04L47/622
Message ordering buffer
The disclosed embodiments, collectively referred to as the “Message Ordering Buffer” or “MOB”, relate to an improved messaging platform, or processing system, also referred to as a message processing architecture or platform, which routes messages from a publisher to a subscriber, ensuring that related messages, e.g., ordered messages, are conveyed to a single recipient, e.g., a processing thread, without unnecessarily committing resources of the architecture to that recipient or otherwise preventing message transmission to other recipients. The disclosed embodiments further include additional features which improve efficiency and facilitate deployment in different application environments. The disclosed embodiments may be deployed as a message oriented middleware component, directly installed or accessed as a service, and accessed by publishers and subscribers, as described herein, so as to electronically exchange messages therebetween.
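One way to realize the routing described above is to hash a message's ordering key to a per-recipient queue, so related messages always land on the same recipient while unrelated messages flow to others unhindered. A minimal Python sketch under that assumption (the class and key names are illustrative, not from the patent):

```python
from queue import Queue

class MessageOrderingBuffer:
    """Illustrative sketch: messages sharing an ordering key are routed
    to a single recipient queue, preserving their relative order, while
    other keys hash to other recipients and are never blocked."""

    def __init__(self, num_recipients):
        self.queues = [Queue() for _ in range(num_recipients)]

    def publish(self, key, message):
        # All messages with the same key hash to the same recipient
        # queue, so one processing thread sees them in publication order.
        idx = hash(key) % len(self.queues)
        self.queues[idx].put(message)
```

Because recipients are chosen per key rather than per publisher, no recipient is permanently pinned to a message source.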
MESSAGING SYSTEM FAILOVER
A device receives a notification indicating a failure of a first server device responsible for a primary message queue that includes messages at a time of the failure. A second server device is responsible for a standby message queue to which the messages are replicated, where a position in the standby message queue and a message time are assigned to each of the replicated messages. The device obtains a record time that identifies the message time of one of the messages that was last obtained from the primary message queue prior to the failure, compares an adjusted record time and the message time of one or more of the messages of the standby message queue to determine a starting position in the standby message queue, and processes messages obtained from the standby message queue beginning at one of the messages assigned to the position that matches the starting position.
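The resume logic above can be sketched as a scan of the standby queue for the first replicated message whose message time exceeds the adjusted record time. A minimal Python sketch, with the data layout assumed (positions and times as tuples; the patent does not prescribe a representation):

```python
def failover_start_position(standby, record_time, adjustment=0):
    """Illustrative sketch: find the standby-queue position at which
    processing resumes after the primary server fails.

    `standby` is a list of (position, message_time) pairs assigned to the
    replicated messages; `record_time` is the message time of the last
    message obtained from the primary queue before the failure."""
    adjusted = record_time + adjustment
    for position, msg_time in standby:
        # Resume at the first replicated message newer than the
        # adjusted record time.
        if msg_time > adjusted:
            return position
    return None  # every replicated message was already processed
```

Messages from the starting position onward are then processed from the standby queue; earlier positions are skipped as already handled.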
System and method for dynamic queue management using queue protocols
A system and method for efficiently processing and managing data stored in a queue. A processing device may process the data stored in the queue. Queue protocols can be applied to the queue to efficiently process and manage data stored in the queue. Queue protocols may facilitate efficient use of processing resources that process the data stored in one or more queues. A queue protocol may include at least a first protocol for facilitating transfer of data in the queue to another queue processed by another processing device or a second protocol for inhibiting transfer of data in the queue to another queue.
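The two queue protocols named above — one transferring data to a queue handled by another processing device, one inhibiting such transfer — can be sketched as follows in Python (class and method names are assumptions for illustration):

```python
class ManagedQueue:
    """Illustrative sketch of the two queue protocols: a transfer
    protocol that moves queued data to another processing device's
    queue, and an inhibit protocol that pins data to this queue."""

    def __init__(self):
        self.items = []
        self.transfer_inhibited = False

    def apply_inhibit_protocol(self):
        # Second protocol: inhibit transfer of data out of this queue.
        self.transfer_inhibited = True

    def apply_transfer_protocol(self, other):
        # First protocol: move this queue's data to a queue processed
        # by another device, unless transfer is inhibited.
        if self.transfer_inhibited:
            return 0
        moved = len(self.items)
        other.items.extend(self.items)
        self.items.clear()
        return moved
```

A scheduler could apply the transfer protocol to drain an overloaded queue toward an idle processing device, and the inhibit protocol to keep locality-sensitive data in place.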
METHOD, SYSTEM, AND CIRCUITS FOR RF LOW-LATENCY, MULTIPLE PRIORITY COMMUNICATION
System, method, and circuitry for utilizing a transmit token to create a floating transmission window for multiple priority data in half-duplex communication systems. A first computing device selects audio data and control data to transmit to a second computing device based on a first low priority for audio data relative to a second high priority for control data and on buffer statuses. In response to the first computing device determining that the first computing device possesses a transmit token, the first computing device transmits the selected audio data and the selected control data to the second computing device. The first computing device then transmits the transmit token to the second computing device. The first computing device then waits for the transmit token to be returned before transmitting more data to the second computing device.
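The token exchange above can be modeled in a few lines: only the token holder transmits, high-priority control data drains before low-priority audio data, and the token passes to the peer with the transmission. A Python sketch under those assumptions (buffer-status handling is simplified to "send everything buffered"):

```python
class HalfDuplexEndpoint:
    """Illustrative sketch of the floating transmission window: only the
    device holding the transmit token may send; control data (high
    priority) is selected before audio data (low priority)."""

    def __init__(self, has_token=False):
        self.has_token = has_token
        self.control_buf = []
        self.audio_buf = []

    def transmit(self, peer):
        if not self.has_token:
            return []            # must wait for the token to return
        # Control data outranks audio data, so it is sent first.
        frames = self.control_buf + self.audio_buf
        self.control_buf, self.audio_buf = [], []
        self.has_token = False   # the token travels with the data
        peer.has_token = True
        return frames
```

Because each transmission ends by surrendering the token, neither side can monopolize the half-duplex channel.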
LONGEST QUEUE IDENTIFICATION
The present disclosure generally discloses a longest queue identification mechanism. The longest queue identification mechanism, for a set of queues of a buffer, may be configured to identify the longest queue of the set of queues and determine a length of the longest queue of the set of queues. The longest queue identification mechanism may be configured to identify the longest queue of the set of queues using only two variables including a longest queue identifier (LQID) variable for the identity of the longest queue and a longest queue length (LQL) variable for the length of the longest queue. It is noted that the identity of the longest queue of the set of queues may be an estimate of the identity of the longest queue and, similarly, that the length of the longest queue of the set of queues may be an estimate of the length of the longest queue.
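The two-variable scheme above can be sketched directly: LQID and LQL are updated opportunistically as individual queue lengths change, which is why both values are estimates rather than exact answers. A minimal Python sketch (update hooks are an assumed interface):

```python
class LongestQueueTracker:
    """Illustrative sketch: estimate the longest queue of a set using
    only two variables, a longest queue identifier (LQID) and a longest
    queue length (LQL), without scanning every queue."""

    def __init__(self):
        self.lqid = None   # estimated identity of the longest queue
        self.lql = 0       # estimated length of the longest queue

    def on_enqueue(self, qid, new_len):
        # A queue growing past the current estimate becomes the
        # tracked longest queue.
        if new_len > self.lql:
            self.lqid, self.lql = qid, new_len

    def on_dequeue(self, qid, new_len):
        # Only shrink the estimate when the tracked queue itself
        # drains; another queue may silently overtake it, which is
        # why LQID and LQL are estimates.
        if qid == self.lqid:
            self.lql = new_len
```

The estimate can lag (a non-tracked queue may grow longer between updates to it), but it is corrected the next time that queue enqueues, at O(1) cost per operation.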
MULTI-PATH RDMA TRANSMISSION
In accordance with implementations of the subject matter described herein, a solution is provided for multi-path RDMA transmission. In the solution, at least one packet is generated based on an RDMA message to be transmitted from a first device to a second device. The first device has an RDMA connection with the second device via a plurality of paths. A first packet in the at least one packet includes a plurality of fields, which include information for transmitting the first packet over a first path of the plurality of paths. The at least one packet is transmitted to the second device over the plurality of paths via an RDMA protocol. The first packet is transmitted over the first path. The multi-path RDMA transmission solution according to the subject matter described herein can efficiently utilize rich network paths while maintaining a low memory footprint in a network interface card.
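The per-packet path information can be pictured as a small fixed header prepended to the payload. The field names and widths below are purely assumptions for illustration; the patent does not specify a wire format:

```python
import struct

# Hypothetical first-packet header for a multi-path RDMA connection:
# which path carries the packet, how many paths the connection spans,
# and a per-path sequence number. A fixed, compact layout keeps
# per-path state in the NIC small.
MP_HEADER = struct.Struct("!HHI")  # path_id, num_paths, path_seq (assumed)

def build_first_packet(path_id, num_paths, path_seq, payload: bytes) -> bytes:
    """Prepend the assumed multi-path fields to the RDMA payload."""
    return MP_HEADER.pack(path_id, num_paths, path_seq) + payload

def parse_first_packet(packet: bytes):
    """Recover the path fields and payload from a received packet."""
    path_id, num_paths, path_seq = MP_HEADER.unpack_from(packet)
    return path_id, num_paths, path_seq, packet[MP_HEADER.size:]
```

The receiver can then demultiplex and reorder per path using `path_id` and `path_seq` without keeping large per-connection buffers.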
MULTI-STREAM SCHEDULING FOR TIME SENSITIVE NETWORKING
A network interface device for implementing multi-stream scheduling for time sensitive networking includes direct memory access (DMA) circuitry, comprising: descriptor parsing circuitry to read a packet descriptor from a descriptor cache, wherein the packet descriptor includes at least one scheduling control parameter including: a launch time offset, a gate cycle offset, or a reduction ratio; wherein the packet descriptor is associated with a packet stream having a traffic class; and scheduling circuitry to schedule packets from the packet stream for transmission using the at least one scheduling control parameter.
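The scheduling control parameters named in the descriptor — launch time offset, gate cycle offset, and reduction ratio — combine naturally into a launch-time computation. A Python sketch of one plausible interpretation (the descriptor layout and the formula are assumptions, not taken from the claim):

```python
from dataclasses import dataclass

@dataclass
class PacketDescriptor:
    """Assumed shape of the packet descriptor carrying the scheduling
    control parameters for one time-sensitive stream."""
    traffic_class: int
    launch_time_offset: int  # ns after the stream's gate opens
    gate_cycle_offset: int   # ns offset of the stream's gate in the cycle
    reduction_ratio: int     # transmit only on every Nth cycle

def next_launch_time(desc, cycle_start_ns, cycle_index):
    """Compute when a packet from this stream launches in a cycle."""
    # The reduction ratio skips cycles the stream is not scheduled in.
    if cycle_index % desc.reduction_ratio != 0:
        return None
    # Otherwise place the packet at its gate plus the launch offset.
    return cycle_start_ns + desc.gate_cycle_offset + desc.launch_time_offset
```

In hardware this computation would sit in the scheduling circuitry, driven by descriptors the DMA circuitry parses from the descriptor cache.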
HIGH-SPEED TRACE FUNCTIONALITY IN AN ON-DEMAND SERVICES ENVIRONMENT
Techniques and architectures to provide trace functionality. Trace record data is received from a plurality of client threads executed by one or more processors. The trace record data is stored in a plurality of chunks maintained in an in-use list. The in-use list has a chunk for individual use by the corresponding client threads. Chunks in the in-use list are moved to a completed queue when a chunk in the in-use list is substantially full. A chunk from a free list is placed in the in-use list to replace removed chunks. The chunks from the completed queue are stored in at least one memory device.
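The chunk rotation above — per-thread in-use chunks, a completed queue, and a free list — can be sketched in Python as follows (chunk sizes and the class shape are illustrative assumptions):

```python
from collections import deque

class TraceBuffer:
    """Illustrative sketch: each client thread writes trace records into
    its own in-use chunk; a full chunk moves to the completed queue and
    is replaced from the free list."""

    def __init__(self, chunk_size=4, free_chunks=8):
        self.chunk_size = chunk_size
        self.free = deque([] for _ in range(free_chunks))
        self.in_use = {}          # thread id -> its current chunk
        self.completed = deque()  # chunks ready to be persisted

    def record(self, thread_id, entry):
        # Each thread owns exactly one in-use chunk, so threads never
        # contend on the same chunk.
        if thread_id not in self.in_use:
            self.in_use[thread_id] = self.free.popleft()
        chunk = self.in_use[thread_id]
        chunk.append(entry)
        if len(chunk) >= self.chunk_size:
            # Full chunk moves to the completed queue; a fresh chunk
            # from the free list takes its place.
            self.completed.append(chunk)
            self.in_use[thread_id] = self.free.popleft()
```

A background writer would drain `completed` to the memory device and return emptied chunks to `free`, closing the cycle.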
PROVIDING QUEUEING IN A LOG STREAMING MESSAGING SYSTEM
Providing queuing in a log streaming system. A state of each of a set of queues of messages is maintained by sending messages to a state topic in the log streaming system. Responsive to a client writing a message to one of the queues, the message is written to a message topic for the queue in the log streaming system. Responsive to the client reading from one of the queues, a message is read from the message topic for the queue, and property types relating to the availability of the message are stored in the state topic for the queue by sending messages to the state topic referencing the message in the message topic.
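The two-topic construction above can be sketched with plain lists standing in for append-only log topics: payloads go to a message topic, and availability records referencing them go to a state topic, which is replayed to decide what a reader receives next. All names below are illustrative:

```python
class LogBackedQueue:
    """Illustrative sketch: a queue built from two append-only topics in
    a log streaming system — a message topic holding payloads and a
    state topic recording each message's availability."""

    def __init__(self):
        self.message_topic = []  # append-only payload log
        self.state_topic = []    # append-only availability log

    def write(self, payload):
        offset = len(self.message_topic)
        self.message_topic.append(payload)
        # The state topic entry references the message by its offset.
        self.state_topic.append(("available", offset))

    def read(self):
        # Replay the state topic to find the first message that is
        # still available, then mark it consumed with a new entry.
        consumed = {off for typ, off in self.state_topic if typ == "consumed"}
        for typ, off in self.state_topic:
            if typ == "available" and off not in consumed:
                self.state_topic.append(("consumed", off))
                return self.message_topic[off]
        return None
```

Because both topics are append-only, queue state is never mutated in place, matching log-streaming semantics.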
NETWORK DEVICE AND A METHOD
A network device for processing a plurality of redundant data streams, the network device configured to: receive a frame from one of the plurality of redundant data streams; compare a sequence number of the frame to a stored sequence number; and if the sequence number of the frame is greater than the stored sequence number: forward the frame to an output terminal of the network device; and update the stored sequence number based on the sequence number of the frame.
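The accept/drop decision above reduces to a single comparison against the stored sequence number. A minimal Python sketch (the state representation is assumed; hardware would hold the stored sequence number in a register):

```python
def accept_frame(seq, state):
    """Illustrative sketch of redundant-stream elimination: forward a
    frame only if its sequence number exceeds the stored one."""
    if seq > state["stored_seq"]:
        # First copy of this frame from any redundant stream:
        # update the stored sequence number and forward it.
        state["stored_seq"] = seq
        return True
    return False  # duplicate or stale frame from a redundant stream: drop
```

Whichever redundant stream delivers a frame first wins; later copies of the same frame fail the comparison and are discarded.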