H04L49/9036

Expandable queue
11558309 · 2023-01-17 ·

A network device includes packet processing circuitry and queue management circuitry. The packet processing circuitry is configured to transmit and receive packets to and from a network. The queue management circuitry is configured to store, in a memory, a queue for queuing data relating to processing of the packets, the queue including a primary buffer and an overflow buffer, to choose between a normal mode and an overflow mode based on a defined condition, to queue the data only in the primary buffer when operating in the normal mode, and, when operating in the overflow mode, to queue the data in a concatenation of the primary buffer and the overflow buffer.
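The primary/overflow behavior described in this abstract can be sketched in software. The class below is a hypothetical model, not the patented circuitry; the "defined condition" is assumed here to be a full primary buffer, and leaving overflow mode once both buffers drain is likewise an assumption:

```python
from collections import deque

class ExpandableQueue:
    """Sketch of a queue with a primary buffer and an overflow buffer.

    Assumed condition: enter overflow mode when the primary buffer is
    full; return to normal mode once both buffers have drained.
    """

    def __init__(self, primary_capacity, overflow_capacity):
        self.primary = deque()
        self.overflow = deque()
        self.primary_capacity = primary_capacity
        self.overflow_capacity = overflow_capacity
        self.overflow_mode = False

    def enqueue(self, item):
        # Switch modes based on the (assumed) fill-level condition.
        if not self.overflow_mode and len(self.primary) >= self.primary_capacity:
            self.overflow_mode = True
        if self.overflow_mode:
            if len(self.overflow) >= self.overflow_capacity:
                raise OverflowError("queue full")
            self.overflow.append(item)
        else:
            self.primary.append(item)

    def dequeue(self):
        # Dequeue from the logical concatenation: primary first, then overflow.
        if self.primary:
            item = self.primary.popleft()
        elif self.overflow:
            item = self.overflow.popleft()
        else:
            raise IndexError("empty queue")
        if not self.primary and not self.overflow:
            self.overflow_mode = False
        return item
```

In normal mode only the primary buffer is touched; in overflow mode the two buffers together act as one longer FIFO.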

PACKET PROCESSING OF STREAMING CONTENT IN A COMMUNICATIONS NETWORK

Aspects of the present disclosure include devices within a transmission path of streamed content forwarding received data packets of the stream to the next device, or “hop,” in the path prior to buffering the data packet at the device. In this method, typical buffering of the data stream may therefore occur at the destination device for presentation at a consuming device, while the devices along the transmission path may transmit a received packet before buffering it. Further, devices along the path may also buffer the content stream after forwarding it, to fill subsequent requests for dropped data packets of the content stream. Also, in response to receiving a request for the content stream, a device may first transmit a portion of the contents of its gateway buffer to the requesting device to fill a respective buffer at the receiving device.
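The forward-before-buffer ordering this abstract describes can be modeled as a small relay. The class and names below are hypothetical; the eviction policy (drop the oldest sequence number) is an assumption, not part of the disclosure:

```python
class StreamHop:
    """Hypothetical relay: forward each packet downstream first, then
    buffer it locally to serve later retransmission requests."""

    def __init__(self, forward_fn, buffer_size=64):
        self.forward = forward_fn
        self.buffer = {}              # seq -> payload, for dropped-packet refills
        self.buffer_size = buffer_size

    def on_packet(self, seq, payload):
        self.forward(seq, payload)    # transmit to the next hop first...
        self.buffer[seq] = payload    # ...then buffer after forwarding
        if len(self.buffer) > self.buffer_size:
            # Assumed policy: evict the oldest buffered sequence number.
            self.buffer.pop(min(self.buffer))

    def on_retransmit_request(self, seq):
        # Fill a subsequent request for a dropped packet, if still buffered.
        return self.buffer.get(seq)
```

The key point is that latency along the path is not increased by buffering: buffering happens after the packet is already on its way downstream.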

Expandable Queue
20230010161 · 2023-01-12 ·

A network device includes packet processing circuitry and queue management circuitry. The packet processing circuitry is configured to transmit and receive packets to and from a network. The queue management circuitry is configured to store, in a memory, a queue for queuing data relating to processing of the packets, the queue including a primary buffer and an overflow buffer, to choose between a normal mode and an overflow mode based on a defined condition, to queue the data only in the primary buffer when operating in the normal mode, and, when operating in the overflow mode, to queue the data in a concatenation of the primary buffer and the overflow buffer.

MESSAGE ORDERING BUFFER

The disclosed embodiments, collectively referred to as the “Message Ordering Buffer” or “MOB”, relate to an improved messaging platform, or processing system, which may also be referred to as a message processing architecture or platform, which routes messages from a publisher to a subscriber, ensuring that related messages, e.g., ordered messages, are conveyed to a single recipient, e.g., a processing thread, without unnecessarily committing resources of the architecture to that recipient or otherwise preventing message transmission to other recipients. The disclosed embodiments further include additional features which improve efficiency and facilitate deployment in different application environments. The disclosed embodiments may be deployed as a message-oriented middleware component that is directly installed, or accessed as a service, by publishers and subscribers, as described herein, so as to electronically exchange messages therebetween.

Buffer management method and apparatus

A memory management method includes: determining that available storage space of a first memory in a network device is less than a first threshold, where the first threshold is greater than 0 and the first memory stores a first packet queue; and deleting at least one packet at the tail of the first packet queue from the first memory based on the available storage space of the first memory being less than the first threshold. When the available storage space of the first memory is less than the first threshold, a packet queue, namely, the first packet queue, is selected and a packet at the tail of the packet queue is deleted from the first memory.
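The tail-deletion step can be written out directly. The function below is a minimal sketch, assuming a byte-count model of available storage and a size function for packets; selecting which queue to trim is left outside the sketch:

```python
def reclaim_tail(queue, available_bytes, threshold_bytes, size_of=len):
    """Sketch of the method: while available storage is below the
    threshold, delete packets from the tail of the selected queue
    and count the reclaimed space."""
    freed = 0
    while available_bytes + freed < threshold_bytes and queue:
        freed += size_of(queue.pop())   # drop from the tail (newest packets)
    return freed
```

Dropping from the tail rather than the head preserves the oldest queued packets, which are closest to being transmitted.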

METHOD AND SYSTEM FOR FACILITATING LOSSY DROPPING AND ECN MARKING
20230046350 · 2023-02-16 ·

Methods and systems are provided for performing lossy dropping and ECN marking in a flow-based network. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform per-flow packet dropping and ECN marking.
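Per-flow marking and dropping of this kind is commonly driven by queue-depth thresholds. The class below is a hedged illustration with assumed threshold semantics (mark above one depth, drop above a higher one), not the patent's state machine:

```python
class FlowQueue:
    """Hypothetical flow-specific input queue with two depth thresholds:
    ECN-mark arrivals above `mark_at`, drop arrivals above `drop_at`."""

    def __init__(self, mark_at, drop_at):
        self.q = []
        self.mark_at = mark_at
        self.drop_at = drop_at

    def enqueue(self, pkt):
        depth = len(self.q)
        if depth >= self.drop_at:
            return "dropped"            # lossy drop: queue too deep
        if depth >= self.mark_at:
            pkt["ecn"] = True           # mark congestion experienced
        self.q.append(pkt)
        return "marked" if pkt.get("ecn") else "queued"
```

Because the abstract's switches hold per-flow state, each flow gets its own such queue, so one congested flow can be marked or trimmed without penalizing its neighbors.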

Reading messages in a shared memory architecture for a vehicle

A method of communicating messages between modules in a system on a vehicle, each module configured as a publisher node and/or a subscriber node, the publisher nodes and the subscriber nodes collectively forming a plurality of nodes that communicate in the operation of the vehicle. One method includes communicating, by a subscriber node, with a registry for information to determine if a new message associated with a first topic is available for reading; determining, by each subscriber node, if a new message associated with the first topic is available for reading; in response to determining that a new message associated with the first topic is available for reading, reading from the registry location information indicating where the first message is stored in a first message buffer; and reading, by each subscriber node, the first message from the first message buffer using the location information.
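The registry-then-buffer read sequence can be modeled in a few lines. This is a sketch under assumed structures, a version counter to detect "new message" and a slot index as the location information; the patent does not specify either:

```python
class Registry:
    """Hypothetical topic registry: maps a topic to the buffer slot
    holding the latest message, plus a monotonically increasing version."""

    def __init__(self):
        self.entries = {}                # topic -> (version, slot_index)

    def publish(self, topic, slot, version):
        self.entries[topic] = (version, slot)

    def lookup(self, topic):
        return self.entries.get(topic)


class Subscriber:
    def __init__(self, registry, buffers):
        self.registry = registry
        self.buffers = buffers           # shared message buffers
        self.last_seen = {}              # topic -> last version read

    def poll(self, topic):
        """Return the new message for `topic`, or None if nothing new."""
        entry = self.registry.lookup(topic)
        if entry is None:
            return None
        version, slot = entry
        if self.last_seen.get(topic, -1) >= version:
            return None                  # no new message since last read
        self.last_seen[topic] = version
        return self.buffers[slot]        # read via the location information
```

Each subscriber tracks its own `last_seen` version, so multiple subscribers can read the same shared buffer independently.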

Forwarding Information Obtaining Method and Apparatus
20220353198 · 2022-11-03 ·

A forwarding information obtaining device and method, the method including obtaining, by a first device in response to congestion in a first queue, a service parameter identifier of a first packet buffered in the first queue, where the service parameter identifier indicates a parameter used to forward the first packet, and performing, by the first device, a first operation based on the service parameter identifier, where the first operation is performed to relieve the congestion of the first queue.
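The abstract's sequence, detect congestion, obtain the service parameter identifier of a buffered packet, look up the forwarding parameter, act on it, maps onto a short function. All names below are assumptions, and rerouting is just one possible "first operation":

```python
def relieve_congestion(queue, depth_limit, forwarding_table, reroute):
    """Sketch: when the first queue is congested, obtain the service
    parameter identifier of a buffered packet, look up the parameter
    used to forward it, and perform an operation to relieve congestion
    (modeled here as rerouting)."""
    if len(queue) <= depth_limit:
        return None                        # no congestion, nothing to do
    first = queue[0]
    spid = first["service_param_id"]       # identifier carried by the packet
    param = forwarding_table[spid]         # parameter used to forward it
    reroute(spid, param)                   # first operation, per the abstract
    return spid
```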

User interface for customizing data streams

Systems and methods are described for customizable data streams in a streaming data processing system. Routing criteria for the customizable data streams are defined by a user, an automated process, or any other process. The routing criteria can be defined using graphical controls. The streaming data processing system uses the routing criteria to determine data that should be used to populate a particular data stream. Further, processing pipelines are customized such that a particular processing pipeline can obtain data from a particular user defined data stream and write data to a particular user defined data stream. Data is routed through the user defined data streams and customized processing pipelines based on a data route. A data route for a set of data may include multiple user defined data streams and multiple processing pipelines. The data route can include a loop of processing pipelines and data streams.
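A data route that alternates user-defined streams and processing pipelines can be sketched as below. This reduces the described routing criteria to the ordering of the route itself, which is a simplification; stream and pipeline names are hypothetical:

```python
def run_route(data, route, pipelines):
    """Sketch: push records through a data route whose stages are either
    pipeline names (transform the data) or user-defined stream names
    (populate that stream with the current data)."""
    streams = {}
    for stage in route:
        if stage in pipelines:
            data = [pipelines[stage](rec) for rec in data]
        else:
            streams[stage] = list(data)   # populate the named data stream
    return streams
```

A route may also loop back through earlier streams and pipelines, as the abstract notes; the linear version above keeps the sketch short.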