Patent classifications
H04L49/9036
Message ordering buffer
The disclosed embodiments, collectively referred to as the “Message Ordering Buffer” or “MOB”, relate to an improved messaging platform, or processing system, which may also be referred to as a message processing architecture or platform, which routes messages from a publisher to a subscriber, ensuring that related messages, e.g., ordered messages, are conveyed to a single recipient, e.g., a processing thread, without unnecessarily committing resources of the architecture to that recipient or otherwise preventing message transmission to other recipients. The disclosed embodiments further include additional features which improve efficiency and facilitate deployment in different application environments. The disclosed embodiments may be deployed as a message-oriented middleware component, directly installed or accessed as a service, and accessed by publishers and subscribers, as described herein, so as to electronically exchange messages therebetween.
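A minimal Python sketch of the routing idea described above: messages sharing an ordering key are pinned to a single recipient queue (here by hashing the key), while unrelated keys fan out across recipients so no recipient blocks the others. The class and method names are illustrative assumptions, not terms from the disclosure.

```python
from collections import deque

class MessageOrderingBuffer:
    """Toy sketch: deliver all messages sharing an ordering key to the
    same recipient queue, without reserving that recipient for the key
    or blocking delivery of other keys to other recipients."""

    def __init__(self, num_recipients):
        self.num_recipients = num_recipients
        self.queues = [deque() for _ in range(num_recipients)]

    def publish(self, key, message):
        # Hashing the ordering key pins related messages to one
        # recipient; other recipients remain free for other keys.
        idx = hash(key) % self.num_recipients
        self.queues[idx].append((key, message))
        return idx
```

In a real middleware deployment the mapping would also have to survive recipient failure and rebalancing; the hash here only conveys the "related messages go to a single recipient" invariant.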
PACKET BUFFERING METHOD, INTEGRATED CIRCUIT SYSTEM, AND STORAGE MEDIUM
This application relates to the field of data communication, and in particular, to a packet buffering method, an integrated circuit system, and a storage medium. The method can improve utilization of the on-chip buffer. The packet buffering method may be applied to a network device. The network device includes a first storage medium and a second storage medium. The first storage medium is a local buffer, and the second storage medium is an external buffer. The method may include: receiving a first packet, and identifying a queue number of the first packet, where the queue number indicates a queue for storing the first packet; querying a queue latency based on the queue number; determining a first latency threshold based on usage of the first storage medium; and buffering the first packet in the first storage medium or the second storage medium based on the queue latency and the first latency threshold.
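The abstract's decision step (queue latency versus a usage-dependent threshold) can be sketched as follows. The linear threshold formula and the `base_threshold_us` parameter are assumptions chosen for illustration; the application does not specify how the threshold is derived from usage.

```python
def first_latency_threshold(local_usage, base_threshold_us=100.0):
    """Derive the first latency threshold from usage of the first
    (on-chip) storage medium: the fuller the local buffer, the lower
    the latency tolerated before spilling to the external buffer.
    local_usage is a fraction in [0, 1]."""
    return base_threshold_us * (1.0 - local_usage)

def choose_buffer(queue_latency_us, local_usage):
    """Buffer the packet in the first (local) storage medium while the
    queue's latency is under the threshold; otherwise use the second
    (external) storage medium."""
    threshold = first_latency_threshold(local_usage)
    return "first" if queue_latency_us <= threshold else "second"
```

The effect is that low-latency queues keep using the fast on-chip buffer, while long-latency queues are pushed off-chip as local pressure rises, which is how the method improves on-chip buffer utilization.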
Technologies for packet forwarding on ingress queue overflow
Technologies for packet forwarding under ingress queue overflow conditions include a computing device configured to receive a network packet from another computing device, determine whether a global packet buffer of the network interface controller (NIC) is full, and determine, in response to a determination that the global packet buffer is full, whether to forward all the global packet buffer entries. The computing device is additionally configured to compare, in response to a determination not to forward all the global packet buffer entries, a selection filter to one or more characteristics of the received network packet and forward, in response to a determination that the selection filter matches the one or more characteristics of the received network packet, the received network packet to a predefined output. Other embodiments are described herein.
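A compact sketch of the overflow path just described, assuming packets and the selection filter are represented as dictionaries of characteristics (the field names are illustrative, not from the claims):

```python
def forward_on_overflow(packet, buffer_full, forward_all, selection_filter):
    """Decide the fate of a received packet when the NIC's global
    packet buffer may be full: buffer normally, forward everything,
    or forward only packets matching the selection filter."""
    if not buffer_full:
        return "buffer"      # normal path: enqueue into the global buffer
    if forward_all:
        return "forward"     # forward all global packet buffer entries
    # The filter matches when every specified characteristic agrees.
    if all(packet.get(k) == v for k, v in selection_filter.items()):
        return "forward"     # send to the predefined output
    return "drop"
```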
User defined data stream for routing data to a data destination based on a data route
Systems and methods are described for customizable data streams in a streaming data processing system. Routing criteria for the customizable data streams are defined by a user, an automated process, or any other process. The routing criteria can be defined using graphical controls. The streaming data processing system uses the routing criteria to determine data that should be used to populate a particular data stream. Further, processing pipelines are customized such that a particular processing pipeline can obtain data from a particular user defined data stream and write data to a particular user defined data stream. Data is routed through the user defined data streams and customized processing pipelines based on a data route. A data route for a set of data may include multiple user defined data streams and multiple processing pipelines. The data route can include a loop of processing pipelines and data streams.
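The "data route" concept above can be sketched as an alternating sequence of user-defined data streams (represented here as dictionaries of routing criteria) and processing pipelines (represented as functions); a record advances along the route only through streams whose criteria it matches. This representation is an assumption for illustration.

```python
def run_data_route(record, route):
    """Route a record through a data route: a list of
    (stream_criteria, pipeline) pairs. The record enters a stream only
    if it matches that stream's user-defined routing criteria, and the
    following pipeline then transforms it."""
    for stream_criteria, pipeline in route:
        if not all(record.get(k) == v for k, v in stream_criteria.items()):
            return None      # record does not populate this stream
        record = pipeline(record)
    return record
```

A route built this way can also revisit the same stream/pipeline pair, giving the looped routes the abstract mentions.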
Buffer control for multi-transport architectures
A system and method for automating connection management in a manner that may be transparent to any actively communicating applications operating in a Network on Terminal Architecture (NoTA). An application level entity may access another node by making a request to a high level communication structure via an interface. The high level structure may interact with a lower level structure configured to manage communication by establishing communication with another device via one or more transports. In at least one embodiment, provisions may be made to guard against data being lost when a transport fails, including storing data that is passed from a transport-independent buffer to a transport-specific buffer in case the transport fails. When a failure occurs, the stored data may readily be forwarded for transmission using another transport.
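The loss-guard described above can be sketched as follows: a copy of the data handed from the transport-independent buffer to a transport-specific buffer is retained until a transport confirms delivery, so a failed transport can be recovered by resending over another. The class shape and the transport-as-callable convention are assumptions for illustration.

```python
class FailoverSender:
    """Sketch: retain data moved into a transport-specific buffer
    until a transport confirms delivery, so that on transport failure
    the retained copy can be sent over another transport."""

    def __init__(self, transports):
        # Each transport is a callable: data -> bool (True if sent ok).
        self.transports = list(transports)
        self.in_flight = []   # retained copies awaiting confirmation

    def send(self, data):
        self.in_flight.append(data)
        for transport in self.transports:
            if transport(data):            # this transport succeeded
                self.in_flight.remove(data)
                return True
        return False   # all transports failed; data stays retained
```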
COMMUNICATION CONTROL DEVICE, INFORMATION PROCESSING DEVICE, COMMUNICATION CONTROL METHOD, AND INFORMATION PROCESSING METHOD
According to one embodiment, a communication control device includes a transmission control unit and a communication unit. The transmission control unit is configured to control the transfer start timing of a first message stored in a queue, based on gate control information. The communication unit is configured to transmit the first message transferred from the transmission control unit in accordance with the transfer start timing. The transfer start timing of the first message is determined based on a transmission cost at a time when a second message, which has already been determined to pass through the gate, is transmitted by the communication unit, and a transfer status of the second message between the transmission control unit and the communication unit.
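One plausible reading of the timing rule, sketched under stated assumptions: the link is occupied until the already-admitted second message finishes, its transmission cost is taken as frame length divided by link rate, and the transfer status simply records whether the second message has reached the communication unit. All parameter names and the cost model are illustrative assumptions.

```python
def transfer_start_time(now, second_start, second_bits, link_bps,
                        second_transferred):
    """Determine the first message's transfer start timing: if the
    second message has been transferred to the communication unit, the
    first message must wait for the second's transmission cost
    (bits / link rate) to elapse; otherwise the link is free."""
    if not second_transferred:
        return now
    transmission_cost = second_bits / link_bps   # seconds on the wire
    return max(now, second_start + transmission_cost)
```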
COMMUNICATION CONTROL DEVICE, INFORMATION PROCESSING DEVICE, COMMUNICATION CONTROL METHOD, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
According to one embodiment, a communication control device includes a transmission control unit and a communication unit. The transmission control unit is configured to control transmission of a message stored in a queue based on transmission control information generated from gate control information in which the open or close status of a gate corresponding to each of a plurality of queues is specified. The communication unit is configured to transmit the message under control of the transmission control unit. The transmission control information indicates the timing of the next event related to the transmission of the message when the transmission of the message is controlled.
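A sketch of deriving the "next event" timing from gate control information, assuming the gate control information is a repeating schedule of (duration, open-queue-set) entries in the style of a time-aware shaper gate control list (an assumption about the representation, not a quote from the claims):

```python
def next_gate_event(gate_control_list, now):
    """Given gate control information as a cyclic list of
    (duration, open_queues) entries, return the queues whose gates are
    currently open and the time of the next event, i.e. the next gate
    state change."""
    cycle = sum(duration for duration, _ in gate_control_list)
    t = now % cycle                      # position within the cycle
    elapsed = 0
    for duration, open_queues in gate_control_list:
        if t < elapsed + duration:
            next_event = now - t + elapsed + duration
            return open_queues, next_event
        elapsed += duration
    raise AssertionError("unreachable: t is always within the cycle")
```

Precomputing these next-event times is what lets the transmission control unit act on timer events instead of re-evaluating every gate on every message.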
SYSTEM AND METHOD FOR FACILITATING DATA-DRIVEN INTELLIGENT NETWORK WITH INGRESS PORT INJECTION LIMITS
Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic while applying injection limits to different traffic classes at an ingress edge port. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. Furthermore, an edge switch can dynamically allocate the ingress port bandwidth among the traffic classes that are active at a given moment.
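The last sentence, dynamically allocating ingress port bandwidth among the traffic classes active at a given moment, can be sketched as a weighted redistribution: inactive classes give up their share, which is split proportionally among the active ones. The share-weight scheme is an assumed illustration of one way to do this.

```python
def injection_limits(port_bandwidth, configured_shares, active_classes):
    """Allocate ingress-port bandwidth among currently active traffic
    classes in proportion to their configured shares; bandwidth of
    inactive classes is redistributed to the active ones."""
    active_weight = sum(configured_shares[c] for c in active_classes)
    return {c: port_bandwidth * configured_shares[c] / active_weight
            for c in active_classes}
```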
Efficient use of buffer space in a network switch
Communication apparatus includes multiple ports configured to serve as ingress ports and egress ports for connection to a packet data network. A memory is coupled to the ports and configured to contain both respective input buffers allocated to the ingress ports and a shared buffer holding data packets for transmission in multiple queues via the egress ports. Control logic is configured to monitor an overall occupancy level of the memory, and when a data packet is received through an ingress port having an input buffer that is fully occupied while the overall occupancy level of the memory is below a specified maximum, to allocate additional space in the memory to the input buffer and to accept the received data packet into the additional space.
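The control logic's admission rule can be sketched directly from the abstract: a packet that overflows its ingress port's input-buffer allocation is still accepted, with additional memory granted to that buffer, as long as overall memory occupancy stays below the specified maximum. Sizes and names are illustrative.

```python
def admit_packet(pkt_size, input_buf_used, input_buf_quota,
                 memory_used, memory_max):
    """Admission decision for a packet arriving at an ingress port
    whose input buffer shares a memory with the egress queues."""
    if input_buf_used + pkt_size <= input_buf_quota:
        return "accept"        # fits in the port's allocated input buffer
    if memory_used + pkt_size <= memory_max:
        return "accept_extra"  # allocate additional space in the memory
    return "drop"              # overall occupancy at the maximum
```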
IMPULSIVE NOISE DETECTION CIRCUIT AND ASSOCIATED METHOD
An impulsive noise detection method is applied to an orthogonal frequency-division multiplexing (OFDM) system to detect whether an input signal includes impulsive noise. The impulsive noise detection method includes receiving the input signal, converting the input signal to a digital input signal, filtering out a data band from the digital input signal to generate a signal under detection, calculating the signal under detection to generate a calculation result, and determining whether the input signal includes the impulsive noise according to the calculation result and a threshold.
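A toy end-to-end sketch of the detection flow: transform the digital input signal, filter out the data band (here by discarding those frequency bins), compute a statistic on what remains, and compare it with a threshold. The choice of mean out-of-band power as the calculation result is an assumption; the claims leave the calculation unspecified. A naive DFT keeps the sketch dependency-free.

```python
import cmath

def detect_impulsive_noise(samples, data_band, threshold):
    """Flag impulsive noise in an OFDM input: impulsive noise spreads
    energy across the whole spectrum, so out-of-band power rises well
    above that of a clean in-band signal."""
    n = len(samples)
    # Naive DFT of the digital input signal (O(n^2), fine for a sketch).
    spectrum = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    # Filter out the data band: keep only out-of-band bins.
    residual = [abs(spectrum[k]) ** 2 for k in range(n)
                if k not in data_band]
    statistic = sum(residual) / max(len(residual), 1)
    return statistic > threshold, statistic
```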