Patent classifications
H04L12/863
Channel Bonding in Multiple-Wavelength Passive Optical Networks (PONs)
An apparatus comprises: a processor configured to: select a first channel from among a plurality of channels in a network, and generate a first message assigning a first grant corresponding to the first channel; a transmitter coupled to the processor and configured to transmit the first message; and a receiver coupled to the processor and configured to receive a second message on the first channel and in response to the first message. A method comprises: selecting a first channel from among a plurality of channels in a network; generating a first message assigning a first grant corresponding to the first channel; transmitting the first message; and receiving a second message on the first channel in response to the first message.
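The select-then-grant flow above could be sketched as follows. This is a minimal illustration only: `GrantMessage`, `select_channel`, and the least-loaded selection policy are assumptions for the sketch, not details taken from the claim.

```python
from dataclasses import dataclass

@dataclass
class GrantMessage:
    """Illustrative grant message: which channel the grant applies to,
    and how many bytes the receiver may transmit in response."""
    channel_id: int
    grant_bytes: int

def select_channel(channel_loads: dict[int, int]) -> int:
    """Pick one channel from the plurality; here, the least-loaded one
    (an assumed policy, the claim does not specify one)."""
    return min(channel_loads, key=channel_loads.get)

def make_grant(channel_loads: dict[int, int], grant_bytes: int) -> GrantMessage:
    """Select a first channel and generate the message assigning its grant."""
    ch = select_channel(channel_loads)
    return GrantMessage(channel_id=ch, grant_bytes=grant_bytes)
```

The second message of the claim would then arrive on `channel_id`, closing the loop between grant and upstream transmission.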
LOW LATENCY MULTIMEDIA STREAMING SYSTEM AND METHOD
In one example, a method for low-latency multimedia stream reception and output in a receiving device is described. Data packets may be extracted from a multimedia stream received over a network. The sequence of independently decodable units associated with the multimedia stream may be decoded. Each independently decodable unit may include one or more data packets. The sequence of decoded units may be stored in an output buffer. Further, flow of the decoded units from the output buffer to an output device may be controlled based on one of (a) a latency associated with the decoded units or (b) a rate of reception of the decoded units by the output buffer and a rate at which the output device is operating. The decoded units may be rendered on the output device.
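The latency-based variant of the flow control above could be sketched like this. The class name, the single latency threshold, and the timestamp convention are illustrative assumptions; the abstract also describes a rate-matching variant not shown here.

```python
from collections import deque

class OutputBuffer:
    """Toy flow controller: hold each decoded unit until its buffered
    latency reaches a target, then release it toward the output device.
    The threshold value and API are assumptions for this sketch."""

    def __init__(self, target_latency_s: float):
        self.units = deque()  # (unit, arrival timestamp), arrival order
        self.target_latency_s = target_latency_s

    def push(self, unit, arrival_ts: float):
        self.units.append((unit, arrival_ts))

    def pop_ready(self, now: float):
        """Return the units whose buffered latency has reached the target."""
        ready = []
        while self.units and now - self.units[0][1] >= self.target_latency_s:
            ready.append(self.units.popleft()[0])
        return ready
```

A real receiver would instead compare against the output device's clock, but the gating decision has the same shape.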
Hierarchical quality of service scheduling method and device
Provided are an HQoS scheduling method and device. A received uplink data packet is encapsulated and stored in a queue in the uplink direction, and an uplink queue scheduling component is requested to perform scheduling. In this manner, HQoS scheduling in the uplink direction is implemented, and personalized demands of a user can be met by scheduling uplink data, allowing more flexible function customization. According to the method and device, after HQoS scheduling in the uplink direction is completed, the data packet may be further sent to the downlink direction, where HQoS scheduling is performed on the data in the downlink direction, so that HQoS scheduling is performed separately in both the uplink and the downlink direction. In this manner, true bidirectional HQoS scheduling control is implemented, and QoS of the user service can be guaranteed in both directions.
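The two-pass structure (uplink scheduling, then hand-off to downlink scheduling) can be sketched as below. The per-user queues and the round-robin policy are illustrative assumptions; a real HQoS scheduler would use a multi-level hierarchy with shaping at each level.

```python
from collections import deque

class HQoSScheduler:
    """Minimal sketch of bidirectional scheduling: uplink packets are
    queued and scheduled, then re-queued toward the downlink for a
    second scheduling pass. Policy and queue keys are assumptions."""

    def __init__(self):
        self.uplink = {}    # user -> queue of encapsulated packets
        self.downlink = {}  # user -> queue awaiting downlink scheduling

    def enqueue_uplink(self, user: str, pkt: bytes):
        self.uplink.setdefault(user, deque()).append(pkt)

    def _schedule(self, queues):
        """One round-robin pass: dequeue at most one packet per user."""
        out = []
        for user, q in queues.items():
            if q:
                out.append((user, q.popleft()))
        return out

    def run_uplink_then_downlink(self):
        # Uplink pass; scheduled packets continue toward the downlink.
        for user, pkt in self._schedule(self.uplink):
            self.downlink.setdefault(user, deque()).append(pkt)
        # Downlink pass completes the bidirectional control.
        return self._schedule(self.downlink)
```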
Maintaining packet order in a multi processor network device
A plurality of packets are received by a packet processing device, and the packets are distributed among two or more packet processing node elements for processing of the packets. The packets are assigned to respective packet classes, each class corresponding to a group of packets for which an order in which the packets were received is to be preserved. The packets are queued in respective queues corresponding to the assigned packet classes and according to an order in which the packets were received by the packet processing device. The packet processing node elements issue respective instructions indicative of processing actions to be performed with respect to the packets, and indications of at least some of the processing actions are stored. A processing action with respect to a packet is performed when the packet has reached a head of a queue corresponding to the class associated with the packet.
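The ordering mechanism above, in which a recorded processing action is deferred until its packet reaches the head of its class queue, could be sketched as follows. The class keys, packet ids, and callable actions are illustrative assumptions.

```python
from collections import deque

class OrderPreserver:
    """Sketch of class-based order preservation: packets queue per class
    in arrival order; an action reported by a processing node runs only
    once its packet reaches the head of the class queue."""

    def __init__(self):
        self.queues = {}   # class -> deque of packet ids, arrival order
        self.actions = {}  # packet id -> pending processing action

    def on_receive(self, cls, pkt_id):
        self.queues.setdefault(cls, deque()).append(pkt_id)

    def on_action_ready(self, cls, pkt_id, action):
        """Record a node's instruction, then drain every head-of-queue
        packet whose action is already pending (order is preserved even
        when nodes finish out of order)."""
        self.actions[pkt_id] = action
        done = []
        q = self.queues[cls]
        while q and q[0] in self.actions:
            head = q.popleft()
            done.append(self.actions.pop(head)())
        return done
```

Note how a packet finished out of order is simply stored until the earlier packet of its class completes.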
Optical buffer and methods for storing optical signal
An optical buffer and a method for storing an optical signal using the optical buffer, where the optical buffer includes a first waveguide, a first optical delay waveguide loop, and a controller. The first waveguide includes a first arm and a second arm, where a first end of the first arm is an input end of the optical buffer, and a second end of the second arm is an output end of the optical buffer. A second end of the first arm connects to a first end of the second arm. A first part of the first optical delay waveguide loop connects to the first arm using a first optical switch, and a second part of the first optical delay waveguide loop connects to the second arm using a second optical switch. The controller connects to the first optical switch and the second optical switch, respectively.
Network traffic event management at the client terminal level
A method of queuing network traffic events on a client terminal. The method comprises monitoring, at run time, a plurality of network traffic events triggered by a plurality of applications executed on a client terminal, extracting a plurality of network traffic event characteristics of each of the plurality of network traffic events, classifying each of the plurality of network traffic events according to its respective network traffic event characteristics, clustering the plurality of network traffic events into a plurality of clusters according to the classifying, and managing the opening of a plurality of data connections between the client terminal and a network such that the content of each cluster of the plurality of clusters is transmitted over a different one of the plurality of data connections.
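The classify-then-cluster step could be sketched as below. Using the (application, destination host) pair as the extracted characteristics is an assumption for the sketch; the abstract does not name specific characteristics.

```python
from collections import defaultdict

def cluster_events(events):
    """Toy clustering of monitored traffic events: events sharing the
    same extracted characteristics form one cluster, and each cluster
    would then be sent over its own data connection. The feature choice
    (app, destination host) is illustrative."""
    clusters = defaultdict(list)
    for ev in events:
        key = (ev["app"], ev["host"])  # assumed characteristic tuple
        clusters[key].append(ev)
    return dict(clusters)
```

Each key of the returned dict would map to one managed data connection.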
Message processing using dynamic load balancing queues in a messaging system
A system, method, and computer-readable medium are disclosed for dynamically managing message queues to balance processing loads in a message-oriented middleware environment. A first source message associated with a first target is received, followed by generating a first dynamic load balancing message queue when a first message queue associated with the first target is determined to not be optimal. The first dynamic load balancing message queue is then associated with the first target, followed by enqueueing the first source message to the first dynamic load balancing message queue for processing by the first target.
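The spawn-a-new-queue-when-not-optimal flow could be sketched as follows. Treating "not optimal" as a simple queue-depth threshold is an assumption for the sketch; a middleware broker would use richer load metrics.

```python
from collections import deque

class DynamicQueueManager:
    """Sketch of dynamic load-balancing queues: when the current queue
    for a target exceeds a depth threshold (deemed not optimal), a new
    queue is generated, associated with the same target, and used for
    subsequent enqueues. The threshold criterion is an assumption."""

    def __init__(self, depth_threshold: int):
        self.depth_threshold = depth_threshold
        self.queues = {}  # target -> list of queues (primary first)

    def enqueue(self, target: str, msg):
        qs = self.queues.setdefault(target, [deque()])
        if len(qs[-1]) >= self.depth_threshold:
            qs.append(deque())  # new dynamic load-balancing queue
        qs[-1].append(msg)
        return len(qs) - 1      # index of the queue that accepted msg
```

The target would then drain all of its associated queues, primary and dynamic alike.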
Network broadcast traffic filtering
Techniques and solutions for automatically filtering network broadcast traffic are described. For example, network broadcast traffic can be automatically filtered by turning broadcast filtering on and off (e.g., as a continuous strobe pattern that alternates enabling and disabling of broadcast filtering). For example, a computing device (e.g., via a network interface or management controller of the computing device) can automatically enable network broadcast traffic filtering during a first time period (e.g., a four second time period) and disable network broadcast traffic filtering during a second time period (e.g., a one second time period). A computing device can also automatically enable and disable network broadcast traffic filtering according to an on-off pattern (e.g., based on various criteria, such as network queue size, broadcast traffic volume, etc.).
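The strobe pattern described above reduces to a periodic on/off decision. The sketch below uses the 4-second and 1-second example periods from the text; the function name and time-based formulation are illustrative.

```python
def filter_enabled(t: float, on_period: float = 4.0,
                   off_period: float = 1.0) -> bool:
    """Strobe pattern: broadcast filtering is enabled for `on_period`
    seconds, then disabled for `off_period` seconds, repeating. The
    4s/1s defaults mirror the example periods in the abstract."""
    return (t % (on_period + off_period)) < on_period
```

A criteria-driven variant would replace the fixed periods with values derived from, e.g., network queue size or broadcast traffic volume.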
EXPEDITED FABRIC PATHS IN SWITCH FABRICS
The disclosed embodiments provide a system for operating a switch fabric. During operation, the system identifies network traffic for transmission between two access switches in a switch fabric. Next, the system selects a subset of the network traffic for forwarding on an expedited fabric path comprising a physical link between the two access switches that is isolated from other physical links in the switch fabric. Finally, the system forwards the subset of the network traffic on the expedited fabric path.
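The subset-selection step could be sketched minimally as below. The predicate (latency-sensitive traffic goes to the expedited link) is an assumed criterion; the abstract does not specify how the subset is chosen.

```python
def split_traffic(flows, goes_expedited):
    """Partition identified flows: the selected subset is forwarded on
    the isolated expedited fabric path, the rest on the regular fabric
    paths. The selection predicate is an assumption for this sketch."""
    expedited = [f for f in flows if goes_expedited(f)]
    regular = [f for f in flows if not goes_expedited(f)]
    return expedited, regular
```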
DATA RESILIENCY OF BILLING INFORMATION
Managing transaction data during times of low network connectivity by organizing billing information for prioritized processing during periods of higher network connectivity. During low connectivity events, billing information is organized based, at least in part, on a combination of age and revenue to communicate important billing information upon reconnection.
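The age-plus-revenue ordering could be sketched as a weighted scoring sort. The linear score and the field names are assumptions for the sketch; the abstract only says the combination is based "at least in part" on age and revenue.

```python
def prioritize_billing(records, age_weight: float = 1.0,
                       revenue_weight: float = 1.0):
    """Sketch: during a low-connectivity event, rank billing records by
    a weighted combination of age and revenue so the most important
    records are communicated first upon reconnection. Weights and the
    linear score are illustrative assumptions."""
    def score(r):
        return age_weight * r["age_days"] + revenue_weight * r["revenue"]
    return sorted(records, key=score, reverse=True)
```

On reconnection, records would be transmitted in the returned order, highest score first.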