H04L47/625

System and methods to filter out noisy application signatures to improve precision of first packet classification
11582158 · 2023-02-14

The systems and methods discussed herein provide for classifying CDN connections according to their originating application based on the first packet. In some implementations, the system identifies application connections established within a predetermined time period prior to the CDN connection and increments a value associated with these connections. The system classifies the CDN connection as corresponding to the application connection with the highest associated value, allowing routing of network traffic to take advantage of QoS benefits and reducing the need for deep packet inspection.
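The counting scheme in this abstract maps directly onto a short sketch. Everything below (the five-second window, the `(app, timestamp)` connection records, the function name) is illustrative rather than taken from the patent:

```python
# Hypothetical sketch of first-packet classification by preceding
# application connections. Window length and record format are assumed.
from collections import Counter

WINDOW_SECONDS = 5.0  # assumed "predetermined time period"

def classify_cdn_connection(cdn_time, app_connections, window=WINDOW_SECONDS):
    """app_connections: iterable of (app_name, timestamp) tuples.

    Increment a per-application counter for every application connection
    established within `window` seconds before the CDN connection, then
    classify the CDN connection as the application with the highest count.
    Returns None when no application connection falls inside the window.
    """
    scores = Counter()
    for app, ts in app_connections:
        if cdn_time - window <= ts <= cdn_time:
            scores[app] += 1
    if not scores:
        return None
    return scores.most_common(1)[0][0]
```

A classifier like this runs on the first packet of the CDN connection, before any payload is available for deep packet inspection.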

Communication control apparatus and communication control method

A communication control apparatus includes an observed data rate acquiring unit configured to acquire observed data rates indicating input data rates of queues, the observed data rates being observed in a layer 2 switch, a threshold value storage unit configured to store predetermined threshold values of the queues, a shaping rate determination unit configured to determine a shaping rate of each queue based on both an observed data rate of the observed data rates acquired by the observed data rate acquiring unit and a threshold value of the predetermined threshold values stored in the threshold value storage unit, and a shaping rate setting unit configured to set, in the layer 2 switch, the shaping rate of each queue determined by the shaping rate determination unit.
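The abstract leaves the exact determination rule to the shaping rate determination unit; one minimal, assumed policy is to cap each queue's shaping rate at its stored threshold:

```python
def determine_shaping_rate(observed_rate_mbps, threshold_mbps):
    # Illustrative policy (not specified by the patent): shape down to the
    # threshold when the observed input rate exceeds it, otherwise pass
    # the observed rate through unchanged.
    return min(observed_rate_mbps, threshold_mbps)

def set_shaping_rates(observed_rates, thresholds):
    # One shaping rate per queue, keyed by queue id, as would be pushed
    # to the layer 2 switch by the shaping rate setting unit.
    return {q: determine_shaping_rate(rate, thresholds[q])
            for q, rate in observed_rates.items()}
```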

System for early system resource constraint detection and recovery

A system for optimizing network traffic is described. The system includes a quality of service (QoS) engine configured to acquire information regarding a plurality of data packets comprising a plurality of data packet flows operating over a plurality of links. The QoS engine can be further configured to determine a flow priority for the plurality of data packet flows, and to determine TCP characteristics for the plurality of data packet flows. The system further includes a TCP controller configured to acquire the flow priority for the plurality of data packet flows from the QoS engine. The TCP controller can be configured to obtain queue information associated with the plurality of data packets, and adjust a receive window size based on the flow priority and the queue information.
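How the receive window might combine flow priority with queue information is not fixed by the abstract; the heuristic below is one assumed shape of such an adjustment, with a made-up 0-3 priority scale:

```python
def adjust_receive_window(base_window, priority, queue_depth, queue_limit):
    """Hypothetical heuristic: scale the advertised TCP receive window up
    with flow priority (0 = lowest, 3 = highest, an assumed scale) and
    down as the queue fills, so low-priority flows back off first when a
    resource constraint is detected."""
    occupancy = queue_depth / queue_limit        # 0.0 (empty) .. 1.0 (full)
    priority_weight = (priority + 1) / 4         # 0.25 .. 1.0
    window = int(base_window * priority_weight * (1.0 - occupancy))
    return max(window, 1)                        # never advertise zero
```

Shrinking the receive window throttles the sender end-to-end, which is how a middlebox TCP controller can shed load before queues overflow.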

Expandable queue
11558309 · 2023-01-17

A network device includes packet processing circuitry and queue management circuitry. The packet processing circuitry is configured to transmit and receive packets to and from a network. The queue management circuitry is configured to store, in a memory, a queue for queuing data relating to processing of the packets, the queue including a primary buffer and an overflow buffer, to choose between a normal mode and an overflow mode based on a defined condition, to queue the data only in the primary buffer when operating in the normal mode, and, when operating in the overflow mode, to queue the data in a concatenation of the primary buffer and the overflow buffer.
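The normal/overflow behavior described above can be sketched in a few lines; the class name and the mode-switch condition (primary buffer full, reverting when empty) are assumptions, since the patent only says the mode is chosen "based on a defined condition":

```python
class ExpandableQueue:
    """Sketch of the described queue: data lives in a fixed primary
    buffer in normal mode; in overflow mode, data is queued in the
    concatenation of the primary and overflow buffers."""
    def __init__(self, primary_capacity, overflow_capacity):
        self.primary, self.overflow = [], []
        self.primary_capacity = primary_capacity
        self.overflow_capacity = overflow_capacity
        self.overflow_mode = False

    def enqueue(self, item):
        if not self.overflow_mode and len(self.primary) < self.primary_capacity:
            self.primary.append(item)
            return True
        self.overflow_mode = True  # assumed condition: primary is full
        if len(self.overflow) < self.overflow_capacity:
            self.overflow.append(item)
            return True
        return False  # both buffers full: drop

    def dequeue(self):
        # Drain in FIFO order across the concatenation primary + overflow.
        if self.primary:
            item = self.primary.pop(0)
        elif self.overflow:
            item = self.overflow.pop(0)
        else:
            return None
        if not self.primary and not self.overflow:
            self.overflow_mode = False  # assumed: revert once drained
        return item
```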

Data transmission method and apparatus

This application provides a data transmission method and apparatus. The method includes: determining a first sending rate based on a network performance objective of first data and a network status of a first transmission control protocol (TCP) connection of a transport layer protocol, where the first TCP connection is used to send the first data; and sending the first data based on the first sending rate. In this way, network congestion control is more flexible, and TCP-based data transmission efficiency is improved.
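The abstract names the inputs (a performance objective and the connection's network status) but not the formula; the function below is a purely illustrative way those inputs could combine, with an assumed baseline RTT:

```python
def determine_sending_rate(objective_mbps, rtt_ms, loss_rate):
    """Illustrative rate selection (the patent does not fix a formula):
    start from the network performance objective for the first data and
    back off with observed loss on the first TCP connection, plus a
    penalty when RTT inflates above an assumed uncongested baseline."""
    BASE_RTT_MS = 20.0  # assumed uncongested round-trip time
    rate = objective_mbps * (1.0 - loss_rate)
    if rtt_ms > BASE_RTT_MS:
        rate *= BASE_RTT_MS / rtt_ms  # queueing delay signals congestion
    return rate
```

Driving the send rate from an application-level objective, rather than from loss alone, is what makes the control "more flexible" than stock TCP congestion control.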

TRAFFIC SHAPING METHOD AND APPARATUS
20230239248 · 2023-07-27

This application provides a traffic shaping method and apparatus. The method includes: A packet marking apparatus receives a first packet; the packet marking apparatus determines an enqueuing queue of the first packet; and the packet marking apparatus marks a queue identifier of the first packet as a queue identifier of the enqueuing queue of the first packet, and then sends the queue identifier of the first packet to a packet output apparatus, where the packet output apparatus is configured to send, based on the queue identifier of the first packet, the first packet to a corresponding queue for outputting. Therefore, packet output time after traffic shaping can be determined.
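The split between a marking apparatus and an output apparatus can be sketched as two small functions; the DSCP-based classifier in the test and the dict-based packet representation are assumptions for illustration:

```python
def mark_queue_id(packet, classify):
    # Packet marking apparatus: determine the enqueuing queue of the
    # packet and record that queue's identifier in the packet metadata.
    marked = dict(packet)
    marked["queue_id"] = classify(packet)
    return marked

def output_packet(packet, queues):
    # Packet output apparatus: place the packet in the queue named by the
    # carried identifier; shaped output timing is then derived per queue.
    queues.setdefault(packet["queue_id"], []).append(packet)
```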

DYNAMIC LOAD BALANCING FOR MULTI-CORE COMPUTING ENVIRONMENTS

Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first core and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold, cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones of the data packets to one or more data consumers.
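The allocate-on-lag behavior can be modeled without real hardware; here core allocation is just a counter and the identifier queue is a deque, both assumptions standing in for the on-die circuitry:

```python
from collections import deque

class DynamicLoadBalancer:
    """Sketch of the described circuitry: packet identifiers are
    enqueued, and additional second cores are allocated to dequeue them
    whenever the first core's throughput fails to satisfy a threshold."""
    def __init__(self, throughput_threshold, max_workers):
        self.queue = deque()            # identifiers, one per data packet
        self.threshold = throughput_threshold
        self.max_workers = max_workers  # size of the second-core pool
        self.workers = 0                # second cores currently allocated

    def enqueue(self, packet_id):
        self.queue.append(packet_id)

    def rebalance(self, first_core_throughput):
        # Allocate one more second core while the first core lags.
        if first_core_throughput < self.threshold and self.workers < self.max_workers:
            self.workers += 1
        return self.workers

    def dequeue_batch(self):
        # Each allocated second core dequeues one identifier per round;
        # the corresponding packets then go to the data consumers.
        batch = []
        for _ in range(self.workers):
            if not self.queue:
                break
            batch.append(self.queue.popleft())
        return batch
```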

Multi-path packet descriptor delivery scheme

Examples describe use of multiple meta-data delivery schemes to provide tags that describe packets to an egress port group. A tag, smaller than the packet it describes, can be associated with a packet. The tag can be stored in a memory, as a group with other tags, and delivered to a queue associated with an egress port. Packets received at an ingress port can be processed as non-interleaved to reduce underrun and provide cut-through to an egress port. A shared memory can be allocated to store packets received at a single ingress port or shared to store packets from multiple ingress ports.
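The core idea, that only a small descriptor travels to the egress queue while the packet stays in shared memory, can be sketched as follows; the tag fields and function names are assumed for illustration:

```python
def make_tag(packet, buffer_index):
    # A tag is much smaller than the packet it describes: just the
    # payload length, the destination egress port, and the packet's
    # location in shared memory.
    return {"length": len(packet["payload"]),
            "egress_port": packet["egress_port"],
            "buffer_index": buffer_index}

def deliver_tags(packets, egress_queues, tag_store):
    # Packets stay put in shared memory; tags are stored as a group and
    # only the tags are delivered to per-egress-port queues.
    for i, pkt in enumerate(packets):
        tag = make_tag(pkt, i)
        tag_store.append(tag)  # tags grouped together in memory
        egress_queues.setdefault(pkt["egress_port"], []).append(tag)
```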

Selectively bypassing a routing queue in a routing device in a fifth generation (5G) or other next generation network

The technologies described herein are generally directed toward shedding processing loads associated with route updates. According to an embodiment, a system can comprise a processor and a memory that can enable operations including facilitating receiving a communication from a second routing device via a network. The operations can further comprise, in response to a queueing delay of a queue being determined to be less than a threshold, queueing, in the queue, the communication for a third routing device selected according to a first selection process as being on a route to a destination routing device for the communication. The operations can further comprise, in response to the queueing delay of the queue being determined to be equal to or above the threshold, transmitting the communication to a fourth routing device, the fourth routing device being selected according to a second selection process different from the first selection process.
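The queue-or-bypass decision reduces to a single threshold comparison; the function below is a sketch in which the two selection processes are passed in as callables, which is an assumption about structure, not the patent's interface:

```python
def route_communication(comm, queue, queueing_delay, threshold,
                        primary_next_hop, fallback_next_hop):
    """Sketch of the selective bypass: while queueing delay stays below
    the threshold, queue the communication for the normally selected
    next hop; otherwise skip the queue entirely and hand it to an
    alternate hop chosen by a different selection process."""
    if queueing_delay < threshold:
        queue.append((primary_next_hop(comm), comm))
        return "queued"
    fallback_next_hop(comm)  # second selection process, queue bypassed
    return "bypassed"
```

Bypassing the queue under load trades route optimality for bounded latency, which is the load-shedding behavior the abstract describes.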

TIME INTERLEAVER, TIME DEINTERLEAVER, TIME INTERLEAVING METHOD, AND TIME DEINTERLEAVING METHOD
20230216807 · 2023-07-06

A convolutional interleaver included in a time interleaver, which performs convolutional interleaving, includes: a first switch that switches a connection destination of an input of the convolutional interleaver to one end of one of a plurality of branches; FIFO memories provided in the plurality of branches except one branch, wherein the number of FIFO memories differs among the plurality of branches; and a second switch that switches a connection destination of an output of the convolutional interleaver to the other end of one of the plurality of branches. The first and second switches switch the connection destination when a plurality of cells equal in number to the codewords per frame have passed, by switching the corresponding branch of the connection destination sequentially and repeatedly among the plurality of branches.
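The branch structure described, one branch with no FIFO and the rest with increasing numbers of FIFO stages, is the classic convolutional interleaver. The sketch below implements that classic form; for brevity it rotates the switches on every cell, whereas the patent rotates them only after a frame's worth of cells has passed:

```python
from collections import deque

class ConvolutionalInterleaver:
    """Classic convolutional interleaver sketch: branch i carries i FIFO
    stages of `depth` cells each (branch 0 has none), and the input and
    output switches rotate together through the branches."""
    def __init__(self, branches, depth, fill=None):
        # Pre-fill each branch's FIFO so output is defined from the
        # first cell; `fill` stands in for the initial memory contents.
        self.fifos = [deque([fill] * (i * depth)) for i in range(branches)]
        self.branch = 0  # first and second switches move in lockstep

    def push(self, cell):
        fifo = self.fifos[self.branch]
        self.branch = (self.branch + 1) % len(self.fifos)
        fifo.append(cell)       # branch 0's FIFO is empty, so the cell
        return fifo.popleft()   # passes straight through; others delay
```

Cells on longer branches emerge later, so consecutive input cells are spread out in time; the matching deinterleaver uses the branch lengths in reverse order to realign them.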