H04L12/863

Rate Update Engine For Reliable Transport Protocol

A system includes a first processor configured to analyze data packets received over a communication protocol system and determine one or more congestion indicators from the analysis, the one or more congestion indicators being indicative of network congestion for data packets transmitted over a reliable transport protocol layer of the communication protocol system. The system also includes a rate update engine, separate from the packet datapath, configured to operate a second processor to receive the determined congestion indicators, determine one or more congestion control parameters for controlling transmission of data packets based on the received indicators, and output a congestion control result based on the determined parameters.
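
As a rough illustration of the decoupling the abstract describes, the sketch below separates rate computation from the packet datapath: indicators arrive through a queue, and an AIMD-style update produces the control parameters. The `RateUpdateEngine` name, the specific ECN/RTT indicators, and the AIMD rule are assumptions for illustration, not details from the patent.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class CongestionIndicators:
    ecn_marked: bool      # explicit congestion notification seen on received packets
    rtt_us: int           # measured round-trip time in microseconds

@dataclass
class CongestionControlResult:
    cwnd_packets: int     # congestion window, in packets
    pacing_rate_pps: int  # pacing rate, packets per second

class RateUpdateEngine:
    """Runs off the packet datapath: indicators arrive via a queue and
    results are returned to the datapath, so per-packet forwarding never
    blocks on rate computation."""

    def __init__(self, rtt_target_us: int = 200):
        self.cwnd = 10
        self.rtt_target_us = rtt_target_us

    def update(self, ind: CongestionIndicators) -> CongestionControlResult:
        # Illustrative AIMD-style rule: back off on ECN marks or high RTT,
        # otherwise probe for more bandwidth.
        if ind.ecn_marked or ind.rtt_us > self.rtt_target_us:
            self.cwnd = max(1, self.cwnd // 2)   # multiplicative decrease
        else:
            self.cwnd += 1                       # additive increase
        pacing = self.cwnd * 1_000_000 // max(ind.rtt_us, 1)
        return CongestionControlResult(self.cwnd, pacing)

# The datapath thread would enqueue indicators and read back results:
indicators: Queue = Queue()
engine = RateUpdateEngine()
indicators.put(CongestionIndicators(ecn_marked=False, rtt_us=150))
print(engine.update(indicators.get()))
```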

Communication apparatus, control method, and storage medium
11206220 · 2021-12-21

If a communication apparatus is to transmit data to another communication apparatus and communication via a communication unit included in the other communication apparatus is not performable, the communication apparatus selects, based on an amount of data accumulated in a transmission queue in which the data is stored, whether or not to transmit a frame for causing a transition to a state where communication via that communication unit is performable.
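
A minimal sketch of the selection rule, assuming a simple byte threshold on the transmission queue (the threshold value and function name are illustrative, not from the abstract):

```python
def should_send_wake_frame(tx_queue_bytes: int, threshold_bytes: int = 4096) -> bool:
    """Decide whether to transmit a frame that transitions the peer's
    communication unit into a communicable state.

    Illustrative rule: waking the peer has a cost, so it is only worthwhile
    once enough data has accumulated in the transmission queue; the
    threshold value is an assumption, not taken from the patent.
    """
    return tx_queue_bytes >= threshold_bytes

# Small amounts of queued data do not justify waking the peer:
print(should_send_wake_frame(512))    # False
print(should_send_wake_frame(8192))   # True
```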

Online task dispatching and scheduling system and method thereof

The present disclosure relates to an online task dispatching and scheduling system. The system includes an end device; an access point (AP) configured to receive a task from the end device; and one or more edge servers configured to receive the task from the AP, the one or more edge servers including a task waiting queue, a processing pool, a task completion queue, and a scheduler. The AP further includes a dispatcher utilizing Online Learning (OL) to determine a real-time state of network conditions and server loads, based on which the AP selects a target edge server from the one or more edge servers to which the task is to be dispatched. The scheduler utilizes Deep Reinforcement Learning (DRL) to generate a task scheduling policy for the one or more edge servers.
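
The abstract does not say which online-learning algorithm the dispatcher uses; the sketch below assumes an epsilon-greedy bandit with negative task latency as the reward, purely to illustrate how a dispatcher could track server performance online:

```python
import random

class OnlineLearningDispatcher:
    """Epsilon-greedy bandit over edge servers. This particular bandit
    formulation and its reward signal (negative task latency) are
    assumptions, not details from the patent."""

    def __init__(self, num_servers: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * num_servers
        self.avg_reward = [0.0] * num_servers

    def select_server(self) -> int:
        if random.random() < self.epsilon:             # explore
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)),            # exploit
                   key=lambda s: self.avg_reward[s])

    def report(self, server: int, latency_ms: float) -> None:
        # Incremental mean of the reward; lower latency => higher reward.
        self.counts[server] += 1
        reward = -latency_ms
        n = self.counts[server]
        self.avg_reward[server] += (reward - self.avg_reward[server]) / n

dispatcher = OnlineLearningDispatcher(num_servers=3)
s = dispatcher.select_server()
dispatcher.report(s, latency_ms=12.5)   # feedback after task completion
```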

In-vehicle network system
11201685 · 2021-12-14

An in-vehicle network system includes a first device and a second device configured to send messages to or receive messages from each other, and an intermediate node connected between the first device and the second device, the intermediate node being configured to output buffered messages in a sequence determined by a relative priority scheme. The first device includes a control unit configured to measure a communication delay for each of a plurality of messages of different priorities, set a delay representative value less than the maximum of the measured communication delays, and adjust the time that a time management unit manages based on the time that the first device manages, the time that the second device manages, and the delay representative value.
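
A hedged sketch of the two steps the control unit performs, assuming an 80%-of-maximum rule for the representative value and a simple offset-and-average time adjustment (both formulas are illustrative guesses, not taken from the patent):

```python
def delay_representative(delays_us: list[int], fraction: float = 0.8) -> int:
    """Pick a representative delay below the maximum of the measured
    per-priority delays. The 80%-of-max rule is an assumption; the abstract
    only requires a value less than the maximum."""
    return int(max(delays_us) * fraction)

def adjusted_time(local_time_us: int, peer_time_us: int, rep_delay_us: int) -> int:
    """Estimate the managed time by offsetting the peer's reported time with
    the representative delay and averaging with the local time (an assumed
    adjustment, for illustration only)."""
    return (local_time_us + peer_time_us + rep_delay_us) // 2

delays = [120, 340, 90, 210]          # measured delays per priority class
rep = delay_representative(delays)    # 272 us, below the 340 us maximum
print(adjusted_time(1_000_000, 999_500, rep))
```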

Packed ingress interface for network apparatuses
11201831 · 2021-12-14

Multiple ports of a network device are muxed together to form a single packed ingress interface into a buffer. A multiplexor alternates between the ports in alternating input clock cycles. Extra logic and wiring to provide a separate writer for each port are avoided, since the packed interface and buffer writers operate at higher speeds and/or have more bandwidth than the ports, and are thus able to handle incoming data for all of the ports coupled to the packed ingress interface. A packed ingress interface may also or instead support receiving data for multiple data units (e.g., multiple packets) from a single port in a single clock cycle, thereby reducing the potential to waste bandwidth at the end of data units. When data for two data units arrives together, the interface may send the ending segments of the first data unit to the buffer while holding back the starting segments of the second data unit in a cache. In an embodiment, a gear shift merges the first part of each subsequent portion of the second data unit with the cached data to form a full-sized data unit portion to send downstream, while the second part of the portion replaces the cached data. When the end of the second data unit is detected, any segments of the second data unit that remain after the merger process are sent downstream as a separate portion at the same time.
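
The gear-shift merging step can be sketched in software, with segment lists standing in for hardware buses; the class name, portion size, and flush behavior are assumptions for illustration:

```python
class GearShift:
    """Merges the cached starting segments of a data unit with each
    subsequent portion so that full-sized portions go downstream."""

    def __init__(self, portion_size: int):
        self.portion_size = portion_size
        self.cache: list[bytes] = []   # held-back starting segments

    def push(self, segments: list[bytes], end_of_unit: bool) -> list[list[bytes]]:
        out = []
        merged = self.cache + segments
        # Emit full-sized portions; keep any tail cached for the next cycle.
        while len(merged) >= self.portion_size:
            out.append(merged[:self.portion_size])
            merged = merged[self.portion_size:]
        self.cache = merged
        if end_of_unit and self.cache:
            out.append(self.cache)     # flush the remainder as its own portion
            self.cache = []
        return out

gs = GearShift(portion_size=4)
print(gs.push([b"s0", b"s1", b"s2"], end_of_unit=False))   # [] (all cached)
print(gs.push([b"s3", b"s4"], end_of_unit=True))           # full portion + remainder
```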

Systems and methods for intelligent throughput distribution amongst applications of a user equipment

A method of distributing throughput intelligently amongst a plurality of applications residing at a User Equipment (UE) is provided. The method includes receiving, at the UE, recommended bit rate (RBR) information from a network node, the RBR information indicating a throughput value allocated to the UE; allocating a codec rate from the allocated throughput value to at least one voice over internet protocol (VoIP) application from the plurality of applications; and allocating, from the remaining throughput value, a bit rate to each of a plurality of non-VoIP applications from the plurality of applications, based on a corresponding throughput requirement associated with each of the non-VoIP applications.
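
A minimal sketch of the allocation order, assuming the remainder is shared in proportion to each non-VoIP application's requirement (the abstract only says the split is based on the corresponding requirements):

```python
def distribute_throughput(rbr_kbps: int, voip_codec_kbps: int,
                          non_voip_demands_kbps: list[int]) -> tuple[int, list[int]]:
    """Allocate the network-recommended bit rate: the VoIP codec rate is
    carved out first, then the remainder is split across non-VoIP
    applications in proportion to their throughput requirements.
    Proportional sharing is an assumption for illustration."""
    voip = min(voip_codec_kbps, rbr_kbps)
    remaining = rbr_kbps - voip
    total_demand = sum(non_voip_demands_kbps)
    if total_demand == 0:
        return voip, [0] * len(non_voip_demands_kbps)
    shares = [remaining * d // total_demand for d in non_voip_demands_kbps]
    return voip, shares

voip, shares = distribute_throughput(rbr_kbps=10_000, voip_codec_kbps=64,
                                     non_voip_demands_kbps=[5000, 2000, 1000])
print(voip, shares)   # 64 [6210, 2484, 1242]
```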

Detecting and measuring microbursts in a networking device

Systems, methods, and computer-readable storage media for monitoring queue occupancy in a network buffer, detecting microbursts, and analyzing the same. An ASIC device can monitor a queue occupancy value of a network buffer, detect when the queue occupancy value exceeds a first predetermined threshold queue occupancy, and create a record with the time at which the queue occupancy value exceeds the first threshold and the queue occupancy value at that time. The device can further detect when the queue occupancy value falls below a second predetermined threshold queue occupancy, determine the maximum queue occupancy value between the time the first threshold was exceeded and the time the value fell below the second threshold, and add to the record the maximum queue occupancy value, the time of the maximum queue occupancy value, the time at which the queue occupancy value fell below the second threshold, and the queue occupancy value at that time.
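
The two-threshold hysteresis logic can be sketched as follows; the threshold values and the (time, occupancy) sampling interface are assumptions, and a real implementation would live in the ASIC rather than software:

```python
from dataclasses import dataclass

@dataclass
class MicroburstRecord:
    start_time: float
    start_occupancy: int
    max_occupancy: int = 0
    max_time: float = 0.0
    end_time: float = 0.0
    end_occupancy: int = 0

class MicroburstDetector:
    """Hysteresis detector: a burst opens when occupancy crosses the high
    threshold and closes when it falls below the low threshold."""

    def __init__(self, high: int, low: int):
        self.high, self.low = high, low
        self.current: MicroburstRecord | None = None
        self.records: list[MicroburstRecord] = []

    def sample(self, t: float, occupancy: int) -> None:
        if self.current is None:
            if occupancy > self.high:          # burst begins
                self.current = MicroburstRecord(t, occupancy, occupancy, t)
        else:
            if occupancy > self.current.max_occupancy:
                self.current.max_occupancy, self.current.max_time = occupancy, t
            if occupancy < self.low:           # burst ends
                self.current.end_time, self.current.end_occupancy = t, occupancy
                self.records.append(self.current)
                self.current = None

det = MicroburstDetector(high=8000, low=2000)
for t, occ in [(0.0, 100), (0.1, 9000), (0.2, 12000), (0.3, 1500)]:
    det.sample(t, occ)
print(det.records[0])
```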

Heterogeneous multi-protocol stack method, apparatus, and system

A heterogeneous multi-protocol stack system including a plurality of heterogeneous protocol stack instances is described. Resource allocation between the protocol stack instances is unbalanced and algorithms are independently configured, so that the QoS capabilities of different protocol stack instances differ. Data packets of applications or connections with different QoS requirements can be dispatched by a dispatcher to corresponding protocol stack instances at high speed. When system resources are limited, the heterogeneous multi-protocol stack system can simultaneously support classified, optimized processing of data from high-concurrency, high-throughput, and low-delay applications, so as to meet the QoS requirements of different application types and thereby improve user experience.
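
A toy sketch of the dispatch step, assuming three QoS classes and per-instance CPU shares as the unbalanced resource; all names here are illustrative:

```python
from enum import Enum

class QosClass(Enum):
    HIGH_CONCURRENCY = "high_concurrency"
    HIGH_THROUGHPUT = "high_throughput"
    LOW_DELAY = "low_delay"

class StackInstance:
    """Stand-in for a protocol stack instance with its own (unbalanced)
    resource allocation; the fields are assumptions."""
    def __init__(self, name: str, cpu_shares: int):
        self.name, self.cpu_shares = name, cpu_shares
        self.queue: list[bytes] = []

    def handle(self, packet: bytes) -> None:
        self.queue.append(packet)

class Dispatcher:
    """Routes each packet to the stack instance configured for its
    connection's QoS class, so differently tuned stacks serve different
    application types."""
    def __init__(self, routes: dict[QosClass, StackInstance]):
        self.routes = routes

    def dispatch(self, packet: bytes, qos: QosClass) -> None:
        self.routes[qos].handle(packet)

stacks = {
    QosClass.HIGH_CONCURRENCY: StackInstance("many-conns", cpu_shares=2),
    QosClass.HIGH_THROUGHPUT:  StackInstance("bulk",       cpu_shares=4),
    QosClass.LOW_DELAY:        StackInstance("latency",    cpu_shares=1),
}
d = Dispatcher(stacks)
d.dispatch(b"payload", QosClass.LOW_DELAY)
```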

Encapsulation of data packets

Example embodiments describe a transmitter including data encapsulation circuitry configured to encapsulate data packets into Data Transport Units, DTUs, for further transmission over a communication medium. The data packets have respective Quality of Service, QoS, tolerances. The data encapsulation circuitry is configured to delay transmission of first data packets with a lower QoS tolerance and to group the first data packets in a subset of DTUs available for transportation of the first data packets.
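
A hedged sketch of the grouping rule, assuming a boolean low-tolerance flag per packet and a DTU capacity measured in packets (both assumptions, since the abstract does not specify either):

```python
def encapsulate(packets: list[tuple[bytes, bool]], dtu_capacity: int) -> list[list[bytes]]:
    """Group packets into DTUs. Packets flagged low-tolerance are delayed
    and packed together into their own subset of DTUs, so a DTU carries
    only packets with similar QoS tolerance."""
    low = [p for p, low_tolerance in packets if low_tolerance]
    rest = [p for p, low_tolerance in packets if not low_tolerance]
    dtus = []
    for group in (rest, low):   # low-tolerance packets go in later, grouped DTUs
        for i in range(0, len(group), dtu_capacity):
            dtus.append(group[i:i + dtu_capacity])
    return dtus

pkts = [(b"a", False), (b"b", True), (b"c", False), (b"d", True), (b"e", True)]
print(encapsulate(pkts, dtu_capacity=2))
# [[b'a', b'c'], [b'b', b'd'], [b'e']]
```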

Network layer channel bonding

Implementations of the disclosure are directed to network layer channel bonding. In one implementation, a method comprises: operating a first communication device to transmit data to a second communication device over multiple communication links, each of the communication links associated with a respective communication medium; receiving, at the first communication device, an input data stream for transmission to the second communication device, the input data stream comprising packets; determining, at the first communication device, a throughput and latency of each of the communication links; based on the determined throughput and latency of each of the communication links: dividing the packets into multiple sets, each of the sets configured to be transmitted by the first communication device over a respective one of the communication links; and transmitting, from the first communication device to the second communication device, each of the sets of packets over the set's respective communication link.
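
One way to realize the division step, assuming links are weighted by throughput divided by latency (an assumption; the abstract only says the split is based on both measurements):

```python
def split_for_bonding(packets: list[bytes],
                      links: list[dict]) -> list[list[bytes]]:
    """Divide a packet stream across bonded network-layer links. Each link
    is scored by throughput/latency and packets are dealt out in proportion
    to that score; this particular weighting is illustrative."""
    scores = [link["throughput_mbps"] / max(link["latency_ms"], 0.001)
              for link in links]
    total = sum(scores)
    sets: list[list[bytes]] = [[] for _ in links]
    sent = [0] * len(links)
    for n, pkt in enumerate(packets, start=1):
        # Pick the link that is furthest behind its proportional quota,
        # so proportions hold as the stream grows.
        i = max(range(len(links)),
                key=lambda k: scores[k] / total * n - sent[k])
        sets[i].append(pkt)
        sent[i] += 1
    return sets

links = [{"throughput_mbps": 100, "latency_ms": 20},
         {"throughput_mbps": 50, "latency_ms": 5}]
pkts = [bytes([b]) for b in range(10)]
print([len(s) for s in split_for_bonding(pkts, links)])   # [3, 7]
```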