H04L47/562

BANDWIDTH ALLOCATION

An optical line terminal is disclosed. The optical line terminal comprises at least one processor; and at least one memory including machine-readable instructions. The at least one memory and the machine-readable instructions are configured to, with the at least one processor, cause the optical line terminal to determine, based on one or more variables, a relationship between bandwidth efficiency and latency for communication of contents of a queue buffer of an optical network unit with the optical line terminal via an optical distribution network, and determine a burst schedule for the queue buffer based on the determined relationship.
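
A minimal Python sketch of the trade-off this abstract describes (the model, constants, and function names are illustrative assumptions, not the patented method): a longer polling interval amortizes the fixed per-burst overhead, improving bandwidth efficiency, but increases worst-case queueing latency, so the burst schedule can pick the longest interval that still meets a latency bound.

```python
# Illustrative model: each upstream burst pays a fixed overhead
# (guard time, preamble), assumed here to be 2 microseconds.
BURST_OVERHEAD_US = 2.0

def efficiency(interval_us: float) -> float:
    """Fraction of the burst interval carrying payload rather than overhead."""
    return (interval_us - BURST_OVERHEAD_US) / interval_us

def pick_interval(max_latency_us: float, candidates) -> float:
    """Choose the longest candidate interval whose worst-case queueing
    latency (one full interval) stays within the bound, which maximizes
    efficiency under the latency constraint."""
    feasible = [c for c in candidates if c <= max_latency_us]
    return max(feasible)

interval = pick_interval(125.0, [31.25, 62.5, 125.0, 250.0])  # -> 125.0
# Longer bursts waste less time on overhead:
# efficiency(125.0) > efficiency(62.5)
```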

METHOD AND APPARATUS FOR SCHEDULING PACKETS FOR TRANSMISSION

A network device transfers packets from a packet memory to one or more network interfaces for transmission by the one or more network interfaces. The transferring of packets includes transferring the packets via one or more respective transmit data paths that correspond to one or more respective network interfaces. The network device measures one or more respective amounts of time required to transmit respective packet data within the one or more respective transmit data paths. The network device uses the one or more respective measured amounts of time to determine when to start transfer of packets from the packet memory to the one or more network interfaces via the one or more respective transmit data paths.
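
The timing rule above can be sketched in a few lines of Python (names and values are assumptions for illustration): the device starts reading a packet out of packet memory one measured data-path delay ahead of its scheduled transmission time, so the data reaches the network interface exactly when it is due to leave.

```python
def transfer_start_time(scheduled_tx_ns: int, measured_path_delay_ns: int) -> int:
    """Begin the packet-memory transfer one measured transmit-data-path
    delay before the packet's scheduled transmission time."""
    return scheduled_tx_ns - measured_path_delay_ns

# Path measured at 480 ns; packet scheduled to leave at t = 10_000 ns,
# so the transfer must start at t = 9_520 ns.
start = transfer_start_time(10_000, 480)  # -> 9520
```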

TDMA networking using commodity NIC/switch

A network element includes one or more network ports, network time circuitry, and packet processing circuitry. The network ports are configured to communicate with a communication network. The network time circuitry is configured to track a network time defined in the communication network. In some embodiments the packet processing circuitry is configured to receive a definition of one or more timeslots that are synchronized to the network time, and to send outbound packets to the communication network depending on the timeslots. In some embodiments the packet processing circuitry is configured to process inbound packets, which are received from the communication network, depending on the timeslots.
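
Timeslot-gated transmission can be sketched as follows (the cycle length and slot assignments are assumed example values, not from the patent): an outbound packet may be sent only while the tracked network time falls inside one of the node's assigned slots within a repeating TDMA cycle.

```python
CYCLE_US = 1000                 # TDMA cycle length (assumed)
SLOTS = [(0, 100), (500, 600)]  # (start, end) offsets assigned to this node

def may_transmit(network_time_us: int) -> bool:
    """True when the current network time lies in an assigned timeslot."""
    offset = network_time_us % CYCLE_US
    return any(start <= offset < end for start, end in SLOTS)

may_transmit(1550)  # offset 550 falls in the second slot -> True
may_transmit(1250)  # offset 250 falls in no slot -> False
```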

WINDOW-BASED CONGESTION CONTROL
20230123387 · 2023-04-20 ·

Examples described herein relate to a network interface device that includes circuitry to cause transmission of a packet following transmission of one or more data packets to a receiver, wherein the packet comprises one or more of: a count of transmitted data, a timestamp of transmission of the packet, and/or an index value to one or more of a count of transmitted data and a timestamp of transmission of the packet. In some examples, the network interface device includes circuitry to receive, from the receiver, a second packet that includes a copy of the count of transmitted data and the timestamp of transmission of the packet or the index from the packet. In some examples, the network interface device includes circuitry to perform congestion control based on the received copy of the count of transmitted data and the timestamp of transmission of the packet.
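
A hedged sketch of how the echoed count and timestamp can feed congestion control (the adjustment rule below is a generic delay-based window update for illustration, not the patented algorithm): the sender stamps a probe packet with (bytes sent, timestamp); the receiver echoes a copy back, and the sender derives an RTT sample and a delivered-byte count from it without keeping per-packet state at the receiver.

```python
def on_echo(now_us, echoed_ts_us, echoed_count, prev_count,
            window, target_rtt_us=500):
    """Update the congestion window from an echoed (count, timestamp) pair.
    Generic rule: halve the window when the RTT sample exceeds the target,
    otherwise grow it by the bytes delivered since the last echo."""
    rtt_us = now_us - echoed_ts_us          # RTT sample from echoed timestamp
    delivered = echoed_count - prev_count   # bytes covered since last echo
    if rtt_us > target_rtt_us:
        window = max(1, window // 2)
    else:
        window += delivered
    return rtt_us, window

rtt, win = on_echo(now_us=1700, echoed_ts_us=1000,
                   echoed_count=3000, prev_count=2000, window=8)
# rtt == 700 exceeds the 500 us target, so the window halves to 4
```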

Delay-based tagging in a network switch

A network device organizes packets into various queues, in which the packets await processing. Queue management logic tracks how long certain packet(s), such as a designated marker packet, remain in a queue. Based thereon, the logic produces a measure of delay for the queue, referred to herein as the “queue delay.” Based on a comparison of the current queue delay to one or more thresholds, various associated delay-based actions may be performed, such as tagging and/or dropping packets departing from the queue, or preventing additional enqueues to the queue. In an embodiment, a queue may be expired based on the queue delay, and all of its packets dropped. In other embodiments, when a packet is dropped prior to enqueue into an assigned queue, copies of some or all of the packets already within the queue at the time the packet was dropped may be forwarded to a visibility component for analysis.
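
The marker-based mechanism can be sketched as below (the thresholds and action names are assumed example values): the age of the tracked marker packet approximates how long packets wait in the queue, and crossing a threshold selects a delay-based action for departing packets.

```python
# Assumed thresholds for the two delay-based actions.
TAG_THRESHOLD_US = 100
DROP_THRESHOLD_US = 500

def queue_action(now_us: int, marker_enqueue_ts_us: int) -> str:
    """Return the delay-based action for a departing packet, based on the
    queue delay estimated from the marker packet's residence time."""
    queue_delay = now_us - marker_enqueue_ts_us
    if queue_delay >= DROP_THRESHOLD_US:
        return "drop"      # e.g. expire the queue and drop its packets
    if queue_delay >= TAG_THRESHOLD_US:
        return "tag"       # mark the departing packet as delayed
    return "forward"

queue_action(1000, 850)  # queue delay 150 us -> "tag"
```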

APPARATUS, METHOD AND COMPUTER PROGRAM
20220321485 · 2022-10-06 ·

An apparatus (113) comprising means for performing: receiving one or more forwarding tables from a centralized network configuration entity (101) of a time sensitive network, wherein the forwarding tables comprise entry information; and determining, based at least in part on the one or more forwarding tables, rules for mapping at least one uplink data stream of the time sensitive network to at least one of: a protocol data unit session (135, 137) and a quality of service flow (129, 131, 133).
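
A minimal sketch of deriving such mapping rules (the table fields, priority policy, and identifiers are assumptions for illustration, not from the patent): each uplink stream entry in a forwarding table received from the centralized network configuration entity is mapped to a PDU session and a QoS flow.

```python
# Assumed shape of the entry information in a received forwarding table.
forwarding_table = [
    {"stream_id": "tsn-1", "priority": 7},
    {"stream_id": "tsn-2", "priority": 5},
]

def derive_rules(entries):
    """Map each uplink stream to a (PDU session, QoS flow) pair.
    Illustrative policy: high-priority streams use the low-latency flow."""
    rules = {}
    for e in entries:
        qos_flow = "qfi-low-latency" if e["priority"] >= 6 else "qfi-best-effort"
        rules[e["stream_id"]] = ("pdu-session-1", qos_flow)
    return rules

derive_rules(forwarding_table)["tsn-1"]
# -> ('pdu-session-1', 'qfi-low-latency')
```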

PACKET BUFFERING METHOD, INTEGRATED CIRCUIT SYSTEM, AND STORAGE MEDIUM
20220321492 · 2022-10-06 ·

This application relates to the field of data communication, and in particular, to a packet buffering method, an integrated circuit system, and a storage medium. The method can improve utilization of the on-chip buffer. The packet buffering method may be applied to a network device. The network device includes a first storage medium and a second storage medium. The first storage medium is a local buffer, and the second storage medium is an external buffer. The method may include: receiving a first packet, and identifying a queue number of the first packet, where the queue number indicates a queue for storing the first packet; querying a queue latency based on the queue number; determining a first latency threshold based on usage of the first storage medium; and buffering the first packet in the first storage medium or the second storage medium based on the queue latency and the first latency threshold.
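
The buffering decision can be sketched as follows (the threshold mapping and values are assumptions for illustration): latency-sensitive queues stay in the fast on-chip buffer, and as on-chip usage grows the latency threshold tightens, pushing more traffic to the external buffer.

```python
def latency_threshold_us(onchip_usage: float) -> float:
    """Tighter latency threshold as on-chip buffer usage grows
    (assumed mapping for illustration)."""
    if onchip_usage < 0.5:
        return 100.0
    if onchip_usage < 0.8:
        return 20.0
    return 5.0

def select_buffer(queue_latency_us: float, onchip_usage: float) -> str:
    """Buffer the packet on-chip only when its queue's latency is within
    the usage-dependent threshold; otherwise use the external buffer."""
    if queue_latency_us <= latency_threshold_us(onchip_usage):
        return "on-chip"
    return "external"

select_buffer(30.0, 0.4)  # threshold 100 us -> "on-chip"
select_buffer(30.0, 0.7)  # threshold  20 us -> "external"
```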

Flowlet scheduler for multicore network processors
11683119 · 2023-06-20 ·

Systems and methods of using a packet order work (POW) scheduler to assign packets to a set of scheduler queues for supplying packets to parallel processing units. A processing unit and the associated scheduler queue are dedicated to a specific flow until a queue-reallocation event, which may correspond to the associated scheduler queue being idle for at least a certain interval as indicated by its age counter, or the queue being the least recently used, when a new flow arrives. In this case, the scheduler queue and the associated processing unit may be reallocated to the new flow and disassociated from the previous flow. As a result, dynamic packet workload balancing can be advantageously achieved across the multiple processing paths.
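
A simplified sketch of the reallocation policy (only the least-recently-used rule, with illustrative names and no age-counter idle check): each scheduler queue stays pinned to a flow until a new flow arrives and claims the least recently used queue.

```python
class FlowletScheduler:
    def __init__(self, n_queues: int):
        self.flow_of = [None] * n_queues  # flow pinned to each queue
        self.last_used = [0] * n_queues   # last-use time per queue

    def assign(self, flow_id, now: int) -> int:
        """Return the queue for this flow; for a new flow, reallocate the
        least recently used queue and dedicate it to the new flow."""
        if flow_id in self.flow_of:
            q = self.flow_of.index(flow_id)
        else:
            q = min(range(len(self.last_used)), key=self.last_used.__getitem__)
            self.flow_of[q] = flow_id     # reallocate LRU queue to new flow
        self.last_used[q] = now
        return q

s = FlowletScheduler(2)
s.assign("A", 1)  # -> queue 0
s.assign("B", 2)  # -> queue 1
s.assign("A", 3)  # -> queue 0 again (queue stays dedicated to "A")
s.assign("C", 4)  # -> queue 1 (LRU queue reallocated from "B")
```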

System and method for equalizing transmission delay in a network

A network device includes an antenna connected to an RF chip and a processor coupled to an Ethernet port, the RF chip, a program memory, a packet buffer memory, and a pointer buffer memory. The program memory contains instructions that, when executed by the processor, cause a plurality of packets received by the antenna and the RF chip in a first order to be stored in the packet buffer memory in that order, cause a pointer associated with each one of the plurality of packets to be stored in the pointer buffer memory, cause the pointers stored in the pointer buffer memory to be placed in a second order in accordance with a timestamp that is included with each packet, and cause the packets stored in the packet buffer memory to be passed along to the Ethernet port in accordance with the sorted pointers.
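
Pointer-based reordering can be sketched as follows (data structures are illustrative): packets stay where they landed in the packet buffer memory, and only the small pointer records are sorted by the per-packet timestamp before egress.

```python
# Packets stored in arrival (first) order: (timestamp, payload).
packet_buffer = [(30, b"late"), (10, b"first"), (20, b"middle")]

# One pointer (index into packet_buffer) per packet, in arrival order.
pointers = [0, 1, 2]

# Place the pointers in timestamp (second) order without moving payloads.
pointers.sort(key=lambda i: packet_buffer[i][0])

# Pass packets to the Ethernet port in accordance with the sorted pointers.
egress = [packet_buffer[i][1] for i in pointers]
# egress == [b"first", b"middle", b"late"]
```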

WIRELESS COMMUNICATION SYSTEMS COEXISTENCE
20170331758 · 2017-11-16 ·

One or more wireless communication systems may coexist at about the same geographical location and be configured to access the same radio channel. The coexistence of these systems may cause collisions and degrade throughput. Although countermeasures, such as reserving airtime with a resource assignment, may be taken to avoid collisions, these countermeasures degrade throughput as well. The reserved airtime is reduced by adding a start time to the resource assignment. The reduced airtime may be used by others for performing a transmission, thus increasing the throughput and efficiency of the access to the radio channel.
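
The airtime saving can be illustrated with a small arithmetic sketch (all durations are assumed example values): adding a start time to the resource assignment lets the reservation cover only the transmission itself rather than the whole assignment window, freeing the remainder for other transmitters.

```python
assignment_window_us = 4000  # window announced in the resource assignment
tx_duration_us = 1500        # actual transmission length

# Without a start time, the full assignment window must be reserved.
reserved_without_start = assignment_window_us

# With a start time, only the transmission itself is reserved.
reserved_with_start = tx_duration_us

# Airtime freed for use by other systems on the shared channel.
freed_airtime_us = reserved_without_start - reserved_with_start  # 2500 us
```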