H04L47/50

Systems and methods for providing lockless bimodal queues for selective packet capture

In a network system, an application receiving packets can consume one or more packets in two or more stages, where the second and later stages can selectively consume some but not all of the packets consumed by the preceding stage. Packets are transferred between two consecutive stages, called the producer and the consumer, via a fixed-size storage. Both the producer and the consumer can access the storage without locking it and, to facilitate selective consumption of the packets by the consumer, the consumer can transition between awake and sleep modes, where packets are consumed in the awake mode only. The producer may also switch between awake and sleep modes. Lockless access is made possible because both the producer and the consumer control their operation on the storage according to the consumer's mode, which is communicated via a shared memory location.
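The mechanism above can be sketched as a single-producer/single-consumer ring buffer whose behavior on both sides is gated by a shared consumer-mode flag. All class and variable names below are illustrative assumptions, not taken from the patent; atomicity of the index updates is assumed rather than enforced.

```python
# Hypothetical sketch of a fixed-size lockless queue gated by a shared
# consumer mode. One producer thread and one consumer thread are assumed.

AWAKE, ASLEEP = 0, 1

class BimodalQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0          # advanced only by the consumer
        self.tail = 0          # advanced only by the producer
        self.mode = AWAKE      # shared memory location read by both sides

    def produce(self, pkt):
        # Packets are consumed in the awake mode only, so while the
        # consumer sleeps the producer skips (here: drops) the packet
        # instead of filling the ring.
        if self.mode == ASLEEP:
            return False
        nxt = (self.tail + 1) % self.capacity
        if nxt == self.head:   # ring full
            return False
        self.buf[self.tail] = pkt
        self.tail = nxt        # single index update, no lock taken
        return True

    def consume(self):
        if self.mode == ASLEEP or self.head == self.tail:
            return None
        pkt = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        return pkt
```

Because each index is written by exactly one side, and both sides honor the same mode flag, neither side ever needs to lock the storage.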

Reorder resilient transport

Devices and techniques for reorder resilient transport are described herein. A device may store data packets in sequential positions of a flow queue in the order in which the data packets were received. The device may retrieve a first data packet from a first sequential position and a second data packet from a second sequential position that is next in sequence to the first sequential position in the flow queue. The device may store the first data packet and the second data packet in a buffer and refrain from providing the first data packet and the second data packet to upper layer circuitry if the packet order information for the first data packet and the second data packet indicates that the first data packet and the second data packet were received out of order. Other embodiments are also described.
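The decision in the abstract can be illustrated as follows: two packets taken from consecutive flow-queue positions are passed to the upper layer only when their order information shows in-order arrival, and are otherwise held back in a buffer. The function and field names are assumptions made for the sketch.

```python
# Illustrative sketch: packets carry sequence numbers ("seq"); a pair
# read from consecutive flow-queue positions is delivered upward only
# if the pair arrived in order, otherwise it is held in a buffer.

def process_pair(flow_queue, upper_layer, hold_buffer):
    first = flow_queue.pop(0)
    second = flow_queue.pop(0)
    if first["seq"] < second["seq"]:
        upper_layer.extend([first, second])   # in order: provide upward
    else:
        hold_buffer.extend([first, second])   # out of order: refrain
```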

Methods and apparatuses for packet scheduling for software-defined networking in edge computing environment

Provided are methods and apparatuses for packet scheduling for software-defined networking in an edge computing environment. A packet scheduling method according to an exemplary embodiment of the present disclosure comprises: receiving packets arriving at a queue connected to a switch in a software-defined network in an edge computing environment; moving the packets in the queue forward one position based on the order of arrival each time a packet is served by the switch; and if a new packet enters the switch while the buffer in the queue is full, pushing out the packet at the front and putting the new packet at the end of the queue.
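The queue discipline described above is a push-out FIFO: service removes the front packet and shifts the rest forward, while an arrival to a full buffer evicts the front packet rather than the newcomer. A minimal sketch, with illustrative names:

```python
# Minimal sketch of the described push-out discipline for a fixed-size
# buffer. Index 0 is the front of the queue (next to be served).

class PushOutQueue:
    def __init__(self, size):
        self.size = size
        self.slots = []

    def arrive(self, pkt):
        if len(self.slots) == self.size:
            self.slots.pop(0)          # buffer full: push out the front packet
        self.slots.append(pkt)         # new packet joins at the end

    def serve(self):
        # Serving the front packet moves every later packet forward one position.
        return self.slots.pop(0) if self.slots else None
```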

Leasing prioritized items in namespace indices

A method, system, and computer program product for implementing indexes in a dispersed storage network (dsNet) are provided. The method accesses a work queue containing a set of work items as a set of key-value pairs. The key-value pairs are tuples including a work identifier and a work lease timestamp. The method selects a first work identifier and a first lease timestamp for a new work item. The set of work items and the new work item are ordered according to a priority scheme to generate a modified work queue. Based on the modified work queue, the method transmits a work request to a plurality of data source units. The work request includes a hash parameter and a bit parameter. The hash parameter is associated with a key-value pair of the modified work queue. The bit parameter indicates a number of bits of the hash parameter to consider.
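The flow can be sketched as follows: work items are (work identifier, lease timestamp) tuples, reordered under a priority scheme, and the resulting request carries a hash parameter and a bit parameter. The specific priority scheme (earliest lease first) and all names here are assumptions for illustration only.

```python
# Hedged sketch: work items as (work_id, lease_timestamp) key-value
# tuples. The priority scheme assumed here orders items by oldest
# lease timestamp first; the patent does not specify the scheme.

def enqueue_work(work_queue, work_id, lease_ts):
    items = work_queue + [(work_id, lease_ts)]
    return sorted(items, key=lambda item: item[1])

def build_request(modified_queue, num_bits):
    work_id, lease_ts = modified_queue[0]
    return {
        "hash": hash((work_id, lease_ts)),  # hash over a key-value pair
        "bits": num_bits,                   # how many hash bits to consider
    }
```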

COMMUNICATION EQUIPMENT, COMMUNICATION METHODS AND PROGRAMS

An object is to provide a communication apparatus, a communication method, and a program capable of avoiding both an increase in network load when input traffic remains large and a communication delay when input traffic is very small. A communication apparatus according to the present invention prepares three token buckets and can transfer, discard, or hold a packet in accordance with the amount of tokens in each token bucket. This enables the communication apparatus to operate so as not to exceed a set maximum bandwidth when large traffic is received, for delay-guarantee shaping. Further, when the maximum bandwidth is exceeded, the communication apparatus can select whether to discard a packet to prioritize the delay guarantee or to hold a packet to prioritize no loss of packets. Furthermore, the communication apparatus can immediately transmit a packet, without increasing communication delay, when input traffic is very small.
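The three-outcome decision can be sketched as a function of the token levels. The bucket roles, their names, and the decision order below are assumptions made for illustration; the patent only states that the outcome depends on the token amount in each of three buckets.

```python
# Illustrative three-bucket decision: transfer immediately when the
# guarantee bucket has tokens (small traffic sent without delay);
# when the maximum-bandwidth bucket is exhausted, choose between
# holding (no-loss priority) and discarding (delay priority).

def decide(pkt_len, guarantee_bucket, max_bw_bucket, hold_bucket):
    if guarantee_bucket >= pkt_len:
        return "transfer"              # within guaranteed bandwidth
    if max_bw_bucket < pkt_len:
        # Maximum bandwidth exceeded: pick one of the two policies.
        return "hold" if hold_bucket >= pkt_len else "discard"
    return "transfer"                  # shaping still permits sending
```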

BANDWIDTH ALLOCATION

An optical line terminal is disclosed. The optical line terminal comprises at least one processor and at least one memory including machine-readable instructions. The at least one memory and the machine-readable instructions are configured to, with the at least one processor, cause the optical line terminal to determine, based on one or more variables, a relationship between bandwidth efficiency and latency for communication of the contents of a queue buffer of an optical network unit with the optical line terminal via an optical distribution network, and to determine a burst schedule for the queue buffer based on the determined relationship.
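One way to picture the efficiency/latency relationship is a simple model in which each upstream burst carries a fixed per-burst overhead: larger bursts amortize the overhead (better bandwidth efficiency) but make queued data wait longer (worse latency). The overhead model and parameters below are assumptions, not the patent's method.

```python
# Toy model of the burst-size trade-off: per-burst overhead is fixed,
# so efficiency rises and serialization delay grows with burst size.

def burst_tradeoff(burst_bytes, overhead_bytes, line_rate_bps):
    efficiency = burst_bytes / (burst_bytes + overhead_bytes)
    latency_s = (burst_bytes + overhead_bytes) * 8 / line_rate_bps
    return efficiency, latency_s
```

A burst schedule would then pick the burst size whose (efficiency, latency) point best fits the traffic's requirements.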

METHOD AND APPARATUS FOR SCHEDULING PACKETS FOR TRANSMISSION

A network device transfers packets from a packet memory to one or more network interfaces for transmission by the one or more network interfaces. The transferring of packets includes transferring the packets via one or more respective transmit data paths that correspond to one or more respective network interfaces. The network device measures one or more respective amounts of time required to transmit respective packet data within the one or more respective transmit data paths. The network device uses the one or more respective measured amounts of time to determine when to start transfer of packets from the packet memory to the one or more network interfaces via the one or more respective transmit data paths.
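The timing use described above can be sketched as a start-time calculation: given a measured per-path transmit delay, the device begins fetching the next packet from packet memory early enough to keep the interface busy. The function name and timing model are illustrative assumptions.

```python
# Hedged sketch: start the memory-to-interface transfer so the packet
# reaches the interface just as the previous transmission completes,
# using the measured transmit-data-path delay for that interface.

def schedule_fetch(tx_done_time, measured_path_delay, fetch_latency):
    start = tx_done_time - measured_path_delay - fetch_latency
    return max(start, 0.0)  # never schedule in the past
```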
