H04L47/62

SERVER DELAY CONTROL SYSTEM, SERVER DELAY CONTROL DEVICE, SERVER DELAY CONTROL METHOD, AND, PROGRAM
20230028832 · 2023-01-26 ·

Provided is a server delay control system for performing, on a server including a Host OS, packet transfer between a physical NIC connected to the Host OS and an application deployed in a user space. A server delay control device configured to perform polling for packet transfer on behalf of the application is deployed in the user space. The server delay control device creates, between the application and the physical NIC, a communication path for socket communication. The communication path includes a first queue and a second queue. The server delay control device includes: a packet dequeuer configured to poll whether a packet has been enqueued into the first queue and to dequeue the enqueued packet from the first queue; and a packet enqueuer configured to enqueue the dequeued packet into the second queue in the same context as the polling and dequeuing, without causing a context switch.
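The dequeue-then-enqueue step described above can be sketched as a busy-poll loop that moves packets between two queues within a single call frame. This is an illustrative sketch, not the patented implementation; the function and parameter names (`poll_and_transfer`, `budget`) are assumptions.

```python
from collections import deque

def poll_and_transfer(rx_queue: deque, tx_queue: deque, budget: int = 32) -> int:
    """Busy-poll rx_queue and move up to `budget` packets to tx_queue.

    The dequeue and the enqueue happen in the same execution context
    (the same call frame), so no context switch or thread wakeup is
    needed between them. Returns the number of packets moved.
    """
    moved = 0
    while moved < budget and rx_queue:
        packet = rx_queue.popleft()   # packet dequeuer: poll + dequeue
        tx_queue.append(packet)       # packet enqueuer: same context
        moved += 1
    return moved
```

The `budget` cap mirrors how polling drivers bound the work done per iteration so one busy queue cannot starve the loop.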

Multidrop network system

A multidrop network system includes N network devices. The N network devices include M transmission-permissible devices including a master device and at least one slave device, wherein M is not greater than N. Each transmission-permissible device has at least one identification code as its identification in the multidrop network system, and the M transmission-permissible devices have at least N identification codes. The M transmission-permissible devices obtain transmission opportunities in turn according to their respective identification codes in each round of data transmission. A Kth device among the M transmission-permissible devices has multiple identification codes, and thus obtains multiple transmission opportunities in one round of data transmission. Each of the M transmission-permissible devices performs a count operation and generates a current count value; and when the current count value is the same as the identification code of a device of the M transmission-permissible devices, this device earns one transmission opportunity.
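The count-based arbitration described above can be sketched as follows: every device counts in lockstep, and a device transmits whenever the shared count matches one of its identification codes, so a device holding multiple codes earns multiple opportunities per round. This is a minimal illustration under assumed names (`transmission_order`, `id_map`), not the patented protocol.

```python
def transmission_order(id_map: dict[str, set[int]], max_count: int) -> list[str]:
    """One round of data transmission in a multidrop network.

    id_map maps each transmission-permissible device to its set of
    identification codes. All devices count 0..max_count-1 in lockstep;
    when the current count value equals one of a device's codes, that
    device earns one transmission opportunity.
    """
    order = []
    for count in range(max_count):
        for device, codes in id_map.items():
            if count in codes:
                order.append(device)
    return order
```

A device with two codes (here `slaveA`) appears twice in one round, which is how the scheme grants extra bandwidth without a central scheduler.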

DYNAMIC LOAD BALANCING FOR MULTI-CORE COMPUTING ENVIRONMENTS

Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
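The allocation decision described above can be sketched in software: packet identifiers sit in a queue, and when the first core's throughput falls below a threshold, spare (second) cores are allocated to dequeue them. A minimal sketch with assumed names (`balance`, `spare_cores`); the patent describes dedicated circuitry on the processor die, not Python.

```python
from collections import deque

def balance(ids: deque, first_core_throughput: float,
            threshold: float, spare_cores: int) -> dict[int, list]:
    """Assign queued packet identifiers to cores.

    While the first core (core 0) satisfies the throughput threshold it
    handles every identifier itself; otherwise the second cores are
    allocated and identifiers are dequeued round-robin across them.
    """
    if first_core_throughput >= threshold:
        workers = [0]                              # first core only
    else:
        workers = list(range(1, spare_cores + 1))  # allocate second cores
    assignment = {w: [] for w in workers}
    i = 0
    while ids:
        assignment[workers[i % len(workers)]].append(ids.popleft())
        i += 1
    return assignment
```

Keeping the decision off the worker cores (here a separate function; in the patent, separate circuitry) means the data path never blocks on the balancing logic.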

Messaging system of partial and out-of-order events

Methods, systems, and computer readable medium are provided for receiving an event message in a plurality of event messages, the event message comprising a sequence number and associated data, identifying the event message as an out-of-order event message based on the sequence number, assigning a priority level to the out-of-order event message based on a plurality of priority rules, and placing the out-of-order event message in a primary queue of messages based on the priority level assigned to the event message.
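The placement step above can be sketched with a priority heap: an event whose sequence number differs from the expected one is identified as out-of-order and deprioritized before entering the primary queue. The single distance-based rule below is a stand-in for the patent's "plurality of priority rules"; the names (`place_event`, `next_expected`) are assumptions.

```python
import heapq

def place_event(primary_queue: list, next_expected: int, seq: int, data) -> int:
    """Identify and place one event message in the primary queue.

    An event is in-order when its sequence number equals the expected
    one and gets priority 0; an out-of-order event is deprioritized in
    proportion to how far ahead of the expected number it arrived.
    Lower priority values are dequeued first. Returns the priority.
    """
    priority = 0 if seq == next_expected else seq - next_expected
    heapq.heappush(primary_queue, (priority, seq, data))
    return priority
```

Using one heap as the primary queue keeps in-order and out-of-order events in a single structure while still serving the in-order ones first.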

Increasing QoS throughput and efficiency through lazy byte batching

Described embodiments improve the performance of a computer network by selectively forwarding packets to bypass quality of service (QoS) processing, avoiding processing delays during critical periods of high demand; throughput and efficiency are increased by sacrificing a small amount of QoS accuracy. QoS processing may be applied to a subset of packets of a flow or connection, referred to herein as "lazy" processing or lazy byte batching. Packets that bypass QoS processing may be immediately forwarded with the same QoS settings as packets of the flow for which QoS processing is applied, resulting in substantial overhead savings with only a minimal decline in accuracy.
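The batching idea can be sketched as follows: bytes accumulate per flow, and only the packet that crosses a batch threshold goes through full QoS processing, while the packets in between bypass it and inherit the flow's existing settings. An illustrative sketch, not the described embodiment; the names (`lazy_qos`, `batch_bytes`) are assumptions.

```python
def lazy_qos(packet_sizes: list[int], batch_bytes: int) -> list[bool]:
    """Return, per packet, whether full QoS processing is applied.

    Bytes are accumulated lazily; the packet that pushes the running
    total past `batch_bytes` is QoS-processed (and the counter resets),
    while every other packet bypasses QoS and is forwarded immediately
    with the flow's last-applied QoS settings.
    """
    decisions, accumulated = [], 0
    for size in packet_sizes:
        accumulated += size
        if accumulated >= batch_bytes:
            decisions.append(True)    # QoS-process this packet
            accumulated = 0
        else:
            decisions.append(False)   # bypass: forward immediately
    return decisions
```

Tuning `batch_bytes` trades accuracy for overhead: a larger batch means fewer packets pay the QoS processing cost.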

Marking packets based on egress rate to indicate congestion

A network device includes a rate measurement circuit that is configured to measure respective egress rates at which respective data is being transmitted via respective ports associated with the network device. A marking ratio determination circuit is configured to select respective marking ratios based on respective measured egress rates, the marking ratios for marking packets to be transmitted via the respective ports to indicate respective levels of congestion corresponding to the respective ports. Different marking ratios correspond to different measured egress rates. A packet editor circuit is configured to mark selected packets to be transmitted via respective ports according to the respective selected marking ratios. The respective selected marking ratios indicate to other communication devices that respective network paths via which the selected packets travelled experienced congestion, and the respective marking ratios indicate respective levels of congestion.
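The mapping from measured egress rate to marking ratio can be sketched as a threshold table plus a deterministic sampler: the closer a port runs to line rate, the larger the fraction of its packets gets marked. The thresholds and ratios below are illustrative assumptions, not values from the patent.

```python
def marking_ratio(egress_rate_gbps: float) -> float:
    """Select a marking ratio from a measured egress rate on a 10G port.

    Different measured egress rates map to different marking ratios;
    higher rates signal heavier congestion, so more packets are marked.
    Thresholds are illustrative, not from the patent.
    """
    table = [(9.5, 0.50), (8.0, 0.10), (6.0, 0.01)]
    for threshold, ratio in table:
        if egress_rate_gbps >= threshold:
            return ratio
    return 0.0  # lightly loaded port: mark nothing

def should_mark(packet_index: int, ratio: float) -> bool:
    """Mark roughly `ratio` of packets by marking every (1/ratio)-th one."""
    return ratio > 0 and packet_index % round(1 / ratio) == 0
```

Receivers that see a higher density of marked packets can infer a higher level of congestion on the path, which is the signal the marking ratios encode.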

Prioritized MSRP transmissions to reduce traffic interruptions
11706278 · 2023-07-18 ·

This technology enables prioritization of Multiple Stream Reservation Protocol ("MSRP") transmissions in Audio Video Bridging ("AVB") virtual local area networks ("VLANs"). An AVB switch receives a status from listener devices, associates a state with each of the statuses indicating whether each listener device is active or inactive, and stores each state in a database. For each listener device, a queue of MSRP protocol data unit ("PDU") packets exists to be transmitted to the listener device. The AVB switch searches the database for listener devices with an active state, searches the queue for each active listener device for packets associated with an active state, and transmits the packets associated with the active state to each active listener device. Subsequently, the AVB switch searches each listener device's queue for packets associated with an inactive state and transmits those packets to each listener device.
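The two-pass ordering described above can be sketched as follows: active-state PDUs bound for active listeners go out first, and inactive-state PDUs follow. A simplified illustration, not an MSRP implementation; the names (`prioritized_transmission`, the per-queue `(is_active_state, pdu)` tuples) are assumptions.

```python
def prioritized_transmission(states: dict[str, bool],
                             queues: dict[str, list[tuple[bool, bytes]]]) -> list:
    """Order MSRP PDU transmissions in two passes.

    `states` maps listener -> active flag (the switch's database);
    `queues` maps listener -> list of (is_active_state, pdu) entries.
    Pass 1 sends active-state PDUs to active listeners; pass 2 sends
    the remaining inactive-state PDUs to every listener.
    """
    order = []
    for listener, active in states.items():   # pass 1: active listeners first
        if active:
            order += [(listener, pdu) for a, pdu in queues[listener] if a]
    for listener in states:                   # pass 2: inactive-state PDUs
        order += [(listener, pdu) for a, pdu in queues[listener] if not a]
    return order
```

Sending active-state traffic first is what reduces interruptions: listeners currently consuming a stream are served before bookkeeping traffic for inactive ones.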

Methods and systems for queue and pipeline latency metrology in network devices and smart NICs

Inbound packets can be received by a network device that determines a receive pipeline latency metric based on a plurality of receive pipeline residency times of the inbound packets and determines a receive queue latency metric based on a plurality of receive queue residency times of the inbound packets. The receive queue latency metric and the receive pipeline latency metric can be reported to a data collector. The network device may also receive a plurality of outbound packets on a transmit queue, determine a transmit queue latency metric based on the transmit queue residency times of the outbound packets, and determine a transmit pipeline latency metric based on the transmit pipeline residency times of the outbound packets. The outbound packets may be transmitted toward their destinations. The transmit queue latency metric and the transmit pipeline latency metric can be reported to the data collector.
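The metric computation can be sketched from per-packet timestamps: queue residency is dequeue time minus enqueue time, pipeline residency is pipeline-exit time minus dequeue time, and each set of residencies is summarized into a latency metric for the data collector. The averaging and the names (`latency_metrics`, nanosecond timestamps) are illustrative assumptions.

```python
def latency_metrics(enqueue_ts: list[int], dequeue_ts: list[int],
                    pipeline_done_ts: list[int]) -> dict[str, float]:
    """Summarize queue and pipeline residency times into latency metrics.

    For each packet: queue residency = dequeue - enqueue, and pipeline
    residency = pipeline_done - dequeue. Timestamps are nanoseconds.
    Returns average metrics suitable for reporting to a data collector.
    """
    queue_res = [d - e for e, d in zip(enqueue_ts, dequeue_ts)]
    pipe_res = [p - d for d, p in zip(dequeue_ts, pipeline_done_ts)]
    return {
        "queue_latency_avg": sum(queue_res) / len(queue_res),
        "pipeline_latency_avg": sum(pipe_res) / len(pipe_res),
    }
```

Separating the queue metric from the pipeline metric is the point of the technique: it tells the operator whether latency comes from buffering or from processing.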