H04L49/9063

THROTTLING OUTPUT WITH ONGOING INPUT
20190199645 · 2019-06-27

A plurality of communications sent from a sending program can be stored in a queue for the duration of a time period specified by a timer. When the timer expires, a specified program module can be executed that merges the plurality of communications into a single result. The single result can be sent to a receiving program. Incoming communications are not throttled or delayed.
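
The coalescing behaviour described above can be sketched in a few lines of Python. This is a minimal, illustrative model, not the patented implementation: `CoalescingQueue`, `merge_fn`, and `deliver` are assumed names standing in for the queue, the specified program module, and the send to the receiving program.

```python
import threading
import time

class CoalescingQueue:
    """Buffer outgoing messages for a fixed time window, then merge them
    into a single result. Incoming calls to send() are never delayed."""

    def __init__(self, window_seconds, merge_fn, deliver):
        self.window = window_seconds
        self.merge_fn = merge_fn      # program module that merges queued items
        self.deliver = deliver        # sends the single result to the receiver
        self.pending = []
        self.timer = None
        self.lock = threading.Lock()

    def send(self, message):
        with self.lock:
            self.pending.append(message)
            if self.timer is None:    # start the window on the first message
                self.timer = threading.Timer(self.window, self._flush)
                self.timer.start()

    def _flush(self):
        with self.lock:
            batch, self.pending = self.pending, []
            self.timer = None
        if batch:
            self.deliver(self.merge_fn(batch))
```

In use, several rapid-fire messages collapse into one delivery after the window expires, while the senders themselves are never blocked.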

System and method for centralized virtual interface card driver logging in a network environment
10313380 · 2019-06-04

A method is provided in one example and includes creating a staging queue in a virtual interface card (VIC) adapter firmware of a server based on a log policy; receiving a log message from a VIC driver in the server; copying the log message to the staging queue; generating a VIC control message comprising the log message from the staging queue; and sending the VIC control message to a switch.
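
The claimed flow can be modelled compactly. In this illustrative sketch (class, field, and message names are assumptions, not from the patent), a staging queue created per a log policy collects driver log messages, which are then wrapped in a control message and sent toward the switch.

```python
class VicLogStager:
    """Sketch of the claimed flow: a staging queue in VIC adapter firmware,
    created according to a log policy; driver log messages are copied into
    it, then carried to the switch inside a VIC control message."""

    def __init__(self, log_policy):
        self.policy = log_policy            # e.g. {"min_level": 2}
        self.staging_queue = []             # staging queue in the VIC firmware

    def receive_log(self, level, text):
        # copy the VIC driver's log message to the staging queue per policy
        if level >= self.policy["min_level"]:
            self.staging_queue.append((level, text))

    def flush_to_switch(self, send):
        # generate one VIC control message carrying the staged log messages
        control_msg = {"type": "VIC_LOG", "logs": list(self.staging_queue)}
        self.staging_queue.clear()
        send(control_msg)
```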

Spatial dispersion buffer
12021763 · 2024-06-25

An improved buffer for networking and other computing devices comprises multiple memory instances, each having a distinct set of entries. Transport data units (TDUs) are divided into storage data units (SDUs), and each SDU is stored within a separate entry of a separate memory instance in a logical bank. One or more grids of the memory instances are organized into overlapping logical banks. The logical banks are arranged into views. Different destinations or other entities are assigned different views of the buffer. A memory instance may be shared between logical banks in different views. When overlapping logical banks are accessed concurrently, data in a memory instance that they share may be recovered using a parity SDU in another memory instance. The shared buffering enables more efficient buffer usage in a network device with a traffic manager shared amongst egress blocks. Example read and write algorithms for such buffers are disclosed.
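
The parity-recovery step can be illustrated with bytewise XOR parity, a common choice, though the abstract does not name the parity scheme; `make_parity_sdu` and `recover_sdu` are hypothetical helper names.

```python
from functools import reduce

def make_parity_sdu(sdus):
    """Compute a parity SDU as the bytewise XOR of the data SDUs of a TDU."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*sdus))

def recover_sdu(available_sdus, parity_sdu):
    """Reconstruct the one unavailable SDU from the surviving SDUs plus the
    parity SDU, e.g. when a shared memory instance is busy serving a
    concurrent access from an overlapping logical bank."""
    return make_parity_sdu(available_sdus + [parity_sdu])
```

XOR parity tolerates exactly one unavailable memory instance per logical bank per access, which matches the single-conflict case the abstract describes.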

On chip router
12010033 · 2024-06-11

There is disclosed a router for routing data on a computing chip comprising a plurality of processing elements, the router comprising: a packet processing pipeline; a dropped packet buffer; and one or more circuits configured to: determine that a data packet in the packet processing pipeline is to be dropped; move the data packet that is to be dropped from the packet processing pipeline to the dropped packet buffer; and re-insert the dropped data packet from the dropped packet buffer into the packet processing pipeline for re-processing.
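
The drop-and-reinsert flow can be sketched with two queues; `OnChipRouter` and its method names are illustrative stand-ins for the claimed circuits, not the patented design.

```python
from collections import deque

class OnChipRouter:
    """Sketch: packets the pipeline determines must be dropped are moved to
    a dropped-packet buffer and later re-inserted into the pipeline for
    re-processing, instead of being discarded."""

    def __init__(self):
        self.pipeline = deque()
        self.drop_buffer = deque()
        self.delivered = []

    def process_one(self, can_route):
        pkt = self.pipeline.popleft()
        if can_route(pkt):
            self.delivered.append(pkt)
        else:
            self.drop_buffer.append(pkt)   # move to the dropped packet buffer

    def reinsert_dropped(self):
        # re-insert dropped packets into the pipeline for re-processing
        while self.drop_buffer:
            self.pipeline.append(self.drop_buffer.popleft())
```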

SHARED MEMORY COMMUNICATION IN SOFTWARE DEFINED NETWORKING
20190158403 · 2019-05-23

A network controller being executed by a processing device generates a file descriptor indicating when the network controller or a virtual switch being executed by the processing device stored a packet to a shared memory buffer. At least one of a read or write operation being performed on the packet stored at the shared memory buffer is identified. In response to identifying the at least one of the read or write operation being performed on the packet, the file descriptor is modified in view of the at least one of the read or write operation being performed on the packet.
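
The descriptor-based signalling can be modelled on eventfd-style semantics: the writer bumps a counter when it stores a packet in the shared buffer, and the reader decrements it after a read. The counter-plus-list below is an assumed stand-in for the real file descriptor and shared memory region; the class and method names are illustrative.

```python
class SharedBufferDescriptor:
    """Sketch: a counter mimicking a file descriptor that is modified in
    view of each read or write operation on the shared memory buffer."""

    def __init__(self):
        self.counter = 0        # "file descriptor" state
        self.buffer = []        # shared memory buffer

    def write_packet(self, pkt):
        self.buffer.append(pkt)
        self.counter += 1       # modify the descriptor for the write

    def read_packet(self):
        pkt = self.buffer.pop(0)
        self.counter -= 1       # modify the descriptor for the read
        return pkt

    def has_pending(self):
        return self.counter > 0
```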

Packet Loss Mitigation in an Elastic Container-Based Network

Packet loss mitigation may be provided. First, queue control data may be sent to a first container, and then a route may be stalled after sending the queue control data. The route may correspond to a data path that leads to the first container. Next, modified queue control data may be received from the first container. In response to receiving the modified queue control data, the first container, now with empty queues, may be deleted safely, preventing packet loss.
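
The sequence can be sketched end to end; `SimContainer` and the function below are hypothetical stand-ins for the container and the controller-side logic, purely to illustrate the ordering of steps.

```python
class SimContainer:
    """Minimal stand-in for an elastic container with a packet queue."""

    def __init__(self):
        self.queue = ["pkt1", "pkt2"]
        self.processed = []
        self.deleted = False

    def receive_queue_control(self):
        pass                                # 1. queue control data arrives

    def drain_and_reply(self):
        self.processed.extend(self.queue)   # finish in-flight packets
        self.queue.clear()
        return {"queues_empty": not self.queue}   # modified control data

def delete_container_safely(container, routes, route):
    """Sketch of the mitigation sequence from the abstract."""
    container.receive_queue_control()       # 1. send queue control data
    routes.discard(route)                   # 2. then stall the route
    reply = container.drain_and_reply()     # 3. modified control data back
    if reply["queues_empty"]:
        container.deleted = True            # 4. delete safely: no loss
    return reply
```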

Shared memory communication in software defined networking
10230633 · 2019-03-12

A virtual switch executes on a computer system to forward packets to one or more destinations. A method of the disclosure includes receiving, by a virtual switch application being executed by a processing device, a packet comprising a header; determining that the packet does not match a distribution table associated with the virtual switch; and storing, by the processing device, the packet to a shared memory buffer that is accessible to a network controller application being executed by the processing device.
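
The fast-path/slow-path handoff can be sketched as a single forwarding function; the dict table, port lists, and shared-buffer list below are illustrative data structures, not the claimed ones.

```python
def forward(packet, distribution_table, shared_buffer, ports):
    """Sketch: forward packets that match the virtual switch's distribution
    table to an output port; store non-matching packets in the shared
    memory buffer for the network controller to handle."""
    key = packet["header"]["dst"]
    if key in distribution_table:
        ports[distribution_table[key]].append(packet)   # fast path
    else:
        shared_buffer.append(packet)                    # controller slow path
```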

ALTERNATE ACKNOWLEDGMENT (ACK) SIGNALS IN A COALESCING TRANSMISSION CONTROL PROTOCOL/INTERNET PROTOCOL (TCP/IP) SYSTEM
20190058780 · 2019-02-21

Alternate acknowledgment (ACK) signals in a coalescing Transmission Control Protocol/Internet Protocol (TCP/IP) system are disclosed. In one aspect, a network interface card (NIC) examines packet payloads, and the NIC generates an ACK signal for a sending server before sending a coalesced packet to an internal processor. Further, the NIC may examine incoming packets and send an ACK signal to the internal processor for ACK signals that are received from the sending server before sending the coalesced packet to the internal processor. By extracting and sending the ACK signals before sending the corresponding payloads in the coalesced packet, latency that would otherwise be incurred waiting for the ACK signal is eliminated. Elimination of such latency may improve network performance and may provide power savings.
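
The NIC behaviour can be sketched as a split over a batch of received segments: pure ACKs are forwarded immediately, while payload-bearing segments are coalesced into one large packet for the internal processor. The dict-based packet representation (`ack` flag, `payload` bytes) is an assumption for illustration.

```python
def split_acks(packets):
    """Sketch: extract ACK signals and forward them ahead of the coalesced
    payload, so no one waits on the coalescing step for an ACK."""
    acks, payload = [], b""
    for pkt in packets:
        if pkt.get("ack"):
            acks.append(pkt)              # pass the ACK on without waiting
        payload += pkt.get("payload", b"")
    coalesced = {"payload": payload}      # one large packet for the processor
    return acks, coalesced
```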

Device and method for packet processing with memories having different latencies

A packet processing system and method for processing data units are provided. A packet processing system includes a processor, first memory having a first latency, and second memory having a second latency that is higher than the first latency. A first portion of a queue for queuing data units utilized by the processor is disposed in the first memory, and a second portion of the queue is disposed in the second memory. A queue manager is configured to push new data units to the second portion of the queue and generate an indication linking a new data unit to an earlier-received data unit in the queue. The queue manager is configured to transfer one or more queued data units from the second portion of the queue to the first portion of the queue prior to popping the queued data unit from the queue, and to update the indication.
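
The two-tier queue can be sketched with two deques standing in for the two physical memories; `TwoTierQueue` and its staging policy are illustrative assumptions, not the patented queue manager.

```python
from collections import deque

class TwoTierQueue:
    """Sketch: new data units are pushed to the slow (high-latency) portion
    of the queue; the manager stages units into the fast (low-latency)
    portion ahead of time so pops are always served from fast memory."""

    def __init__(self, fast_capacity):
        self.fast = deque()           # first memory, low latency (queue head)
        self.slow = deque()           # second memory, high latency (tail)
        self.fast_capacity = fast_capacity

    def push(self, unit):
        self.slow.append(unit)        # new units always enter the slow tier

    def stage(self):
        # transfer queued units from slow to fast prior to popping
        while self.slow and len(self.fast) < self.fast_capacity:
            self.fast.append(self.slow.popleft())

    def pop(self):
        self.stage()
        return self.fast.popleft()
```

Because staging always moves units from the slow head to the fast tail, FIFO order is preserved across the two memories.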