H04L49/9047

MECHANISM TO IMPLEMENT TIME STAMP-BASED TRANSMISSIONS FROM A NETWORK INTERFACE DEVICE OF A DATACENTER

A circuitry of a network interface device of a computing network is to: access a first message from a server architecture of the computing network, the first message including a timestamp based on a time at which the circuitry is to access, from a host memory, one or more data packet descriptors that correspond to a data packet to be transmitted to the computing network from the network interface device; send, for transmission to the server architecture and at a transmission time based on the timestamp, a second message, the second message including a request to access the one or more data packet descriptors; and subsequent to sending the second message for transmission, access the one or more data packet descriptors to determine one or more addresses for the data packet in the host memory.
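The flow described in the abstract can be sketched as a small simulation: the host's first message carries descriptor ids plus a timestamp, the NIC holds its descriptor-read request until that time, then resolves the descriptors to packet addresses. All names (`NicScheduler`, `handle_doorbell`) are illustrative, not from the patent.

```python
import time

class NicScheduler:
    """Toy model of timestamp-gated descriptor fetches (names are illustrative)."""

    def __init__(self, host_memory):
        self.host_memory = host_memory  # maps descriptor id -> packet buffer address

    def handle_doorbell(self, descriptor_ids, fetch_at):
        # First message: the host supplies descriptor ids and the time at
        # which the NIC should fetch them from host memory.
        delay = fetch_at - time.monotonic()
        if delay > 0:
            time.sleep(delay)           # hold the fetch until the timestamp
        # Second message: the descriptor-read request, issued at the
        # scheduled transmission time.
        request = {"op": "read", "ids": descriptor_ids}
        # After issuing the request, resolve descriptors to buffer addresses.
        return [self.host_memory[d] for d in request["ids"]]

host_memory = {7: 0x1000, 8: 0x2000}
nic = NicScheduler(host_memory)
addrs = nic.handle_doorbell([7, 8], fetch_at=time.monotonic() + 0.01)
```

Deferring the descriptor fetch until just before transmission keeps the NIC's view of the ring current without polling host memory early.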

Wide Elastic Buffer
20230239256 · 2023-07-27

A receiving device uses an elastic buffer that is wider than the number of data elements transferred in each cycle. To compensate for frequency differences between the transmitter and the receiver, the transmitting device periodically sends a skip request with a default number of skip data elements. If the elastic buffer is filling, the receiving device ignores one or more of the skip data elements. If the elastic buffer is emptying, the receiving device adds one or more skip data elements to the skip request. To maintain the ordering of data despite the manipulation of the skip data elements, two rows of the wide elastic buffer are read at a time. This allows construction of a one-row result from any combination of the data elements of the two rows. The column pointers are adjusted appropriately, to ensure that they continue to point to the next data to be read.
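Two pieces of the mechanism lend themselves to a short sketch: the receiver's decision to drop or add skip elements based on buffer fill level, and the two-row read that lets a full output row start at any column. Thresholds, row widths, and function names below are assumptions for illustration.

```python
def adjust_skip(default_skip, fill_level, low_water, high_water):
    """Decide how many skip elements to keep (thresholds are illustrative)."""
    if fill_level > high_water:
        return max(0, default_skip - 1)   # buffer filling: ignore a skip element
    if fill_level < low_water:
        return default_skip + 1           # buffer emptying: add a skip element
    return default_skip                   # nominal fill: keep the default

def read_row(buffer_rows, row_idx, col_ptr, width):
    """Read two adjacent rows so a one-row result can begin at any column."""
    window = buffer_rows[row_idx] + buffer_rows[row_idx + 1]
    return window[col_ptr:col_ptr + width]
```

After each read, the column pointer would advance by `width` modulo the row size, wrapping into the next row, which is the adjustment the abstract's last sentence refers to.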

Methods and systems for data transmission
11570120 · 2023-01-31

A method for data transmission may be implemented on an electronic device having one or more processors. The one or more processors may include a master queue including a master queue head and a plurality of primary ports that are connected to each other using a serial link. The method may include operating the master queue head to obtain a message. The method may also include operating the master queue head to segment the message into a plurality of segments. The method may also include operating the master queue head to transmit the plurality of segments to a first primary port of the plurality of primary ports in the master queue. The method may also include operating the first primary port to transmit the plurality of segments to a second primary port of the plurality of primary ports in the master queue.
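The segment-and-relay flow above can be modeled in a few lines: the queue head splits the message, then each primary port on the serial link receives the segments and passes them along. The fixed segment size and the list-of-lists model of ports are assumptions, not details from the patent.

```python
def segment(message, size):
    """Split a message into fixed-size segments (size is an assumed parameter)."""
    return [message[i:i + size] for i in range(0, len(message), size)]

def transmit(ports, segments):
    """Relay segments down the serial chain: head -> port 0 -> port 1 -> ..."""
    for port in ports:
        port.extend(segments)   # each port receives, then forwards, every segment
    return ports

ports = [[], []]                # two primary ports on the serial link
transmit(ports, segment(b"abcdefgh", 3))
```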

PROCESSING OF ETHERNET PACKETS AT A PROGRAMMABLE INTEGRATED CIRCUIT

Methods, systems, and computer programs are presented for processing Ethernet packets at a Field Programmable Gate Array (FPGA). One programmable integrated circuit includes: an internal network on chip (iNOC) comprising rows and columns; clusters, coupled to the iNOC, comprising a network access point (NAP) and programmable logic; and an Ethernet controller coupled to the iNOC. When the controller operates in packet mode, each complete inbound Ethernet packet is sent from the controller to one of the NAPs via the iNOC, where two or more NAPs are configurable to receive the complete inbound Ethernet packets from the controller. The controller is configurable to operate in quad segment interface (QSI) mode where each complete inbound Ethernet packet is broken into segments, which are sent from the controller to different NAPs via the iNOC, where two or more NAPs are configurable to receive the complete inbound Ethernet packets from the controller.
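The contrast between the two modes can be sketched as routing functions: packet mode delivers a whole packet to one NAP, while QSI mode fans segments of one packet out across NAPs. Round-robin selection and the two-element segment size are assumptions; the abstract only says the selection is configurable.

```python
def route_packet_mode(packet, naps, rr):
    # Packet mode: the complete packet goes to a single NAP. Round robin
    # here stands in for whatever configurable selection the device uses.
    naps[rr % len(naps)].append(packet)

def route_qsi_mode(packet, naps, seg_size):
    # QSI mode: the packet is broken into segments that fan out across NAPs.
    segments = [packet[i:i + seg_size] for i in range(0, len(packet), seg_size)]
    for i, seg in enumerate(segments):
        naps[i % len(naps)].append(seg)

packet_naps = [[], []]
route_packet_mode(b"pkt0", packet_naps, rr=0)

qsi_naps = [[], []]
route_qsi_mode(b"abcdefgh", qsi_naps, seg_size=2)
```

Fanning segments across NAPs lets several clusters absorb one high-rate Ethernet stream in parallel.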

METHODS AND SYSTEMS FOR EXCHANGING NETWORK PACKETS BETWEEN HOST AND MEMORY MODULE USING MULTIPLE QUEUES

A method and system for exchanging network packets in a memory system is provided. The size of each network packet to be transmitted is determined, and each packet is sorted into one of a plurality of queues based on that size. Each network packet is then transmitted over a shared memory according to the queue to which it was assigned.
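A minimal sketch of the size-based segregation: a packet lands in the first queue whose size threshold it fits under, with an overflow queue for anything larger. The specific thresholds are illustrative; the abstract only requires that queue selection depend on packet size.

```python
def enqueue_by_size(packet, queues, thresholds):
    """Place a packet in the first queue whose size limit it fits under.
    Thresholds are assumed values, not taken from the patent."""
    for limit, queue in zip(thresholds, queues):
        if len(packet) <= limit:
            queue.append(packet)
            return queue
    queues[-1].append(packet)    # oversize packets go to the last queue
    return queues[-1]

queues = [[], [], []]            # small, medium, and oversize queues
enqueue_by_size(b"x" * 32, queues, thresholds=(64, 512))
enqueue_by_size(b"x" * 600, queues, thresholds=(64, 512))
```

Keeping small and large packets in separate queues prevents a large transfer over the shared memory from stalling latency-sensitive small packets behind it.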

Network interface and buffer control method thereof
11700214 · 2023-07-11

A network interface includes a processor, memory, and a cache between the processor and the memory. The processor secures a plurality of buffers for storing transfer data in the memory, and manages an allocation order of available buffers of the plurality of buffers. The processor returns a buffer released after data transfer to a position before a predetermined position of the allocation order.
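The free-list policy above can be sketched with a deque: a released buffer is re-inserted ahead of a predetermined position rather than at the tail, so a buffer whose contents are likely still in the cache is reallocated sooner. The class name and the insertion position are assumptions for illustration.

```python
from collections import deque

class BufferPool:
    """Free list where a released buffer re-enters near the front, so a
    cache-warm buffer is reallocated sooner (insertion position is assumed)."""

    def __init__(self, buffer_ids, reinsert_pos=1):
        self.free = deque(buffer_ids)      # allocation order of available buffers
        self.reinsert_pos = reinsert_pos   # the predetermined position

    def allocate(self):
        return self.free.popleft()

    def release(self, buf):
        pos = min(self.reinsert_pos, len(self.free))
        self.free.insert(pos, buf)         # before the predetermined position

pool = BufferPool([1, 2, 3, 4])
first = pool.allocate()
pool.release(first)
```

With a plain FIFO free list the released buffer would wait behind every other buffer; re-inserting it near the front trades strict rotation for better cache hit rates on transfer data.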

Fair arbitration between multiple sources targeting a destination

A hardware module comprises at least a first ingress buffer and a second ingress buffer, where the second ingress buffer holds data packets from a plurality of source components. To ensure fairness between one or more sources providing data to the first ingress buffer and the plurality of sources providing data to the second ingress buffer, processing circuitry examines source identifiers in packets held in the second ingress buffer and selects between the buffers so as to arbitrate between the sources. In some embodiments, the examination of the source identifiers provides statistics for a weighted round robin between the ingress buffers. In other embodiments, the source identifier of whichever packet is currently at the head of the second ingress buffer is used to perform a simple round robin between the sources.
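One arbitration round of the weighted-round-robin variant can be sketched as follows: the second buffer's weight is taken as the number of distinct source identifiers currently queued in it, so N sources sharing one buffer collectively get N grants for each grant to the first buffer. This weighting rule is an assumption about how the statistics are used; the data layout and names are illustrative.

```python
from collections import deque

def arbitrate(first_q, second_q):
    """One round of weighted round robin between two ingress buffers.
    second_q holds (source_id, payload) pairs from multiple sources."""
    grants = []
    if first_q:
        grants.append(first_q.popleft())          # one grant to the first buffer
    # Examine the source identifiers to derive the second buffer's weight.
    weight = len({src for src, _ in second_q})
    for _ in range(weight):
        if second_q:
            grants.append(second_q.popleft()[1])  # one grant per distinct source
    return grants

first = deque(["a1", "a2"])
second = deque([("s1", "b1"), ("s2", "b2"), ("s1", "b3")])
round1 = arbitrate(first, second)
```

Without the weighting, the two sources behind the second buffer would share one grant per round and each see half the bandwidth of the first buffer's single source.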