H04L49/3072

VXLAN Packet Transmission
20180013687 · 2018-01-11

In an example, an SDN controller may acquire a path maximum transmission unit (PMTU) of a Virtual Extensible Local Area Network (VXLAN) tunnel from a source VXLAN tunnel end point (VTEP) to a destination VTEP of a data packet, and may transmit a control entry to the source VTEP so that each individual VXLAN packet has a length within the packet length corresponding to the PMTU.
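The length constraint above can be sketched numerically. A minimal Python model, assuming standard IPv4 outer-header, UDP, and VXLAN encapsulation overheads (the function names and constant values here are illustrative, not from the patent):

```python
# Sketch: given the PMTU of the VXLAN tunnel between VTEPs, derive the
# largest inner frame that fits in a single VXLAN packet. Assumes an
# IPv4 outer header; values are the standard encapsulation overheads.
OUTER_IPV4 = 20   # outer IP header, bytes
OUTER_UDP = 8     # outer UDP header, bytes
VXLAN_HDR = 8     # VXLAN header, bytes

def max_inner_frame(pmtu: int) -> int:
    """Largest inner frame (bytes) that fits within the tunnel PMTU."""
    return pmtu - (OUTER_IPV4 + OUTER_UDP + VXLAN_HDR)

def exceeds_pmtu(inner_len: int, pmtu: int) -> bool:
    """True if a packet of inner_len would exceed the tunnel PMTU."""
    return inner_len > max_inner_frame(pmtu)
```

With a 1500-byte tunnel PMTU, the inner frame is capped at 1464 bytes, which is the kind of limit a control entry pushed to the source VTEP could enforce.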

Efficient packet queueing for computer networks
11552907 · 2023-01-10

A method during a first cycle includes receiving, at a first port of a device, a plurality of network packets. The method may include storing, by the device, at least some portion of a first packet of the plurality of network packets at a first address within a first record bank and storing, by the device and concurrently with storing the at least some portion of the first packet at the first address, at least some portion of a second packet of the plurality of network packets at a second address within a second record bank, different from the first record bank. The method may further include storing, by the device, the first address within the first record bank and the second address within the second record bank in a first link stash associated with the first record bank, and updating, by the device, a tail pointer to reference the second address.
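The single-cycle, two-bank store can be modeled in a few lines. This is a hedged software sketch of the claim, not the hardware: the two stores are sequential here but concurrent in the device, and all class and field names are hypothetical:

```python
# Illustrative model: two record banks written in one cycle, the pair of
# addresses captured in a link stash, and the queue tail advanced to the
# second address.
class TwoBankQueue:
    def __init__(self, bank_size: int = 8):
        self.banks = [[None] * bank_size, [None] * bank_size]
        self.next_free = [0, 0]          # next free address per bank
        self.link_stash = []             # (bank0_addr, bank1_addr) pairs
        self.tail = None                 # (bank, address) of queue tail

    def store_cycle(self, first_pkt: bytes, second_pkt: bytes) -> None:
        """One cycle: first packet to bank 0, second packet to bank 1."""
        a0, a1 = self.next_free
        self.banks[0][a0] = first_pkt    # concurrent in hardware;
        self.banks[1][a1] = second_pkt   # sequential in this model
        self.next_free = [a0 + 1, a1 + 1]
        self.link_stash.append((a0, a1)) # link stash for the first bank
        self.tail = (1, a1)              # tail references second address
```

The link stash lets a later dequeue follow both addresses from a single entry, while the tail pointer tracks where the next cycle's packets chain on.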

Zero-copy processing
20230099304 · 2023-03-30

In one embodiment, a system includes a peripheral device with a memory access interface that receives, from a host device, headers of packets while the corresponding payloads remain stored in a host memory of the host device, together with descriptors indicating the respective locations in the host memory at which those payloads are stored. A data processing unit memory stores the received headers and the descriptors without the payloads of the packets, and a data processing unit processes the received headers. Upon completion of the processing of the received headers, the peripheral device fetches the payloads of the packets over the memory access interface from the respective locations in the host memory indicated by the descriptors, and packet processing circuitry receives the headers and payloads of the packets and processes the packets.
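The header/payload split can be sketched end to end. A minimal model, assuming host memory behaves as an address-keyed store and using a trivial stand-in for header processing (all names and the descriptor layout are illustrative):

```python
# Hedged sketch of the zero-copy flow: the device holds only headers and
# descriptors; payloads stay in host memory until header processing
# completes, then are fetched by descriptor address.
host_memory = {}  # address -> payload bytes (models host DRAM)

def host_send(addr: int, header: bytes, payload: bytes):
    """Host posts a packet: payload stays put, only header + descriptor move."""
    host_memory[addr] = payload
    descriptor = {"addr": addr, "len": len(payload)}
    return header, descriptor

def dpu_process(header: bytes, descriptor: dict) -> bytes:
    """DPU processes the header first, then fetches the payload zero-copy."""
    processed = header.upper()                 # stand-in for header processing
    payload = host_memory[descriptor["addr"]]  # fetch over memory access i/f
    return processed + payload
```

The point of the ordering is that payload bytes cross the interface exactly once, and only after the header work has decided they are needed.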

Method and apparatus for assigning data to split bearers in dual connectivity

A method and an apparatus for assigning data to split bearers in dual connectivity are provided. The apparatus includes a master evolved Node B (MeNB) of a user equipment (UE) configured to receive information about available buffer space, determined and transmitted by a secondary eNB (SeNB), through an X2 interface between the MeNB and the SeNB; determine whether the information concerns the available buffer for a UE or for an evolved radio access bearer (E-RAB) established on the SeNB, based on an indicator in the information or on the bearer that transported the information; and adjust the amount of data assigned to the SeNB according to the available-buffer information. The apparatus can accommodate eNBs implemented in various manners, make full use of the bandwidth of data bearers, and reduce delay in data transmission.
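The MeNB-side decision can be sketched as a small function. This is an assumption-laden illustration: the report fields, the `scope` indicator values, and the even split of a per-UE buffer across E-RABs are all hypothetical, not from the patent:

```python
# Illustrative MeNB logic: parse an X2 buffer report, distinguish per-UE
# from per-E-RAB scope via an indicator, and cap the bytes assigned to
# the SeNB at the reported available buffer.
def assign_to_senb(report: dict, pending_bytes: int, n_erabs: int = 1) -> int:
    """Return how many pending bytes to route via the SeNB split bearer."""
    if report["scope"] == "ue":
        # Whole-UE buffer: share it across this UE's E-RABs on the SeNB
        # (even split is one possible policy, assumed here).
        available = report["available"] // max(n_erabs, 1)
    elif report["scope"] == "e-rab":
        # Buffer reported for a single established E-RAB: use it directly.
        available = report["available"]
    else:
        raise ValueError("unknown report scope")
    return min(pending_bytes, available)
```

Capping at the reported buffer is what keeps the split bearer fed without overrunning the SeNB, which is the delay-reduction mechanism the abstract describes.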

Decoupled packet and data processing rates in switch devices

Continuing to integrate more aggregate bandwidth and higher radix into switch devices is an economic imperative, because it creates value for both the supplier and the customer in large data center environments, an increasingly important part of the marketplace. While new silicon processes continue to shrink transistor and other chip feature dimensions, process technology cannot be relied upon as a key driver of power reduction. Transitioning from 28 nm to 16 nm is a special case where FinFET provides additional power scaling, but subsequent FinFET nodes are not expected to deliver power reductions substantial enough to meet the desired increases in integration. The disclosed switch architecture attacks the power consumption problem by controlling the rate at which power-consuming activities occur.

Filtering and route lookup in a switching device

Methods and devices for processing packets are provided. The processing device may include an input interface for receiving data units containing header information of respective packets; a first module configurable to perform packet filtering based on the received data units; a second module configurable to perform traffic analysis based on the received data units; a third module configurable to perform load balancing based on the received data units; and a fourth module configurable to perform route lookups based on the received data units.
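The four configurable modules can be sketched as functions over header-only data units. The module bodies below are placeholders standing in for the configurable behavior; every name and field is illustrative:

```python
# Sketch of the four-module arrangement: each module consumes the same
# header-derived data unit (a dict here), independently of the others.
def packet_filter(unit: dict) -> bool:
    """First module: keep or drop based on header fields."""
    return unit["proto"] != "blocked"

def traffic_analysis(unit: dict, stats: dict) -> None:
    """Second module: accumulate per-protocol counters."""
    stats[unit["proto"]] = stats.get(unit["proto"], 0) + 1

def load_balance(unit: dict, n_paths: int) -> int:
    """Third module: pick a flow-stable path from header fields."""
    return hash((unit["src"], unit["dst"])) % n_paths

def route_lookup(unit: dict, routes: dict) -> str:
    """Fourth module: next hop for the destination (LPM stand-in)."""
    return routes.get(unit["dst"], "default")
```

The design point is that all four operate on the compact header data unit, so the full packet never needs to traverse the processing device.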

Packed ingress interface for network apparatuses
11201831 · 2021-12-14

Multiple ports of a network device are muxed together to form a single packed ingress interface into a buffer. A multiplexor alternates between the ports in alternating input clock cycles. Extra logic and wiring to provide a separate writer for each port are avoided, since the packed interface and buffer writers operate at higher speeds and/or have more bandwidth than the ports, and are thus able to handle incoming data for all of the ports coupled to the packed ingress interface. A packed ingress interface may also or instead support receiving data for multiple data units (e.g., multiple packets) from a single port in a single clock cycle, thereby reducing the potential to waste bandwidth at the end of data units. The interface may send the ending segments of the first data unit to the buffer, while holding back the starting segments of the second data unit in a cache. In an embodiment, a gear shift merges the first part of each subsequent portion of the second data unit with the cached data to form a full-sized data unit portion to send downstream, while the second part of the portion replaces the cached data. When the end of the second data unit is detected, any segments of the second data unit that remain after the merger process are sent downstream as a separate portion at the same time.
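The gear-shift merge reduces to a simple invariant: cached leftover segments plus the incoming portion are re-cut at the full-portion boundary. A toy model, with segment granularity and the portion size chosen arbitrarily for illustration:

```python
# Toy gear-shift model: the cache holds the starting segments of a data
# unit held back earlier; each incoming portion is prepended with the
# cache, one full-sized portion is emitted downstream, and the overflow
# becomes the new cache.
PORTION = 4  # segments per full-sized downstream portion (illustrative)

def gear_shift(cache: list, portion: list):
    """Merge cache + portion; return (emitted_portion, new_cache)."""
    merged = cache + portion
    return merged[:PORTION], merged[PORTION:]
```

At end of data unit, whatever remains in the cache after the last merge is what the abstract describes being sent downstream as a separate, final portion.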

Flow control for a multiple flow control unit interface
11362939 · 2022-06-14

Implementations of the present disclosure are directed to systems and methods for flow control using a multiple flit interface. A credit return field is used in a credit-based flow control system to indicate that one or more credits are being returned to a sending device from a receiving device. Based on the number of credits available, the sending device determines whether to send data or to wait until more credits are returned. The amount of buffer space used by the receiver to store a packet is determined by the number of transfer cycles used to receive the packet, not the number of flits comprising the packet; this is enabled by making the buffer as wide as the bus. The receiver returns credits to the sender based on the number of buffer rows used to store the received packet, not the number of flits comprising the packet.
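Row-based crediting is a small piece of arithmetic: with the buffer as wide as the bus, a packet occupies one row per transfer cycle, i.e. the ceiling of its flit count over the bus width in flits. A sketch with an assumed bus width:

```python
# Sketch of row-based crediting: the receive buffer is as wide as the
# bus, so a packet consumes ceil(flits / flits_per_row) rows, and that
# row count (not the flit count) is what is credited back to the sender.
FLITS_PER_ROW = 4  # bus width in flits; illustrative value

def rows_used(n_flits: int) -> int:
    """Buffer rows consumed by a packet of n_flits flits."""
    return -(-n_flits // FLITS_PER_ROW)  # ceiling division

def credits_to_return(n_flits: int) -> int:
    """Credits returned equal rows used, not flits received."""
    return rows_used(n_flits)
```

A 5-flit packet thus costs (and later returns) 2 credits, the same as an 8-flit packet costs 2: the partial row at the end of a packet is paid for in full, which is exactly why crediting by row rather than by flit matches the real buffer occupancy.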

Multi-part TCP connection over VPN
20220166647 · 2022-05-26

An encrypted tunnel is established between a virtual private network (VPN) server and a VPN user device. A request to establish a connection with a target device is received from the VPN user device. The request uses initial connection parameters. The connection is then converted into a first connection between the VPN user device and the VPN server and a second connection between the VPN server and the target device. The first connection uses first connection parameters and the second connection uses second connection parameters. At least one parameter of the first connection parameters or of the second connection parameters is different from a corresponding parameter of the initial connection parameters. First network packets received from the VPN user device according to the first connection parameters are converted into second network packets according to the second connection parameters. The second network packets are transmitted to the target device.
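The packet conversion between the two legs can be illustrated with one changed parameter. Taking segment size as the example differing parameter (an assumption for illustration; the abstract does not name which parameter differs):

```python
# Hedged sketch of the split-connection relay: the VPN server terminates
# the client leg with one parameter set and re-emits the payload on the
# server leg with another, here a different maximum segment size.
def resegment(first_leg_packets: list, out_mss: int) -> list:
    """Convert first-connection packets into second-connection packets."""
    stream = b"".join(first_leg_packets)   # reassemble first-leg payloads
    return [stream[i:i + out_mss]          # re-cut at the second leg's MSS
            for i in range(0, len(stream), out_mss)]
```

Because each leg is a separate connection, the server is free to pick second-leg parameters suited to the server-to-target path independently of what the client negotiated.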