Patent classifications
H04L49/9068
Asymmetric network infrastructure with distributed broadcast-select switch and asymmetric network interface controller
Network infrastructure systems are provided that include asymmetric Distributed Broadcast-Select Switches and Asymmetric Network Interface Controllers for implementation in asymmetric networks, and more particularly in cluster networks.
FORWARDING ELEMENT WITH PHYSICAL AND VIRTUAL DATA PLANES
Some embodiments of the invention provide a novel method of performing network slice-based operations on a data message at a hardware forwarding element (HFE) in a network. For a received data message flow, the method has the HFE identify the network slice associated with that flow. In some embodiments, this network slice is associated with a set of operations to be performed on the data message by several network elements, including one or more machines executing on one or more computers in the network. Once the network slice is identified, the method has the HFE process the data message flow based on a rule that applies to data messages associated with the identified slice.
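To make the two-stage lookup concrete, here is a minimal Go sketch of the flow the abstract describes (all types and names, such as FlowKey and HFE, are hypothetical illustrations, not taken from the patent): first map the received flow to a slice identifier, then apply the rule registered for that slice.

package main

import "fmt"

// FlowKey is a hypothetical 5-tuple-style key identifying a data message flow.
type FlowKey struct {
	Src, Dst string
	Port     uint16
}

// Rule is a hypothetical per-slice processing rule.
type Rule struct {
	Action string // e.g. "forward-to-vnf", "drop", "mirror"
}

// HFE models the hardware forwarding element's two lookup stages:
// flow -> slice, then slice -> rule.
type HFE struct {
	sliceOfFlow map[FlowKey]int // stage 1: identify the network slice
	ruleOfSlice map[int]Rule    // stage 2: rule applying to that slice
}

// Process identifies the slice for a flow and processes the message
// according to the rule associated with that slice.
func (h *HFE) Process(k FlowKey, payload []byte) {
	slice, ok := h.sliceOfFlow[k]
	if !ok {
		fmt.Println("no slice for flow; default processing")
		return
	}
	rule := h.ruleOfSlice[slice]
	fmt.Printf("flow %v -> slice %d -> action %q (%d bytes)\n",
		k, slice, rule.Action, len(payload))
}

func main() {
	h := &HFE{
		sliceOfFlow: map[FlowKey]int{{"10.0.0.1", "10.0.0.2", 443}: 7},
		ruleOfSlice: map[int]Rule{7: {Action: "forward-to-vnf"}},
	}
	h.Process(FlowKey{"10.0.0.1", "10.0.0.2", 443}, []byte("msg"))
}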
Data processing method, network interface card, and server
A data processing method comprising: after receiving an i-th Peripheral Component Interconnect Express (PCIe) packet, a network interface card stores a j-th instruction segment in a j-th storage unit in a first storage area. When all n instruction segments of a first send queue entry (SQE) are stored in the first storage area, the network interface card obtains the first SQE, an identifier of the queue pair (QP) to which the first SQE belongs, and a location identifier of the first SQE in the QP according to the instructions in the n storage units of the first storage area. The network interface card then performs data processing based on the identifier of the QP to which the first SQE belongs and the location identifier of the first SQE in the QP.
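A minimal Go sketch of the segment-reassembly step, assuming out-of-order segment arrival and byte-slice segments (the sqeAssembler type and all other names here are hypothetical, not from the patent):

package main

import "fmt"

// sqeAssembler is a hypothetical model of the "first storage area":
// n storage units that each hold one instruction segment of an SQE.
type sqeAssembler struct {
	units    [][]byte // unit j holds the j-th instruction segment
	received int      // how many of the n segments have arrived
}

func newSQEAssembler(n int) *sqeAssembler {
	return &sqeAssembler{units: make([][]byte, n)}
}

// storeSegment stores the j-th segment (carried by some i-th PCIe packet)
// in the j-th storage unit. It returns the complete SQE once all n
// segments are present, and nil otherwise.
func (a *sqeAssembler) storeSegment(j int, seg []byte) []byte {
	if a.units[j] == nil {
		a.received++
	}
	a.units[j] = seg
	if a.received < len(a.units) {
		return nil // SQE not yet complete
	}
	var sqe []byte
	for _, u := range a.units {
		sqe = append(sqe, u...)
	}
	return sqe
}

func main() {
	a := newSQEAssembler(3)
	// Segments may arrive out of order across PCIe packets.
	a.storeSegment(2, []byte("C"))
	a.storeSegment(0, []byte("A"))
	if sqe := a.storeSegment(1, []byte("B")); sqe != nil {
		// In the patented method the NIC would now parse the QP
		// identifier and the SQE's location in the QP from the SQE.
		fmt.Printf("complete SQE: %s\n", sqe)
	}
}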
DATA TRANSMISSION METHOD, CHIP, AND DEVICE
A data transmission method is provided. In the method, a network interface card of a source device obtains a first notification message and a second notification message. The first notification message indicates that a first to-be-processed remote direct memory access (RDMA) request exists in a first queue of the source device, where the first queue stores requests of a first service application in the source device; the second notification message indicates that a second to-be-processed RDMA request exists in a second queue of the source device, where the second queue stores requests of a second service application. The network interface card then determines a processing sequence of the first queue and the second queue based on their service levels, and sends the first and second to-be-processed RDMA requests to a destination device according to that processing sequence.
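The queue-ordering decision might be modeled as below; this Go sketch assumes a numeric service level where a higher value means higher priority (rdmaQueue and every other name in it are hypothetical, not from the patent):

package main

import (
	"fmt"
	"sort"
)

// rdmaQueue is a hypothetical per-application queue of pending RDMA requests.
type rdmaQueue struct {
	name         string
	serviceLevel int      // higher value = higher priority (assumption)
	pending      []string // to-be-processed RDMA requests
}

// order returns the queues sorted by service level, modeling how the NIC
// might decide the processing sequence after receiving notifications that
// each queue holds a to-be-processed request.
func order(qs []*rdmaQueue) []*rdmaQueue {
	sort.SliceStable(qs, func(i, j int) bool {
		return qs[i].serviceLevel > qs[j].serviceLevel
	})
	return qs
}

func main() {
	q1 := &rdmaQueue{name: "app1", serviceLevel: 1, pending: []string{"WRITE #1"}}
	q2 := &rdmaQueue{name: "app2", serviceLevel: 5, pending: []string{"READ #1"}}
	for _, q := range order([]*rdmaQueue{q1, q2}) {
		for _, req := range q.pending {
			fmt.Printf("send %s from %s (SL %d) to destination\n",
				req, q.name, q.serviceLevel)
		}
	}
}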
Technologies for packet forwarding on ingress queue overflow
Technologies for packet forwarding under ingress queue overflow conditions include a computing device configured to receive a network packet from another computing device, determine whether a global packet buffer of the network interface controller (NIC) is full, and determine, in response to a determination that the global packet buffer is full, whether to forward all of the global packet buffer entries. The computing device is additionally configured to compare, in response to a determination not to forward all of the global packet buffer entries, a selection filter to one or more characteristics of the received network packet, and to forward, in response to a determination that the selection filter matches the one or more characteristics of the received network packet, the received network packet to a predefined output. Other embodiments are described herein.
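A rough Go sketch of that decision tree, with the forward-all decision reduced to a flag for brevity and the selection filter modeled as a predicate over packet characteristics (every name here is hypothetical, not from the patent):

package main

import "fmt"

// packet carries the characteristics the selection filter is matched against.
type packet struct {
	srcIP string
	proto string
}

// nic models the relevant state from the abstract: a bounded global packet
// buffer, a forward-all decision, and a selection filter.
type nic struct {
	buffer     []packet
	capacity   int
	forwardAll bool
	filter     func(packet) bool // selection filter over packet characteristics
}

// receive buffers the packet if there is room; on overflow, it either
// forwards all buffered entries or forwards only a received packet that
// matches the selection filter to a predefined output.
func (n *nic) receive(p packet) {
	if len(n.buffer) < n.capacity {
		n.buffer = append(n.buffer, p)
		return
	}
	// Global packet buffer is full: first decide whether to forward
	// all of the buffered entries to make room.
	if n.forwardAll {
		for _, q := range n.buffer {
			fmt.Printf("forwarding buffered %+v to predefined output\n", q)
		}
		n.buffer = append(n.buffer[:0], p)
		return
	}
	// Otherwise compare the selection filter to the received packet.
	if n.filter(p) {
		fmt.Printf("forwarding received %+v to predefined output\n", p)
	} else {
		fmt.Printf("dropping %+v\n", p)
	}
}

func main() {
	n := &nic{
		capacity:   1,
		forwardAll: false,
		filter:     func(p packet) bool { return p.proto == "udp" },
	}
	n.receive(packet{"10.0.0.1", "tcp"}) // buffered
	n.receive(packet{"10.0.0.2", "udp"}) // buffer full, matches filter
	n.receive(packet{"10.0.0.3", "tcp"}) // buffer full, no match
}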
Flow table aging optimized for DRAM access
A flow table management system can include a hardware memory module communicatively coupled to a network interface card. The hardware memory module is configured to store a flow table including a plurality of network flow entries. The network interface card further includes a flow table age cache configured to store a set of recently active network flows and a flow table management module configured to manage a duration for which respective network flow entries in the flow table stored in the hardware memory module remain in the flow table using the flow table age cache. In some implementations, age information about each respective flow in the flow table is stored in the hardware memory module in an age state table that is separate from the flow table.
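One way to read the design is that activity updates hit only the small on-NIC age cache, and DRAM is touched in batches during a periodic sweep. A hedged Go sketch of that reading (plain maps stand in for the DRAM-resident tables; every name is hypothetical, not from the patent):

package main

import "fmt"

// Hypothetical model: the flow table and a separate age state table live
// in DRAM (plain maps here), while a small age cache of recently active
// flows lives on the NIC. Funneling activity updates through the cache
// batches the DRAM accesses.
type flowTable struct {
	flows    map[string]string // DRAM: flow key -> forwarding entry
	ageState map[string]int    // DRAM: flow key -> last-active epoch
	ageCache map[string]int    // on-NIC: recently active flows
	epoch    int
	maxIdle  int // epochs a flow may stay idle before eviction
}

// touch records activity in the cheap on-NIC age cache only.
func (t *flowTable) touch(key string) { t.ageCache[key] = t.epoch }

// ageSweep flushes the age cache to the DRAM-resident age state table,
// then evicts flows whose entries have gone stale.
func (t *flowTable) ageSweep() {
	for k, e := range t.ageCache {
		t.ageState[k] = e // one DRAM write per recently active flow
		delete(t.ageCache, k)
	}
	for k, e := range t.ageState {
		if t.epoch-e > t.maxIdle {
			delete(t.flows, k)
			delete(t.ageState, k)
			fmt.Printf("aged out flow %s\n", k)
		}
	}
	t.epoch++
}

func main() {
	t := &flowTable{
		flows:    map[string]string{"f1": "port1", "f2": "port2"},
		ageState: map[string]int{"f1": 0, "f2": 0},
		ageCache: map[string]int{},
		epoch:    1,
		maxIdle:  1,
	}
	for round := 0; round < 3; round++ {
		t.touch("f1") // only f1 stays active
		t.ageSweep()
	}
	fmt.Println("surviving flows:", t.flows)
}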
CLOUD SERVER SYSTEM
Provided is a cloud server system comprising a plurality of multi-root input/output virtualized PCIe switches (MR-IOV switches) that are interconnected with each other. The cloud server system based on the MR-IOV PCIe switch can well meet the design requirements of cloud servers, with a high performance-to-power-consumption ratio, strong overall service capability, low cost, low power consumption, and high energy efficiency. I/O virtualization is realized at the architecture level, thus maximally ensuring the performance of the server.
Receive buffer management
Examples described herein can be used to allocate replacement receive buffers for use by a network interface, switch, or accelerator. Multiple refill queues can be used to receive identifications of available receive buffers. A refill processor can select one or more identifications from a refill queue and allocate them to a buffer queue. None of the refill queues is locked from receiving identifications of available receive buffers; merely one of the refill queues is accessed at a time to provide identifications of available receive buffers. Identifications of available receive buffers from the buffer queue are provided to the network interface, switch, or accelerator to store the content of received packets.
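A Go sketch of the refill scheme, using buffered channels as refill queues that stay open to writers while the refill processor drains only one queue at a time, round-robin (refillManager and all other names are hypothetical, not from the patent):

package main

import "fmt"

// bufferID identifies an available receive buffer (hypothetical).
type bufferID int

// refillManager models the scheme in the abstract: several refill queues
// accept buffer identifications concurrently (none is locked for writers),
// while the refill processor drains only one refill queue at a time into
// the buffer queue consumed by the network interface, switch, or accelerator.
type refillManager struct {
	refillQueues []chan bufferID // producers may post to any queue at any time
	bufferQueue  []bufferID      // allocation target read by the device
	next         int             // which refill queue the processor reads next
}

// refill drains the current refill queue (and only that one) into the
// buffer queue, then advances round-robin to the next refill queue.
func (m *refillManager) refill() {
	q := m.refillQueues[m.next]
	for {
		select {
		case id := <-q:
			m.bufferQueue = append(m.bufferQueue, id)
		default:
			m.next = (m.next + 1) % len(m.refillQueues)
			return
		}
	}
}

func main() {
	m := &refillManager{refillQueues: []chan bufferID{
		make(chan bufferID, 4), make(chan bufferID, 4),
	}}
	m.refillQueues[0] <- 100 // e.g. posted by one core
	m.refillQueues[1] <- 200 // posted by another core, no lock contention
	m.refill()               // drains refill queue 0 only
	m.refill()               // drains refill queue 1
	fmt.Println("buffers available to NIC:", m.bufferQueue)
}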
Network interface connection teaming system
A network connection teaming system includes a processing system coupled to a memory system in an IHS chassis. The memory system is operable to receive instructions that, when executed by the processing system, cause the processing system to provide an operating system (OS). At least one network interface controller (NIC) including a plurality of network connections is located in the IHS chassis and coupled to the processing system. The NIC(s) are not directly visible to an OS provided by the processing system. A NIC teaming controller is coupled between the processing system and the NIC(s). The NIC teaming controller includes a plurality of hardware connections that are configurable to team the plurality of network connections included on the NIC(s) to provide at least one teamed network connection. The NIC teaming controller presents the at least one teamed network connection to an OS provided by the processing system.
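As an illustration only, the teamed connection the OS sees might behave like the following Go sketch; round-robin is just one possible teaming policy, and every name here is hypothetical rather than taken from the patent:

package main

import "fmt"

// netConn is one physical network connection on a NIC (hypothetical model).
type netConn struct{ name string }

// teamedConn is what the NIC teaming controller exposes to the OS: a single
// connection that spreads traffic across the underlying NIC connections,
// which themselves stay invisible to the OS.
type teamedConn struct {
	members []netConn
	rr      int // round-robin index (one simple teaming policy)
}

// Send picks the next member connection and transmits the frame through it.
func (t *teamedConn) Send(frame []byte) {
	c := t.members[t.rr%len(t.members)]
	t.rr++
	fmt.Printf("teamed connection: %d bytes out via hidden %s\n", len(frame), c.name)
}

func main() {
	// The OS sees only this one teamed connection, not the member ports.
	team := &teamedConn{members: []netConn{{"nic0-port0"}, {"nic0-port1"}}}
	team.Send([]byte("frame-a"))
	team.Send([]byte("frame-b"))
}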