Patent classifications
H04L49/9063
GPU REMOTE COMMUNICATION WITH TRIGGERED OPERATIONS
Methods, devices, and systems for transmitting data over a computer communications network are disclosed. A queue of communications commands can be pre-generated using a central processing unit (CPU) and stored in a device memory of a network interface controller (NIC). Thereafter, if a graphics processing unit (GPU) has data to communicate to a remote GPU, it can store the data in a send buffer, where the location in the buffer is pointed to by a pre-generated command. The GPU can then signal to the interface device that the data is ready, triggering execution of the pre-generated command to send the data.
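The flow described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the names `Command`, `TriggeredQueue`, and `doorbell` are assumptions, and ordinary Python objects stand in for NIC device memory and the network.

```python
from dataclasses import dataclass

@dataclass
class Command:
    buffer_index: int   # send-buffer slot this pre-generated command covers
    length: int         # bytes to transmit once triggered

class TriggeredQueue:
    """Queue of commands pre-generated by the CPU into NIC device memory."""
    def __init__(self, commands, send_buffer, wire):
        self.commands = list(commands)
        self.send_buffer = send_buffer   # buffer shared with the GPU
        self.wire = wire                 # stand-in for the network link
        self.next = 0

    def doorbell(self):
        """GPU signals that data for the next command is ready; the NIC
        executes the pre-generated command without CPU involvement."""
        cmd = self.commands[self.next]
        self.next += 1
        data = self.send_buffer[cmd.buffer_index][:cmd.length]
        self.wire.append(bytes(data))    # execute the pre-generated send

# CPU sets up the queue once ...
wire = []
send_buffer = [bytearray(8), bytearray(8)]
q = TriggeredQueue([Command(0, 4), Command(1, 8)], send_buffer, wire)

# ... later the GPU fills a buffer and rings the doorbell.
send_buffer[0][:4] = b"gpu0"
q.doorbell()
```

The point of the split is that the CPU does the expensive command generation ahead of time, so the GPU's only runtime obligation is a cheap signal.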
Efficient storage of sequentially transmitted packets in a network device
A sequence of packets is stored in a memory of the network device such that a current packet in the sequence of packets is stored at a predetermined distance following a preceding packet in the sequence. Lengths of corresponding ones of the packets stored in the memory are indicated in the memory. The packets are sequentially read from the memory based on the indicated lengths of corresponding ones of the packets stored in the memory. An operation is performed on the ones of the packets read from the memory. Subsequent to performing the operation, at least some of the packets are written back to the memory. Ones of the packets are written to the memory beginning at a memory location following a respective preceding packet in the sequence by a predetermined distance.
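A minimal sketch of the layout, assuming a fixed stride `D` between packet start addresses and a side array of per-packet lengths (both names are illustrative):

```python
D = 16                      # predetermined distance between packet starts
mem = bytearray(4 * D)      # stand-in for network-device memory
lengths = []                # per-packet lengths, indicated alongside memory

def store(packets):
    for i, p in enumerate(packets):
        assert len(p) <= D
        mem[i * D : i * D + len(p)] = p
        lengths.append(len(p))

def read_all():
    # Sequential read: each packet starts exactly D bytes after the previous.
    return [bytes(mem[i * D : i * D + n]) for i, n in enumerate(lengths)]

store([b"abc", b"defgh"])
processed = [p.upper() for p in read_all()]   # the per-packet operation
for i, p in enumerate(processed):             # write back at the same stride
    mem[i * D : i * D + len(p)] = p
```

Because the start of every packet is known from the stride alone, no per-packet address table is needed; the lengths suffice to recover packet boundaries.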
System and method for centralized virtual interface card driver logging in a network environment
A method is provided in one example and includes creating a staging queue in a virtual interface card (VIC) adapter firmware of a server based on a log policy; receiving a log message from a VIC driver in the server; copying the log message to the staging queue; generating a VIC control message comprising the log message from the staging queue; and sending the VIC control message to a switch.
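The sequence of steps can be sketched as below; the class and field names (`VicAdapter`, `staging`, the control-message dictionary) are assumptions for illustration, not the patent's interfaces.

```python
from collections import deque

class VicAdapter:
    """Adapter firmware holding a staging queue sized by the log policy."""
    def __init__(self, policy_depth):
        self.staging = deque(maxlen=policy_depth)

    def on_driver_log(self, msg):
        self.staging.append(msg)          # copy log message to staging queue

    def flush_to_switch(self, switch_link):
        while self.staging:
            msg = self.staging.popleft()
            # Wrap the staged log in a VIC control message and send it on.
            switch_link.append({"type": "VIC_CONTROL", "log": msg})

switch_link = []
adapter = VicAdapter(policy_depth=2)
adapter.on_driver_log("link up")
adapter.on_driver_log("queue reset")
adapter.flush_to_switch(switch_link)
```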
Packet validation in virtual network interface architecture
Roughly described, a network interface device receiving data packets from a computing device for transmission onto a network, the data packets having a certain characteristic, transmits a packet only if the sending queue has authority to send packets having that characteristic. The data packet characteristics can include transport protocol number, source and destination port numbers, and source and destination IP addresses, for example. Authorizations can be programmed into the NIC by a kernel routine upon establishment of the transmit queue, based on the privilege level of the process for which the queue is being established. In this way, a user process can use an untrusted user-level protocol stack to initiate data transmission onto the network, while the NIC protects the remainder of the system or network from certain kinds of compromise.
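A sketch of the authorization check, assuming packets are characterized here by protocol and source port only (the field names and table shape are illustrative):

```python
# queue id -> set of allowed (protocol, source port) characteristics,
# programmed by a trusted kernel routine at transmit-queue setup time.
authorized = {1: {("tcp", 8080)}}

def nic_transmit(queue_id, packet, wire):
    """NIC-side check: transmit only if the queue may send such packets."""
    key = (packet["proto"], packet["src_port"])
    if key in authorized.get(queue_id, set()):
        wire.append(packet)
        return True
    return False            # untrusted user-level stack tried to spoof: drop

wire = []
nic_transmit(1, {"proto": "tcp", "src_port": 8080, "payload": b"ok"}, wire)
nic_transmit(1, {"proto": "udp", "src_port": 53, "payload": b"bad"}, wire)
```

The kernel writes `authorized` once at queue establishment; thereafter the user-level stack never needs kernel mediation on the fast path, yet cannot send packets outside its grant.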
SHARED MEMORY COMMUNICATION IN SOFTWARE DEFINED NETWORKING
A virtual switch executes on a computer system to forward packets to one or more destinations. A method of the disclosure includes receiving, by a virtual switch application being executed by a processing device, a packet comprising a header; determining that the packet does not match a distribution table associated with the virtual switch; and storing, by the processing device, the packet to a shared memory buffer that is accessible to a network controller application being executed by the processing device.
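The miss path can be sketched as follows; the table keying on a source/destination pair and the names `distribution_table` and `shared_buffer` are assumptions for illustration.

```python
distribution_table = {("10.0.0.1", "10.0.0.2"): "port1"}
shared_buffer = []   # stands in for memory shared with the controller app

def vswitch_receive(packet, output_ports):
    key = (packet["src"], packet["dst"])
    port = distribution_table.get(key)
    if port is not None:
        output_ports.setdefault(port, []).append(packet)  # fast path: forward
    else:
        shared_buffer.append(packet)   # miss: hand off via shared memory

ports = {}
vswitch_receive({"src": "10.0.0.1", "dst": "10.0.0.2"}, ports)  # match
vswitch_receive({"src": "10.0.0.9", "dst": "10.0.0.2"}, ports)  # miss
```

Because switch and controller run on the same processing device, the shared buffer replaces a packet-in message over a network channel, avoiding a copy across the network stack.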
Cloud architecture with state-saving middlebox scaling
An enterprise computer system efficiently adjusts the number of middleboxes associated with the enterprise, for example, with changes in demand, by transferring not only packet flows but also the middlebox states associated with those flows. Loss-less transfer, preventing the loss of packets and their state, and order-preserving transfer, preserving packet ordering, may be provided by a two-step transfer process in which packets are buffered during the transfer and are marked to be processed by a receiving middlebox before processing by that middlebox of ongoing packets for the given flow.
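A sketch of the two-step handover, under the simplifying assumption that buffered packets are tagged "marked" and released at the receiver before any ongoing packets of the same flow (all names are illustrative):

```python
def migrate_flow(flow, src_state, buffered, dst):
    """Move per-flow middlebox state, then release buffered packets."""
    # Step 1: transfer the state associated with the flow.
    dst["state"][flow] = src_state.pop(flow)
    # Step 2: marked (buffered) packets are processed first, in order.
    for pkt in buffered:
        dst["processed"].append(("marked", pkt))

dst = {"state": {}, "processed": []}
src_state = {"flowA": {"count": 3}}
migrate_flow("flowA", src_state, buffered=[b"p1", b"p2"], dst=dst)
# Ongoing packets for flowA are only handled after the marked ones.
dst["processed"].append(("ongoing", b"p3"))
```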
Apparatus and method for media access control scheduling with a priority calculation hardware coprocessor
An apparatus includes a Media Access Control (MAC) scheduler to generate a priority value calculation request with a specified formula and a list of metrics. A hardware based priority value calculation coprocessor services the priority value calculation request in accordance with the specified formula and the list of metrics.
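The division of labor can be sketched as below. The weighted-sum formula is an assumption for illustration; the patent leaves the formula to the scheduler, and in hardware the evaluation would run in parallel rather than as a Python call.

```python
def coprocessor_service(request):
    """Coprocessor side: evaluate the specified formula over the metrics."""
    formula, metrics = request["formula"], request["metrics"]
    return formula(metrics)

# MAC scheduler side: build a request naming a formula and a list of metrics.
request = {
    "formula": lambda m: 2 * m["queue_depth"] + m["wait_time"],
    "metrics": {"queue_depth": 3, "wait_time": 5},
}
priority = coprocessor_service(request)
```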
Packet engine that uses PPI addressing
Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. The packet engine uses linear memory addressing to write the packet portions into the memory, and to read the packet portions from the memory.
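The indirection can be sketched as follows: PDRSDs name packet portions only by PPI, and only the packet engine translates a PPI to a linear memory address. The class shape and the bump-pointer allocation are assumptions for illustration.

```python
class PacketEngine:
    def __init__(self, mem_size):
        self.mem = bytearray(mem_size)
        self.ppi_table = {}      # PPI -> (linear address, length)
        self.next_addr = 0       # simple linear allocator

    def store(self, ppi, portion):
        addr = self.next_addr    # the engine, not the PDRSD, picks addresses
        self.mem[addr : addr + len(portion)] = portion
        self.ppi_table[ppi] = (addr, len(portion))
        self.next_addr += len(portion)

    def load(self, ppi):
        addr, n = self.ppi_table[ppi]
        return bytes(self.mem[addr : addr + n])

engine = PacketEngine(64)
engine.store(ppi=7, portion=b"hdr")    # from one PDRSD
engine.store(ppi=9, portion=b"body")   # from another PDRSD
```

The PDRSDs thus share one memory without coordinating addresses among themselves; address management is centralized in the engine.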
Programmable logic applications for an array of high on/off ratio and high speed non-volatile memory cells
A non-volatile programmable circuit, configurable to perform logic functions, is provided. The programmable circuit can employ two-terminal non-volatile memory devices to store information, thereby mitigating or avoiding disturbance of programmed data in the absence of external power. Two-terminal resistive switching memory devices having high current on/off ratios and fast switching times can also be employed for high performance, and facilitating a high density array. For look-up table applications, input/output response times can be several nanoseconds or less, facilitating much faster response times than a memory array access for retrieving stored data.
Multi-host Ethernet controller
Described herein is a system having a multi-host Ethernet controller (102) configured to provide communication and control between two or more independent host processors (104) and a network device. In one implementation, the multi-host Ethernet controller (102) has an integrated L2 switch (110) that enables a plurality of independent host systems to access the same physical gigabit network port concurrently. Each host processor (104) sees the controller as an independent PCI-based network controller and accesses it using its own mini-port driver. Common programming parameters, such as link speed or Inter-Packet Gap (IPG), are programmed by a virtualization engine. Packets from the network (LAN) are switched on the MAC destination address and sent to the corresponding host. Packets from each host processor (104) are forwarded to the network interface or to another host processor (104) based on the MAC destination address.
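The MAC-based forwarding decision can be sketched as below; the MAC addresses, port names, and table shape are illustrative assumptions.

```python
# Integrated L2 switch: destination MAC -> local host, else out the LAN port.
mac_table = {"aa:01": "host0", "aa:02": "host1"}

def switch_forward(packet, hosts, lan):
    dst = mac_table.get(packet["dst_mac"])
    if dst is not None:
        hosts.setdefault(dst, []).append(packet)   # deliver to a local host
    else:
        lan.append(packet)                         # unknown MAC: to network

hosts, lan = {}, []
switch_forward({"dst_mac": "aa:02", "data": b"x"}, hosts, lan)  # host-to-host
switch_forward({"dst_mac": "ff:ff", "data": b"y"}, hosts, lan)  # to the LAN
```

Host-to-host traffic never leaves the controller, so the shared gigabit port carries only traffic actually destined for the network.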