Patent classifications
G06F2213/3808
ADVANCED CENTRALIZED CHRONOS NoC
System and methods for an Advanced Centralized Chronos Network on Chip (ACC-NoC) design are disclosed. The ACC-NoC is able to efficiently satisfy interconnect traffic requirements of modern Systems on Chip and simplify top-level timing closure while providing high throughput and low latency. The ACC-NoC in a System on Chip may include a centralized intelligent switch and arbitration engine communicatively coupled to different intellectual property (IP) blocks through a series of one or more Chronos Channels which transmit data using delay-insensitive (DI) codes and quasi-delay-insensitive (QDI) logic.
SYSTEMS AND METHODS FOR SMART NETWORK INTERFACE CARD-INITIATED SERVER MANAGEMENT
An information handling system may include a processor, a management controller communicatively coupled to the processor and configured for out-of-band management of the information handling system, and a smart network interface card communicatively coupled to the processor and the management controller, and configured to obtain a secret for authenticating the smart network interface card to the management controller, request an access token reference from the management controller, the request including the secret and an identifier of the smart network interface card in order to authenticate the smart network interface card to the management controller, in response to the request for the access token reference, receive the access token reference, and communicate a management task request to the management controller using the access token reference.
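The token flow described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class names, the use of a hex token reference, and the in-memory token table are all assumptions made for the example.

```python
import secrets

class ManagementController:
    """Issues access-token references to smart NICs that present a valid secret."""

    def __init__(self, provisioned_secrets):
        # nic_id -> secret provisioned to the smartNIC out of band
        self._secrets = provisioned_secrets
        self._tokens = {}  # access-token reference -> nic_id

    def request_token(self, nic_id, secret):
        # authenticate the smartNIC using its identifier plus the secret
        if self._secrets.get(nic_id) != secret:
            raise PermissionError("smartNIC authentication failed")
        token_ref = secrets.token_hex(16)
        self._tokens[token_ref] = nic_id
        return token_ref

    def handle_task(self, token_ref, task):
        # a management task request is honored only with a known token reference
        nic_id = self._tokens.get(token_ref)
        if nic_id is None:
            raise PermissionError("unknown access token reference")
        return f"executed {task!r} for {nic_id}"

mc = ManagementController({"nic-0": "s3cret"})
token = mc.request_token("nic-0", "s3cret")
result = mc.handle_task(token, "power-cycle")
```

The design point is that the smartNIC never re-sends the secret on each management task; it authenticates once and reuses the short-lived token reference.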
MANAGING FAILOVER BETWEEN INFORMATION HANDLING SYSTEMS
Managing failover between information handling systems, including receiving, at an interface of a smart network interface card (smartNIC) of a primary information handling system, a packet, the primary information handling system communicatively coupled to a secondary information handling system; determining whether the packet was transmitted by a network interface card (NIC) of the secondary information handling system; determining that the packet was transmitted by the NIC of the secondary information handling system, and in response, determining whether the packet is an address resolution protocol (ARP) request; determining that the packet is an ARP request, and in response, cloning a medium access control (MAC) address of the NIC of the secondary information handling system at the smartNIC of the primary information handling system; and forwarding the ARP request to an uplink connection.
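The packet-handling steps above can be sketched as a small handler. This is an illustrative model only, with packets as plain dictionaries and lists standing in for hardware ports; none of these names come from the patent.

```python
ETH_TYPE_ARP = 0x0806  # EtherType for ARP

class PrimarySmartNic:
    def __init__(self, secondary_macs):
        self.secondary_macs = set(secondary_macs)  # MACs of the secondary system's NICs
        self.cloned_macs = set()                   # MACs cloned at this smartNIC
        self.uplink = []                           # stand-in for the uplink connection

    def receive(self, packet):
        # step 1: was the packet transmitted by the secondary system's NIC?
        if packet["src_mac"] not in self.secondary_macs:
            return
        # step 2: is it an ARP request?
        if packet["eth_type"] != ETH_TYPE_ARP:
            return
        # step 3: clone the secondary NIC's MAC address at the smartNIC
        self.cloned_macs.add(packet["src_mac"])
        # step 4: forward the ARP request to the uplink connection
        self.uplink.append(packet)

nic = PrimarySmartNic(["aa:bb:cc:00:00:01"])
nic.receive({"src_mac": "aa:bb:cc:00:00:01", "eth_type": ETH_TYPE_ARP})
```

After the call, the secondary NIC's MAC is cloned and the ARP request sits on the uplink, which is the condition that lets the primary system answer traffic on the secondary's behalf during failover.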
Independent central processing unit (CPU) networking using an intermediate device
A computer device includes a central processing unit (CPU), a network adapter, a bus, and an intermediate device, where the intermediate device is coupled to both the CPU and the network adapter through the bus, and is configured to establish a correspondence between address information of an agent unit and address information of a function unit, and implement forwarding of a packet between the CPU and the network adapter based on the correspondence.
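A rough sketch of that correspondence-based forwarding follows. It is a hedged model under assumed names (the address strings, dictionary-based mapping, and method names are invented for illustration): the intermediate device binds an agent-unit address to a function-unit address and rewrites packet destinations when forwarding in either direction.

```python
class IntermediateDevice:
    """Forwards packets between CPU and network adapter via an address correspondence."""

    def __init__(self):
        self.agent_to_function = {}
        self.function_to_agent = {}

    def bind(self, agent_addr, function_addr):
        # establish the correspondence between agent-unit and function-unit addresses
        self.agent_to_function[agent_addr] = function_addr
        self.function_to_agent[function_addr] = agent_addr

    def forward_from_cpu(self, packet):
        # CPU addressed the agent unit; rewrite to reach the function unit
        packet["dst"] = self.agent_to_function[packet["dst"]]
        return packet

    def forward_from_adapter(self, packet):
        # adapter addressed the function unit; rewrite to reach the agent unit
        packet["dst"] = self.function_to_agent[packet["dst"]]
        return packet

dev = IntermediateDevice()
dev.bind(agent_addr="agent:0", function_addr="vf:7")
outbound = dev.forward_from_cpu({"dst": "agent:0", "payload": b"hi"})
inbound = dev.forward_from_adapter({"dst": "vf:7", "payload": b"ack"})
```

The effect is that the CPU and the network adapter each see only their local address space, with the intermediate device keeping them decoupled.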
ELECTRONIC SYSTEM AND RELATED METHOD FOR PROVIDING MULTIPLE HOSTS WITH NETWORK CONNECTIVITY AND REMOTE WAKE-UP
An electronic system includes a display device, a first host and a second host. The display device includes a network interface port for connecting to an external network, a first port, a second port, a control unit for recording information associated with a designated network bridge target, and a hub unit for controlling the signal transmission between the network interface port, the first port, the second port and the control unit. The first host is coupled to the first port and configured to activate network bridge function when set as the designated network bridge target, thereby connecting to the external network. The second host is coupled to the second port and configured to connect to the external network using the network bridge function via the second port, the hub unit and the first port.
Direct response to IO request in storage system having an intermediary target apparatus
An apparatus comprises at least one processing device comprising a processor coupled to memory. The at least one processing device is configured to obtain an input-output request issued by an application executing on a compute node via at least one network and to identify a storage node as corresponding to the obtained input-output request based at least in part on the obtained input-output request. The at least one processing device is configured to associate information corresponding to the compute node with the input-output request and to submit the input-output request and the associated information that corresponds to the compute node to the storage node via the at least one network. The storage node is configured to submit a response to the input-output request to the compute node via the at least one network based at least in part on the information.
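The direct-response flow above can be sketched as follows, under assumed names: the intermediary associates the compute node's information (here, its address) with the input-output request before submitting it to the identified storage node, so the storage node replies straight to the compute node rather than back through the intermediary.

```python
class StorageNode:
    def handle(self, io_request):
        # respond directly to the compute node named in the request
        return {"to": io_request["reply_to"], "status": "ok"}

class IntermediaryTarget:
    def __init__(self, storage_nodes):
        self.storage_nodes = storage_nodes  # e.g. volume name -> storage node

    def submit(self, io_request, compute_node_addr):
        # identify the storage node corresponding to the obtained IO request
        target = self.storage_nodes[io_request["volume"]]
        # associate compute-node information with the request
        io_request["reply_to"] = compute_node_addr
        return target.handle(io_request)

node = StorageNode()
mid = IntermediaryTarget({"vol1": node})
resp = mid.submit({"volume": "vol1", "op": "read"}, compute_node_addr="10.0.0.5")
```

In a real deployment the response path would be a network send to `reply_to`; the sketch returns it synchronously only to keep the example self-contained.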
Metadata Processing Method in Storage Device, and Related Device
In a metadata processing method, a network interface card in a storage device receives an input/output (I/O) request, where the I/O request includes a data read request or a data write request; the network interface card executes a metadata processing task corresponding to the I/O request; and when determining that the metadata processing task fails to be executed, the network interface card requests a CPU in the storage device to execute the metadata processing task.
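The NIC-first, CPU-fallback pattern above can be sketched in a few lines. The failure condition and all function names are assumptions made for illustration; in practice the NIC might fail for reasons such as a metadata cache miss or a task exceeding its offload capability.

```python
def nic_process_metadata(task):
    # assume the NIC can only handle simple metadata tasks
    if task.get("complex"):
        raise RuntimeError("metadata task exceeds NIC capability")
    return f"nic handled task {task['id']}"

def cpu_process_metadata(task):
    # the storage device's CPU is the fallback executor
    return f"cpu handled task {task['id']}"

def handle_io_request(task):
    try:
        # the NIC first executes the metadata processing task itself
        return nic_process_metadata(task)
    except RuntimeError:
        # on failure, the NIC requests the CPU to execute the same task
        return cpu_process_metadata(task)

simple = handle_io_request({"id": 1})                  # handled on the NIC
complex_ = handle_io_request({"id": 2, "complex": True})  # falls back to the CPU
```

The benefit of this split is that the common case stays on the NIC's fast path while only exceptional tasks consume CPU cycles.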
DATA TRANSMISSION METHOD, CHIP, AND DEVICE
A data transmission method is provided. The method includes: a network interface card of a source device obtains a first notification message and a second notification message, wherein the first notification message indicates that a first to-be-processed remote direct memory access (RDMA) request exists in a first queue of the source device, the first queue stores a request of a first service application in the source device, the second notification message indicates that a second to-be-processed RDMA request exists in a second queue of the source device, and the second queue stores a request of a second service application in the source device; and the network interface card determines a processing sequence of the first queue and the second queue based on service levels, and sends the first to-be-processed RDMA request and the second to-be-processed RDMA request to a destination device according to the processing sequence.
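The queue-ordering step above can be sketched as a small scheduler. The numeric service levels and queue names are invented for the example; only the shape of the behavior (order notified queues by service level, then send their pending RDMA requests in that order) follows the description.

```python
def schedule(notified_queues, service_level):
    """Order notified queues by service level (lower number = higher priority)."""
    return [q for _, q in sorted((service_level[q], q) for q in notified_queues)]

def send_pending(queues, order):
    # send each queue's pending RDMA requests in the scheduled sequence
    sent = []
    for q in order:
        sent.extend(queues[q])
    return sent

# two service applications, each with its own queue of pending RDMA requests
queues = {"q1": ["rdma-req-A"], "q2": ["rdma-req-B"]}
levels = {"q1": 3, "q2": 1}  # q2's service application has the higher service level

order = schedule(["q1", "q2"], levels)   # notifications received for both queues
sent = send_pending(queues, order)       # q2's request is sent before q1's
```

Tying the processing sequence to service levels rather than notification arrival order is what lets a latency-sensitive application's RDMA requests jump ahead of bulk traffic.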
Network architecture providing high speed storage access through a PCI express fabric between a compute node and a storage server within an array of compute nodes
A network architecture includes a streaming array that includes a plurality of compute sleds, wherein each compute sled includes one or more compute nodes. The network architecture includes a network storage of the streaming array. The network architecture includes a PCIe fabric of the streaming array configured to provide direct access to the network storage from a plurality of compute nodes of the streaming array. The PCIe fabric includes one or more array-level PCIe switches, wherein each array-level PCIe switch is communicatively coupled to corresponding compute nodes of corresponding compute sleds and communicatively coupled to the network storage. The network storage is shared by the plurality of compute nodes of the streaming array.
STATE SHARING BETWEEN SMART NICS
Some embodiments provide a method for synchronizing state between multiple smart NICs of a host computer that perform operations using dynamic state information. At a first smart NIC of the plurality of smart NICs, the method stores a set of dynamic state information. The method synchronizes the set of dynamic state information across a communication channel that connects the smart NICs so that each of the smart NICs also stores the set of dynamic state information.
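The synchronization method above can be sketched with a list standing in for the communication channel that connects the smart NICs. This is a hedged model, not the embodiment itself: state is a key-value dictionary and "synchronize" is a simple broadcast to every other NIC on the channel.

```python
class SmartNic:
    def __init__(self, name, channel):
        self.name = name
        self.state = {}        # dynamic state information held by this NIC
        self.channel = channel # shared channel connecting all smart NICs
        channel.append(self)

    def store(self, key, value):
        # store a piece of dynamic state at this NIC...
        self.state[key] = value
        # ...and synchronize it across the channel
        self._synchronize(key, value)

    def _synchronize(self, key, value):
        for nic in self.channel:
            if nic is not self:
                nic.state[key] = value

channel = []  # stand-in for the inter-NIC communication channel
nic1 = SmartNic("nic1", channel)
nic2 = SmartNic("nic2", channel)
nic1.store("flow:10.0.0.1:443", "established")  # e.g. connection-tracking state
```

After the store, every smart NIC on the channel holds the same dynamic state entry, which is what allows any of them to handle traffic for the tracked flow.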