Patent classifications
G06F2213/28
SYSTEM WITH CACHE-COHERENT MEMORY AND SERVER-LINKING SWITCH
A system and method for managing memory resources. In some embodiments the system includes a first server, a second server, and a server-linking switch connected to the first server and to the second server. The first server may include a stored-program processing circuit, a cache-coherent switch, and a first memory module. In some embodiments, the first memory module is connected to the cache-coherent switch, the cache-coherent switch is connected to the server-linking switch, and the stored-program processing circuit is connected to the cache-coherent switch.
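The topology described above — a per-server cache-coherent switch fronting local memory, with a server-linking switch routing requests between servers — could be sketched roughly as follows. This is a hypothetical illustration only; the class and method names are assumptions, not taken from the patent.

```python
class CacheCoherentSwitch:
    """Models one server's cache-coherent switch fronting its memory module."""
    def __init__(self, name):
        self.name = name
        self.memory = {}  # address -> value, standing in for the memory module

    def read(self, addr):
        return self.memory.get(addr)

    def write(self, addr, value):
        self.memory[addr] = value


class ServerLinkingSwitch:
    """Routes memory requests between the servers' cache-coherent switches."""
    def __init__(self):
        self.servers = {}

    def attach(self, server_id, cc_switch):
        self.servers[server_id] = cc_switch

    def remote_read(self, server_id, addr):
        # A remote server's request traverses the linking switch to the
        # target server's cache-coherent switch and its memory module.
        return self.servers[server_id].read(addr)

    def remote_write(self, server_id, addr, value):
        self.servers[server_id].write(addr, value)
```

In this sketch, the second server reaches the first server's memory module only through the server-linking switch and the first server's cache-coherent switch, mirroring the connectivity the abstract describes.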
DYNAMIC RANDOM ACCESS MEMORY (DRAM) COMPONENT FOR HIGH-PERFORMANCE, HIGH-CAPACITY REGISTERED MEMORY MODULES
The embodiments described herein relate to technologies of dynamic random access memory (DRAM) components for high-performance, high-capacity registered memory modules, such as registered dual in-line memory modules (RDIMMs). One DRAM component may include a set of memory cells and steering logic. The steering logic may include a first data interface and a second data interface. The first and second data interfaces are selectively coupled to a controller component in a first mode; in a second mode, the first data interface is selectively coupled to the controller component and the second data interface is selectively coupled to a second DRAM component.
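The two-mode steering described above could be sketched as a simple routing table. This is an assumed illustration; the mode names and interface labels are invented for clarity and do not come from the patent.

```python
from enum import Enum


class Mode(Enum):
    DIRECT = 1   # "first mode": both data interfaces face the controller
    CHAINED = 2  # "second mode": second interface faces another DRAM component


def route_interfaces(mode):
    """Return the far-end target of each data interface under the given mode."""
    if mode is Mode.DIRECT:
        return {"if1": "controller", "if2": "controller"}
    if mode is Mode.CHAINED:
        return {"if1": "controller", "if2": "second_dram"}
    raise ValueError(f"unknown mode: {mode!r}")
```

The chained mode is what allows a second DRAM component to sit behind the first, which is how such steering logic can raise module capacity without widening the controller-facing interface.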
Common server SAN core solution to enable software-defined storage
In an aspect of the disclosure, a method, a computer-readable medium, and a computer system are provided. The computer system includes a baseboard management controller (BMC). The BMC receives a first message from a first remote device on a management network. The BMC determines whether the first message is directed to a storage service or fabric service running on a host of the BMC. The host is a storage device. When the first message is directed to the storage service or fabric service, the BMC extracts a service management command from the first message. The BMC sends, through a BMC communication channel, a second message containing the service management command to the host. The BMC communication channel is established for communicating baseboard management commands between the BMC and the host.
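The BMC's dispatch decision described above can be sketched as follows. This is a hypothetical sketch; the message fields, service names, and return values are assumptions made for illustration.

```python
def handle_message(message, bmc_channel_send):
    """Dispatch a management-network message at the BMC.

    If the message targets the host's storage or fabric service, extract the
    service management command and forward it to the host as a second message
    over the BMC-host communication channel; otherwise the BMC handles it.
    """
    target = message.get("target")
    if target in ("storage_service", "fabric_service"):
        command = message["command"]            # extracted service management command
        bmc_channel_send({"command": command})  # second message to the host
        return "forwarded_to_host"
    return "handled_by_bmc"
```

A caller would supply `bmc_channel_send` as whatever transport carries baseboard management commands between the BMC and its host.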
METHOD FOR DETERMINING A RESET CAUSE OF AN EMBEDDED CONTROLLER FOR A VEHICLE AND AN EMBEDDED CONTROLLER FOR A VEHICLE TO WHICH THE METHOD IS APPLIED
A method for determining a reset cause of an embedded controller for a vehicle includes: executing the embedded controller by a central processing unit (CPU) of the embedded controller; generating, by the embedded controller, a log based on reset-related information collected from the running embedded controller and a sequence number generated by the running embedded controller; determining, by a log analyzer of the embedded controller, cause information of a reset trigger by analyzing the log including a reset trigger log, and determining reset cause information of the embedded controller based on the cause information of the reset trigger; storing, by the embedded controller, a cause analysis result log including the reset cause information in a cause analysis result buffer; and storing the cause analysis result log in a non-volatile storage device.
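The analyze-and-persist flow described above might look roughly like the following. The log entry shape, field names, and cause derivation are all assumptions for illustration, not details from the patent.

```python
def analyze_reset(logs, cause_buffer, nv_store):
    """Scan collected logs for a reset-trigger entry, derive the reset cause,
    and persist the cause analysis result to the buffer and to non-volatile
    storage. Returns the result log, or None if no reset trigger was found."""
    for entry in logs:
        if entry.get("event") == "reset_trigger":
            result = {
                "seq": entry["seq"],      # sequence number from the running controller
                "cause": entry["info"],   # cause information of the reset trigger
            }
            cause_buffer.append(result)   # cause analysis result buffer
            nv_store.append(result)       # stand-in for the non-volatile storage device
            return result
    return None
```

Persisting the same result in both a RAM-side buffer and non-volatile storage is what lets the cause survive the reset itself and be read back afterward.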
System and method for multi-node communication
A method, computer program product, and computing system for coupling a multi-host remote direct memory access (RDMA) card to at least a pair of central processing units (CPUs). One or more signals may be routed, via the multi-host RDMA card, between the at least a pair of CPUs.
DATA COMMUNICATION SYSTEM, COMPUTER, DATA COMMUNICATION METHOD, AND PROGRAM
An object of the present disclosure is to provide a data communication system, a computing apparatus, a data communication method, and a program capable of highly reliable, low-latency data transfer between computing apparatuses. The present disclosure achieves a highly reliable communication path by directly connecting computing apparatuses via an optical path and transmitting data through the optical path. Further, the present disclosure uses the optical path to achieve RDMA-over-wavelength transmission in which existing RDMA-enabled protocol stacks such as InfiniBand and TCP/IP/Ethernet are eliminated. Eliminating these protocol stacks enables transfer with lower latency than in a case of “simply performing RDMA transmission over the wavelength path”.
Direct memory access circuit, operation method thereof, and method of generating memory access command
A direct memory access (DMA) circuit, its operation method, and a method of generating memory access commands are provided. The DMA circuit is used to access a memory according to a command and includes a register, a first channel controller, and a second channel controller. The operation method of the DMA circuit includes the following steps: decoding the command to obtain a first channel code and a second channel code, the first channel code corresponding to the first channel controller, and the second channel code corresponding to the second channel controller; obtaining a state of the first channel controller from the register according to the first channel code; selecting the second channel controller according to the second channel code; and controlling the second channel controller according to the state.
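The decode-and-select steps listed above could be sketched as follows. The command encoding (one byte per channel code) and all names here are assumptions made for illustration; the patent does not specify them.

```python
def decode_command(command):
    """Assumed encoding: low byte holds the first channel code,
    the next byte holds the second channel code."""
    first_code = command & 0xFF
    second_code = (command >> 8) & 0xFF
    return first_code, second_code


class ChannelController:
    """Minimal stand-in for a DMA channel controller."""
    def __init__(self):
        self.last_state = None

    def control(self, state):
        self.last_state = state


def run_dma(command, register, controllers):
    """Decode the command, read the first channel controller's state from the
    register, select the second channel controller, and control it with that
    state -- mirroring the four steps in the abstract."""
    first_code, second_code = decode_command(command)
    state = register[first_code]             # state of the first channel controller
    second_ctrl = controllers[second_code]   # select the second channel controller
    second_ctrl.control(state)               # control it according to that state
    return state
```

The interesting part of the claimed method is that one channel controller is driven based on the observed state of another, which this sketch makes explicit.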
ADDRESS TRANSLATION CACHE AND SYSTEM INCLUDING THE SAME
An address translation cache (ATC) is configured to store translation entries indicating mapping information between a virtual address and a physical address of a memory device. The ATC includes a plurality of flexible page group caches, a shared cache and a cache manager. Each flexible page group cache stores translation entries corresponding to a page size allocated to the flexible page group cache. The shared cache stores, regardless of page sizes, translation entries that are not stored in the plurality of flexible page group caches. The cache manager allocates a page size to each flexible page group cache, manages cache page information on the page sizes allocated to the plurality of flexible page group caches, and controls the plurality of flexible page group caches and the shared cache based on the cache page information.
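The split between per-page-size group caches and a catch-all shared cache could be sketched like this. The structure and names are assumptions for illustration; the actual cache organization and replacement policy are not given in the abstract.

```python
class AddressTranslationCache:
    """Per-page-size flexible group caches plus a shared cache for entries
    whose page size has no dedicated group."""

    def __init__(self, group_page_sizes):
        # cache manager's allocation: one group cache per allocated page size
        self.groups = {size: {} for size in group_page_sizes}
        self.shared = {}  # stores entries regardless of page size

    def insert(self, vaddr, paddr, page_size):
        # An entry goes to its page size's group cache if one is allocated,
        # otherwise to the shared cache.
        cache = self.groups.get(page_size, self.shared)
        cache[vaddr] = paddr

    def lookup(self, vaddr):
        for cache in list(self.groups.values()) + [self.shared]:
            if vaddr in cache:
                return cache[vaddr]
        return None
```

Dedicating group caches to common page sizes (e.g. 4 KiB and 2 MiB) while the shared cache absorbs the rest is one plausible reading of how the claimed design avoids wasting capacity on rare page sizes.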
METHOD AND SYSTEM FOR COMMUNICATING DATA PACKETS IN REMOTE DIRECT MEMORY ACCESS NETWORKS
The present disclosure describes a method and a system for sending data packets to improve Quality of Service in a Non-Volatile Memory Express (NVMe)-aware Remote Direct Memory Access (RDMA) network, including: receiving, by a host RNIC, an RDMA command from a host initiator, wherein the RDMA command comprises data packets; arranging, by the host RNIC, the data packets based on weights and priorities of RDMA queue pairs; storing, by the host RNIC, the data packets in a host queue from host RDMA queue pairs based on the weights and priorities of the RDMA queue pairs; and sending, by the host RNIC, the data packets through host virtual lanes to a target RNIC.
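The weight- and priority-based arrangement described above could be sketched as a simple ordering rule. The exact scheduling scheme is not given in the abstract; this sketch assumes strict priority first, then weight, with all names invented for illustration.

```python
def arrange_packets(queue_pairs):
    """Order packets from RDMA queue pairs into a single host queue:
    higher priority first, and among equal priorities, higher weight first.
    Each queue pair is a dict with 'priority', 'weight', and 'packets'."""
    ordered = sorted(
        queue_pairs,
        key=lambda qp: (-qp["priority"], -qp["weight"]),
    )
    host_queue = []
    for qp in ordered:
        host_queue.extend(qp["packets"])  # drained into the host queue in order
    return host_queue
```

A real RNIC would more likely interleave packets (e.g. weighted round-robin) rather than fully drain each queue pair, but the sketch shows how weights and priorities jointly decide the host-queue order before transmission over virtual lanes.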