Patent classifications
G06F2212/206
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MANAGING ADDRESS MAPPING IN STORAGE SYSTEM
The present disclosure relates to a method, device and computer program product for managing an address mapping of a storage system. A group of data objects in the storage system is mapped to a group of buckets in the address mapping, the group of buckets being divided into a first group of active shards that are associated, respectively, with a group of storage devices in the storage system. In the method, a first write request for writing a first data object to the storage system is received. The address mapping is updated so as to map the first data object to a first bucket in the group of buckets. The storage system is instructed to store the first data object to a first storage device in the group of storage devices, the first storage device being associated with a first active shard to which the first bucket belongs. The storage system is managed based on the updated address mapping. With the above example implementation, the address mapping in the storage system may be managed with higher efficiency, and thus the overall response speed of the storage system may be improved. There is also provided a corresponding device and computer program product.
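The object-to-bucket-to-shard indirection described in this abstract can be sketched as follows; the bucket and shard counts, the hash-based bucket assignment, and all names here are illustrative assumptions, not details taken from the patent:

```python
import hashlib

NUM_BUCKETS = 16
NUM_SHARDS = 4  # one active shard per storage device (assumed)

def bucket_for(object_id: str) -> int:
    """Hash the data object identifier into one of the buckets."""
    digest = hashlib.sha256(object_id.encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

def shard_for(bucket: int) -> int:
    """The group of buckets is divided evenly among the active shards."""
    return bucket * NUM_SHARDS // NUM_BUCKETS

class AddressMapping:
    def __init__(self):
        self.mapping = {}  # object_id -> bucket

    def handle_write(self, object_id: str) -> int:
        """Map the object to a bucket and return the index of the
        storage device associated with that bucket's active shard."""
        bucket = bucket_for(object_id)
        self.mapping[object_id] = bucket
        return shard_for(bucket)  # device index == shard index here
```

Because the write path only hashes into a bucket and looks up the bucket's shard, the mapping update is constant-time, which is one plausible reading of the claimed efficiency benefit.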
DYNAMICALLY ADJUSTING A NUMBER OF MEMORY COPY AND MEMORY MAPPING WINDOWS TO OPTIMIZE I/O PERFORMANCE
A method to dynamically optimize utilization of data transfer techniques includes processing multiple I/O requests using one of two data transfer techniques, depending on which technique is more efficient. The data transfer techniques include: a memory copy data transfer technique that copies cache segments associated with an I/O request from a cache memory to a permanently mapped memory; and a memory mapping data transfer technique that temporarily maps cache segments associated with an I/O request. In order to process the I/O requests, the method utilizes a first number of copy windows associated with the memory copy data transfer technique, and a second number of mapping windows associated with the memory mapping data transfer technique. The method dynamically adjusts one or more of the first number and the second number to optimize the processing of the I/O requests. A corresponding system and computer program product are also disclosed.
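A minimal sketch of splitting a fixed pool of windows between the two techniques and rebalancing toward observed demand; the pool size, the size-based selection rule, and the proportional rebalancing policy are all assumptions for illustration:

```python
class WindowManager:
    """Split a fixed pool of windows between the memory-copy and
    memory-mapping data transfer techniques, rebalancing toward demand."""
    def __init__(self, total_windows=32):
        self.total = total_windows
        self.copy_windows = total_windows // 2
        self.map_windows = total_windows - self.copy_windows
        self.copy_demand = 0
        self.map_demand = 0

    def record_request(self, io_size, small_io_threshold=4096):
        # Illustrative rule: small transfers favor the copy path into
        # permanently mapped memory; large transfers favor temporary mapping.
        if io_size <= small_io_threshold:
            self.copy_demand += 1
        else:
            self.map_demand += 1

    def rebalance(self):
        """Adjust the two window counts in proportion to observed demand,
        keeping at least one window of each kind."""
        total_demand = self.copy_demand + self.map_demand
        if total_demand == 0:
            return
        self.copy_windows = max(
            1, round(self.total * self.copy_demand / total_demand))
        self.map_windows = self.total - self.copy_windows
        self.copy_demand = self.map_demand = 0
```

A real controller would presumably rebalance on a timer or per-batch rather than on demand counters alone; this only shows the dynamic-adjustment idea.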
Computing device having two trusted platform modules
A computing device is provided that includes a motherboard with a control module, a first trusted platform module (TPM), and a second TPM. The control module directs security operations to the first TPM, wherein the control module is operable to detect whether or not the first TPM is damaged, and wherein the control module, in response to detecting that the first TPM is damaged, is operable to direct subsequent security operations to be performed by the second TPM. A computer program product is also provided including non-transitory computer readable storage media embodying program instructions executable by a processor to direct security operations to a first TPM coupled to a motherboard of the computing device, detect whether or not the first TPM is damaged, and, responsive to detecting that the first TPM is damaged, direct subsequent security operations to a second TPM coupled to the motherboard of the computing device.
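The failover behavior described here can be sketched as follows; the class names and the damage-detection hook are illustrative stand-ins, not the patent's implementation:

```python
class TPM:
    """Stand-in for a trusted platform module on the motherboard."""
    def __init__(self, name):
        self.name = name
        self.damaged = False

    def perform(self, operation):
        if self.damaged:
            raise RuntimeError(f"{self.name} is damaged")
        return f"{operation} handled by {self.name}"

class ControlModule:
    """Direct security operations to the first TPM, and redirect all
    subsequent operations to the second TPM once damage is detected."""
    def __init__(self, first_tpm, second_tpm):
        self.first = first_tpm
        self.second = second_tpm
        self.active = first_tpm

    def security_op(self, operation):
        if self.active is self.first and self.first.damaged:
            self.active = self.second  # permanent failover
        return self.active.perform(operation)
```

The key design point the abstract implies is that failover is one-way: once the first TPM is detected as damaged, the second TPM handles everything thereafter.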
Key value store snapshot in a distributed memory object architecture
Disclosed herein is an apparatus and method for a key value store snapshot for a distributed memory object system. In one embodiment, a method includes forming a system cluster comprising a plurality of nodes, wherein each node includes a memory, a processor and a network interface to send and receive messages and data; creating a plurality of sharable memory spaces having partitioned data, wherein each space is a distributed memory object having a compute node, wherein the sharable memory spaces are at least one of persistent memory or DRAM cache; storing data in persistent memory, the data having a generation tag created from a generation counter and a doubly linked list having a current view and a snapshot view, the data further being stored in either a root or a persisted row; creating a snapshot comprising a consistent point-in-time view of key value contents within a node and incrementing the generation counter; copying the snapshot to a second node; regenerating an index for the key value contents within the node; and logging updates since the snapshot was applied to update copied data in the second node.
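A much-simplified sketch of the generation-tagged snapshot flow (tag entries with the current generation, freeze a point-in-time view, bump the counter, copy to a second node and regenerate its index); the data structures here are plain dicts, not the persistent-memory rows or doubly linked list of the patent:

```python
class KVStore:
    """Key value contents of one node, with generation-tagged rows."""
    def __init__(self):
        self.generation = 0
        self.rows = {}  # key -> (generation_tag, value)

    def put(self, key, value):
        self.rows[key] = (self.generation, value)

    def snapshot(self):
        """Freeze a consistent point-in-time view of the contents, then
        increment the generation counter so later writes carry a newer tag."""
        view = {k: v for k, (g, v) in self.rows.items()
                if g <= self.generation}
        self.generation += 1
        return view

def apply_to_second_node(snapshot_view):
    """Copy the snapshot into a second node and regenerate a key index."""
    node = KVStore()
    for key, value in snapshot_view.items():
        node.put(key, value)
    index = sorted(node.rows)  # regenerated index over the copied contents
    return node, index
```

In the patented scheme, updates made after the snapshot would additionally be logged and replayed on the second node; that log/replay step is omitted here.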
Purgeable memory mapped files
A device implementing purgeable memory mapped files includes at least one processor configured to receive a first request to store a first data object in volatile memory in association with a copy of the first data object stored in non-volatile memory, the first request indicating to lock the copy in the non-volatile memory. The processor is further configured to provide for storing the first data object in the volatile memory, and lock the copy stored in the non-volatile memory. The processor is further configured to receive a second request associated with clearing a portion of the non-volatile memory, provide an indication that a second data object is available for deletion from the non-volatile memory when the first data object is locked, and provide an indication that the first data object is available for deletion from the non-volatile memory when the first data object has been unlocked.
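The lock-aware purge behavior can be sketched as follows; the class and method names are illustrative, and real memory-mapped files are reduced here to in-memory dicts:

```python
class PurgeableStore:
    """Objects live in volatile memory alongside a non-volatile copy;
    locked copies are reported as unavailable for deletion."""
    def __init__(self):
        self.volatile = {}
        self.nonvolatile = {}
        self.locked = set()

    def store(self, obj_id, data, lock=False):
        """First-request path: store in volatile memory and keep the
        non-volatile copy, optionally locking it against purging."""
        self.volatile[obj_id] = data
        self.nonvolatile[obj_id] = data
        if lock:
            self.locked.add(obj_id)

    def unlock(self, obj_id):
        self.locked.discard(obj_id)

    def purge_candidates(self):
        """Second-request path: objects whose non-volatile copies are
        available for deletion (i.e. not currently locked)."""
        return [o for o in self.nonvolatile if o not in self.locked]
```

This mirrors the abstract's two indications: a locked object's copy is withheld from the purge list, and it becomes a candidate only after it has been unlocked.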
Methods and apparatus for host register access for data storage controllers for ongoing standards compliance
The present disclosure describes technologies and techniques for use by a data storage controller (such as a non-volatile memory (NVM) controller) to emulate hardware registers in software and/or firmware. In illustrative examples, an arbiter of the NVM controller arbitrates inbound register accesses from a host. Regular register accesses from the host are directed by the arbiter along a regular hardware read path within the NVM controller to physical memory registers. Other transactions (e.g. writes to, or reads from, an undefined or reserved address space) are instead directed by the arbiter to the processor of the NVM controller for handling by software running on the processor (and/or by firmware). In this manner, new hardware registers added to an NVM standard may be implemented by software and/or firmware rather than hardware. Hence, the NVM controller may comply with new standard requirements without changing the hardware. NVMe examples are provided.
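The arbiter's routing decision can be sketched as a simple dispatch on the register offset; the offsets below are hypothetical, and real hardware registers and firmware handlers are reduced to dicts:

```python
# Hypothetical offsets of registers implemented in hardware.
DEFINED_REGISTERS = {0x00, 0x04, 0x08}

class Arbiter:
    """Route regular register accesses along the hardware path; trap
    accesses to undefined/reserved offsets into a software/firmware
    emulation path, so new standard registers need no hardware change."""
    def __init__(self):
        self.hw_registers = {off: 0 for off in DEFINED_REGISTERS}
        self.fw_registers = {}  # registers emulated by software/firmware

    def write(self, offset, value):
        if offset in self.hw_registers:
            self.hw_registers[offset] = value   # regular hardware read path
        else:
            self.fw_registers[offset] = value   # emulated path

    def read(self, offset):
        if offset in self.hw_registers:
            return self.hw_registers[offset]
        return self.fw_registers.get(offset, 0)
```

When a later NVMe revision defines a register at a previously reserved offset, only the firmware-side table needs to learn about it; the hardware register file is untouched.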
DIRTY DATA TRACKING IN PERSISTENT MEMORY SYSTEMS
An example method of managing persistent memory (PM) in a computing system includes: issuing, by an application executing in the computing system, store instructions to an address space of the application, the address space including a region mapped to the PM; recording, by a central processing unit (CPU) in the computing system, cache line addresses in a log, the cache line addresses corresponding to cache lines in the address space of the application targeted by the store instructions; and issuing, by the application, one or more instructions to flush cache lines from cache of the CPU identified by the cache line addresses in the log.
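The store-logging and targeted-flush steps can be sketched as follows; the CPU's hardware logging and the actual cache-line flush instructions (e.g. CLWB) are simulated here, and the region bounds are assumptions:

```python
CACHE_LINE = 64  # bytes; typical CPU cache line size

class DirtyLineLog:
    """Record cache-line addresses touched by stores to the PM-mapped
    region of the address space, then flush exactly those lines."""
    def __init__(self, region_start, region_len):
        self.start = region_start
        self.end = region_start + region_len
        self.log = set()

    def store(self, addr):
        """A store targeting the PM-mapped region logs its cache line."""
        if self.start <= addr < self.end:
            self.log.add(addr & ~(CACHE_LINE - 1))  # align down to line

    def flush_all(self):
        """Return (and clear) the lines that would be flushed from the
        CPU cache, standing in for per-line flush instructions."""
        lines, self.log = sorted(self.log), set()
        return lines
```

Because two stores to the same 64-byte line collapse to one log entry, the application flushes each dirty line once rather than flushing the whole region.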
APPARATUS AND METHOD FOR TRANSMITTING MAP DATA IN MEMORY SYSTEM
An operation method of a memory system includes: searching for a valid physical address in memory map segments stored in the memory system, based on a read request from a host, a logical address corresponding to the read request, and a physical address corresponding to the logical address, and performing a read operation corresponding to the read request; caching some of the memory map segments in the host as host map segments based on a read count threshold indicating the number of receptions of the read request for the logical address; and adjusting the read count threshold based on a miss count indicating the number of receptions of the read request with no physical address, and a provision count indicating the number of times the memory map segment is cached in the host.
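The threshold-driven caching and feedback adjustment can be sketched as follows; the initial threshold, the adjustment rule, and all names are illustrative assumptions, not the patent's actual policy:

```python
class MapSegmentCache:
    """Cache a map segment in the host once its read count reaches a
    threshold; adjust the threshold from miss and provision counts."""
    def __init__(self, threshold=4):
        self.threshold = threshold
        self.read_counts = {}
        self.host_cached = set()
        self.miss_count = 0       # reads served with no host-cached address
        self.provision_count = 0  # segments provided to the host

    def read(self, segment):
        if segment not in self.host_cached:
            self.miss_count += 1
        self.read_counts[segment] = self.read_counts.get(segment, 0) + 1
        if (self.read_counts[segment] >= self.threshold
                and segment not in self.host_cached):
            self.host_cached.add(segment)
            self.provision_count += 1

    def adjust_threshold(self):
        """Illustrative rule: many misses per provision suggests hot
        segments are cached too late, so lower the threshold."""
        if self.provision_count and self.miss_count > 2 * self.provision_count:
            self.threshold = max(1, self.threshold - 1)
```

The feedback loop is the point: the threshold is not fixed but tuned from how often reads arrive before their segment has been provided to the host.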
Cost-effective deployments of a PMEM-based DMO system
Disclosed herein is a persistent memory (PMEM)-based distributed memory object system, referred to as the PMEM DMO system, that provides an affordable means of integrating low-latency PMEM spaces with other devices, including servers that do not directly support PMEM. One embodiment comprises providing a cluster of servers with PMEM storage (PMEM servers) and connecting the PMEM servers to a plurality of application servers using a low-latency network, such as remote direct memory access (RDMA); background processes on each of the application servers are tasked to perform input/output operations for the application servers to locally materialize objects from, and synchronize/persist objects to, the remote PMEM spaces on the PMEM servers. Data materialized from the PMEM servers is stored in the local cache of the application server for use. Also disclosed are data eviction policies for clearing the local caches of the application servers to make space for newly read data.
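The materialize/synchronize/evict cycle can be sketched as follows; the remote PMEM space is reduced to a dict, RDMA transport is omitted, and the simple clean-object eviction policy is an assumption:

```python
class ApplicationServer:
    """Background I/O materializes objects from remote PMEM into a
    local cache and persists local updates back to the PMEM servers."""
    def __init__(self, pmem_server):
        self.pmem = pmem_server      # dict standing in for a remote PMEM space
        self.local_cache = {}
        self.dirty = set()

    def read(self, obj_id):
        if obj_id not in self.local_cache:       # materialize (over RDMA)
            self.local_cache[obj_id] = self.pmem[obj_id]
        return self.local_cache[obj_id]

    def write(self, obj_id, data):
        self.local_cache[obj_id] = data
        self.dirty.add(obj_id)

    def sync(self):
        """Persist dirty objects back to the remote PMEM space."""
        for obj_id in self.dirty:
            self.pmem[obj_id] = self.local_cache[obj_id]
        self.dirty.clear()

    def evict(self):
        """Illustrative eviction policy: drop clean cached objects to
        make space for newly read data."""
        for obj_id in list(self.local_cache):
            if obj_id not in self.dirty:
                del self.local_cache[obj_id]
```

Eviction only ever drops objects whose latest state is already on a PMEM server, so the local cache is purely a performance layer.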
Storage device buffer in system memory space
An information handling system may include a resistive memory buffer to supplement a system main memory unit of the information handling system. A processor of the information handling system may map the resistive memory buffer as system memory, along with the system main memory unit. The processor may use the system memory, including the resistive memory buffer and the system main memory unit in executing one or more applications. The resistive memory buffer may improve performance of the information handling system, such as during hibernation and wake-up processes and memory flush processes.
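The idea of mapping the resistive buffer into the system address range alongside main memory can be sketched as follows; the sizes and region layout are purely illustrative:

```python
class SystemMemoryMap:
    """Map a storage device's resistive memory buffer into the system
    address space as an extension of the main memory unit."""
    def __init__(self, main_mem_size, resistive_buf_size):
        # [0, main_mem_size) -> main memory; the buffer extends the range.
        self.regions = [
            ("main", 0, main_mem_size),
            ("resistive", main_mem_size, main_mem_size + resistive_buf_size),
        ]

    def region_for(self, addr):
        for name, start, end in self.regions:
            if start <= addr < end:
                return name
        raise ValueError("address outside system memory")

    @property
    def total(self):
        """Total system memory seen by the processor."""
        return self.regions[-1][2]
```

From the processor's point of view the two regions form one system memory, which is what lets the buffer absorb hibernation, wake-up, and flush traffic.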