Patent classifications
G06F2212/311
Apparatus and method for managing meta data for engagement of plural memory system to store data
A system is used in a data processing system comprising at least one memory system that is operatively engaged with, and disengaged from, a host or another memory system, the host transmitting commands to the at least one memory system. The system includes a metadata generator configured to generate a map table for an available address range and a reallocation table indicating an allocable address range in the map table, and a metadata controller configured to allocate the allocable address range to the at least one memory system when that memory system is operatively engaged with the host or with another memory system, or to release an allocated range for the at least one memory system, so that the allocated range becomes allocable again, when that memory system is operatively disengaged from the host or the other memory system.
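A minimal sketch of the metadata structures the abstract describes, under stated assumptions: a map table over an available address range, a reallocation table of allocable sub-ranges, and a controller that hands ranges out on engagement and reclaims them on disengagement. All class and parameter names are illustrative, not taken from the patent.

```python
class Metadata:
    def __init__(self, total_blocks, range_size):
        # Map table: range index -> owning memory-system id (None = unmapped).
        self.map_table = {i: None for i in range(total_blocks // range_size)}
        # Reallocation table: range indexes that are currently allocable.
        self.allocable = set(self.map_table)
        self.range_size = range_size

class MetadataController:
    def __init__(self, metadata):
        self.md = metadata

    def engage(self, memory_system_id):
        """Allocate an allocable range when a memory system is engaged."""
        if not self.md.allocable:
            raise RuntimeError("no allocable address range left")
        idx = min(self.md.allocable)
        self.md.allocable.remove(idx)
        self.md.map_table[idx] = memory_system_id
        return idx * self.md.range_size, (idx + 1) * self.md.range_size - 1

    def disengage(self, memory_system_id):
        """Release ranges of a disengaged memory system back to the allocable pool."""
        for idx, owner in self.md.map_table.items():
            if owner == memory_system_id:
                self.md.map_table[idx] = None
                self.md.allocable.add(idx)

md = Metadata(total_blocks=1 << 20, range_size=1 << 16)
ctrl = MetadataController(md)
lo, hi = ctrl.engage("mem-sys-0")   # range handed out when the memory system attaches
ctrl.disengage("mem-sys-0")         # range becomes allocable again when it detaches
```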
Content based cache failover
Methods and systems are disclosed for populating a fail-over cache. When host computer systems each have a content-based read cache, the methods and systems provide several functions, applied in different orders, for determining which blocks are to be included in the fail-over cache. Each function attempts a different strategy for combining the contents of the caches of each host computer system into the fail-over cache. If any strategy succeeds, the fail-over cache is placed into service. If all of the strategies fail, an eviction strategy is employed in which blocks are evicted from each cache until the combination of caches meets a requirement of the fail-over cache, which, in one embodiment, is the size of the fail-over cache.
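A rough sketch of the described flow, with illustrative strategies and a size-based requirement standing in for the patented functions: merge strategies are tried in order, and if none fits the fail-over capacity, blocks are evicted from each host cache until the union fits.

```python
def union_all(caches):
    merged = {}
    for c in caches:
        merged.update(c)
    return merged

def shared_blocks_only(caches):
    keys = set(caches[0])
    for c in caches[1:]:
        keys &= set(c)
    return {k: caches[0][k] for k in keys}

def build_failover_cache(caches, capacity, strategies=(shared_blocks_only, union_all)):
    for strategy in strategies:
        candidate = strategy(caches)
        if len(candidate) <= capacity:          # strategy succeeded: place into service
            return candidate
    # All strategies failed: evict from each cache until the combination fits.
    caches = [dict(c) for c in caches]
    while len(union_all(caches)) > capacity:
        largest = max(caches, key=len)
        largest.pop(next(iter(largest)))        # evict one block (eviction policy elided)
    return union_all(caches)

host_a = {1: "blk1", 2: "blk2", 3: "blk3"}
host_b = {2: "blk2", 4: "blk4"}
failover = build_failover_cache([host_a, host_b], capacity=3)
```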
Virtualized cache implementation method and physical machine
A virtualized cache implementation in which the memory of a virtual machine stores cache metadata. The cache metadata includes a one-to-one mapping between virtual addresses and first physical addresses. After an operation request that is delivered by the virtual machine and that includes a first virtual address is obtained, and when the cache metadata includes a target first physical address corresponding to the first virtual address, a target second physical address corresponding to the target first physical address is looked up based on preconfigured correspondences between the first physical addresses and second physical addresses, and data is read from, or written to, the location indicated by the target second physical address.
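A minimal sketch of the two-level lookup the abstract describes, assuming dictionary-backed tables; the addresses and table names are illustrative.

```python
cache_metadata = {0x1000: 0xA000}        # virtual address -> first physical address (in VM memory)
first_to_second = {0xA000: 0x5F000}      # first physical -> second physical (preconfigured)
backing_store = {0x5F000: b"cached-data"}

def handle_request(virtual_addr, write_data=None):
    first_pa = cache_metadata.get(virtual_addr)
    if first_pa is None:
        return None                      # no cache-metadata entry: fall back to the normal I/O path
    second_pa = first_to_second[first_pa]
    if write_data is not None:
        backing_store[second_pa] = write_data   # write to the location the mapping indicates
        return write_data
    return backing_store[second_pa]             # read from the location the mapping indicates

assert handle_request(0x1000) == b"cached-data"
```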
PROVISIONING VIRTUAL MACHINES WITH A SINGLE IDENTITY AND CACHE VIRTUAL DISK
A virtual disk is provided to a computing environment. The virtual disk includes identity information that enables identification of a virtual machine within the computing environment. The size of the virtual disk is increased within the computing environment so that the virtual disk can act both as storage for the identity information and as a cache of other system data used to operate the virtual machine. The virtual machine is booted within the computing environment and is configured to access the virtual disk, which both holds the identity information and caches the other system data used to operate the virtual machine. Related apparatus, systems, techniques and articles are also described.
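An illustrative sketch of the provisioning workflow only; VirtualDisk, its methods, and the sizes below are hypothetical stand-ins rather than the described product's API.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualDisk:
    identity: dict                               # machine identity stored on the disk
    size_gb: int
    cache: dict = field(default_factory=dict)    # remaining space used as a cache of system data

    def grow(self, extra_gb):
        self.size_gb += extra_gb                 # enlarge the disk so it can also hold cached data

def provision_vm(identity, base_size_gb=1, cache_gb=19):
    disk = VirtualDisk(identity=identity, size_gb=base_size_gb)
    disk.grow(cache_gb)                          # a single disk now carries identity + cache
    return {"disk": disk, "state": "booted"}     # boot step elided in this sketch

vm = provision_vm({"hostname": "vm-01", "machine_id": "example-identity"})
```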
Host-Assisted Memory-Side Prefetcher
Methods, apparatuses, and techniques related to a host-assisted memory-side prefetcher are described herein. In general, prefetchers monitor the pattern of memory-address requests by a host device and use the pattern information to determine or predict future memory-address requests and fetch data associated with those predicted requests into a faster memory. In many cases, prefetchers that can make predictions with high performance use appreciable processing and computing resources, power, and cooling. Generally, however, producing a prefetching configuration that the prefetcher uses involves more resources than making predictions. The described host-assisted memory-side prefetcher uses the greater computing resources of the host device to produce at least an updated prefetching configuration. The memory-side prefetcher uses the prefetching configuration to predict the data to prefetch into the faster memory, which allows a higher-performance prefetcher to be implemented in the memory device with a reduced resource burden on the memory device.
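A minimal sketch of the division of labor the abstract describes, under stated assumptions: the host runs the heavier analysis and produces a prefetching configuration (here, just a dominant stride), while the memory-side prefetcher only applies that configuration to predict addresses. The stride-based policy and all names are illustrative.

```python
from collections import Counter

def host_train_config(address_trace):
    """Heavier analysis runs on the host: learn the most common stride."""
    strides = Counter(b - a for a, b in zip(address_trace, address_trace[1:]))
    stride, _ = strides.most_common(1)[0]
    return {"stride": stride, "degree": 2}

class MemorySidePrefetcher:
    """Cheap predictor on the memory device; it only applies the host-produced config."""
    def __init__(self, config):
        self.config = config

    def predict(self, last_addr):
        stride, degree = self.config["stride"], self.config["degree"]
        return [last_addr + stride * i for i in range(1, degree + 1)]

trace = [0x100, 0x140, 0x180, 0x1C0]
prefetcher = MemorySidePrefetcher(host_train_config(trace))
addresses_to_fetch = prefetcher.predict(trace[-1])   # [0x200, 0x240]
```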
Local cached data coherency in host devices using remote direct memory access
A first host device establishes connectivity to a logical storage device of a storage system. The first host device obtains from the storage system host connectivity information identifying at least a second host device that has also established connectivity to the logical storage device, caches one or more extents of the logical storage device in a memory of the first host device, and maintains local cache metadata in the first host device regarding the one or more extents of the logical storage device cached in the memory of the first host device. In conjunction with processing of a write operation of the first host device involving at least one of the one or more cached extents of the logical storage device, the first host device invalidates corresponding entries in the local cache metadata of the first host device and in local cache metadata maintained in the second host device.
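A minimal sketch of the invalidation behavior, using in-memory stand-ins: each host keeps cached extents and local cache metadata, and a write to a cached extent invalidates the matching entry locally and on the peer host. The peer call here is a plain method call standing in for the remote (RDMA-based) invalidation; names are illustrative.

```python
class HostCache:
    def __init__(self, name):
        self.name = name
        self.extents = {}          # extent id -> cached data
        self.metadata = {}         # local cache metadata: extent id -> validity flag
        self.peers = []            # other hosts connected to the same logical device

    def cache_extent(self, extent_id, data):
        self.extents[extent_id] = data
        self.metadata[extent_id] = True

    def invalidate(self, extent_id):
        if extent_id in self.metadata:
            self.metadata[extent_id] = False

    def write(self, extent_id, data):
        # Invalidate local and peer metadata entries in conjunction with the write.
        self.invalidate(extent_id)
        for peer in self.peers:
            peer.invalidate(extent_id)     # would be a remote memory operation in the real system
        self.extents[extent_id] = data
        self.metadata[extent_id] = True

host1, host2 = HostCache("host1"), HostCache("host2")
host1.peers, host2.peers = [host2], [host1]
host1.cache_extent("ext-7", b"old")
host2.cache_extent("ext-7", b"old")
host1.write("ext-7", b"new")               # host2's copy of ext-7 is now marked stale
```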
MEMORY SYSTEM
A memory system includes: a normal memory area suitable for storing normal data; a security memory area suitable for storing security data; a first row hammer detection circuit suitable for sampling and counting a portion of rows that are activated in the normal memory area to select first rows that need to be refreshed; and a second row hammer detection circuit suitable for counting all rows that are activated in the security memory area to select second rows that need to be refreshed.
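A minimal sketch of the two detection policies, with illustrative thresholds and sampling rate: the normal area only counts a sampled portion of row activations, while the security area counts every activation, and rows whose counts cross a threshold are selected for refresh.

```python
import random

def rows_to_refresh(activations, threshold, sample_rate=1.0):
    counts = {}
    for row in activations:
        if sample_rate < 1.0 and random.random() > sample_rate:
            continue                        # normal area: count only sampled activations
        counts[row] = counts.get(row, 0) + 1
    return {row for row, n in counts.items() if n >= threshold}

normal_activations = [3, 3, 3, 7, 3, 3] * 200
security_activations = [12, 12, 12, 12, 12]

first_rows = rows_to_refresh(normal_activations, threshold=100, sample_rate=0.25)
second_rows = rows_to_refresh(security_activations, threshold=5, sample_rate=1.0)
```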
CACHE-BASED MEMORY READ COMMANDS
Various embodiments described herein provide for selectively sending a cache-based read command, such as a speculative read (SREAD) command in accordance with a Non-Volatile Dual In-Line Memory Module-P (NVDIMM-P) memory protocol, to a memory sub-system.
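A rough sketch of the selection idea only, not the NVDIMM-P protocol itself: the host-side interface chooses between a cache-based speculative read (SREAD) and a normal read depending on whether the address is expected to hit the memory sub-system's cache. The hit predictor and command tuples are illustrative assumptions.

```python
recently_touched = set()

def issue_read(address):
    if address in recently_touched:
        command = ("SREAD", address)     # likely cache hit: the speculative read pays off
    else:
        command = ("READ", address)      # likely miss: avoid the speculative path
    recently_touched.add(address)
    return command

assert issue_read(0x40)[0] == "READ"     # first access: normal read
assert issue_read(0x40)[0] == "SREAD"    # repeated access: cache-based read command
```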
Predictive data orchestration in multi-tier memory systems
A computing system having memory components of different tiers. The computing system further includes a controller, operatively coupled between a processing device and the memory components, to: receive from the processing device first data access requests that cause first data movements across the tiers in the memory components; service the first data access requests after the first data movements; predict, by applying data usage information received from the processing device in a prediction model trained via machine learning, second data movements across the tiers in the memory components; and perform the second data movements before receiving second data access requests, where the second data movements reduce third data movements across the tiers caused by the second data access requests.
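A minimal sketch of the orchestration loop under stated assumptions: demand requests promote data to the fast tier (the first data movements), and a predictor trained on data-usage information pre-moves pages expected to be needed next (the second data movements), so that later requests avoid further movements. The follower table below stands in for the machine-learned prediction model.

```python
from collections import defaultdict

fast_tier, slow_tier = set(), {"pageA", "pageB", "pageC", "pageD"}
followers = defaultdict(set)

def train(trace):
    """Data-usage information from the host: which page tends to follow which."""
    for prev, nxt in zip(trace, trace[1:]):
        followers[prev].add(nxt)

def access(page):
    if page not in fast_tier:            # first data movement, caused by the request itself
        slow_tier.discard(page)
        fast_tier.add(page)
    predict_and_prefetch(page)
    return f"served {page}"

def predict_and_prefetch(current_page):
    """Second data movements: pre-move predicted pages before their requests arrive."""
    for nxt in followers.get(current_page, ()):
        if nxt in slow_tier:
            slow_tier.discard(nxt)
            fast_tier.add(nxt)

train(["pageA", "pageB", "pageA", "pageB"])
access("pageA")                          # pageB is promoted before it is requested
assert "pageB" in fast_tier
```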
Word type/boundary propagation with memory performance applications
A method includes, for each data value in a set of one or more data values, determining a boundary between a high order portion of the data value and a low order portion of the data value, storing the low order portion at a first memory location utilizing a low data fidelity storage scheme, and storing the high order portion at a second memory location utilizing a high data fidelity storage scheme for recording data at a higher data fidelity than the low data fidelity storage scheme.
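A minimal sketch of the split-storage idea, assuming 32-bit values and a fixed boundary: the low-order portion goes to a storage scheme whose fidelity loss is modeled by dropping its lowest bits, while the high-order portion is stored exactly. The boundary choice and both storage schemes are illustrative.

```python
BOUNDARY = 16                      # bit position splitting high- and low-order portions
low_fidelity_store = {}            # cheaper, lossy storage for the low-order bits
high_fidelity_store = {}           # exact storage for the high-order bits

def store(key, value):
    low = value & ((1 << BOUNDARY) - 1)
    high = value >> BOUNDARY
    low_fidelity_store[key] = low & ~0x3      # model fidelity loss: drop the 2 lowest bits
    high_fidelity_store[key] = high           # high-order bits recorded exactly

def load(key):
    return (high_fidelity_store[key] << BOUNDARY) | low_fidelity_store[key]

store("x", 0x1234567B)
assert load("x") == 0x1234567B & ~0x3         # any error is confined to the low-order portion
```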