Patent classifications
G06F2212/312
ENHANCED DUPLICATE WRITE DATA TRACKING FOR CACHE MEMORY
Responsive to a request to perform a write operation, the write data is stored at a cache portion of a cache memory of a memory sub-system, and a duplicate copy of the data is stored at a write buffer portion of the cache memory; the cache memory is partitioned into the cache portion and the write buffer portion. An entry that maps the location of the duplicate copy stored at the write buffer portion to the location of the data stored at the cache portion is recorded in a write buffer record.
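The partitioned-cache mechanism above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and field names (`CacheMemory`, `write_buffer_record`, slot indices) are all invented for the example.

```python
# Illustrative sketch of duplicate write-data tracking in a partitioned cache.
# All names here are invented for the example, not taken from the patent.

class CacheMemory:
    def __init__(self, cache_slots, buffer_slots):
        # The cache memory is partitioned into a cache portion
        # and a write buffer portion.
        self.cache_portion = [None] * cache_slots
        self.write_buffer_portion = [None] * buffer_slots
        # Entries mapping write-buffer locations to cache locations.
        self.write_buffer_record = {}
        self._next_buffer_slot = 0

    def write(self, cache_index, data):
        # Store the data at the cache portion.
        self.cache_portion[cache_index] = data
        # Store a duplicate copy at the write buffer portion.
        buf_index = self._next_buffer_slot % len(self.write_buffer_portion)
        self._next_buffer_slot += 1
        self.write_buffer_portion[buf_index] = data
        # Record an entry mapping the duplicate's location to the
        # location of the cached copy.
        self.write_buffer_record[buf_index] = cache_index
        return buf_index

cache = CacheMemory(cache_slots=8, buffer_slots=4)
slot = cache.write(3, b"payload")
assert cache.write_buffer_record[slot] == 3
```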
Increased destaging efficiency for smoothing of destage tasks based on speed of disk drives
For increased destaging efficiency, destaging tasks are smoothed to reduce long input/output (I/O) read operations in a computing environment. The ramp-up of the destaging tasks is adjusted based on the speed of the disk drives when smoothing the destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks, with the number of destaging tasks calculated according to either a standard time interval or a variable, recomputed destaging-task interval.
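One way such a speed-sensitive ramp-up could look is sketched below. The step rule, the `drive_speed_factor` parameter, and the convergence loop are assumptions made for illustration; the patent's actual smoothing calculation is not reproduced here.

```python
# Hedged sketch: ramp the current destage-task count toward the desired
# count, with the step size scaled by a drive-speed factor (invented name).

def ramp_step(current_tasks, desired_tasks, drive_speed_factor):
    """Move the current destage-task count one step toward the desired count.

    Faster drives (larger factor) take bigger steps, so the ramp-up
    completes in fewer intervals and reads are blocked for less time."""
    gap = desired_tasks - current_tasks
    if gap == 0:
        return current_tasks
    step = max(1, int(abs(gap) * drive_speed_factor))
    if gap > 0:
        return current_tasks + min(step, gap)    # ramp up, never overshoot
    return current_tasks - min(step, -gap)       # ramp down symmetrically

# Recompute once per task interval until the desired count is reached.
tasks, intervals = 4, 0
while tasks != 40:
    tasks = ramp_step(tasks, 40, drive_speed_factor=0.25)
    intervals += 1
assert tasks == 40
```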
Selection of an open block in solid state storage systems with multiple open blocks
An instruction to write data to a write logical address is received, where the write logical address is a member of a group of one or more logical addresses. It is determined whether data associated with any of the logical addresses in the group has been written to any of a plurality of open groups of locations. If so, the data is written to the open group of locations to which data from the group of logical addresses has already been written. If not, an open group of locations to write to is selected from the plurality of open groups of locations.
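The selection logic above can be sketched in a few lines. The grouping rule (four logical addresses per group) and the tie-break for picking a new open block (least filled) are assumptions for the example, not the patent's criteria.

```python
# Sketch: route writes from the same logical-address group to the same open
# block; otherwise select a new open block (here, the least-filled one).

open_blocks = {0: [], 1: [], 2: []}   # plurality of open groups of locations
group_to_block = {}                   # group id -> open block already in use

def write(lba, data, group_of):
    group = group_of(lba)
    if group in group_to_block:       # group already has data in an open block
        block = group_to_block[group]
    else:                             # select an open block for this group
        block = min(open_blocks, key=lambda b: len(open_blocks[b]))
        group_to_block[group] = block
    open_blocks[block].append((lba, data))
    return block

group_of = lambda lba: lba // 4       # assumed grouping: 4 LBAs per group
b1 = write(5, b"a", group_of)
b2 = write(6, b"b", group_of)         # same group -> same open block
assert b1 == b2
```

Keeping a group's writes in one block means the whole group tends to be invalidated together, which reduces garbage-collection copying later.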
MEMORY SYSTEM AND OPERATION METHOD THEREOF
A memory system may include a plurality of first and second memory devices, each comprising M-bit multi-level cells (MLCs), M-bit multi-buffers, and transmission buffers; a cache memory suitable for caching data inputted to or outputted from the first and second memory devices; and a controller. The controller programs data cached by the cache memory to a memory device selected from among the first and second memory devices by transferring the program data to the M-bit multi-buffers of the selected memory device whenever M bits of the program data have been cached in the cache memory. The controller also controls the selected memory device to perform the necessary preparation operations of a program preparation operation, except for a secondary preparation operation, until the input of the program data ends or the M-bit multi-buffers of the selected memory device are full.
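The cache-then-transfer cadence can be modeled as a toy simulation: every time M bits accumulate in the cache, they are handed to the selected device's multi-buffers. `Device`, `program`, and the bit-level representation are all invented for illustration; the preparation-operation control is omitted.

```python
# Toy model of M-bit-at-a-time transfer to a device's multi-buffers
# (all names invented; M = 3 models triple-level cells).

M = 3  # bits stored per multi-level cell

class Device:
    def __init__(self):
        self.multi_buffers = []

    def transfer(self, bits):
        self.multi_buffers.append(bits)

def program(data_bits, device):
    cache = []
    for bit in data_bits:
        cache.append(bit)
        if len(cache) == M:            # M bits cached -> transfer immediately
            device.transfer(tuple(cache))
            cache.clear()
    return device.multi_buffers        # bits still in `cache` await more input

dev = Device()
program([1, 0, 1, 1, 1, 0], dev)
assert dev.multi_buffers == [(1, 0, 1), (1, 1, 0)]
```

Transferring each M-bit unit as soon as it is cached overlaps data input with the device-side preparation work, rather than waiting for the whole program unit to arrive.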
Memory system with first cache for storing uncompressed look-up table segments and second cache for storing compressed look-up table segments
A memory system is connectable to a host. The memory system includes a nonvolatile first memory; a second memory storing a plurality of pieces of first information, each correlating a logical address, indicating a location in a logical address space of the memory system, with a physical address indicating a location in the first memory; a volatile third memory including a first cache and a second cache; a compressor configured to compress the pieces of first information; and a memory controller. The memory controller stores first information not compressed by the compressor in the first cache, stores first information compressed by the compressor in the second cache, and controls the ratio between a first capacity, which is the capacity of the first cache, and a second capacity, which is the capacity of the second cache.
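A minimal sketch of such a split mapping cache is below. `zlib` stands in for the compressor, the capacity ratio is a simple constructor knob, and the fill-then-compress placement policy is an assumption; the patent does not specify these details.

```python
# Sketch of a split look-up-table cache: some segments uncompressed (first
# cache), the rest compressed (second cache). Names and policy are invented.

import json
import zlib

class SplitLutCache:
    def __init__(self, total_capacity, uncompressed_fraction=0.5):
        # The controller's ratio knob: how much capacity each cache gets.
        self.first_capacity = int(total_capacity * uncompressed_fraction)
        self.second_capacity = total_capacity - self.first_capacity
        self.first = {}    # uncompressed LUT segments (fast to read)
        self.second = {}   # compressed LUT segments (dense, slower to read)

    def put(self, segment_id, mapping):
        if len(self.first) < self.first_capacity:
            self.first[segment_id] = mapping
        else:
            blob = zlib.compress(json.dumps(mapping).encode())
            self.second[segment_id] = blob

    def get(self, segment_id):
        if segment_id in self.first:           # hit in the uncompressed cache
            return self.first[segment_id]
        blob = self.second[segment_id]         # decompress on demand
        raw = json.loads(zlib.decompress(blob))
        return {int(k): v for k, v in raw.items()}

cache = SplitLutCache(total_capacity=2, uncompressed_fraction=0.5)
cache.put(0, {0: 100})     # fits in the uncompressed first cache
cache.put(1, {1: 200})     # spills into the compressed second cache
assert cache.get(1) == {1: 200}
```

Shifting the ratio trades latency (uncompressed hits are cheap) against reach (compressed segments cover more of the address space per byte of DRAM).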
HOST MEMORY PROTECTION VIA POWERED PERSISTENT STORE
A system includes a dynamic random-access memory (DRAM); and a storage device comprising a power source and a persistent store. The storage device is configured to provide reserve power to the DRAM. Data stored in the DRAM is transferred to a reserved storage in the persistent store of the storage device in a power loss event using the reserve power.
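The power-loss path can be modeled very simply: the transfer succeeds only if the device's reserve energy covers moving the DRAM contents to the reserved persistent region. The energy accounting (`joules_per_byte`) and all names are invented for the sketch.

```python
# Simplified model of the power-loss save path: DRAM contents are copied
# into a reserved region of the persistent store while the storage device's
# own power source keeps the DRAM alive. Names and energy model are invented.

class StorageDevice:
    def __init__(self, reserve_joules):
        self.reserve_joules = reserve_joules   # device-held reserve power
        self.reserved_storage = None           # reserved persistent region

    def on_power_loss(self, dram, joules_per_byte=1e-6):
        needed = len(dram) * joules_per_byte
        if needed > self.reserve_joules:
            raise RuntimeError("reserve power cannot cover the transfer")
        # Transfer the DRAM contents to the reserved persistent storage.
        self.reserved_storage = bytes(dram)
        self.reserve_joules -= needed

dram = bytearray(b"dirty host data")
dev = StorageDevice(reserve_joules=1.0)
dev.on_power_loss(dram)
assert dev.reserved_storage == b"dirty host data"
```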
Data decompression using a construction area
To serve sequential read patterns from a compressed journal storage system, a construction-area cache algorithm temporarily stores the read and decompressed data in user-view sequential order, minimizing disk I/Os and CPU utilization while the data is served to the user.
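The idea can be sketched as a one-region cache: the first read of a region decompresses it into the construction area in user-view order, and subsequent sequential reads are served from that area without touching the compressed journal again. `zlib`, the region layout, and all names are stand-ins.

```python
# Sketch of a "construction area": hold one decompressed journal region in
# user-view sequential order so a run of sequential reads costs a single
# decompression (zlib and all names are illustrative stand-ins).

import zlib

class ConstructionAreaCache:
    def __init__(self, compressed_regions):
        self.compressed_regions = compressed_regions  # region id -> blob
        self.region_id = None
        self.area = b""            # decompressed data, user-view order
        self.decompress_count = 0

    def read(self, region_id, offset, length):
        if region_id != self.region_id:    # miss: rebuild the construction area
            self.area = zlib.decompress(self.compressed_regions[region_id])
            self.region_id = region_id
            self.decompress_count += 1
        return self.area[offset:offset + length]

regions = {0: zlib.compress(b"abcdefgh")}
cache = ConstructionAreaCache(regions)
assert cache.read(0, 0, 4) == b"abcd"
assert cache.read(0, 4, 4) == b"efgh"   # served from the construction area
assert cache.decompress_count == 1      # one decompression for both reads
```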
Sizing a write cache buffer based on emergency data save parameters
Embodiments relate to saving data upon loss of power. An aspect includes sizing a write cache buffer based on parameters related to carrying out this emergency data save procedure. A computer implemented method for allocating a write cache on a storage controller includes retrieving, at run-time by a processor, one or more operating parameters of a component used in a power-loss save of the write cache. The component is selected from the group consisting of an energy storage element, a non-volatile memory, and a transfer logic. A size for the write cache on the storage controller is determined, based on the one or more operating parameters. A write cache, of the determined size, is allocated from a volatile memory coupled to the storage controller.
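A plausible sizing rule following this logic: the write cache can be no larger than what the energy store can push through the transfer path before the reserve is exhausted, and no larger than the non-volatile region set aside for the save. The parameter names and the specific formula are assumptions for illustration, not the patent's method.

```python
# Illustrative write-cache sizing from power-loss-save parameters
# (parameter names and formula are assumptions, not from the patent).

def write_cache_size(energy_joules, save_power_watts,
                     transfer_rate_bps, nv_capacity_bytes):
    # How long the energy storage element can power the save.
    hold_time = energy_joules / save_power_watts
    # How many bytes the transfer logic can move in that time.
    transferable = int(transfer_rate_bps * hold_time)
    # The non-volatile memory set aside for the save caps the size.
    return min(transferable, nv_capacity_bytes)

# 10 J of reserve at 5 W gives 2 s; at 100 MB/s that covers 200 MB,
# capped here by a 128 MB non-volatile save region.
size = write_cache_size(10.0, 5.0, 100_000_000, 128_000_000)
assert size == 128_000_000
```

Reading these parameters at run time, as the abstract describes, lets the controller shrink the cache automatically as the energy storage element ages.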
Read and Write Load Sharing in a Storage Array Via Partitioned Ownership of Data Blocks
A system shares I/O load between controllers in a high-availability system. For writes, a controller determines, based on one or more factors, which controller will flush batches of data from write-back cache so as to better distribute the I/O burden. The determination occurs after the local storage controller caches the data, mirrors it, and confirms write completion to the host. Once the flushing storage controller is determined, the flush occurs and that controller updates the corresponding metadata at a second layer of indirection, whether or not it is identified to the host as the owner of the corresponding volume; the volume owner updates metadata at a first layer of indirection. For a host read, the controller that owns the volume accesses the metadata from whichever controller(s) previously flushed the data and reads the data, regardless of which controller performed the flush.
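The flush-delegation decision might look like the sketch below, using pending flush bytes as the load factor. The metric and all names are invented; the patent only says the choice is based on "one or more factors".

```python
# Hedged sketch of picking the flushing controller in an HA pair;
# the load metric and field names are invented for illustration.

def pick_flusher(controllers):
    """Return the controller with the lightest pending flush load."""
    return min(controllers, key=lambda c: c["pending_flush_bytes"])

controllers = [
    {"name": "A", "pending_flush_bytes": 900_000, "owns_volume": True},
    {"name": "B", "pending_flush_bytes": 100_000, "owns_volume": False},
]
# The flush may land on a non-owner: B flushes and updates second-layer
# metadata, while A (the volume owner) keeps the first-layer metadata
# and consults B's metadata when serving host reads.
assert pick_flusher(controllers)["name"] == "B"
```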
Fast write and management of persistent cache in a system that includes tertiary storage
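The fast-write-on-miss path described for this patent can be sketched as follows: on a cache write-miss, the partial write and its metadata are stashed and the request is acknowledged at once; the target block is staged from tertiary storage afterward and the stashed writes are merged into it. The byte-range representation and all names are assumptions for the example.

```python
# Sketch of the fast-write path: ack a write-miss immediately from a fast
# write store, then stage the target block from tertiary storage and merge.
# All names, and the (offset, data) metadata format, are invented.

fast_write_store = {}   # block address -> list of (offset, data) fast writes
cache = {}              # block address -> bytearray (persistent cache)

def write(addr, offset, data):
    if addr in cache:                            # write hit: update in place
        cache[addr][offset:offset + len(data)] = data
    else:                                        # write-miss: fast write path
        fast_write_store.setdefault(addr, []).append((offset, data))
    return "complete"                            # request marked complete now

def stage_and_merge(addr, tertiary):
    block = bytearray(tertiary[addr])            # retrieve the target block
    for offset, data in fast_write_store.pop(addr, []):
        block[offset:offset + len(data)] = data  # merge fast writes into it
    cache[addr] = block

tertiary = {0: b"AAAAAAAA"}
write(0, 2, b"xy")                               # miss: acked before staging
stage_and_merge(0, tertiary)
assert bytes(cache[0]) == b"AAxyAAAA"
```

The write is durable and acknowledged long before the slow tertiary read completes, which is the latency win the abstract describes.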
Embodiments of the invention relate to receiving a write request that includes a write data and an address of a target block in tertiary storage. In response to the write request, a write-miss is detected at a cache located in persistent storage. Based on detecting the write-miss, the write data and associated metadata are written to a fast write storage location and the write request is marked as complete. In addition, the target block is retrieved from the address in the tertiary storage and stored in the cache. Contents of the fast write storage location are merged with the contents of the target block in the cache.