G06F2212/217

CACHE MANAGEMENT USING MULTIPLE CACHE MEMORIES AND FAVORED VOLUMES WITH MULTIPLE RESIDENCY TIME MULTIPLIERS

A method for demoting a selected storage element from a cache memory includes storing favored and non-favored storage elements within a higher performance portion and lower performance portion of the cache memory. The method maintains a plurality of favored LRU lists and a non-favored LRU list for the higher and lower performance portions of the cache memory. Each favored LRU list contains entries associated with the favored storage elements that have the same unique residency multiplier. The non-favored LRU list includes entries associated with the non-favored storage elements. The method demotes a selected favored or non-favored storage element from the higher and lower performance portions of the cache memory according to a cache demotion policy that provides a preference to favored storage elements over non-favored storage elements based on a computed cache life expectancy, residency time, and the unique residency multiplier. A corresponding storage controller and computer program product are also disclosed.
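The demotion preference described above can be sketched as follows. This is a minimal illustration only, assuming a single higher-performance portion; the per-multiplier favored lists, the non-favored list, the computed life expectancy, and the eligibility test (`residency >= multiplier * life_expectancy`) are simplified stand-ins for the claimed policy, not the patented implementation.

```python
from collections import OrderedDict

class FavoredCache:
    """Sketch: one favored LRU list per unique residency multiplier,
    plus a single non-favored LRU list. A favored entry becomes an
    eligible demotion victim only after its residency exceeds the
    cache life expectancy scaled by its multiplier."""

    def __init__(self, life_expectancy):
        self.life_expectancy = life_expectancy  # assumed precomputed value
        self.favored = {}            # multiplier -> OrderedDict(key -> insert_time)
        self.non_favored = OrderedDict()

    def insert(self, key, now, multiplier=None):
        if multiplier is None:
            self.non_favored[key] = now
        else:
            self.favored.setdefault(multiplier, OrderedDict())[key] = now

    def pick_demotion_victim(self, now):
        # Prefer the oldest non-favored entry.
        if self.non_favored:
            return next(iter(self.non_favored))
        # Otherwise a favored entry loses its preference once its
        # residency reaches multiplier * life_expectancy.
        for mult, lru in self.favored.items():
            if lru:
                key, inserted = next(iter(lru.items()))
                if now - inserted >= mult * self.life_expectancy:
                    return key
        return None
```

With a life expectancy of 10 and a multiplier of 2, a favored element is protected for roughly 20 time units while any non-favored element is demoted first.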

CACHE MANAGEMENT USING FAVORED VOLUMES AND A MULTIPLE TIERED CACHE MEMORY

A method for demoting a selected storage element from a cache memory includes storing favored and non-favored storage elements within a higher performance portion and lower performance portion of the cache memory. The favored storage elements are retained in the cache memory longer than the non-favored storage elements. The method maintains a first favored LRU list and a first non-favored LRU list, associated with the favored and non-favored storage elements stored within the higher performance portion of the cache. The method selects a favored or non-favored storage element to be demoted from the higher performance portion of the cache memory according to life expectancy and residency of the oldest favored and non-favored storage elements in the first LRU lists. The method demotes the selected storage element from the higher performance portion of the cache to the lower performance portion of the cache, or to the data storage devices, according to a cache demotion policy. A corresponding storage controller and computer program product are also disclosed.
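The two-step demotion path can be sketched as below. This is an assumption-laden illustration: the `keep_in_lower` flag stands in for whatever test the claimed cache demotion policy applies, and the three stores model the higher-performance portion, lower-performance portion, and backing data storage devices.

```python
from collections import OrderedDict

class TieredCache:
    """Sketch of the tiered demotion path: an element leaving the
    higher-performance portion moves to the lower-performance
    portion when the policy allows, otherwise it goes back to the
    data storage devices."""

    def __init__(self):
        self.higher = OrderedDict()   # higher performance portion
        self.lower = OrderedDict()    # lower performance portion
        self.backing = {}             # data storage devices

    def demote(self, key, keep_in_lower):
        value = self.higher.pop(key)
        if keep_in_lower:
            self.lower[key] = value
        else:
            self.backing[key] = value
```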

Predictive data orchestration in multi-tier memory systems

A computing system having memory components of different tiers. The computing system further includes a controller, operatively coupled between a processing device and the memory components, to: receive from the processing device first data access requests that cause first data movements across the tiers in the memory components; service the first data access requests after the first data movements; predict, by applying data usage information received from the processing device in a prediction model trained via machine learning, second data movements across the tiers in the memory components; and perform the second data movements before receiving second data access requests, where the second data movements reduce third data movements across the tiers caused by the second data access requests.
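A rough sketch of the prefetch step follows. The abstract calls for a prediction model trained via machine learning; a sliding-window access-frequency heuristic stands in for it here purely for illustration, and the class and parameter names are hypothetical.

```python
from collections import Counter, deque

class TierOrchestrator:
    """Illustrative only: pages seen often in a recent window are
    predicted hot and promoted to the fast tier before the next
    batch of data access requests arrives, so those requests no
    longer trigger cross-tier movements."""

    def __init__(self, window=8, hot_threshold=2):
        self.history = deque(maxlen=window)   # recent data usage information
        self.hot_threshold = hot_threshold
        self.fast_tier = set()

    def record_access(self, page):
        self.history.append(page)

    def prefetch(self):
        # "Second data movements": promote pages accessed at least
        # hot_threshold times within the window.
        counts = Counter(self.history)
        promoted = {p for p, n in counts.items() if n >= self.hot_threshold}
        self.fast_tier |= promoted
        return promoted
```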

Cache management system and method

A method, computer program product, and computing system for receiving a plurality of data streams on an SSD cache memory system associated with a backend storage system and writing a first of the plurality of data streams to a first portion of the SSD cache memory system.

Biased sampling methodology for wear leveling
11341036 · 2022-05-24

A system includes a memory device and a processing device, coupled to the memory device. The processing device is to sample a first subset of data units from a set of data units of the memory device using a biased sampling process that increases a probability of sampling particular data units from the set of data units based on one or more characteristics associated with the particular data units. The processing device is to identify a first candidate data unit from the first subset of data units and perform a wear leveling operation in view of the first candidate data unit.
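The biased sampling step can be sketched like this. The weighting choice (favoring units with low erase counts, i.e. cold blocks a wear-leveling move wants to find) is an assumption for illustration; the patent only requires that the bias derive from one or more characteristics of the data units.

```python
import random

def biased_sample(data_units, weight_fn, k, rng=None):
    """Draw k data units, where weight_fn raises the probability of
    sampling units with the targeted characteristic."""
    rng = rng or random.Random(0)   # seeded for reproducibility
    weights = [weight_fn(u) for u in data_units]
    return rng.choices(data_units, weights=weights, k=k)

def pick_wear_leveling_candidate(units, erase_counts, k=8):
    # Bias toward rarely erased units, then take the least-erased
    # unit of the sample as the wear-leveling candidate.
    sample = biased_sample(units, lambda u: 1.0 / (1 + erase_counts[u]), k)
    return min(sample, key=lambda u: erase_counts[u])
```

Sampling a small biased subset rather than scanning every data unit keeps the candidate search cheap while still finding low-wear units with high probability.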

MEMORY SYSTEM FOR TAILORING DATA, HOST SYSTEM FOR CONTROLLING THE MEMORY SYSTEM, AND OPERATION METHOD OF THE MEMORY SYSTEM

A memory system may include a host system configured to split at least one data stream into a plurality of split data streams, group at least one unmergeable first data stream among the plurality of split data streams, and merge at least one mergeable second data stream among the plurality of split data streams; a storage device comprising one or more flash memory devices, the storage device including at least one first region and at least one second region; and processing circuitry configured to receive, from the host system, at least one request to allocate at least one storage region for the tailored at least one data stream, store data blocks associated with the at least one first data stream in the first region, and store data blocks associated with the at least one second data stream in the second region.
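The region placement can be sketched as a simple routing function. This is a minimal stand-in: stream identifiers and the mergeable set are hypothetical inputs, and the real system operates on flash regions rather than Python lists.

```python
def place_streams(split_streams, mergeable):
    """Sketch: data blocks of unmergeable (grouped) streams go to
    the first region; blocks of mergeable (merged) streams go to
    the second region."""
    first_region, second_region = [], []
    for stream_id, blocks in split_streams.items():
        target = second_region if stream_id in mergeable else first_region
        target.extend(blocks)
    return first_region, second_region
```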

Method and system for accelerating storage of data in write-intensive computer applications

A method of optimising the service rate of a buffer in a computer system having memory stores of a first and a second type is described. The method selectively services the buffer by routing data to each of the memory stores of the first and second types based on the read/write capacity of the memory store of the first type.
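A sketch of this selective servicing, under the assumption that "read/write capacity" can be modelled as a per-interval write budget for the first-type store (the budget model and class names are illustrative, not from the patent):

```python
from collections import deque

class BufferRouter:
    """Sketch: drain the buffer to the first-type memory store while
    it has spare write capacity for this service interval, and spill
    the remainder to the second-type store."""

    def __init__(self, first_capacity_per_tick):
        self.first_capacity = first_capacity_per_tick
        self.first_store = []
        self.second_store = []

    def service(self, buffer):
        budget = self.first_capacity
        while buffer:
            item = buffer.popleft()
            if budget > 0:
                self.first_store.append(item)
                budget -= 1
            else:
                self.second_store.append(item)
```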

Real-time analysis for dynamic storage

One or more techniques and/or systems are provided for dynamically provisioning logical storage pools of storage devices for applications. For example, a logical storage pool, of one or more storage devices, may be constructed based upon a service level agreement for an application (e.g., an acceptable latency, an expected throughput, etc.). Real-time performance statistics of the logical storage pool may be collected and evaluated against the service level agreement to determine whether a storage device does not satisfy the service level agreement. For example, a latency of a storage device within the logical storage pool may increase over time as log files and/or other data of the application increase. Accordingly, a new logical storage pool may be automatically and dynamically defined and provisioned for the application to replace the logical storage pool. The new logical storage pool may comprise storage devices expected to satisfy the service level agreement.
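The evaluate-and-reprovision loop can be sketched as below, assuming for illustration that the service level agreement reduces to a maximum average latency (a real SLA would also cover throughput and other metrics, per the abstract):

```python
def pool_meets_sla(latencies_ms, sla_max_latency_ms):
    """Evaluate a device's real-time latency statistics against an
    SLA expressed here as a maximum average latency."""
    return sum(latencies_ms) / len(latencies_ms) <= sla_max_latency_ms

def reprovision_if_needed(pool, stats, sla_max_latency_ms, spare_devices):
    """If any device in the pool violates the SLA, define a new pool
    from spare devices expected to satisfy it; otherwise keep the
    existing pool."""
    if all(pool_meets_sla(stats[d], sla_max_latency_ms) for d in pool):
        return pool
    return [d for d in spare_devices
            if pool_meets_sla(stats[d], sla_max_latency_ms)]
```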

Tier-optimized write scheme
11733871 · 2023-08-22

A request to write data corresponding to at least a first portion of a file is received. It is determined whether to perform the request either as an in-place write or as an out-of-place write. Performing the in-place write comprises performing a write to a low latency storage device, and performing the out-of-place write comprises performing a write to a higher latency storage device. The request is performed as either the in-place write or the out-of-place write based on the determination. Performing the request as the in-place write includes writing the data to a first location on a storage tier storing the first portion of the file, and performing the request as the out-of-place write includes writing the data to a second location on one of a plurality of storage tiers of a computing node, other than the first location.
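The in-place versus out-of-place decision can be sketched as follows. The specific criteria (the write already targets data on a low-latency tier and fits within a block) and the tier names are assumptions for illustration, not the patent's actual test.

```python
LOW_LATENCY_TIER = "nvme"   # illustrative tier names
HIGH_LATENCY_TIER = "hdd"

def choose_write(tier_of_first_portion, write_size, block_size):
    """Sketch: small writes to a file portion already stored on the
    low-latency tier are performed in place at that location; all
    other writes go out of place to a higher-latency tier."""
    if tier_of_first_portion == LOW_LATENCY_TIER and write_size < block_size:
        return ("in-place", tier_of_first_portion)
    return ("out-of-place", HIGH_LATENCY_TIER)
```

The trade-off this sketches: in-place updates avoid relocation bookkeeping but demand a device that absorbs small random writes cheaply, while out-of-place writes tolerate slower devices at the cost of later cleanup.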

Latency-based storage in a hybrid memory system

An example apparatus comprises a hybrid memory system and a controller coupled to the hybrid memory system. The controller may be configured to cause data to be selectively stored in the hybrid memory system responsive to a determination that an exception involving the data has occurred.