G06F2212/224

Multi-speed data storage device with media cache for high speed writes

Apparatus and method for managing data transfers in a data storage device with rotational media that can be rotated at different speeds. In some embodiments, a non-volatile main memory is formed on a rotatable medium accessed by a moveable data transducer. A media cache provides a non-volatile data storage area. A control circuit directs writes to the main memory as the medium is rotated at a first speed and directs reads from the main memory as the medium is rotated at a higher, second speed. Writes during the rotation of the medium at the second speed are directed to the media cache instead of to the main memory so that no data are written to the main memory at the second speed. The media cache may also be located on the medium or may be formed from solid-state semiconductor memory.
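The routing rule in this abstract, writes go to main memory only at the first speed and are diverted to the media cache at the second speed, can be sketched as follows. This is a hypothetical illustration; the class and attribute names are assumptions, not terms from the patent.

```python
# Sketch of the speed-dependent write routing described above: at the
# higher rotation speed, no data are written to the main memory; writes
# land in the non-volatile media cache instead. Names are illustrative.

LOW_SPEED, HIGH_SPEED = "low", "high"

class MultiSpeedDrive:
    def __init__(self):
        self.speed = LOW_SPEED
        self.main_memory = {}   # LBA -> data on the rotatable medium
        self.media_cache = {}   # non-volatile staging area

    def write(self, lba, data):
        if self.speed == LOW_SPEED:
            # At the first (lower) speed, writes go directly to main memory.
            self.main_memory[lba] = data
        else:
            # At the second (higher) speed, writes are diverted to the cache.
            self.media_cache[lba] = data

    def read(self, lba):
        # Cached (newer) data takes precedence over the main store.
        return self.media_cache.get(lba, self.main_memory.get(lba))

    def slow_down_and_flush(self):
        # Once the medium returns to the lower speed, cached writes can be
        # destaged to the main memory.
        self.speed = LOW_SPEED
        self.main_memory.update(self.media_cache)
        self.media_cache.clear()
```

The flush step is an assumption about how the cached writes eventually reach the main store; the abstract itself only specifies the routing.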

Systems and Methods for Rapid Processing and Storage of Data
20190171560 · 2019-06-06

Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.

Considering a density of tracks to destage in groups of tracks to select groups of tracks to destage

Provided are a computer program product, system, and method for considering a density of tracks to destage in groups of tracks to select groups of tracks to destage. Groups of tracks in the cache are scanned to determine whether they are ready to destage. A determination is made as to whether the tracks in one of the groups are ready to destage in response to scanning the tracks in the group. A density for the group is increased in response to determining that the group is not ready to destage. The group is destaged in response to determining that the density of the group exceeds a density threshold.
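The selection loop described above, increment a group's density on each scan pass in which it is not ready, and force a destage once the density exceeds a threshold, can be sketched as a short routine. The function and parameter names are assumptions for illustration only.

```python
# Illustrative sketch of density-based destage selection: a group that is
# ready destages normally; a group that is not ready accumulates "density"
# across scan passes and is destaged anyway once density exceeds the
# threshold. All names here are assumptions, not from the patent.

def scan_groups(groups, density, ready, threshold):
    """Return the list of group ids destaged in this scan pass."""
    destaged = []
    for g in groups:
        if ready(g):
            destaged.append(g)
            density[g] = 0
        else:
            density[g] += 1
            if density[g] > threshold:
                destaged.append(g)   # force destage of a dense group
                density[g] = 0
    return destaged
```

Repeated calls model repeated scans: a stubborn group is eventually destaged even though it never reports itself ready.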

CACHE TRANSFER TIME MITIGATION

In accordance with one implementation, a method for mitigating cache transfer time entails reading data into memory from at least two consecutive elliptical data tracks in a main store region of data storage and writing the data read from the at least two consecutive elliptical data tracks to a spiral data track within a cache storage region.

DYNAMIC PREMIGRATION THROTTLING FOR TIERED STORAGE
20190087342 · 2019-03-21

A dynamic premigration protocol is implemented in response to a secondary tier returning to an operational state and an amount of data associated with a premigration queue of a primary tier exceeding a first threshold. The dynamic premigration protocol can comprise at least a temporary premigration throttling level. An original premigration protocol is implemented in response to an amount of data associated with the premigration queue decreasing below the first threshold.
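The two transitions in this abstract, switch to the throttled protocol when the secondary tier recovers while the queue is over the first threshold, and restore the original protocol when the queue drains below it, amount to a small state function. A minimal sketch, with illustrative protocol names that are assumptions:

```python
# Hypothetical sketch of the protocol selection: the dynamic (throttled)
# premigration protocol is applied when the secondary tier returns to an
# operational state with the premigration queue over the first threshold;
# the original protocol is restored once the queue drops below it.

def select_protocol(secondary_operational, queued_bytes, first_threshold,
                    current_protocol):
    if secondary_operational and queued_bytes > first_threshold:
        return "dynamic-throttled"   # temporary premigration throttling level
    if queued_bytes < first_threshold:
        return "original"            # queue has drained below the threshold
    return current_protocol          # otherwise keep the current protocol
```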

Last writers of datasets in storage array errors

Examples discussed herein are directed to last writers of datasets in storage array errors. In some examples, a dataset integrity error detection is recorded. The dataset integrity error may be in a write path of a storage array and the write path may include a first controller node and a second controller node of the storage array. A detector of the dataset integrity error may be determined. A last writer of the dataset in the write path prior to the dataset integrity error detection may also be determined. A processing location in the write path associated with the dataset integrity error may be determined.

DISTRIBUTED SAFE DATA COMMIT IN A DATA STORAGE SYSTEM

In one embodiment, a safe data commit process manages the allocation of task control blocks (TCBs) as a function of the type of task control block (TCB) to be allocated for destaging and as a function of the identity of the RAID storage rank to which the data is being destaged. For example, the allocation of background TCBs is prioritized over the allocation of foreground TCBs for destage operations. In addition, the number of background TCBs allocated to any one RAID storage rank is limited. Once the limit of background TCBs for a particular RAID storage rank is reached, the distributed safe data commit logic switches to allocating foreground TCBs. Further, the number of foreground TCBs allocated to any one RAID storage rank is also limited. Other features and aspects may be realized, depending upon the particular application.
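The per-rank allocation policy, prefer background TCBs, switch to foreground TCBs once a rank's background limit is reached, and stop once the foreground limit is also reached, can be sketched in a few lines. Function and variable names are assumptions for illustration.

```python
# Illustrative sketch of the TCB allocation policy: background TCBs are
# prioritized for destage, each RAID rank has a background limit and a
# foreground limit, and the allocator falls back from one to the other.
# Counters are per-rank dicts; names and limits are assumptions.

def allocate_tcb(rank, bg_count, fg_count, bg_limit, fg_limit):
    """Return 'background', 'foreground', or None for a destage to `rank`."""
    if bg_count[rank] < bg_limit:
        bg_count[rank] += 1          # background TCBs are prioritized
        return "background"
    if fg_count[rank] < fg_limit:
        fg_count[rank] += 1          # switch to foreground at the bg limit
        return "foreground"
    return None                      # both per-rank limits reached
```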

SPLIT HEAD INVALIDATION FOR CONSUMER BATCHING IN POINTER RINGS
20190065371 · 2019-02-28

A split head invalidation system includes a first memory including a ring buffer, a second memory, and a processor in communication with the first memory. The processor includes a consumer processor and a producer processor. The consumer processor is configured to maintain a head and tail pointer, detect a request to copy a memory entry from the ring buffer, and consume the memory entry. Consuming the memory entry includes iteratively testing a value associated with the memory entry in a slot indicated by the head pointer, retrieving the respective memory entry from the slot, and advancing the head pointer to the next slot until reaching a threshold quantity of slots. Additionally, the consumer processor is configured to invalidate each slot from the head pointer to the tail pointer after reaching the threshold quantity.
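The consumer behavior above, copy out entries and advance the head for up to a threshold number of slots, then invalidate all of the consumed slots in a second pass, can be sketched as a small ring buffer. The layout and method names are assumptions for illustration, not the patent's implementation.

```python
# Sketch of split head invalidation on a ring buffer: the consumer first
# retrieves up to `threshold` entries while advancing the head, and only
# afterwards invalidates (clears) each consumed slot in one pass. Deferring
# the invalidation batches the writes that the producer's CPU must observe.

class SplitHeadRing:
    def __init__(self, size):
        self.slots = [None] * size   # None marks an invalid (empty) slot
        self.head = 0                # consumer: next slot to consume
        self.tail = 0                # producer: next slot to fill
        self.size = size

    def produce(self, value):
        # Producer side: place the value at the tail if that slot is free.
        if self.slots[self.tail] is not None:
            return False             # ring is full
        self.slots[self.tail] = value
        self.tail = (self.tail + 1) % self.size
        return True

    def consume_batch(self, threshold):
        consumed, start = [], self.head
        # Phase 1: retrieve up to `threshold` valid entries, advancing the
        # head, without clearing the slots yet.
        while len(consumed) < threshold and self.slots[self.head] is not None:
            consumed.append(self.slots[self.head])
            self.head = (self.head + 1) % self.size
        # Phase 2: invalidate every consumed slot, from the old head to the
        # new head, in a single pass.
        for k in range(len(consumed)):
            self.slots[(start + k) % self.size] = None
        return consumed
```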

Identification of blocks to be retained in a cache based on temperature

A storage device made up of multiple storage media is configured such that one such medium serves as a cache for data stored on another of such media. The device includes a controller configured to manage the cache by consolidating information concerning obsolete data stored in the cache with information concerning data no longer desired to be stored in the cache, and erase segments of the cache containing one or more of the blocks of obsolete data and the blocks of data that are no longer desired to be stored in the cache to produce reclaimed segments of the cache.
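The consolidation step, merging the obsolete-block information with the no-longer-wanted-block information and reclaiming segments covered by the merged set, can be sketched briefly. The data layout and names are assumptions for illustration.

```python
# Illustrative sketch: obsolete blocks and no-longer-wanted blocks are
# consolidated into one invalid set, and any cache segment whose blocks all
# fall in that set is erased and reported as reclaimed. Names are
# assumptions, not from the patent.

def reclaim_segments(segments, obsolete, unwanted):
    """segments: dict of segment_id -> set of block ids stored there."""
    invalid = obsolete | unwanted      # consolidated invalidation info
    reclaimed = []
    for seg_id, blocks in segments.items():
        if blocks and blocks <= invalid:
            blocks.clear()             # erase the segment's contents
            reclaimed.append(seg_id)
    return reclaimed
```

Segments containing any still-valid block are left alone; a fuller model would first relocate the valid blocks, which this sketch omits.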

Storage system and a method used by the storage system

A storage system connected to a production host and a backup host performs failover processing between them. In response to a failure of the production host, metadata of data blocks that have been cached is obtained from an elastic space located in a fast disk of the storage system. A storage capacity of the elastic space is expanded. The data blocks to which the metadata corresponds are obtained according to the metadata and the storage capacity of the expanded elastic space, and are stored in the expanded elastic space. In response to the backup host requesting the data blocks to which the metadata corresponds, when those data blocks have already been stored in the expanded elastic space, they are obtained from the expanded elastic space and transmitted to the backup host.