
Methods and systems for directly mapping a backend block address into a physical address of a caching device

A storage device made up of multiple storage media is configured such that one medium serves as a cache for data stored on another. The device includes a controller configured to manage the cache by consolidating information about obsolete data stored in the cache with information about data that is no longer desired in the cache, and to erase cache segments containing one or more blocks of obsolete or no-longer-desired data, producing reclaimed segments of the cache.
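
The reclamation flow described here can be sketched as below. The segment layout, block identifiers, and the relocation of still-valid blocks are illustrative assumptions, not details taken from the abstract:

```python
def reclaim_segments(segments, obsolete, undesired):
    """Consolidate invalidation info, then erase qualifying segments.

    segments:  dict mapping segment id -> set of block ids it holds
    obsolete:  block ids whose cached copy is stale
    undesired: block ids the caching policy no longer wants to keep
    Returns (reclaimed segment ids, blocks relocated elsewhere).
    """
    invalid = set(obsolete) | set(undesired)    # consolidation step
    reclaimed, relocated = [], []
    for seg_id, blocks in segments.items():
        if blocks & invalid:                    # one or more invalid blocks
            relocated.extend(blocks - invalid)  # keep still-valid data
            segments[seg_id] = set()            # erase -> reclaimed segment
            reclaimed.append(seg_id)
    return reclaimed, relocated
```

Consolidating both sources of invalidation into one set lets a single pass decide which segments are worth erasing.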

LAST WRITERS OF DATASETS IN STORAGE ARRAY ERRORS
20180032397 · 2018-02-01

Examples discussed herein are directed to identifying the last writers of datasets involved in storage array errors. In some examples, detection of a dataset integrity error is recorded. The dataset integrity error may occur in a write path of a storage array, where the write path includes a first controller node and a second controller node of the storage array. A detector of the integrity error, the last writer of the dataset in the write path prior to the detection, and a processing location in the write path associated with the error may each be determined.
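
One way to read this: if the write path is an ordered sequence of stages, the last stage that wrote the dataset before the detecting stage bounds where the corruption was introduced. A minimal sketch, with stage names and interfaces assumed for illustration:

```python
def localize_error(write_path, writers, detector):
    """Return (last_writer, suspect_region) for an integrity error.

    write_path: ordered stage names, e.g. ["ctrl_a", "ctrl_b", "backend"]
    writers:    stages that actually wrote the dataset, in path order
    detector:   stage that detected the integrity error
    """
    d = write_path.index(detector)
    last_writer = None
    for w in writers:
        if write_path.index(w) < d:      # only writers prior to detection
            last_writer = w
    if last_writer is None:
        return None, write_path[: d + 1]
    # corruption lies between the last writer and the detector, inclusive
    return last_writer, write_path[write_path.index(last_writer): d + 1]
```

The suspect region is everything from the last writer through the detector, which is where further diagnosis would focus.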

Multi-level snapshot caching
09875184 · 2018-01-23

A method for processing a read request comprises an IO filter driver intercepting a read request that includes a logical block address (LBA) of the storage device, and retrieving a disk identifier (ID) associated with the LBA from a metadata file associated with the storage device. The method further comprises sending the LBA and the disk ID to a daemon configured to read from and write to a cache. If the daemon returns cached data associated with the LBA and the disk ID, the method returns the cached data in response to the read request; otherwise, the method transmits the read request to the storage device.
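
The cache-first read path above can be sketched as follows; the daemon and device interfaces are toy stand-ins, not the patented components:

```python
class CacheDaemon:
    """Toy daemon: keeps cached blocks keyed by (LBA, disk ID)."""
    def __init__(self):
        self.store = {}
    def get(self, lba, disk_id):
        return self.store.get((lba, disk_id))  # None on a cache miss

class BackingDevice:
    """Toy storage device that fabricates data for any LBA."""
    def read(self, lba):
        return b"disk-%d" % lba

def handle_read(lba, metadata, daemon, storage):
    """Cache-first read path: metadata maps LBA -> disk identifier."""
    disk_id = metadata[lba]            # disk ID from the metadata file
    data = daemon.get(lba, disk_id)    # hand LBA + disk ID to the daemon
    if data is not None:
        return data                    # cache hit: serve cached data
    return storage.read(lba)           # cache miss: forward to the device
```

Keying the cache on (LBA, disk ID) rather than LBA alone is what lets one daemon serve multiple virtual disks without collisions.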

Caching Network Fabric for High Performance Computing
20180020054 · 2018-01-18

An apparatus and method exchange data between two nodes of a high performance computing (HPC) system using a data communication link. The apparatus has one or more processing cores, RDMA engines, cache coherence engines, and multiplexers. The multiplexers may be programmed by a user application, for example through an API, to selectively couple the RDMA engines, the cache coherence engines, or a mix of the two to the data communication link. Bulk data transfer to the nodes of the HPC system may be performed using paged RDMA during initialization. Then, during the computation proper, random access to remote data may be performed using a coherence protocol (e.g., MESI) that operates on much smaller cache lines.
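
The selection policy implied here — bulk paged RDMA during initialization, cache-line coherence traffic during computation — can be sketched as a simple decision rule. The threshold value and string labels are assumptions for illustration only:

```python
BULK_THRESHOLD = 4096  # bytes; assumed cutover between the two engines

def select_engine(phase, nbytes):
    """Pick which engine the multiplexer couples to the data link.

    phase:  "init" during bulk initialization, "compute" afterwards
    nbytes: size of the transfer being requested
    """
    if phase == "init" or nbytes >= BULK_THRESHOLD:
        return "rdma"        # paged RDMA for bulk data movement
    return "coherence"       # MESI-style access on small cache lines
```

In the patent's scheme the application programs the multiplexers through an API; a rule like this would sit behind that API to route each access to the cheaper engine.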

METHODS AND SYSTEMS FOR DIRECTLY MAPPING A BACKEND BLOCK ADDRESS INTO A PHYSICAL ADDRESS OF A CACHING DEVICE
20170270049 · 2017-09-21

A storage device made up of multiple storage media is configured such that one medium serves as a cache for data stored on another. The device includes a controller configured to manage the cache by consolidating information about obsolete data stored in the cache with information about data that is no longer desired in the cache, and to erase cache segments containing one or more blocks of obsolete or no-longer-desired data, producing reclaimed segments of the cache.

Using unused portion of the storage space of physical storage devices configured as a RAID

Physical storage devices are configured as a redundant array of independent disks (RAID). As such, storage space of the physical storage devices is allocated to the RAID, and each physical storage device is part of the RAID. Where a portion of the storage space of the physical storage devices is not allocated to the RAID, this portion of the storage space is configured so that it is usable and is not wasted.
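
The leftover capacity can be computed as sketched below. Treating each member's RAID contribution as the smallest device's capacity is an illustrative assumption; real layouts vary by RAID level and chunk size:

```python
def unused_regions(device_sizes):
    """Return per-device space left over after RAID allocation.

    A RAID built over heterogeneous devices can use only the smallest
    member's capacity on each device; anything beyond that is the
    unused portion the text proposes to expose. Sizes in bytes.
    """
    raid_share = min(device_sizes)   # capacity each member contributes
    return [size - raid_share for size in device_sizes]
```

The nonzero entries are the regions that would otherwise be wasted and can be configured as additional usable storage.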

Methods and systems for erasing a segment from a flash cache
09697133 · 2017-07-04

A storage device made up of multiple storage media is configured such that one medium serves as a cache for data stored on another. The device includes a controller configured to manage the cache by consolidating information about obsolete data stored in the cache with information about data that is no longer desired in the cache, and to erase cache segments containing one or more blocks of obsolete or no-longer-desired data, producing reclaimed segments of the cache.

WRITE DATA REQUEST PROCESSING SYSTEM AND METHOD IN A STORAGE ARRAY

According to the write data request processing method and storage array provided in embodiments of the present invention, a controller is connected to a cache device via a switching device, and an input/output manager is connected via the same switching device both to the controller and to the cache device. The controller obtains a cache address from the cache device for the to-be-written data according to the write data request and sends an identifier of the cache device and the cache address to the input/output manager via the switching device; the input/output manager then writes the to-be-written data to the cache address via the switching device.
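
The two-hop hand-off (controller allocates, I/O manager writes, both through the switch) can be sketched as below. The class names and interfaces are toy stand-ins, not the patented components:

```python
class Switch:
    """Stands in for the switching device: routes by component id."""
    def __init__(self):
        self._components = {}
    def attach(self, cid, component):
        self._components[cid] = component
    def route(self, cid):
        return self._components[cid]

class CacheDevice:
    """Toy cache device: hands out addresses and stores written data."""
    def __init__(self):
        self.mem = {}
        self._next = 0
    def allocate(self, n):
        addr = self._next
        self._next += n
        return addr
    def write(self, addr, data):
        self.mem[addr] = data

class IOManager:
    def write_data(self, switch, cache_id, addr, data):
        # reaches the cache device via the switching device
        switch.route(cache_id).write(addr, data)

def process_write(switch, io_manager, cache_id, data):
    """Controller-side flow: obtain a cache address, hand off the write."""
    addr = switch.route(cache_id).allocate(len(data))     # via the switch
    io_manager.write_data(switch, cache_id, addr, data)   # send id + address
    return addr
```

The point of the split is that the controller never touches the payload; only the small (identifier, address) tuple crosses from controller to I/O manager.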

ACCELERATED COMPUTER SYSTEM AND METHOD FOR WRITING DATA INTO DISCRETE PAGES
20170168731 · 2017-06-15

The instant disclosure provides an accelerated computer system and an accelerated method for writing data into discrete pages. In the accelerated method, write commands are executed, each including write data and a write address that corresponds to a write page among the first pages of a block of a hard drive. Whether the write pages are successive is identified from the write addresses. If the write pages are discrete, stored data is acquired by reading the block according to the write addresses, the data stored in the first pages is written into second pages of a memory, the write data is written bit by bit into the second pages according to the write addresses, and the data stored in the second pages is then written back into the first pages.
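
The read-merge-writeback sequence can be sketched at page granularity as follows (the abstract works bit by bit; working on whole page payloads here is a simplifying assumption, as are the data structures):

```python
def pages_are_successive(page_indices):
    """True when the target write pages form a consecutive run."""
    s = sorted(page_indices)
    return all(b - a == 1 for a, b in zip(s, s[1:]))

def write_block(block, writes):
    """block:  list of page payloads (the 'first pages' of one block)
    writes: dict mapping page index -> new payload"""
    if pages_are_successive(writes):
        for i, data in writes.items():   # successive: write pages directly
            block[i] = data
        return block
    buf = list(block)                    # discrete: read the whole block
    for i, data in writes.items():       # merge write data into the buffer
        buf[i] = data                    # (the 'second pages' in memory)
    block[:] = buf                       # write the buffer back
    return block
```

The discrete path trades one extra block read and buffer merge for avoiding many small scattered page writes.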

TAPE-MANAGED PARTITION SUPPORT FOR EFFECTIVE WORKLOAD ALLOCATION AND SPACE MANAGEMENT

In one embodiment, a system includes a disk cache that includes a plurality of hard disk drives (HDDs) and a controller. The controller is configured to create one or more tape-managed partitions in the disk cache, each of the one or more tape-managed partitions being configured to store data that is subject to hierarchical storage management (HSM). The controller is also configured to create a premigration queue configured to service premigration data for all of the one or more tape-managed partitions. Moreover, the controller is configured to receive a premigration delay value for a first tape-managed partition, the premigration delay value defining a time period that elapses prior to queuing the premigration data for the first tape-managed partition to the premigration queue. The premigration delay value is based on a volume creation time. Other systems, methods, and computer program products are described in accordance with more embodiments.
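
The premigration delay policy — queue a volume only after a creation-time-based delay has elapsed — can be sketched as below. The time units and queue representation are assumptions for illustration:

```python
from collections import deque

def queue_premigration(volumes, partition_delay, now):
    """Build the premigration queue for one tape-managed partition.

    volumes:         list of (name, created_at) pairs
    partition_delay: the partition's premigration delay value
    now:             current time, same units as created_at
    A volume is queued once now - created_at >= partition_delay.
    """
    queue = deque()
    for name, created_at in volumes:
        if now - created_at >= partition_delay:
            queue.append(name)           # delay elapsed: eligible
    return queue
```

Basing the delay on volume creation time keeps recently written, likely-to-be-rewritten volumes off the shared premigration queue until they have aged.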