G06F3/0685

DATA MOVEMENT INTIMATION USING INPUT/OUTPUT (I/O) QUEUE MANAGEMENT

A computer-implemented method according to one embodiment includes causing a plurality of I/O queues to be created between an initiator and a storage target device. The created I/O queues are reserved for I/O requests for which adjustments of current priorities of extents of data associated with the I/O requests are to be performed. The method further includes determining identifying information of an I/O request sent from the initiator to the storage target device and determining whether the I/O request was sent from the initiator to the storage target device using one of the created I/O queues. In response to a determination that the I/O request was sent using a first of the created I/O queues having one of the adjustments associated therewith, a tiering manager of the storage target device is instructed to perform the adjustment on the current priority of the extent of data associated with the I/O request.
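
A minimal sketch of how reserved queues could carry an implied priority adjustment; the names (IORequest, TieringManager) and the +1/-1 deltas are illustrative assumptions, not taken from the claim.

    # Hypothetical sketch: queues reserved at connect time imply a priority
    # adjustment for the extent touched by any request sent on them.
    from dataclasses import dataclass

    @dataclass
    class IORequest:
        queue_id: int      # queue the initiator used to send the request
        extent_id: int     # extent of data the request touches

    class TieringManager:
        def __init__(self):
            self.extent_priority = {}          # extent_id -> current priority

        def adjust_priority(self, extent_id, delta):
            # apply the requested adjustment to the extent's current priority
            self.extent_priority[extent_id] = self.extent_priority.get(extent_id, 0) + delta

    class StorageTarget:
        def __init__(self, tiering_manager):
            # reserved queues created between initiator and target: id -> adjustment
            self.reserved_queues = {1: +1, 2: -1}
            self.tiering = tiering_manager

        def handle(self, req: IORequest):
            # determine whether the request arrived on one of the reserved queues
            adjustment = self.reserved_queues.get(req.queue_id)
            if adjustment is not None:
                # instruct the tiering manager to adjust the extent's priority
                self.tiering.adjust_priority(req.extent_id, adjustment)

    target = StorageTarget(TieringManager())
    target.handle(IORequest(queue_id=1, extent_id=42))
    print(target.tiering.extent_priority)    # {42: 1} -- extent 42 promoted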

SELECTIVELY SHEARING DATA WHEN MANIPULATING DATA DURING RECORD PROCESSING

A computer-implemented method, according to one embodiment, includes: storing records in an input data buffer, where each of the records includes a key which is appended to payload data in the respective record. Moreover, for each of the records: shearing the key associated with the record from the payload data, normalizing the sheared key, and storing the normalized sheared key in a first target area of memory. A determination is made as to whether a size of the payload data in the record is outside a predetermined range, and in response to determining that the size of the payload data in the record is outside the predetermined range, the payload data is stored in a second target area of memory. A data locator is also appended to the normalized sheared key in the first target area of memory to form a sheared record.
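
A minimal sketch of the shearing step, assuming fixed-length keys appended to the payload and simple uppercase normalization; KEY_LEN, the size range, and the list-based target areas are illustrative assumptions.

    # Hypothetical sketch: shear the appended key off each record, normalize it,
    # and park oversized/undersized payloads in a second target area.
    KEY_LEN = 8
    MIN_PAYLOAD, MAX_PAYLOAD = 16, 64        # assumed "predetermined range"

    key_area = []        # first target area of memory
    payload_area = []    # second target area of memory

    def process(record: bytes):
        payload, key = record[:-KEY_LEN], record[-KEY_LEN:]   # shear the key
        norm_key = key.upper()                                 # normalize it
        if not (MIN_PAYLOAD <= len(payload) <= MAX_PAYLOAD):
            # payload is outside the range: store it in the second target area
            payload_area.append(payload)
            locator = len(payload_area) - 1                    # data locator
        else:
            locator = None      # payload kept in place (assumed behavior)
        # sheared record: normalized key plus the appended data locator
        key_area.append((norm_key, locator))

    process(b"x" * 100 + b"key00001")
    print(key_area, len(payload_area))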

MANAGING HIGH PERFORMANCE STORAGE SYSTEMS WITH HYBRID STORAGE TECHNOLOGIES

There is provided a method for managing a solid state storage system with hybrid storage technologies. The method includes monitoring one or more storage request streams to identify operating mode characteristics therein from among a set of possible operating mode characteristics. The set of possible operating mode characteristics correspond to a set of available operating modes of the hybrid storage technologies. The method further includes identifying a current operating mode from among the set of available operating modes responsive to the identified operating mode characteristics. The method also includes predicting a likely future operating mode responsive to variations in workload requirements to generate at least one future operating mode prediction. The method additionally includes controlling at least one of data placement, wear leveling, and garbage collection, responsive to the at least one future operating mode prediction.
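
A minimal sketch of the monitor/identify/predict/control loop, with assumed mode names, a write-fraction characteristic, and a naive majority-vote predictor standing in for whatever the patent actually uses.

    # Hypothetical sketch: classify the observed request stream into an
    # operating mode, predict the next mode from recent history, and pick a
    # placement/wear-leveling/GC policy accordingly.
    from collections import Counter, deque

    def classify(requests):
        # operating mode characteristic: fraction of writes in the window
        writes = sum(1 for op, _ in requests if op == "write")
        frac = writes / max(len(requests), 1)
        return "write_heavy" if frac > 0.7 else "read_heavy" if frac < 0.3 else "mixed"

    history = deque(maxlen=8)

    def predict_next():
        # naive prediction: the mode seen most often in recent history
        return Counter(history).most_common(1)[0][0] if history else "mixed"

    def control(predicted):
        # steer data placement, wear leveling, and GC off the prediction
        return {"write_heavy": "reserve fresh blocks, defer GC",
                "read_heavy": "promote hot data, run GC aggressively",
                "mixed": "default policy"}[predicted]

    window = [("write", 0)] * 8 + [("read", 1)] * 2
    history.append(classify(window))
    print(control(predict_next()))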

DATA MIGRATION SCHEDULE PREDICTION USING MACHINE LEARNING
20230051103 · 2023-02-16

Various embodiments provide for one or more processor instructions and memory instructions that enable a memory sub-system to predict a schedule for migrating data between memory devices, which can be part of the memory sub-system.
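
A minimal sketch of one way a migration schedule might be predicted, here by extrapolating a block's access-interval trend rather than any particular machine-learning model from the publication; the threshold and function name are assumptions.

    # Hypothetical sketch: estimate how long until a block should be demoted
    # by extrapolating the trend in its recent access intervals.
    def predict_migration_delay(access_times, cold_threshold=3600.0):
        # intervals between the last few accesses to this block
        gaps = [b - a for a, b in zip(access_times, access_times[1:])]
        if not gaps:
            return None
        avg, trend = sum(gaps) / len(gaps), gaps[-1] - gaps[0]
        # predicted next gap; once it nears the threshold, schedule a demotion
        next_gap = avg + trend / max(len(gaps) - 1, 1)
        return max(cold_threshold - next_gap, 0.0)

    print(predict_migration_delay([0, 100, 260, 500]))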

Object memory data flow triggers
11579774 · 2023-02-14

Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to an instruction set of an object memory fabric. This object memory fabric instruction set can include trigger instructions defined in metadata for a particular memory object. Each trigger instruction can comprise a single instruction and action based on reference to a specific object to initiate or perform defined actions such as pre-fetching other objects or executing a trigger program.
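
A minimal sketch, assuming a dictionary-backed object store: a prefetch trigger recorded in an object's metadata is executed whenever that object is referenced. Names and the trigger format are illustrative.

    # Hypothetical sketch: trigger instructions live in per-object metadata and
    # prefetch related objects when the owning object is referenced.
    class ObjectMemory:
        def __init__(self):
            self.objects = {}      # object id -> (data, metadata)
            self.cache = set()     # ids already staged by a trigger

        def put(self, oid, data, prefetch=()):
            # trigger instructions stored in metadata for this object
            self.objects[oid] = (data, {"triggers": [("prefetch", p) for p in prefetch]})

        def get(self, oid):
            data, meta = self.objects[oid]
            for action, target in meta["triggers"]:
                if action == "prefetch" and target in self.objects:
                    self.cache.add(target)     # perform the defined action
            return data

    mem = ObjectMemory()
    mem.put("index", b"...", prefetch=("page1", "page2"))
    mem.put("page1", b"...")
    mem.get("index")
    print(mem.cache)    # {'page1'} -- page2 not stored yet, so not staged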

Storage system and method of operating the same

A storage system includes a storage device and a host device. The storage device includes a nonvolatile memory device having a first size and a first volatile memory device having a second size smaller than the first size and configured to operate as a cache memory with respect to the nonvolatile memory device. The first volatile memory device is configured to allow a first bus portion access to cache data stored in the first volatile memory device. The host device is configured to generate a cache table corresponding to information in the cache data stored in the first volatile memory device and configured to read the cache data stored in the first volatile memory device via the first bus portion based on the cache table.
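
A minimal sketch of the host-side cache table, with illustrative Device and Host classes; the direct cache read stands in for access over the first bus portion.

    # Hypothetical sketch: the host mirrors the device's cache layout in a
    # cache table and reads cached blocks directly, bypassing the NVM path.
    class Device:
        def __init__(self):
            self.nvm = {}            # large nonvolatile store: lba -> data
            self.cache = {}          # smaller volatile cache: slot -> data
            self.cache_map = {}      # lba -> slot (reported to the host)

        def write(self, lba, data, slot):
            self.nvm[lba] = data
            self.cache[slot] = data
            self.cache_map[lba] = slot

    class Host:
        def __init__(self, device):
            self.device = device
            self.cache_table = dict(device.cache_map)   # host-side cache table

        def read(self, lba):
            slot = self.cache_table.get(lba)
            if slot is not None:
                return self.device.cache[slot]   # direct cache read via the bus
            return self.device.nvm[lba]          # fall back to the nonvolatile path

    dev = Device()
    dev.write(7, b"hot data", slot=0)
    print(Host(dev).read(7))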

Garbage collection for a deduplicated cloud tier using functions

Systems and methods for performing data protection operations including garbage collection operations and copy forward operations. For deduplicated data stored in a cloud-based storage or in a cloud tier that stores containers containing dead and live segments or dead and live regions such as compression regions, the dead compression regions are deleted by copying the live compression regions into new containers and then deleting the old containers. The copy forward is based on a recipe from a data protection system and is performed using a serverless approach.
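
A minimal sketch of the copy-forward step, assuming a recipe that lists the live compression regions per container; container and region identifiers are illustrative.

    # Hypothetical sketch: copy live compression regions into new containers,
    # then delete the old containers holding the dead regions.
    def copy_forward(containers, recipe):
        """containers: {cid: {rid: bytes}}, recipe: {cid: set of live rids}."""
        new_containers, next_cid = {}, max(containers, default=0) + 1
        for cid, regions in list(containers.items()):
            live = {rid: data for rid, data in regions.items()
                    if rid in recipe.get(cid, set())}
            if live:
                new_containers[next_cid] = live    # copy live regions forward
                next_cid += 1
            del containers[cid]                    # then delete the old container
        containers.update(new_containers)
        return containers

    store = {1: {"r0": b"dead", "r1": b"live"}, 2: {"r0": b"dead"}}
    print(copy_forward(store, {1: {"r1"}}))    # {3: {'r1': b'live'}}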

USING DRIVE COMPRESSION IN UNCOMPRESSED TIER
20230009942 · 2023-01-12

In a storage system such as a SAN, NAS, or storage array that implements hierarchical performance tiers based on rated drive access latency, on-drive compression is used on data stored on a first tier and off-drive compression is used on data stored on a second tier. Off-drive compression is more processor intensive and may introduce some data access latency but reduces storage requirements. On-drive compression is performed at or near line speed but generally yields lower size reduction ratios than off-drive compression. On-drive compression may be implemented at a higher performance tier whereas off-drive compression may be implemented at a lower performance tier. Further, space saving realized from on-drive compression may be applied to over-provisioning.
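
A minimal sketch of the tier-dependent compression choice; zlib levels stand in for on-drive versus off-drive compressors, and the savings estimate feeding over-provisioning is an assumption.

    # Hypothetical sketch: fast tier relies on on-drive compression (host writes
    # raw data), capacity tier compresses on the host before writing.
    import zlib

    def store(data: bytes, tier: str):
        if tier == "performance":
            # on-drive compression happens at line speed inside the drive; the
            # host only estimates the saved space to feed over-provisioning
            saved = len(data) - len(zlib.compress(data, level=1))   # rough estimate
            return data, max(saved, 0)
        # capacity tier: off-drive compression on the host, higher ratio, more CPU
        return zlib.compress(data, level=9), 0

    blob = b"example " * 1024
    fast, saved = store(blob, "performance")
    slow, _ = store(blob, "capacity")
    print(len(fast), saved, len(slow))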

PMEM cache RDMA security

Techniques are described for providing one or more clients with direct access to cached data blocks within a persistent memory cache on a storage server. In an embodiment, a storage server maintains a persistent memory cache comprising a plurality of cache lines, each of which represents an allocation unit of block-based storage. The storage server maintains an RDMA table that includes a plurality of table entries, each of which maps a respective client to one or more cache lines and a remote access key. An RDMA access request to access a particular cache line is received from a storage server client. The storage server identifies access credentials for the client and determines whether the client has permission to perform the RDMA access on the particular cache line. Upon determining that the client has permission, the cache line is accessed from the persistent memory cache and sent to the storage server client.
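
A minimal sketch of the permission check, assuming the RDMA table maps each client to its permitted cache lines and a remote access key; class and field names are illustrative.

    # Hypothetical sketch: validate the client's access key and cache-line
    # permissions before serving a block from the persistent memory cache.
    from dataclasses import dataclass

    @dataclass
    class RdmaEntry:
        cache_lines: set    # cache line numbers the client may access
        access_key: int     # remote access key registered for the client

    class PmemCacheServer:
        def __init__(self):
            self.cache = {3: b"block data"}              # cache line -> data block
            self.rdma_table = {"clientA": RdmaEntry({3}, access_key=0xBEEF)}

        def rdma_read(self, client, key, line):
            entry = self.rdma_table.get(client)
            # verify credentials and that the client may touch this cache line
            if entry is None or entry.access_key != key or line not in entry.cache_lines:
                raise PermissionError("RDMA access denied")
            return self.cache[line]          # serve from the persistent memory cache

    server = PmemCacheServer()
    print(server.rdma_read("clientA", 0xBEEF, 3))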

Utilizing a hybrid tier which mixes solid state device storage and hard disk drive storage

A technique manages data within a storage array. The technique involves forming a hybrid tier within the storage array, the hybrid tier including SSD storage and HDD storage. The technique further involves, after the hybrid tier is formed, providing hybrid ubers (or Redundant Array of Independent Disks (RAID) extents) from the SSD storage and the HDD storage of the hybrid tier. The technique further involves, after the hybrid ubers are provided, accessing the hybrid ubers to perform data storage operations.
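
A minimal sketch of forming hybrid ubers, assuming fixed-width RAID extents built from alternating SSD and HDD slices; slice names and the width are illustrative.

    # Hypothetical sketch: interleave SSD and HDD slices so every uber
    # (RAID extent) spans both kinds of storage in the hybrid tier.
    def form_hybrid_ubers(ssd_slices, hdd_slices, width=4):
        """Pair SSD and HDD slices so each uber mixes both storage types."""
        pool = [s for pair in zip(ssd_slices, hdd_slices) for s in pair]
        return [pool[i:i + width] for i in range(0, len(pool) - width + 1, width)]

    ssd = [f"ssd{i}" for i in range(4)]
    hdd = [f"hdd{i}" for i in range(4)]
    print(form_hybrid_ubers(ssd, hdd))
    # [['ssd0', 'hdd0', 'ssd1', 'hdd1'], ['ssd2', 'hdd2', 'ssd3', 'hdd3']]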