G06F3/0646

Data storage arrangement
10235369 · 2019-03-19

A computer arrangement includes a plurality of cluster systems, each configured to archive data from at least one data processing installation. Each of the plurality of cluster systems is of modular design and includes at least one first component computer that receives data to be archived from the data processing installation, at least one mass memory system that buffer-stores the data to be archived, a second component computer that backs up the data to be archived on at least one further mass memory apparatus, and a cluster controller that controls the individual component computers of the respective cluster system. The arrangement further includes at least one data connection for data-oriented coupling of the plurality of cluster systems, and at least one composite controller that queries status data via a query interface of the cluster controllers of the plurality of cluster systems and transmits work orders to a control interface of those cluster controllers.
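
The query-interface/control-interface split described above can be illustrated with a minimal sketch. All class and method names here are illustrative assumptions, not terms from the patent: a composite controller polls each cluster controller for status and dispatches work orders through a separate control path.

```python
# Hypothetical sketch: a composite controller polls each cluster
# controller's query interface and sends work orders to its control
# interface. Names and the least-loaded dispatch policy are assumptions.

class ClusterController:
    def __init__(self, name):
        self.name = name
        self.orders = []

    def query_status(self):
        # Query interface: report this cluster's state.
        return {"cluster": self.name, "pending_orders": len(self.orders)}

    def submit_order(self, order):
        # Control interface: accept a work order for the component computers.
        self.orders.append(order)

class CompositeController:
    def __init__(self, clusters):
        self.clusters = clusters

    def poll(self):
        return [c.query_status() for c in self.clusters]

    def dispatch(self, order):
        # Send the work order to the cluster with the fewest pending orders.
        target = min(self.clusters, key=lambda c: len(c.orders))
        target.submit_order(order)
        return target.name

composite = CompositeController([ClusterController("A"), ClusterController("B")])
composite.dispatch({"archive": "dataset-1"})
```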

Fast resynchronization of a mirrored aggregate using disk-level cloning

Systems and methods for performing a fast resynchronization of a mirrored aggregate of a distributed storage system using disk-level cloning are provided. According to one embodiment, responsive to a failure of a disk of a plex of the mirrored aggregate utilized by a high-availability (HA) pair of nodes of a distributed storage system, disk-level clones of the disks of the healthy plex may be created external to the distributed storage system and attached to the degraded HA partner node. After detection of the cloned disks by the degraded HA partner node, mirror protection may be efficiently re-established by assimilating the cloned disks within the failed plex and then resynchronizing the mirrored aggregate by performing a level-1 resync of the failed plex with the healthy plex based on a base file system snapshot of the healthy plex. In this manner, a more time-consuming level-0 resync may be avoided.
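
The saving of a level-1 resync over a level-0 resync can be sketched with toy block maps: since the cloned disks already hold the base snapshot's image, only blocks changed since that snapshot need to be copied. The function names and block maps below are illustrative assumptions.

```python
# Toy contrast between a full (level-0) and an incremental (level-1)
# resync. Level-1 copies only blocks that differ from the base file
# system snapshot of the healthy plex.

def level0_resync(healthy, failed):
    # Copy every block from the healthy plex.
    failed.clear()
    failed.update(healthy)
    return len(healthy)

def level1_resync(healthy, failed, base_snapshot):
    # Copy only blocks changed since the base snapshot.
    copied = 0
    for block, data in healthy.items():
        if base_snapshot.get(block) != data:
            failed[block] = data
            copied += 1
    return copied

base = {0: "a", 1: "b", 2: "c", 3: "d"}
healthy = {0: "a", 1: "b2", 2: "c", 3: "d3"}  # two blocks changed since base
failed = dict(base)                           # cloned disks start at the base image
copied = level1_resync(healthy, failed, base)
```

With four blocks and two changes, the level-1 path touches two blocks where a level-0 resync would copy all four.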

Methods and apparatus for peer-to-peer data channels for storage devices

A method may include transferring data between a host and a first storage device through a first storage interface, transferring data between the host and a second storage device through a second storage interface, and transferring data between the first storage device and the second storage device through a peer-to-peer channel. A storage system may include a host interface, a first storage device having a first storage interface coupled to the host interface, a second storage device having a second storage interface coupled to the host interface, and a peer-to-peer bus coupled between the first and second storage devices. A storage device may include a storage medium, a storage device controller coupled to the storage medium, a storage interface coupled to the storage device controller, and a peer-to-peer interface coupled to the storage device controller.
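
The three data paths named above (host to first device, host to second device, and device to device over the peer-to-peer channel) can be modeled in a short sketch. The class, methods, and addresses are illustrative assumptions; the point is that the peer copy never passes through the host.

```python
# Toy model: host<->device transfers go over each storage interface,
# while a device<->device transfer uses the peer-to-peer channel and
# bypasses the host entirely.

class StorageDevice:
    def __init__(self, name):
        self.name = name
        self.medium = {}   # storage medium (address -> data)
        self.peer = None   # peer-to-peer interface

    def host_write(self, addr, data):
        # Transfer from the host via the storage interface.
        self.medium[addr] = data

    def host_read(self, addr):
        return self.medium[addr]

    def p2p_copy(self, addr, dst_addr):
        # Transfer directly to the peer device; no host buffering.
        self.peer.medium[dst_addr] = self.medium[addr]

dev_a, dev_b = StorageDevice("A"), StorageDevice("B")
dev_a.peer, dev_b.peer = dev_b, dev_a    # peer-to-peer bus between the devices

dev_a.host_write(0x10, b"payload")       # host -> first device
dev_a.p2p_copy(0x10, 0x20)               # first -> second device, peer channel
```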

Memory mapping for memory, memory modules, and non-volatile memory
20240231824 · 2024-07-11

Apparatuses and methods related to commands to transfer data and/or perform logic operations are described. For example, a command that identifies a location of data and a target for transferring the data may be issued to a memory device. Or a command that identifies a location of data and one or more logic operations to be performed on that data may be issued to a memory device. A memory module may include different memory arrays (e.g., different technology types), and a command may identify data to be transferred between arrays or between controllers for the arrays. Commands may include targets for data expressed in or indicative of channels associated with the arrays, and data may be transferred between channels or between memory devices that share a channel, or both. Some commands may identify data, a target for the data, and a logic operation for the data.
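
A command that names a data location, a target channel, and an optional logic operation might look like the following sketch. The `Command` fields, the channel layout, and the XOR operation are illustrative assumptions, not the patent's encoding.

```python
# Hedged sketch of a command identifying source data, a target channel,
# and an optional logic operation performed as part of the transfer.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    src_channel: int
    src_addr: int
    dst_channel: int
    dst_addr: int
    op: Optional[str] = None   # e.g. "xor" applied during the transfer

def execute(cmd, channels):
    data = channels[cmd.src_channel][cmd.src_addr]
    if cmd.op == "xor":
        # Combine with the data already at the target location.
        data ^= channels[cmd.dst_channel].get(cmd.dst_addr, 0)
    channels[cmd.dst_channel][cmd.dst_addr] = data

# Two memory arrays, each behind its own channel.
channels = {0: {0x00: 0b1100}, 1: {0x40: 0b1010}}
execute(Command(0, 0x00, 1, 0x40, op="xor"), channels)   # transfer + logic op
execute(Command(0, 0x00, 1, 0x41), channels)             # plain transfer
```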

Accumulated data transfer amount access

Systems and methods of initiating a transfer of a quantity of data are described. An accumulated data transfer amount for a periodic data transfer scheduled to be pushed to a primary storage location at a future date is determined. A pull transfer of the accumulated data transfer amount to a secondary storage location is then initiated. An overall data amount in the secondary storage location is then determined; the overall data amount includes the accumulated data transfer amount less a data transfer loss amount and a data processing yield amount, where the data processing yield amount is based upon the accumulated data transfer amount. A transfer of a quantity of data, comprising at least a portion of the overall data amount, is then initiated from the secondary storage location to the primary storage location.
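
Under one reading of the accounting above, the overall amount is the accumulated transfer minus both the transfer loss and the processing yield, with the yield computed from the accumulated amount. The 2% yield rate and the function names below are purely illustrative assumptions.

```python
# Minimal sketch of one interpretation of the overall-amount accounting.
# The yield rate is an assumption; the abstract only says the yield is
# "based upon" the accumulated transfer amount.

def processing_yield(accumulated, rate=0.02):
    # Data processing yield amount, derived from the accumulated amount.
    return accumulated * rate

def overall_amount(accumulated, transfer_loss):
    return accumulated - transfer_loss - processing_yield(accumulated)

overall = overall_amount(1000.0, 50.0)
```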

Balance workloads on nodes based on estimated optimal performance capacity
12050938 · 2024-07-30

Systems, methods, and machine-readable media for monitoring a storage system and correcting demand imbalances among nodes in a cluster are disclosed. A performance manager for the storage system may detect performance imbalances that occur over a period of time. When operating below an optimal performance capacity, the manager may cause a volume to be moved from a node with a high load to a node with a lower load to achieve a preventive result. When operating at or near optimal performance capacity, the manager may cause a QOS limit to be imposed to prevent the workload from exceeding the performance capacity, to achieve a proactive result. When operating abnormally, the manager may cause a QOS limit to be imposed to throttle the workload to bring the node back within the optimal performance capacity of the node, to achieve a reactive result. These actions may be performed independently, or in cooperation.
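
The three corrective modes (preventive volume move, proactive QoS limit, reactive throttle) amount to a decision over where the node's load sits relative to its optimal performance capacity. The thresholds and the normalized capacity unit in this sketch are illustrative assumptions.

```python
# Hedged sketch of the performance manager's three modes. Load is
# expressed as a fraction of the node's optimal performance capacity;
# the 0.9 "near capacity" threshold is an assumption.

def choose_action(load, optimal_capacity=1.0, near=0.9):
    if load < near * optimal_capacity:
        # Operating below capacity: rebalance before a problem develops.
        return "preventive: move a volume from the hot node to a cooler node"
    if load <= optimal_capacity:
        # At or near capacity: cap the workload with a QoS limit.
        return "proactive: impose a QoS limit to keep the workload under capacity"
    # Operating abnormally: throttle back within capacity.
    return "reactive: throttle the workload back within optimal capacity"

action = choose_action(0.95)
```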

Distributed object storage system comprising performance optimizations

A distributed object storage system comprises an encoding module configured to calculate, for a plurality of predetermined values of the spreading requirement, the cumulative size of the sub-fragment files when stored on the file system with the predetermined block size, and to select as the spreading requirement the predetermined value for which the cumulative size is minimal.
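
The selection rule can be sketched as follows: each candidate spreading requirement splits the object into that many sub-fragment files, each file occupies a whole number of file-system blocks, and the candidate with the smallest cumulative on-disk size wins. The object size, candidates, and block size below are illustrative assumptions.

```python
# Sketch of choosing a spreading requirement that minimizes cumulative
# on-disk size, accounting for per-file padding to the block size.

import math

def cumulative_size(object_size, spreading, block_size):
    fragment = math.ceil(object_size / spreading)             # bytes per sub-fragment file
    per_file = math.ceil(fragment / block_size) * block_size  # padded to whole blocks
    return spreading * per_file

def select_spreading(object_size, candidates, block_size):
    return min(candidates, key=lambda s: cumulative_size(object_size, s, block_size))

best = select_spreading(object_size=10_000, candidates=[10, 16, 20], block_size=4096)
```

With a 4096-byte block, more sub-fragment files mean more padding, so the smallest candidate wins here; with other sizes the trade-off can go the other way.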

Block storage using a hybrid memory device

Techniques for block storage using a hybrid memory device are described. In at least some embodiments, a hybrid memory device includes a volatile memory portion, such as dynamic random access memory (DRAM). The hybrid memory device further includes a non-volatile memory portion, such as flash memory. In at least some embodiments, the hybrid memory device can be embodied as a non-volatile dual in-line memory module, or NVDIMM. Techniques discussed herein employ various functionalities to enable the hybrid memory device to be exposed to various entities as an available block storage device.
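
Exposing the hybrid device as block storage can be pictured with a toy model: reads and writes hit the DRAM portion, and a flush persists dirty blocks to the flash portion, much as an NVDIMM save operation does. The class and method names are illustrative assumptions.

```python
# Toy model of a hybrid (DRAM + flash) module exposed as a block device.
# write/read operate on the volatile portion; flush() persists dirty
# blocks to the non-volatile portion.

class HybridBlockDevice:
    def __init__(self, block_size=512):
        self.block_size = block_size
        self.dram = {}     # volatile portion
        self.flash = {}    # non-volatile portion
        self.dirty = set()

    def write_block(self, lba, data):
        self.dram[lba] = data
        self.dirty.add(lba)

    def read_block(self, lba):
        # Prefer the in-DRAM copy; fall back to flash.
        return self.dram.get(lba, self.flash.get(lba))

    def flush(self):
        # e.g. triggered on power loss by the module's backup energy source.
        for lba in self.dirty:
            self.flash[lba] = self.dram[lba]
        self.dirty.clear()

dev = HybridBlockDevice()
dev.write_block(3, b"x" * 512)
dev.flush()
```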

Method and system for storing and recovering data from flash memory

Embodiments of the technology relate to storing user data and metadata in persistent storage in the event of a power failure and then recovering such stored data and metadata when power is restored.

Committing copy-on-write transaction with a persist barrier for a persistent object including payload references

Systems implementing copy-on-write (COW) as described herein may reduce the number of persist barriers executed within a transaction. For instance, a system may eliminate some, most, or all persist barriers related to memory allocation/deallocation in COW transactions. A COW implementation may introduce an extra level of indirection between a persistent type instance and the real data type it encloses. A persistent type may include pointers to old and new versions of the enclosed type's instances. Before modifying an object, a transaction may modify a copy-on-write persistent object and create a new copy of the payload. The modified object may be added to a list of objects written to by the transaction. The transaction may be committed by issuing persist barriers in the commit operation.
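
The structure described (a persistent object holding pointers to old and new payload versions, with persist barriers confined to the commit path) can be modeled in a short sketch. All names are illustrative assumptions, and `persist_barrier()` here is a counter standing in for real cache-flush and fence instructions.

```python
# Toy model of COW commit with persist barriers only in the commit
# operation: modifications write a shadow copy with no barrier; commit
# issues one barrier to persist new copies and one after the pointer flips.

barrier_count = 0

def persist_barrier():
    global barrier_count
    barrier_count += 1   # stand-in for flush + fence (e.g. CLWB + SFENCE)

class PersistentObject:
    def __init__(self, payload):
        self.old = payload   # currently valid version
        self.new = None      # shadow copy under modification

class Transaction:
    def __init__(self):
        self.write_set = []  # objects written to by this transaction

    def modify(self, obj, payload):
        obj.new = payload          # new copy of the payload; no barrier here
        self.write_set.append(obj)

    def commit(self):
        persist_barrier()          # persist all new payload copies
        for obj in self.write_set:
            obj.old, obj.new = obj.new, None   # flip old/new pointers
        persist_barrier()          # persist the pointer flips

obj = PersistentObject({"balance": 10})
tx = Transaction()
tx.modify(obj, {"balance": 25})
tx.commit()
```

However many objects the transaction touches, this sketch issues a fixed number of barriers at commit rather than one per allocation or write.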