G06F11/2064

Scalable non-uniform storage sizes

A plurality of storage nodes cooperating as a storage cluster is provided. Each of the plurality of storage nodes has storage memory. Each storage node of the plurality of storage nodes is configurable to direct erasure-coded striping of data of an inode or data segment across the plurality of storage nodes of the storage cluster, with at least one storage node of the plurality of storage nodes having an amount of storage capacity of the storage memory that differs from the amount of storage capacity of another storage node in the plurality of storage nodes. A method of storing data in a storage cluster is also provided.
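One way the described striping could accommodate non-uniform node capacities is to weight shard placement by free space, so larger nodes absorb proportionally more of each stripe. The sketch below is an illustrative assumption, not the patent's algorithm; the names `Node` and `place_stripe` are hypothetical.

```python
# Hypothetical sketch: distributing erasure-coded stripe shards across
# storage nodes of differing capacity by always picking the node with
# the most free space. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: int                       # total capacity, arbitrary units
    used: int = 0
    shards: list = field(default_factory=list)

    @property
    def free(self) -> int:
        return self.capacity - self.used

def place_stripe(nodes, shard_ids, shard_size=1):
    """Assign each shard of a stripe to the node with the most free space,
    so nodes with larger capacity receive proportionally more shards."""
    placement = {}
    for shard in shard_ids:
        target = max(nodes, key=lambda n: n.free)
        target.shards.append(shard)
        target.used += shard_size
        placement[shard] = target.name
    return placement

nodes = [Node("a", capacity=4), Node("b", capacity=8), Node("c", capacity=8)]
placement = place_stripe(nodes, shard_ids=[f"s{i}" for i in range(10)])
counts = {n.name: len(n.shards) for n in nodes}
```

With these capacities the smaller node "a" ends up holding fewer shards than either of the larger nodes, which is the intent of capacity-aware placement.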

GRANULAR CONSISTENCY GROUP REPLICATION
20170249222 · 2017-08-31

One or more techniques and/or computing devices are provided for granular replication for data protection. For example, a first storage controller may host a first volume. A consistency group, comprising a subset of files, logical unit numbers, and/or other data of the first volume, is defined through a consistency group configuration. A baseline transfer, using a baseline snapshot of the first volume, is used to create a replicated consistency group within a second volume hosted by a second storage controller. In this way, an arbitrary level of granularity is used to synchronize/replicate a subset of the first volume to the second volume. If a synchronous replication relationship is specified, then one or more incremental transfers are performed and a synchronous replication engine is implemented. If an asynchronous replication relationship is specified, then snapshots are used to identify delta data of the consistency group for updating the replicated consistency group.
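The asynchronous path above, in which snapshots identify delta data for just a consistency-group subset of the volume, can be sketched as follows. This is a minimal illustration; the functions `snapshot`, `delta`, and `replicate` are hypothetical names, not the patent's API.

```python
# Hedged sketch of snapshot-based delta replication for a consistency
# group: only the named subset of the volume is captured and replicated.

def snapshot(volume, group):
    """Capture the current contents of just the consistency-group files."""
    return {path: volume[path] for path in group if path in volume}

def delta(base_snap, new_snap):
    """Files added or changed since the baseline snapshot."""
    return {p: v for p, v in new_snap.items() if base_snap.get(p) != v}

def replicate(target, changes):
    target.update(changes)

source = {"/a": 1, "/b": 2, "/c": 3}
group = {"/a", "/b"}                  # only this subset is replicated

base = snapshot(source, group)
target = dict(base)                   # baseline transfer

source["/a"] = 10                     # later modification inside the group
source["/c"] = 30                     # outside the group: not replicated

changes = delta(base, snapshot(source, group))
replicate(target, changes)
```

Note that the change to `/c` never reaches the target, which is the "arbitrary level of granularity" the abstract describes.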

Data storage system employing a hot spare to store and service accesses to data having lower associated wear

A controller monitors access frequencies of address ranges mapped to a data storage array. Based on the monitoring, the controller identifies frequently accessed ones of the address ranges that have lower associated wear, for example, those that are read more often than written. In response to the identifying, the controller initiates copying of a dataset associated with the identified address ranges from the data storage array to a spare storage device while refraining from copying other data from the data storage array onto the spare storage device. The controller directs read input/output operations (IOPs) targeting the identified address ranges to be serviced by access to the spare storage device. In response to a failure of a storage device among the plurality of primary storage devices, the controller rebuilds contents of the failed storage device on the spare storage device in place of the dataset associated with the identified address ranges.
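A minimal sketch of this controller behavior might look like the following, assuming a simple read-to-write ratio as the "lower associated wear" test. The class and threshold are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch: track per-range read/write counts, promote
# read-heavy ranges to the spare, route their reads there, and
# reclaim the spare on a primary-device failure.
from collections import defaultdict

class HotSpareController:
    def __init__(self, read_ratio=4.0):
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)
        self.on_spare = set()
        self.read_ratio = read_ratio    # assumed "low wear" threshold

    def record(self, rng, op):
        (self.reads if op == "read" else self.writes)[rng] += 1

    def promote_hot_ranges(self):
        """Copy read-hot, write-cold ranges to the spare device."""
        for rng, r in self.reads.items():
            if r >= self.read_ratio * max(1, self.writes[rng]):
                self.on_spare.add(rng)

    def route_read(self, rng):
        return "spare" if rng in self.on_spare else "array"

    def on_device_failure(self):
        # The spare is repurposed to rebuild the failed device's contents,
        # displacing the copied dataset.
        self.on_spare.clear()

ctrl = HotSpareController()
for _ in range(8):
    ctrl.record((0, 1024), "read")
ctrl.record((0, 1024), "write")
ctrl.record((2048, 4096), "write")
ctrl.promote_hot_ranges()
route = ctrl.route_read((0, 1024))
```

Reads of the promoted range are served by the spare until a failure forces the spare back into its rebuild role.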

DISTRIBUTED STORAGE AND REPLICATION SYSTEM AND METHOD

A distributed storage and replication system includes an MDC module, multiple IO routing modules, and multiple OSD nodes. The MDC module is adapted to configure at least two partitions, each IO routing module is adapted to route an IO request to an OSD node, and each OSD node is adapted to execute storage of data corresponding to the IO request. The MDC module is configured to determine a faulty OSD node, update a partition view of a partition group that includes a partition on the faulty OSD node, and send an updating notification to a primary OSD node in the updated partition view. The primary OSD node is adapted to process replication of the data corresponding to the IO request. According to embodiments of the present disclosure, processing performance, fault tolerance, and availability of consistency replication are improved.
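The fault-handling path, in which the MDC updates a partition view and notifies the new primary, could be sketched as below. The data layout (a dict of views with a version counter) is an illustrative assumption; only the terms MDC, OSD, and partition view come from the abstract.

```python
# Minimal sketch of a metadata controller (MDC) updating partition views
# when an OSD node fails, then notifying the primary of each updated view.

class MDC:
    def __init__(self, views):
        # views: partition -> {"primary": node, "secondaries": [...], "version": int}
        self.views = views
        self.notifications = []

    def handle_fault(self, faulty):
        for part, view in self.views.items():
            members = [view["primary"]] + view["secondaries"]
            if faulty not in members:
                continue
            members.remove(faulty)
            view["primary"], view["secondaries"] = members[0], members[1:]
            view["version"] += 1
            # The primary in the *updated* view receives the notification
            # and re-drives replication for the partition group.
            self.notifications.append((part, view["primary"], view["version"]))

views = {"p1": {"primary": "osd1", "secondaries": ["osd2", "osd3"], "version": 1}}
mdc = MDC(views)
mdc.handle_fault("osd1")
```

Versioning the view lets OSD nodes reject replication traffic tagged with a stale view, which is one common way such systems keep replication consistent across failovers.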

Marking local regions and providing a snapshot thereof for asynchronous mirroring

Methods, apparatus and computer program products implement embodiments of the present invention that include conveying first data from local regions of a local volume of a local storage system to a remote storage system having a remote volume with remote regions in a one-to-one correspondence with the local regions. While conveying the first data, a request is received to update a given local region, and the given local region is marked.
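The marking step can be illustrated with a dirty-region set: regions updated while (or after) a transfer cycle runs are marked so a later cycle re-conveys them. This is a hedged sketch under that assumption; `AsyncMirror` and its methods are hypothetical names.

```python
# Sketch of dirty-region marking for asynchronous mirroring: local regions
# map one-to-one to remote regions, and updated regions are marked for the
# next conveyance cycle.

class AsyncMirror:
    def __init__(self, local):
        self.local = local              # region index -> data
        self.remote = {}
        self.dirty = set(local)         # initially every region must be sent

    def update(self, region, data):
        """A write request arrives for a given local region: apply and mark."""
        self.local[region] = data
        self.dirty.add(region)

    def transfer_cycle(self):
        """Convey the currently marked regions to the remote volume."""
        to_send, self.dirty = self.dirty, set()
        for region in to_send:
            self.remote[region] = self.local[region]

mirror = AsyncMirror({0: "a", 1: "b"})
mirror.transfer_cycle()                 # baseline conveyance
mirror.update(1, "b2")                  # update arrives; region 1 is marked
stale = mirror.remote[1]                # remote still holds the old data
mirror.transfer_cycle()                 # marked regions are re-conveyed
```

Swapping out the dirty set atomically before sending is what keeps a write that lands mid-transfer marked for the following cycle rather than lost.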

Storage controller, storage system, and non-transitory computer-readable storage medium having stored therein control program
09720621 · 2017-08-01

A storage controller performs a copy process in which data stored in a copy source storage area is copied to a copy destination storage area. The storage controller includes a processor that receives a transfer command giving an instruction to transfer data stored in a first area of the copy source storage area to a second area of the copy source storage area, starts, upon reception of the transfer command, a transfer process in which the transfer data is read from the first area and written into the second area, and, together with the start of the transfer process, starts copying the transfer data into the area of the copy destination storage area that corresponds to the second area.
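The key point, that an intra-source transfer immediately kicks off the corresponding destination copy rather than waiting for a later background pass, can be condensed into a few lines. This is a simplification under that reading; `CopyController` is a hypothetical name.

```python
# Illustrative sketch: a transfer inside the copy-source volume
# (first area -> second area) also starts copying the same data into the
# matching area of the copy-destination volume.

class CopyController:
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination

    def transfer(self, first, second):
        data = self.source[first]       # read the transfer data
        self.source[second] = data      # write into the second source area
        self.destination[second] = data # start the destination-side copy too

src = {"A1": b"payload", "A2": None}
dst = {"A1": None, "A2": None}
ctrl = CopyController(src, dst)
ctrl.transfer("A1", "A2")
```

Coupling the two writes keeps the destination's "A2" area from lagging behind the source's, which is the consistency benefit the abstract implies.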

Resolving disruptions between storage systems replicating a dataset

Mediating between storage systems synchronously replicating a dataset, including: requesting, by a first storage system in response to detecting a triggering event, a lock for a shared resource from a mediation service; requesting, by a second storage system in response to detecting the triggering event, the lock for the shared resource from the mediation service; and responsive to acquiring the lock from the mediation service, the first storage system, instead of the second storage system, processing data storage requests directed to the dataset that is synchronously replicated across the first storage system and the second storage system.
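The mediation step reduces to a race for a single lock: whichever system the mediation service grants it to continues processing the dataset. A minimal sketch, assuming an in-process mediator with illustrative names:

```python
# Sketch of a mediation service handing a shared-resource lock to exactly
# one of two storage systems after both detect the triggering event.
import threading

class MediationService:
    def __init__(self):
        self._lock = threading.Lock()
        self.holder = None

    def try_acquire(self, system_id):
        """First requester wins; repeat requests by the holder succeed."""
        with self._lock:
            if self.holder is None:
                self.holder = system_id
                return True
            return self.holder == system_id

mediator = MediationService()
# Both systems detect the triggering event and race for the lock:
a_wins = mediator.try_acquire("storage-a")
b_wins = mediator.try_acquire("storage-b")
survivor = "storage-a" if a_wins else "storage-b"
```

Only the lock holder then processes storage requests for the synchronously replicated dataset; the loser stands down, avoiding a split-brain where both sides accept writes.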

Failover and recovery for replicated data instances

Replicated instances in a database environment provide for automatic failover and recovery. A monitoring component can periodically communicate with a primary and a secondary replica for an instance, with each capable of residing in a separate data zone or geographic location to provide a level of reliability and availability. A database running on the primary instance can have information synchronously replicated to the secondary replica at a block level, such that the primary and secondary replicas are in sync. In the event that the monitoring component is not able to communicate with one of the replicas, the monitoring component can attempt to determine whether those replicas can communicate with each other, as well as whether the replicas have the same data generation version. Depending on the state information, the monitoring component can automatically perform a recovery operation, such as to failover to the secondary replica or perform secondary replica recovery.
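The monitoring component's decision, based on which replicas it can reach, whether the replicas can reach each other, and whether their data generation versions match, can be summarized as a small state table. The branch structure below is an illustrative reading of the abstract, not the patented logic.

```python
# Hedged sketch of the monitoring component's failover/recovery decision
# for a primary/secondary replica pair. Purely illustrative.

def decide(monitor_sees_primary, monitor_sees_secondary,
           replicas_see_each_other, same_generation):
    if monitor_sees_primary and monitor_sees_secondary:
        return "healthy"
    if not monitor_sees_primary:
        # Fail over only if the secondary is reachable and holds the
        # same data generation; otherwise data could diverge.
        if monitor_sees_secondary and same_generation:
            return "failover_to_secondary"
        return "wait"
    # Primary reachable, secondary not.
    if replicas_see_each_other:
        return "healthy"                # only the monitor's link is down
    return "recover_secondary"

action = decide(monitor_sees_primary=False, monitor_sees_secondary=True,
                replicas_see_each_other=False, same_generation=True)
```

The generation check is the safeguard: block-level synchronous replication should keep the versions equal, so a mismatch signals that failing over would lose or fork writes.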

Mirroring multiple writeable storage arrays

Systems, methods, and computer program products for mirroring dual writeable storage arrays are provided. Various embodiments provide configurations including two or more mirrored storage arrays that are each capable of being written to by different hosts. When commands to write data to corresponding mirrored data blocks within the respective storage arrays are received from different hosts at substantially the same time, write priority for writing data to the mirrored data blocks is given to one of the storage arrays based on a predetermined criterion or multiple predetermined criteria.
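When two hosts write the same mirrored block at substantially the same time, some predetermined criterion must pick a winner before the block is made consistent across mirrors. The sketch below assumes "lowest array identifier" as that criterion purely for illustration; the patent leaves the criterion open.

```python
# Illustrative sketch: resolving near-simultaneous writes to a mirrored
# data block by a predetermined priority criterion, then applying the
# winning write to all mirrors.

def resolve_concurrent_writes(writes, priority_key=lambda w: w["array"]):
    """Return the data of the write whose array wins the criterion
    (lowest array id by default, an assumed example criterion)."""
    winner = min(writes, key=priority_key)
    return winner["data"]

block_writes = [
    {"array": "B", "host": "h2", "data": b"from-h2"},
    {"array": "A", "host": "h1", "data": b"from-h1"},
]
final = resolve_concurrent_writes(block_writes)
```

Because both mirrors apply the same winner, the corresponding data blocks converge even though the writes arrived from different hosts.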

Replicating data across deployments in a routing constrained environment

Disclosed herein are systems and methods for replicating data across deployments in a routing constrained environment. To replicate data, a processor may detect a modification that changes data for a source entity within a source environment hosting a source deployment of an application. The processor may then update a target environment hosting a target deployment of the application to mirror the modification within the source environment. To update the target environment, the processor may generate a mapping artifact that identifies the source entity having changed data and the target entity within the target environment receiving the changed data. The processor may then create a mapping infrastructure including one or more compute instances that replicate the changed data for the source entity in the target entity. To replicate data, the one or more compute instances may execute a mapping script that replicates the changed data from the source entity in the target entity by copying changed data from the source environment and writing it to a database in the target environment.
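The described flow, detect a change in the source deployment, generate a mapping artifact naming the source and target entities, then run a mapping script that copies the changed data into the target environment's database, can be sketched end to end. All function and entity names here are assumptions for illustration.

```python
# Illustrative sketch of change detection, mapping-artifact generation,
# and a "mapping script" that writes the changed data into the target
# environment's database.

def detect_changes(old, new):
    """Entities whose data changed in the source environment."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def build_mapping_artifact(changed, entity_map):
    """Identify each changed source entity and its target entity."""
    return [{"source": s, "target": entity_map[s], "data": d}
            for s, d in changed.items()]

def run_mapping_script(artifact, target_db):
    """Replicate the changed data into the target environment's database."""
    for entry in artifact:
        target_db[entry["target"]] = entry["data"]

source_before = {"orders": [1, 2]}
source_after = {"orders": [1, 2, 3]}
entity_map = {"orders": "orders_replica"}    # hypothetical entity mapping

artifact = build_mapping_artifact(detect_changes(source_before, source_after),
                                  entity_map)
target_db = {}
run_mapping_script(artifact, target_db)
```

In the routing-constrained setting the abstract describes, the mapping infrastructure's compute instances would run such a script from a network position that can reach both environments, so neither environment needs a direct route to the other.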