Failover methods and system in a networked storage environment
Failover methods and systems for a storage environment are provided. During a takeover operation to take over storage of a first storage system node by a second storage system node, the second storage system node copies information from a first storage location to a second storage location. The first storage location points to an active file system of the first storage system node, and the second storage location is assigned to the second storage system node for the takeover operation. The second storage system node quarantines storage space likely to be used by the first storage system node for a write operation, while the second storage system node attempts to take over the storage of the first storage system node. The second storage system node utilizes information stored at the second storage location during the takeover operation to give back control of the storage to the first storage system node.
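Below is a minimal sketch of the takeover/giveback flow the abstract describes, written in Go. The node and location types, the block-level quarantine map, and the function names are illustrative assumptions rather than the patent's actual structures.

```go
// Sketch of a takeover that copies the failed node's file system pointer to a
// location assigned to the surviving node, quarantines space the failed node
// might still write, and later gives control back. Names are hypothetical.
package main

import "fmt"

type fsPointer struct{ rootBlock uint64 } // points to an active file system

type node struct {
	name        string
	location    fsPointer       // storage location holding the FS pointer
	quarantined map[uint64]bool // blocks reserved away from the failed node
}

// takeover copies the first storage location into the second (assigned)
// location and quarantines blocks the failed node is likely to write.
func (n *node) takeover(failed *node, likelyWriteBlocks []uint64) {
	n.location = failed.location
	for _, b := range likelyWriteBlocks {
		n.quarantined[b] = true
	}
	fmt.Printf("%s took over storage of %s\n", n.name, failed.name)
}

// giveback uses the information saved at the second location to hand
// control of the storage back to the original node.
func (n *node) giveback(restored *node) {
	restored.location = n.location
	n.quarantined = map[uint64]bool{}
	fmt.Printf("%s gave back storage to %s\n", n.name, restored.name)
}

func main() {
	a := &node{name: "node-A", location: fsPointer{rootBlock: 42}, quarantined: map[uint64]bool{}}
	b := &node{name: "node-B", quarantined: map[uint64]bool{}}
	b.takeover(a, []uint64{100, 101})
	b.giveback(a)
}
```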
Techniques for scalable storage without communication on the synchronous path
A system and method for scalable storage. The method includes placing a lock on a portion of a storage node, wherein placing the lock further comprises replacing a first value stored in the storage node with a second value using an atomic operation, wherein the atomic operation replaces the first value with the second value when the first value indicates an empty lock status, wherein the second value indicates an active lock status; allocating a storage location in the storage node by updating metadata stored in the locked portion of the storage node when the lock has been placed; and releasing the lock, wherein releasing the lock further comprises replacing the second value with a third value, wherein the third value indicates the empty lock status.
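The lock sequence maps naturally onto a compare-and-swap. The sketch below, in Go, assumes the empty/active status is encoded in the low bit of a 64-bit word (even values empty, odd values active); that encoding, and the metadata map guarded by the lock, are assumptions made for illustration, not the claimed layout.

```go
// Sketch of the lock/allocate/release cycle: CAS a first (empty) value to a
// second (active) value, update allocation metadata, then store a third value
// that again indicates the empty status.
package main

import (
	"fmt"
	"sync/atomic"
)

type storagePortion struct {
	lockWord uint64            // first/second/third values live here
	metadata map[string]uint64 // allocation metadata guarded by the lock
}

// lock replaces the first (empty) value with a second (active) value using a
// single atomic compare-and-swap, with no messages on the synchronous path.
func (p *storagePortion) lock() bool {
	first := atomic.LoadUint64(&p.lockWord)
	if first&1 != 0 {
		return false // already active
	}
	return atomic.CompareAndSwapUint64(&p.lockWord, first, first|1)
}

// unlock replaces the second (active) value with a third value that again
// indicates the empty lock status (the next even number).
func (p *storagePortion) unlock() {
	atomic.AddUint64(&p.lockWord, 1)
}

func main() {
	p := &storagePortion{metadata: map[string]uint64{}}
	if p.lock() {
		p.metadata["next_free_extent"] = 4096 // allocate by updating metadata
		p.unlock()
	}
	fmt.Println("lock word now:", p.lockWord, "metadata:", p.metadata)
}
```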
Synchronization storage solution after an offline event
Disclosed are systems and methods of synchronization between a source and a target. The synchronization relationship can be quickly and easily created for disaster recovery, real-time backup and failover, thereby ensuring that data on the source is fully protected at an off-site location or on another server or VM, for example, at another data center, a different building or elsewhere in the cloud. Common snapshots available on both the source and target can act as common recovery points. The common recovery points can be used to locate the most recent snapshot in common between the source and target, enabling a delta sync to the target of all data subsequently written at the source after an offline event.
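A short Go sketch of locating the most recent common snapshot, which then serves as the base for the delta sync. The integer snapshot IDs and the simple intersection are illustrative assumptions; the abstract does not specify how snapshots are identified or compared.

```go
// Sketch: find the latest snapshot present on both source and target, i.e.
// the common recovery point used after an offline event.
package main

import "fmt"

func latestCommonSnapshot(source, target []int) (int, bool) {
	onTarget := map[int]bool{}
	for _, id := range target {
		onTarget[id] = true
	}
	best, found := 0, false
	for _, id := range source {
		if onTarget[id] && id > best {
			best, found = id, true
		}
	}
	return best, found
}

func main() {
	source := []int{101, 102, 103, 104} // snapshots taken on the source
	target := []int{101, 102, 103}      // snapshots replicated before the offline event
	if base, ok := latestCommonSnapshot(source, target); ok {
		// Delta sync: send only data written on the source after the base snapshot.
		fmt.Printf("delta sync starting from common snapshot %d\n", base)
	}
}
```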
Synchronously replicating a dataset across a plurality of storage systems
Servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems, including: receiving, by a follower storage system, a request to modify the dataset; sending, from the follower storage system to a leader storage system, a logical description of the modification to the dataset; receiving, from the leader storage system, information describing the modification to the dataset; processing, by the follower storage system, the request to modify the dataset; receiving, from the leader storage system, an indication that the leader storage system has processed the request to modify the dataset; and acknowledging, by the follower storage system, completion of the request to modify the dataset.
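The follower-side exchange can be pictured as a fixed message sequence. The Go sketch below uses channels and an in-process stand-in for the leader purely to show the ordering of the steps in the abstract; the message types and sequencing detail are assumptions.

```go
// Sketch of the follower's message flow: send a logical description to the
// leader, receive ordering information, apply the modification, wait for the
// leader's own completion indication, then acknowledge.
package main

import "fmt"

type modification struct{ desc string }

func main() {
	toLeader := make(chan modification, 1) // logical descriptions sent to the leader
	fromLeader := make(chan string, 2)     // ordering info and completion indications

	// Stand-in leader: returns information describing the modification,
	// then an indication that it has processed the request itself.
	go func() {
		m := <-toLeader
		fromLeader <- "apply " + m.desc + " at sequence 7"
		fromLeader <- "leader processed " + m.desc
	}()

	req := modification{desc: "write LBA 128"} // request received by the follower
	toLeader <- req                            // send logical description to the leader
	info := <-fromLeader                       // receive info describing the modification
	fmt.Println("follower applies:", info)
	done := <-fromLeader // leader indicates it has processed the request
	fmt.Println(done)
	fmt.Println("follower acknowledges completion of", req.desc)
}
```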
SYSTEMS AND METHODS FOR IMPLEMENTING PERSISTENT DATA STRUCTURES ON AN ASYMMETRIC NON-VOLATILE MEMORY ARCHITECTURE
Systems and methods are provided for persisting a data structure. One method may comprise, at a front-end node in a computing system: generating a data structure operation record for a data structure operation directed to a data structure persisted in a non-volatile memory (NVM) in a back-end node of the computing system, appending the data structure operation record to an operation log, generating a transaction record for a transaction that includes a plurality of memory operations that collectively accomplish the data structure operation, appending the transaction record to a transaction log, and flushing the transaction log to the back-end node after flushing the operation log; and at the back-end node of the computing system: persisting the received operation log and the received transaction log in the NVM, and accomplishing the data structure operation by performing the plurality of memory operations, with the data structure operation record serving as a commit signal.
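The key point is the ordering: the operation log is flushed before the transaction log, so the operation record can act as the commit signal on the back end. The Go sketch below illustrates that ordering only; the log layouts, the insert operation, and the flush stub are assumptions for illustration.

```go
// Sketch of the front-end logging order: append and flush the operation
// record before the transaction record, so the back-end node treats the
// operation record as the commit signal when replaying memory operations.
package main

import "fmt"

type memOp struct{ addr, value uint64 }

type frontEnd struct {
	opLog  []string // one record per data structure operation
	txnLog []memOp  // memory operations that accomplish the operation
}

func flush(name string) { fmt.Println("flushed", name, "to back-end NVM") }

func (f *frontEnd) insert(key string, ops []memOp) {
	f.opLog = append(f.opLog, "insert "+key) // data structure operation record
	flush("operation log")
	f.txnLog = append(f.txnLog, ops...) // transaction record
	flush("transaction log")            // flushed only after the operation log
}

func main() {
	fe := &frontEnd{}
	fe.insert("k1", []memOp{{addr: 0x1000, value: 7}, {addr: 0x1008, value: 9}})
	// Back end: persist both logs in NVM, then perform the memory operations,
	// committing only when a matching operation record is present.
}
```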
Workload coordination on disaster recovery site
Described are techniques for utilization of a disaster recovery site including a method comprising receiving a mirrored data stream at a disaster recovery site from a production site. The mirrored data stream includes a workload instruction stored in a reserved record set type. The workload instruction indicates an operation to perform on a set of data that is replicated between the production site and the disaster recovery site, and includes a time indicator indicating a correct version of the set of data. The method further comprises generating a consistency point by retrieving replicated data from the disaster recovery site corresponding to the correct version of the set of data in the production site. The method further comprises performing the operation on the consistency point, generating an output, and transmitting the output to the production site.
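A small Go sketch of the disaster recovery side handling such a stream: ordinary mirrored records are applied, while a record of the reserved type triggers building a consistency point for the indicated time and running the operation. The record fields and the replicaAt helper are hypothetical placeholders.

```go
// Sketch: pick the workload instruction out of the mirrored stream, build a
// consistency point for the indicated version, run the operation, and send
// the output back to the production site.
package main

import "fmt"

type record struct {
	reserved  bool   // true for the reserved record set type
	operation string // e.g. a report or batch job to run on replicated data
	timestamp int64  // indicates the correct version of the data set
	payload   string
}

// replicaAt stands in for retrieving replicated data as of a point in time,
// i.e. generating a consistency point on the disaster recovery site.
func replicaAt(ts int64) string { return fmt.Sprintf("dataset@%d", ts) }

func main() {
	stream := []record{
		{payload: "mirrored block 1"},
		{reserved: true, operation: "monthly-report", timestamp: 1700000000},
	}
	for _, r := range stream {
		if !r.reserved {
			continue // ordinary mirrored data; just apply it
		}
		consistencyPoint := replicaAt(r.timestamp)
		output := "ran " + r.operation + " on " + consistencyPoint
		fmt.Println("transmitting output to production site:", output)
	}
}
```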
Method and apparatus for redundancy in active-active cluster system
A method is applied to a system including a host cluster and at least one pair of storage arrays. The host cluster includes a quorum host, which includes a quorum unit. The quorum host is an application host having a quorum function. A pair of storage arrays includes a first storage array and a second storage array. The quorum host receives a quorum request, temporarily stops delivering a service to the first storage array and the second storage array, determines, from the first storage array and the second storage array, which is the quorum winning storage array and which is the quorum losing storage array according to a logical judgment, stops the service with the quorum losing storage array, sends quorum winning information to the quorum winning storage array, and resumes delivery of the service between the host cluster and the quorum winning storage array.
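The quorum host's steps can be shown as a short sequence. In the Go sketch below, reachability from the host is used as the deciding logic, which is only an illustrative assumption; the abstract does not specify what the judgment is based on.

```go
// Sketch of the quorum host's arbitration: pause service to both arrays,
// decide winner and loser, stop the loser, notify the winner, resume service.
package main

import "fmt"

type storageArray struct {
	name      string
	reachable bool // whether the quorum host can still reach this array
}

type quorumHost struct{ name string }

func (h *quorumHost) arbitrate(a, b *storageArray) {
	fmt.Println("temporarily stopping service delivery to both arrays")
	winner, loser := a, b
	if !a.reachable && b.reachable {
		winner, loser = b, a
	}
	fmt.Println("stopping service with losing array:", loser.name)
	fmt.Println("sending quorum winning information to:", winner.name)
	fmt.Println("resuming service between the host cluster and:", winner.name)
}

func main() {
	h := &quorumHost{name: "app-host-1"}
	h.arbitrate(&storageArray{name: "array-A", reachable: true},
		&storageArray{name: "array-B", reachable: false})
}
```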
Configuration inconsistency identification between storage virtual machines
One or more techniques and/or systems are provided for identifying configuration inconsistencies between storage virtual machines across storage clusters. For example, a first storage cluster and a second storage cluster may be configured according to a disaster recovery relationship where user data and configuration data of the first storage cluster are replicated to the second storage cluster so that the second storage cluster can take over for the first storage cluster in the event a disaster occurs at the first storage cluster. Because replication of configuration data (e.g., a name and size of a volume, a backup policy, etc.) may fail for various reasons, configuration of the first storage cluster is compared to configuration of the second storage cluster to identify a configuration difference (e.g., a new size of the volume at the first storage cluster may have failed to be replicated to a replicated volume at the second storage cluster).
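A minimal Go sketch of the comparison step, modeling each side's configuration as a flat key/value map; that representation, and the example keys, are assumptions made for illustration.

```go
// Sketch: compare the primary configuration against the disaster recovery
// copy and report keys whose values differ or are missing, e.g. a volume
// resize that failed to replicate.
package main

import "fmt"

func configDiff(primary, secondary map[string]string) map[string][2]string {
	diffs := map[string][2]string{}
	for k, v := range primary {
		if sv, ok := secondary[k]; !ok || sv != v {
			diffs[k] = [2]string{v, sv}
		}
	}
	return diffs
}

func main() {
	primary := map[string]string{"vol1.size": "2TB", "vol1.backup_policy": "daily"}
	secondary := map[string]string{"vol1.size": "1TB", "vol1.backup_policy": "daily"}
	for k, v := range configDiff(primary, secondary) {
		fmt.Printf("inconsistency: %s primary=%q secondary=%q\n", k, v[0], v[1])
	}
}
```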
Connectivity-aware witness for active-active storage
Architectures and techniques are described that can enhance the functionality of a witness for an active-active storage array. In the event of a dual storage area network (SAN) failure, or another suitable event, host-array connectivity can take precedence for the witness in determining a winner or loser. Techniques are presented to identify connectivity issues and to utilize connectivity data in connection with determining a winner or a loser.
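A very small Go sketch of the decision the abstract describes, where host-array connectivity takes precedence in picking the winner; the connectivity counts used as input are illustrative assumptions.

```go
// Sketch: after a dual SAN failure, the witness favors the array that more
// hosts can still reach, keeping I/O on the surviving paths.
package main

import "fmt"

func pickWinner(hostsReachingA, hostsReachingB int) string {
	if hostsReachingA >= hostsReachingB {
		return "array-A"
	}
	return "array-B"
}

func main() {
	fmt.Println("witness declares winner:", pickWinner(2, 5))
}
```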
SYNCHRONOUS REPLICATION
One or more techniques and/or computing devices are provided for synchronous replication. For example, synchronous replication relationships are established between a first storage object (e.g., a file, a logical unit number (LUN), a consistency group, etc.), hosted by a first storage controller, and a plurality of replication storage objects hosted by other storage controllers. In this way, a write operation to the first storage object is implemented in parallel upon the first storage object and the replication storage objects in a synchronous manner, such as using a zero-copy operation to reduce overhead otherwise introduced by performing copy operations. Reconciliation is performed in response to a failure so that the first storage object and the replication storage objects comprise consistent data. Failed write operations and replication write operations are retried, while enforcing a single write semantic. Dependent write order consistency is enforced for dependent write operations, such as overlapping write operations.
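To make the parallel, synchronous fan-out concrete, the Go sketch below writes to a primary object and its replication objects concurrently and acknowledges only once every copy has completed; the zero-copy path, reconciliation, and dependent-write ordering are omitted, and the object model is an assumption.

```go
// Sketch: apply one write in parallel to the first storage object and its
// replication objects, acknowledging only after all copies are consistent.
package main

import (
	"fmt"
	"sync"
)

type storageObject struct {
	mu   sync.Mutex
	name string
	data map[int64][]byte
}

func (o *storageObject) write(offset int64, buf []byte) {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.data[offset] = buf
}

func main() {
	targets := []*storageObject{
		{name: "primary", data: map[int64][]byte{}},
		{name: "replica-1", data: map[int64][]byte{}},
		{name: "replica-2", data: map[int64][]byte{}},
	}
	var wg sync.WaitGroup
	for _, t := range targets {
		wg.Add(1)
		go func(t *storageObject) { // synchronous, parallel replication
			defer wg.Done()
			t.write(4096, []byte("payload"))
		}(t)
	}
	wg.Wait() // acknowledge only after every copy has the write
	fmt.Println("write acknowledged on", len(targets), "objects")
}
```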