Patent classifications
G06F11/1662
Data storage system employing a hot spare to store and service accesses to data having lower associated wear
A controller monitors access frequencies of address ranges mapped to a data storage array. Based on the monitoring, the controller identifies frequently accessed address ranges that have lower associated wear, for example, those that are read more often than written. In response to the identification, the controller initiates copying of a dataset associated with the identified address ranges from the data storage array to a spare storage device while refraining from copying other data from the data storage array onto the spare storage device. The controller directs read input/output operations (IOPs) targeting the identified address ranges to be serviced by access to the spare storage device. In response to a failure of a storage device among the primary storage devices of the data storage array, the controller rebuilds the contents of the failed storage device on the spare storage device in place of the dataset associated with the identified address ranges.
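The policy described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function and class names, the thresholds, and the dictionary-based storage model are all assumptions.

```python
def select_low_wear_ranges(stats, min_reads=100, max_write_ratio=0.1):
    """Pick address ranges that are read often but written rarely.

    stats maps an address range to a (reads, writes) pair.
    """
    selected = []
    for addr_range, (reads, writes) in stats.items():
        # Short-circuit keeps the division safe: reads >= min_reads > 0.
        if reads >= min_reads and writes / (reads + writes) <= max_write_ratio:
            selected.append(addr_range)
    return selected


class Controller:
    def __init__(self):
        self.spare_contents = {}   # dataset staged on the spare device
        self.spare_in_rebuild = False

    def stage_to_spare(self, ranges, array_read):
        # Copy only the identified dataset; other data stays on the array.
        for r in ranges:
            self.spare_contents[r] = array_read(r)

    def service_read(self, addr_range, array_read):
        # Reads hitting staged ranges are serviced by the spare.
        if not self.spare_in_rebuild and addr_range in self.spare_contents:
            return self.spare_contents[addr_range]
        return array_read(addr_range)

    def on_device_failure(self):
        # Rebuild the failed device on the spare, evicting the dataset.
        self.spare_contents = {}
        self.spare_in_rebuild = True
```

After `on_device_failure`, reads fall back to the array, modeling the spare being repurposed for the rebuild.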
MONITORING DEVICE, FAULT-TOLERANT SYSTEM, AND CONTROL METHOD
A monitoring device is mounted in each of a plurality of operational systems constituting a fault-tolerant system. The plurality of operational systems have an identical configuration including a processor system. The monitoring device includes a processor. The processor executes instructions to read data from a predetermined storage area in a memory of an accessory device to be monitored, the accessory device being connected to the processor system. The processor further executes instructions to compare the read data with reference data held in advance. The processor further executes instructions to separate the processor system connected to the accessory device to be monitored from the fault-tolerant system when the read data is different from the reference data.
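The read-compare-separate flow above can be sketched as a small class; the names and callback structure are illustrative assumptions, not from the patent.

```python
class Monitor:
    """Checks an accessory device's memory against reference data."""

    def __init__(self, reference_data, read_area, separate):
        self.reference_data = reference_data  # reference held in advance
        self.read_area = read_area            # reads the predetermined area
        self.separate = separate              # detaches the processor system

    def check(self):
        data = self.read_area()
        if data != self.reference_data:
            # Mismatch: isolate this processor system from the FT system.
            self.separate()
            return False
        return True
```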
ENABLING DATA INTEGRITY CHECKING AND FASTER APPLICATION RECOVERY IN SYNCHRONOUS REPLICATED DATASETS
One or more techniques and/or computing devices are provided for utilizing snapshots for data integrity validation and/or faster application recovery. For example, a first storage controller, hosting first storage, has a synchronous replication relationship with a second storage controller hosting second storage. A snapshot replication policy rule is defined to specify that a replication label is to be used for snapshot create requests, targeting the first storage, that are to be replicated to the second storage. A snapshot creation policy is created to issue snapshot create requests comprising the replication label. Thus, a snapshot of the first storage and a replication snapshot of the second storage are created based upon a snapshot create request comprising the replication label. The snapshot and the replication snapshot may be compared for data integrity validation (e.g., to determine whether the snapshots comprise the same data) and/or used to quickly recover an application after a disaster.
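The label-driven behavior can be sketched as below, assuming a simple in-memory model. The label value, controller class, and function names are illustrative, not taken from the patent.

```python
REPLICATION_LABEL = "replicate-to-secondary"   # assumed label value


class StorageController:
    def __init__(self):
        self.snapshots = {}

    def create_snapshot(self, name, data):
        self.snapshots[name] = dict(data)


def handle_snapshot_request(primary, secondary, name, data, label=None):
    # The policy rule: requests carrying the replication label produce both
    # a local snapshot and a replication snapshot on the secondary.
    primary.create_snapshot(name, data)
    if label == REPLICATION_LABEL:
        secondary.create_snapshot(name, data)


def validate_integrity(primary, secondary, name):
    # Compare snapshot contents for data integrity validation.
    return primary.snapshots.get(name) == secondary.snapshots.get(name)
```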
Replication of xcopy command
A method, system and program product for implementing the xcopy command in a replication environment, the replication environment having a production site, a splitter, and a replication site, wherein the replication site has a journal, comprising: determining whether the source and target LUNs of the xcopy command are replicated; based on a determination that both the source and target LUNs are replicated, determining whether the production and replication LUNs are synchronized; and, based on a positive determination that the LUNs are synchronized, performing the xcopy command at the replication site.
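The decision flow above can be sketched as a single function, assuming sets and dictionaries model the replication state; all names and return values are illustrative.

```python
def replicate_xcopy(src_lun, dst_lun, replicated, synchronized, apply_xcopy):
    """Apply the xcopy at the replication site only when it is safe.

    replicated:   set of LUNs that have replicas
    synchronized: maps a LUN to True if production and replica are in sync
    apply_xcopy:  callback that replays the command on the replica
    """
    if src_lun not in replicated or dst_lun not in replicated:
        return "not-replicated"        # one side has no replica at all
    if not (synchronized[src_lun] and synchronized[dst_lun]):
        return "out-of-sync"           # cannot safely replay the command
    apply_xcopy(src_lun, dst_lun)      # replay on the replication site
    return "replicated"
```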
Operating system-based systems and method of achieving fault tolerance
A method and apparatus for performing fault tolerance in a fault tolerant computer system comprising: a primary node having a primary node processor; a secondary node having a secondary node processor; each node further comprising a respective memory and a respective checkpoint shim; each of the primary and secondary nodes further comprising a respective non-virtual operating system (OS), the non-virtual OS comprising a respective network driver, storage driver, and checkpoint engine. The method comprises the steps of: acting upon a request from a client by the respective OS of the primary and the secondary node; comparing, by the network driver of the primary node, the results obtained by the OS of the primary node and the OS of the secondary node for similarity; and, if the comparison indicates similarity less than a predetermined amount, informing, by the primary node network driver, the primary node checkpoint engine to begin a checkpoint process.
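The divergence check that triggers a checkpoint can be sketched as follows; the byte-level similarity metric and the threshold are assumptions made for the example, not details from the patent.

```python
def similarity(a: bytes, b: bytes) -> float:
    """Fraction of matching bytes between two results."""
    if len(a) != len(b):
        return 0.0
    if not a:
        return 1.0
    return sum(x == y for x, y in zip(a, b)) / len(a)


def compare_and_checkpoint(primary_result, secondary_result,
                           begin_checkpoint, threshold=1.0):
    # The primary's network driver compares the two nodes' results; if they
    # diverge, it tells the checkpoint engine to resynchronize the nodes.
    if similarity(primary_result, secondary_result) < threshold:
        begin_checkpoint()
        return True
    return False
```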
FAILOVER OF A DATABASE IN A HIGH-AVAILABILITY CLUSTER
As disclosed herein, a computer-implemented method for managing an HA cluster includes activating, by a cluster manager, a monitoring process that monitors a database on a first node in a high-availability database cluster. The method further includes receiving an indication that the database on the first node is not healthy, initiating a failover operation for deactivating the database on the first node and activating a standby database on a second node in the high-availability database cluster, thereby providing an activated standby database, and ensuring that any additional databases on the first node are unaffected by the failover operation. A computer program product corresponding to the above method is also disclosed.
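A minimal sketch of per-database failover that leaves the node's other databases untouched; the node names, class, and health-report interface are illustrative assumptions.

```python
class ClusterManager:
    def __init__(self, node1_dbs, node2_standbys):
        # Every database starts active on node1.
        self.active = {db: "node1" for db in node1_dbs}
        self.standbys = set(node2_standbys)   # databases with a standby

    def failover(self, db):
        """Deactivate db on node1 and activate its standby on node2,
        leaving every other database on node1 unaffected."""
        if db in self.standbys:
            self.active[db] = "node2"

    def on_health_report(self, db, healthy):
        if not healthy:
            self.failover(db)
```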
Method and system for accelerating data movement using change information concerning difference between current and previous data movements
According to one embodiment, a first storage system receives a first data stream from a second storage system over a network. The first data stream includes data objects and differential object information identifying at least one data object missing from the first data stream. A difference between the first data stream and a second data stream that has been previously received is determined based on the differential object information, including identifying a data object that has been added, deleted, or modified in view of the second data stream. The first data stream is reconstructed based on the second data stream and the difference between the first data stream and the second data stream, generating a third data stream. The third data stream is stored in a persistent storage device of the first storage system, the third data stream representing a complete first data stream without a missing data object.
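The reconstruction step can be sketched as below, assuming streams are modeled as object-id-to-data dictionaries and the differential information distinguishes unchanged ("missing") objects from deleted ones; these names are assumptions for illustration.

```python
def reconstruct(first_stream, diff_info, second_stream):
    """Build the complete (third) stream.

    first_stream:  {object_id: data} actually sent (some objects omitted)
    diff_info:     {"missing": set, "deleted": set} -- 'missing' objects
                   were unchanged, so they are reused from second_stream
    second_stream: {object_id: data} previously received
    """
    third = {}
    for oid in diff_info["missing"]:
        third[oid] = second_stream[oid]        # unchanged: reuse old copy
    third.update(first_stream)                 # added / modified objects
    for oid in diff_info["deleted"]:
        third.pop(oid, None)                   # dropped objects
    return third
```

Only changed objects cross the network; the third stream is what gets written to the persistent storage device.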
Resiliency director
Systems and methods of orchestrating recoveries of virtual machines protected by a data management system from a primary system to a secondary system, such that performing the recoveries depends on relationships between the virtual machines. First data indicative of a recovery plan associated with a failover of at least one group of virtual machines is received. The recovery plan includes an application group with second data indicative of a hierarchical relationship between the virtual machines, wherein each of the virtual machines is associated with an order based on the second data. A plurality of sequences is created in the application group to designate an order of executing a plurality of recoveries for the virtual machines. A first recovery is executed in parallel for each of the virtual machines associated with a first sequence, and subsequent recoveries are executed in parallel for the virtual machines associated with each subsequent sequence.
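The sequencing described above can be sketched as follows: virtual machines in the same sequence recover together, and the next sequence starts only after the current one completes. Using a thread pool to model "in parallel" is an assumption of this sketch.

```python
from concurrent.futures import ThreadPoolExecutor


def run_recovery_plan(sequences, recover):
    """Execute recoveries sequence by sequence.

    sequences: ordered list of lists of VM names
    recover:   callback that recovers a single VM
    """
    recovered = []
    for seq in sequences:
        # All VMs of one sequence are recovered in parallel...
        with ThreadPoolExecutor() as pool:
            list(pool.map(recover, seq))
        # ...and the next sequence starts only after this one finishes.
        recovered.extend(sorted(seq))
    return recovered
```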
Failover methods and system in a networked storage environment
Failover methods and systems for a storage environment are provided. During a takeover operation to take over storage of a first storage system node by a second storage system node, the second storage system node copies information from a first storage location to a second storage location. The first storage location points to an active file system of the first storage system node, and the second storage location is assigned to the second storage system node for the takeover operation. The second storage system node quarantines storage space likely to be used by the first storage system node for a write operation, while the second storage system node attempts to take over the storage of the first storage system node. The second storage system node utilizes information stored at the second storage location during the takeover operation to give back control of the storage to the first storage system node.
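A minimal sketch of the takeover/giveback flow above, assuming a dictionary models the storage-location information and a set models the quarantined space; all names are illustrative.

```python
class TakeoverNode:
    """Models the second storage system node during a takeover."""

    def __init__(self):
        self.quarantined = set()
        self.second_location = None

    def start_takeover(self, first_location, likely_writes):
        # Copy the first node's active-file-system information to the
        # storage location assigned for the takeover.
        self.second_location = dict(first_location)
        # Quarantine space the first node may still write to.
        self.quarantined = set(likely_writes)

    def allocate(self, block):
        """Refuse to reuse quarantined space during the takeover."""
        return block not in self.quarantined

    def give_back(self):
        # Use the copied information to return control to the first node.
        info, self.second_location = self.second_location, None
        self.quarantined = set()
        return info
```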
Electronic device and firmware recovery program that ensure recovery of firmware
An electronic device includes a first nonvolatile memory, a second nonvolatile memory, and a control circuit. The first nonvolatile memory includes an area to store firmware, the firmware including a first kernel. The second nonvolatile memory includes an area to store an update program, the update program including a second kernel. The control circuit boots one of the first and second kernels and ensures that data can be written to the first nonvolatile memory by the booted kernel. When the firmware is incapable of being read, the control circuit reads the update program, performs the boot process to boot the second kernel, and writes updating data of the firmware to the first nonvolatile memory, which is writable by the booted second kernel.
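The fallback logic can be sketched as follows, with the two nonvolatile memories modeled as dictionaries; the function name and memory model are assumptions for illustration.

```python
def boot(firmware_flash, update_flash, new_firmware):
    """Boot the first kernel if the firmware is readable; otherwise boot
    the second kernel from the update program and repair the firmware."""
    if firmware_flash.get("firmware") is not None:
        return "first-kernel"
    # Firmware unreadable: boot the second kernel from the second memory
    # and use it to write fresh firmware into the first memory.
    assert update_flash["update_program"] is not None
    firmware_flash["firmware"] = new_firmware
    return "second-kernel"
```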