Patent classifications
G06F11/2071
Method and system for function-specific time-configurable replication of data manipulating functions
The system (10) and method (100) of the invention provide for function-specific replication of data manipulating functions (12) performed on data, such as files or objects, with a configurable time delay (14) for each function to be replicated. The system (10) and method (100) include a replication management module (40) for managing the consistent function-specific replication of data manipulating functions (12), with a function-specific delay (14), between one or more source storage systems (20, 65) and one or more destination storage systems (30, 75), and optionally include a replication monitoring database (42).
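The core idea above — a per-function configurable delay before replication — can be sketched as a delayed queue. This is a minimal illustration, not the patented implementation; all class and method names are hypothetical.

```python
import heapq

class ReplicationManager:
    """Sketch of a replication management module that replays data-manipulating
    functions on a destination after a per-function configurable delay."""

    def __init__(self, delays):
        self.delays = delays      # e.g. {"write": 0.0, "delete": 3600.0}
        self.queue = []           # min-heap ordered by replicate-at time
        self.seq = 0              # tie-breaker keeping FIFO order per timestamp

    def record(self, func_name, payload, now):
        """Log a function performed on the source and schedule its replication."""
        delay = self.delays.get(func_name, 0.0)
        heapq.heappush(self.queue, (now + delay, self.seq, func_name, payload))
        self.seq += 1

    def due(self, now):
        """Pop and return all operations whose delay has elapsed, in order."""
        ready = []
        while self.queue and self.queue[0][0] <= now:
            _, _, func_name, payload = heapq.heappop(self.queue)
            ready.append((func_name, payload))
        return ready
```

With `delays = {"write": 0.0, "delete": 60.0}`, a write replicates immediately while a delete is held back for a minute, giving a window to undo accidental deletions before they propagate.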
Reducing failover time between data nodes
A storage node that maintains a replica of a logical volume for use in response to a failover trigger includes a data node with volatile memory in which a filesystem and its metadata, and a VDM and its metadata, associated with the replica are maintained prior to the failover trigger. The storage node also includes a SAN node in which data associated with the replica is maintained. The data is maintained in a RW (read-write) state by the SAN node prior to the failover trigger; however, the replica is presented in a RO (read-only) state by the storage node prior to the failover trigger. The storage node changes the in-memory state of the filesystem and VDM to RW responsive to the failover trigger. Because the filesystem, the VDM, and their metadata are already in memory and the data is already in a RW state in block storage, the failover completes relatively quickly.
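The warm-standby behavior described above reduces failover time because nothing needs to be loaded from disk at failover time: only an in-memory state flag flips from RO to RW. A minimal sketch, with illustrative names and structure:

```python
class StandbyNode:
    """Sketch of the warm standby: filesystem and VDM metadata for the
    replica are already held in volatile memory in a read-only state,
    so failover is just an in-memory state change."""

    def __init__(self, filesystem_meta, vdm_meta):
        # Loaded into memory long before any failover trigger.
        self.filesystem = {"meta": filesystem_meta, "state": "RO"}
        self.vdm = {"meta": vdm_meta, "state": "RO"}
        # Backing data in block storage is already writable.
        self.san_state = "RW"

    def on_failover_trigger(self):
        """No metadata is read from disk here; the node can serve writes
        almost immediately after flipping the in-memory state."""
        self.filesystem["state"] = "RW"
        self.vdm["state"] = "RW"
        return self.filesystem["state"], self.vdm["state"]
```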
PROVIDING EXECUTING PROGRAMS WITH ACCESS TO STORED BLOCK DATA OF OTHERS
Techniques are described for managing access of executing programs to non-local block data storage. In some situations, a block data storage service uses multiple server storage systems to reliably store copies of network-accessible block data storage volumes that may be used by programs executing on other physical computing systems, and snapshot copies of some volumes may also be stored (e.g., on remote archival storage systems). A group of multiple server block data storage systems that store block data volumes may in some situations be co-located at a data center, and programs that use volumes stored there may execute on other computing systems at that data center, while the archival storage systems may be located outside the data center. The snapshot copies of volumes may be used in various ways, including to allow users to obtain their own copies of other users' volumes (e.g., for a fee).
REESTABLISHING REDUNDANCY IN REDUNDANT STORAGE
Storage redundancy may be resynchronized without determining a snapshot difference. A storage component (210) owning a volume (122) can maintain current and expected generation numbers (212, 214) based on modification requests received and modification requests that a backup component (220) acknowledges completing. The backup (220) can maintain current and expected generation numbers (222, 224) based on modification requests received and applied to a backup volume (124). If either component (210, 220) fails and later returns to service, differences between the owner's current and expected generation numbers (212, 214) and the backup's current and expected generation numbers (222, 224) indicate which modification requests may have been missed and need to be reconstructed to restore synchronization.
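The generation-number bookkeeping above can be sketched from the owner's side: the current generation counts modifications issued, the expected generation tracks what the backup has acknowledged, and the gap between the backup's last applied generation and the owner's current one identifies exactly which requests to replay — no snapshot difference required. Names and the retained-log design are illustrative assumptions.

```python
class Owner:
    """Sketch of the volume owner's generation-number bookkeeping."""

    def __init__(self):
        self.current = 0    # generations issued for modification requests
        self.expected = 0   # highest generation the backup has acknowledged
        self.log = {}       # generation -> modification, kept until acked

    def modify(self, op):
        """A modification request arrives; stamp it with a new generation."""
        self.current += 1
        self.log[self.current] = op
        return self.current

    def ack(self, gen):
        """Backup acknowledged applying everything up to `gen`."""
        for g in range(self.expected + 1, gen + 1):
            self.log.pop(g, None)
        self.expected = gen

    def resync_range(self, backup_current):
        """Generations the returning backup may have missed: replaying
        these restores synchronization."""
        return list(range(backup_current + 1, self.current + 1))
```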
STORAGE SYSTEM
A storage system includes N horizontal backplanes and a first mirror backplane. Each horizontal backplane includes a first controller and a second controller on a same plane. The N first controllers and the N second controllers of the storage system form a first column and a second column in a vertical direction. The first mirror backplane is perpendicular to the horizontal backplanes, a first side of the first mirror backplane is connected to the horizontal backplanes, and a second side is connected to the controllers. A second side of the first controller has N second mirror ports and N second heartbeat ports, and a first side of the second controller has N first mirror ports and N first heartbeat ports. Wiring on the first mirror backplane includes wiring that interconnects the first mirror port of the second controller to the second mirror port of the first controller.
SYSTEM CONTROL PROCESSOR DATA MIRRORING SYSTEM
An SCP data mirroring system includes a chassis housing a central processing system and an SCP subsystem. The SCP subsystem includes an SCP memory system with different priority storage queues each storing a copy of data provided by the central processing system, along with an SCP communication system and an SCP data storage subsystem. During a first time period, the SCP data storage subsystem retrieves a first copy of the data from a first storage queue in the SCP memory system and transmits it via the SCP communication system and through a network for storage on first storage device(s). During a subsequent second time period, the SCP data storage system retrieves a second copy of the data from a lower priority second storage queue in the SCP memory system and transmits it via the SCP communication system and through the network for storage on second storage device(s).
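The time-sliced, priority-ordered draining described above can be sketched as two queues holding copies of the same data, each drained to its own storage target during its own time period. The two-queue layout and all names here are assumptions for illustration, not the patented design.

```python
from collections import deque

class SCPMirror:
    """Sketch of prioritized mirroring: copies of the same data sit in
    queues of different priority, and each queue is drained to its
    storage target during a separate time period."""

    def __init__(self):
        self.queues = {"high": deque(), "low": deque()}
        self.sent = []   # (target, data) pairs "transmitted" over the network

    def enqueue_copies(self, data):
        # The central processing system provides a copy per priority queue.
        self.queues["high"].append(data)
        self.queues["low"].append(data)

    def drain(self, priority, target):
        """Drain one queue during its time period, sending to its target."""
        while self.queues[priority]:
            self.sent.append((target, self.queues[priority].popleft()))
```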
Identifying workload and sizing of buffers for the purpose of volume replication
A controller is operable to: identify virtual machines to be protected in a first storage system; identify logical volumes used by the virtual machines based on first relationship information; calculate workload based on workload information monitored for the identified logical volumes; and, based on the calculated workload, calculate the size of a buffer area in the first storage system used to temporarily store copy data to be sent to a second storage system in the remote copy procedure of one or more remote copy pairs. Each copy pair is formed by one of the identified logical volumes in the first storage system as the primary logical volume and another logical volume in the second storage system as the secondary logical volume. A buffer area with a size equal to or greater than the calculated size can then be used to manage protection of the identified virtual machines.
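The abstract does not give the sizing formula, but one plausible rule consistent with its purpose is: the buffer must absorb all copy data generated by the protected volumes while transfer to the second storage system is stalled, plus a safety margin. The function below is a hypothetical sketch under that assumption.

```python
def required_buffer_size(volume_write_rates_mbps, outage_window_s, safety=1.2):
    """Assumed sizing rule: aggregate write rate of the identified logical
    volumes (MB/s), times the worst-case transfer stall to ride out (s),
    times a safety factor. Returns a size in MB."""
    total_rate = sum(volume_write_rates_mbps)     # workload across copy pairs
    return total_rate * outage_window_s * safety
```

For example, three protected volumes writing 10, 5, and 5 MB/s, sized to ride out a 60-second stall with the default 20% margin, need a buffer of (10 + 5 + 5) × 60 × 1.2 = 1440 MB.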
Intelligent data placement
A method of mapping a volume of storage to a plurality of pools of storage devices specified by a host having a host identification. The volume of data storage has a volume identification and a plurality of extents. The method includes assigning a first pool of storage devices to the volume of storage based on the host identification, and determining a mapping value based on the host identification and the volume identification for the first pool of storage devices. The method also includes determining a storage device index based on the mapping value and one or more extents in the plurality of extents, and mapping a portion of the extents to the first pool of storage devices based on the storage device index.
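The steps above — derive a mapping value from the host and volume identifications, then derive a per-extent storage device index from it — can be sketched with a hash. The claim does not fix the mapping function, so the hash and the modular spreading below are illustrative assumptions.

```python
import hashlib

def mapping_value(host_id, volume_id):
    """Deterministic mapping value derived from the host identification
    and volume identification (a hash is one plausible choice)."""
    digest = hashlib.sha256(f"{host_id}:{volume_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def storage_device_index(map_val, extent_index, num_devices):
    """Combine the mapping value with the extent number so consecutive
    extents of a volume spread across the pool's devices."""
    return (map_val + extent_index) % num_devices

def map_extents(host_id, volume_id, num_extents, num_devices):
    """Map each extent of the volume to a device index in the pool
    assigned to this host."""
    mv = mapping_value(host_id, volume_id)
    return [storage_device_index(mv, e, num_devices) for e in range(num_extents)]
```

Because the mapping value is deterministic, any controller can recompute where an extent lives from the host and volume identifications alone, with no placement table to look up.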
Storage apparatus and storage apparatus migration method
A source remote copy configuration in a source storage system is migrated to a destination storage system as a destination remote copy configuration. The destination primary storage apparatus of the destination storage system defines a virtual volume mapped to the primary volume provided by the source primary storage apparatus, which serves as the storage area of the virtual volume; takes over a first identifier from that primary volume to the virtual volume; transfers, when the virtual volume receives an access request, the access request to the source primary storage apparatus so that data is written to the primary volume; and takes over the first identifier from the virtual volume to another primary volume provided by the destination primary storage apparatus, after the copy of data from the primary volume of the source primary storage apparatus into the primary volume of the destination primary storage apparatus and the secondary volume of the destination secondary storage apparatus has completed.
Migrating data objects together with their snaps
A technique for performing non-disruptive migration coordinates object migration with snapshot-shipping to migrate both a data object and its snaps from a source to a target. Snapshot-shipping conveys snaps to the target, and an internal snap of the data object serves as a basis for building a migrated version of the data object at the target. As IO requests specifying writes to the data object arrive at the source, a data mirroring operation writes the arriving data both to the data object at the source and to the version thereof at the target. In parallel with the data mirroring operation, a filtering copy operation copies data of the internal snap to the target, but avoids overwriting data mirrored to the target after the internal snap is taken.
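The filtering copy above hinges on one rule: a block already written to the target by the live mirror must never be overwritten by older data from the internal snap. A minimal block-level sketch, with hypothetical names and block granularity assumed:

```python
class FilteringCopy:
    """Sketch of the filtering copy: a background copy ships blocks of the
    internal snap to the target, while the mirror of live writes wins for
    any block it has already written."""

    def __init__(self, target_size):
        self.target = [None] * target_size
        self.mirrored = set()   # blocks already written by the live mirror

    def mirror_write(self, block, data):
        """Live IO mirrored to the target: always applied, and remembered."""
        self.target[block] = data
        self.mirrored.add(block)

    def copy_from_snap(self, block, data):
        """Background copy from the internal snap: skipped when the mirror
        has already landed newer data on this block."""
        if block not in self.mirrored:
            self.target[block] = data
```

Running the two operations in either order yields the same target contents, which is what lets the mirroring and copy proceed in parallel without coordination beyond the mirrored-block set.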