Patent classifications
G06F11/2066
Bandwidth management in a data storage system
In one embodiment, upon initiation of a consistency group, bandwidth reduction scanning logic determines whether a volume portion, such as a track containing data to be mirrored from a primary volume to a secondary volume to form the consistency group, is allocated to the primary volume. If not, the bandwidth reduction scanning logic prevents the data of that volume portion from being mirrored from the primary volume to the secondary volume. As a result, any volume portion determined not to be allocated to the primary volume is bypassed by the mirroring operation, reducing the bandwidth consumed by mirroring and accelerating formation of the consistency group. Other features and aspects may be realized, depending upon the particular application.
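The scan described above can be sketched roughly as follows. This is an illustrative assumption of how such logic might look; the names (`form_consistency_group`, `is_allocated`, `mirror`) are invented, not from the patent.

```python
def form_consistency_group(tracks, is_allocated, mirror):
    """Mirror only the tracks allocated to the primary volume.

    tracks       -- iterable of track identifiers on the primary volume
    is_allocated -- predicate: True if the track is allocated to the primary
    mirror       -- callable that copies one track to the secondary volume
    Returns the number of tracks bypassed (bandwidth saved).
    """
    skipped = 0
    for track in tracks:
        if is_allocated(track):
            mirror(track)   # track holds live data: replicate it
        else:
            skipped += 1    # unallocated: bypass to reduce bandwidth usage
    return skipped
```

Bypassed tracks consume no replication bandwidth, which is what accelerates consistency-group formation.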
DATA STORAGE MANAGEMENT SYSTEM FOR PROTECTING CLOUD-BASED DATA INCLUDING ON-DEMAND PROTECTION, RECOVERY, AND MIGRATION OF DATABASES-AS-A-SERVICE AND/OR SERVERLESS DATABASE MANAGEMENT SYSTEMS
A streamlined approach enables customers to retain management control over their data in a database-as-a-service (DBaaS) setting, by providing managed backup copies outside cloud service providers' sphere of control. An illustrative data storage management system provides control over performing backup operations to generate managed backup copies, storing managed backup copies, recovering managed backup copies in whole or in part, migrating managed backup copies, and migrating DBaaS instances. Management control also extends to choices of where to store the managed backup copies, whether on the same cloud computing platform as the source DBaaS, on a different cloud computing platform, and/or in a non-cloud data center.
DATA STORAGE MANAGEMENT SYSTEM FOR MULTI-CLOUD PROTECTION, RECOVERY, AND MIGRATION OF DATABASES-AS-A-SERVICE AND/OR SERVERLESS DATABASE MANAGEMENT SYSTEMS
A streamlined approach enables customers to retain management control over their data in a database-as-a-service (DBaaS) setting, by providing managed backup copies outside cloud service providers' sphere of control. An illustrative data storage management system provides control over performing backup operations to generate managed backup copies, storing managed backup copies, recovering managed backup copies in whole or in part, migrating managed backup copies, and migrating DBaaS instances. Management control also extends to choices of where to store the managed backup copies, whether on the same cloud computing platform as the source DBaaS, on a different cloud computing platform, and/or in a non-cloud data center.
Remote Data Replication Method and System
A remote data replication method and a storage system, where a production array sends a data replication request to a disaster recovery array. The data replication request includes an identifier of a source object and a data block corresponding to the source object. The data block is stored in physical space of a hard disk of the production array. The disaster recovery array receives the data replication request and creates a target object when it does not already contain an object having the same identifier as the source object. The identifier of the target object is the same as that of the source object, and the disaster recovery array writes the data block into the physical space.
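A minimal sketch of the disaster-recovery side of this protocol, under assumed names (`DisasterRecoveryArray`, `handle_replication_request` are illustrative, not from the patent):

```python
class DisasterRecoveryArray:
    """Receives replication requests keyed by the source object identifier."""

    def __init__(self):
        self.objects = {}  # object identifier -> list of written data blocks

    def handle_replication_request(self, source_id, data_block):
        # Create a target object with the same identifier as the source
        # object only if none exists yet, then write the block into it.
        target = self.objects.setdefault(source_id, [])
        target.append(data_block)

dr = DisasterRecoveryArray()
dr.handle_replication_request("vol-1", b"block-A")
dr.handle_replication_request("vol-1", b"block-B")
```

Because the target object is created on demand from the identifier carried in the request, no separate object-creation round trip is needed between the arrays.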
AUTOMATIC OBJECTIVE-BASED COMPRESSION LEVEL CHANGE FOR INDIVIDUAL CLUSTERS
A method, computer system, and computer program product for objective-based compression level change is provided. The present invention may include storing a volume in a storage device, wherein the stored volume is compressed using an initial compression level. The present invention may also include checking a last access time of the stored volume in the storage device at a regular interval. The present invention may further include, in response to determining, based on the checked last access time, that the stored volume has not been accessed within the regular interval, recompressing the stored volume in the storage device using a higher compression level, wherein the higher compression level includes a higher compression ratio than the compression ratio associated with the initial compression level.
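The check-and-recompress cycle can be sketched as below. All names here (`Volume`, `check_and_recompress`) are illustrative assumptions; the compression level is modeled as a simple integer where a higher level means a higher compression ratio.

```python
import time

class Volume:
    def __init__(self, name, level=1):
        self.name = name
        self.level = level                # current compression level
        self.last_access = time.time()    # updated on every read/write

def check_and_recompress(volume, now, interval, max_level=9):
    """Run at each regular interval: if the volume was not accessed during
    the last interval, recompress it at the next higher level."""
    if now - volume.last_access >= interval and volume.level < max_level:
        volume.level += 1  # recompress with a higher compression ratio
        return True
    return False
```

Cold volumes thus migrate toward the highest compression level over successive intervals, while any access resets the clock and leaves the level unchanged.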
Synchronized safe data commit scans in multiple data storage systems
In one aspect of the present description, safe data commit scan operations of the individual data storage systems of a distributed data storage system may be synchronized to reduce the occurrence of degradations in input/output (I/O) response times. In one embodiment, a set of safe data commit scan operations of the individual data storage systems are synchronously timed to substantially overlap within a single synchronized safe data commit scan set interval, thereby reducing or eliminating response-time degradations outside that interval. Other features and aspects may be realized, depending upon the particular application.
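One simple way to achieve such synchronization, sketched here as an assumption rather than the patented mechanism, is for every system to start its scan at the next shared interval boundary, so all scans overlap in the same window:

```python
def next_scan_time(now, interval):
    """Return the next shared interval boundary strictly after `now`.

    Systems observing different current times within the same interval
    all compute the same boundary, so their scans coincide.
    """
    return ((now // interval) + 1) * interval
```

With a 60-second interval, systems whose clocks read 61 and 95 both schedule their scan for time 120, confining the scan-induced I/O impact to one shared window.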
Accelerated deduplication block replication
Embodiments for managing data replication, in a shared storage environment, between first and second sites of a distributed computing environment by one or more processors. Based on an identified data block-set for replication, a unique metadata map is generated as a computed snapshot of the identified data block-set, the metadata map accounting for a predetermined block size for transfer. The unique metadata map is transferred to the second site, which adds it to a global metadata repository.
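A minimal sketch of such a metadata map, assuming SHA-256 signatures over fixed-size blocks (the function names and the tiny block size are illustrative, not from the patent). Only blocks whose signatures are absent from the remote global repository need to be transferred:

```python
import hashlib

BLOCK_SIZE = 4  # bytes; tiny for illustration (real systems use KB/MB blocks)

def metadata_map(data):
    """Compute a per-block signature map for the identified block-set."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def blocks_to_send(local_map, global_repository):
    """Indices of blocks the second site does not already hold."""
    return [i for i, sig in enumerate(local_map)
            if sig not in global_repository]
```

Once the second site adds the map to its global repository, subsequent replications of the same blocks transfer only signatures, not data, which is the source of the acceleration.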
Remote data replication method and system
A remote data replication method and a storage system, where a production array sends a data replication request to a disaster recovery array. The data replication request includes an identifier of a source object and a data block corresponding to the source object. The data block is stored in physical space of a hard disk of the production array. The disaster recovery array receives the data replication request. The disaster recovery array creates a target object when the disaster recovery array does not include an object having a same identifier as the source object. An identifier of the target object is the same as the identifier of the source object, the disaster recovery array writes the data block into the physical space. This may reduce bandwidth load between the production array and the disaster recovery array.
Storage system with multiple write journals supporting synchronous replication failure recovery
A storage system in one embodiment is configured to participate as a source storage system in a synchronous replication process with a target storage system. In conjunction with the synchronous replication process, the source storage system receives write requests from at least one host device. Responsive to a given write request being a multi-page write request, an entry is created in a first journal, where the first journal is utilized to ensure that the given write request is completed for all of the pages or for none of the pages. Responsive to the given write request being a single-page write request, an entry is created in a second journal different from the first journal. An address-to-signature table is updated utilizing write data of the write request; if the corresponding entry for the write request was created in the first journal, the entry is swapped from the first journal into the second journal, and the write data of the write request is sent to the target storage system.
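The two-journal flow above can be sketched as follows. This is a hedged illustration with invented names (`handle_write`, the journal lists); it models only the journaling control flow, not the address-to-signature table or failure recovery:

```python
def handle_write(pages, multi_page_journal, single_page_journal, send):
    """Process one write request under the dual-journal scheme."""
    entry = {"pages": list(pages)}
    if len(pages) > 1:
        multi_page_journal.append(entry)   # all-or-nothing tracking
    else:
        single_page_journal.append(entry)
    # ... address-to-signature table would be updated here ...
    if entry in multi_page_journal:
        # Atomicity guaranteed for the whole request: swap the entry
        # from the first journal into the second journal.
        multi_page_journal.remove(entry)
        single_page_journal.append(entry)
    for page in pages:
        send(page)                         # replicate to the target system
```

Keeping multi-page requests in a separate journal until they are fully recorded is what lets recovery complete such a request for all of its pages or roll it back for none of them.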