Patent classifications
G06F11/1466
Classifying snapshot image processing
Systems, methods, and machine-storage media for classifying snapshot image processing are described. The system receives read requests to read snapshot information; each read request includes an offset identifying a storage location and a length. The snapshot information includes snapshots comprising a full snapshot and at least one incremental snapshot. The read requests include a first read request to read data from the snapshot information. The system generates a first plurality of read events, including a second plurality of read events generated by processing the first read request. The second plurality of read events includes a first and a second read event. The system identifies whether utilizing a cache optimizes the job based on the first plurality of read events.
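The read-event generation and cache decision described above might be sketched as follows. This is a minimal illustration, not the patented method: the chunk granularity, the function names, and the repeated-read heuristic for "utilizing a cache optimizes the job" are all assumptions.

```python
from collections import Counter

def generate_read_events(read_requests, chunk_size=4096):
    """Expand each (offset, length) read request into per-chunk read events,
    so one request can generate a plurality of events."""
    events = []
    for offset, length in read_requests:
        first_chunk = offset // chunk_size
        last_chunk = (offset + length - 1) // chunk_size
        events.extend(range(first_chunk, last_chunk + 1))
    return events

def cache_would_optimize(read_requests, chunk_size=4096):
    """Assumed heuristic: a cache helps only if some chunk is read
    more than once over the whole job."""
    counts = Counter(generate_read_events(read_requests, chunk_size))
    return any(n > 1 for n in counts.values())
```

For example, a single request spanning two chunks yields two read events, and two requests touching the same chunk make the cache worthwhile under this heuristic.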
Backup, restoration, and migration of computer systems
Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for backup, restoration, and migration of computer systems. In some implementations, data from a first server environment is obtained. A data package is generated that includes configuration data, data objects, and/or metadata from the first server environment organized in a predetermined arrangement. Data indicating (i) a destination on which to deploy the archived data from the first server environment and (ii) one or more characteristics of the destination is received. Mapping data that specifies a mapping of elements in the predetermined arrangement to elements of server environments having the one or more characteristics is accessed. Server environment data derived from the data package is deployed, the server environment data being deployed to the destination and arranged at the destination in a manner specified by the mapping data.
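The mapping step above, where elements of the predetermined arrangement are mapped onto elements of a destination with given characteristics, might look roughly like this. The dictionary shapes, the `platform` characteristic, and the element names are illustrative assumptions only.

```python
def deploy_package(package, destination, mappings):
    """Pick the mapping keyed by the destination's characteristics and
    rearrange the package's elements into the layout that destination expects."""
    mapping = mappings[destination["platform"]]      # assumed single characteristic
    return {mapping[name]: value
            for name, value in package.items()
            if name in mapping}
```

A usage example: a package holding `config` and `objects` deployed to a "cloud" destination could be remapped to that environment's `env_vars` and `bucket_objects` elements.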
Dynamic optimization of backup policy
Embodiments of the present disclosure provide a method of backup management, an electronic device and a computer program product. The method comprises: determining a plurality of candidate backup policies for a plurality of clients of a data backup system, determining an expected load balance degree with respect to time for the data backup system to perform data backups for the plurality of clients using the plurality of candidate backup policies, determining an actual load balance degree with respect to time for the data backup system while the data backup system is performing the data backups for the plurality of clients using a plurality of current backup policies, and selecting a plurality of backup policies to be used for the plurality of clients respectively, based on a comparison of the expected load balance degree and the actual load balance degree.
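One way to make the comparison of expected and actual "load balance degree with respect to time" concrete is to score how evenly backup load spreads across time slots. The metric below (an inverse coefficient of variation) is an assumed stand-in; the patent does not specify the formula.

```python
from statistics import mean, pstdev

def load_balance_degree(load_by_slot):
    """Higher is better: 1.0 means perfectly even load across time slots.
    Assumed metric: 1 / (1 + coefficient of variation)."""
    m = mean(load_by_slot)
    if m == 0:
        return 1.0
    return 1.0 / (1.0 + pstdev(load_by_slot) / m)

def select_policies(current_policies, candidate_policies,
                    actual_load, expected_load):
    """Switch to the candidate policies only if their expected balance
    beats the balance actually measured under the current policies."""
    if load_balance_degree(expected_load) > load_balance_degree(actual_load):
        return candidate_policies
    return current_policies
```

For instance, if the current policies concentrate all backups in one slot (`[4, 0, 0, 0]`) while the candidates spread them evenly (`[1, 1, 1, 1]`), the comparison favors the candidates.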
System and method for prioritizing backup generation
A production host includes storage and a backup manager. The storage holds backup priorities of entities and backup windows during which a system, of which the production host is a member, is predicted to have sufficient computing resources to generate a backup for an entity of the entities. The backup manager identifies a backup generation event for the entity and, in response: identifies an earliest potential backup window of the backup windows; makes a determination that the earliest potential backup window is reserved for a second entity of the entities; in response to the determination, identifies that a backup priority associated with the entity is greater than a second backup priority associated with the second entity; and provides backup services to the entity during the earliest potential backup window.
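The priority-based window selection could be sketched as below. The data structures (a reservation map and a priority map) and the preemption rule are assumptions made for illustration.

```python
def schedule_backup(windows, reservations, priorities, entity):
    """Reserve the earliest window for `entity`; take over an already
    reserved window only when the entity's priority is strictly higher."""
    for window in sorted(windows):
        holder = reservations.get(window)
        if holder is None or priorities[entity] > priorities[holder]:
            reservations[window] = entity
            return window
    return None   # no window free or winnable
```

Here a high-priority entity displaces a lower-priority reservation from the earliest window, matching the determination-and-preemption flow in the abstract.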
Dynamic management of expandable cache storage for multiple network shares configured in a file server
Expandable cache management dynamically manages cache storage for multiple network shares configured in a file server. Once a file is written to a directory or folder on a specially designated network share, such as one that is configured for “infinite backup,” an intermediary pre-backup copy of the file is created in an expandable cache in the file server that hosts the network share. On write operations, cache storage space can be dynamically expanded or freed up by pruning previously backed up data. This advantageously creates flexible storage caches in the file server for each network share, each cache managed independently of other like caches for other network shares on the same file server. On read operations, intermediary file storage in the expandable cache gives client computing devices speedy access to data targeted for backup, which is generally quicker than restoring files from backed up secondary copies.
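The prune-on-write behavior of the expandable cache might be modeled as follows. The class shape, byte accounting, and eviction order are assumptions; the point is only that already-backed-up entries are reclaimable while pending ones are not.

```python
class ExpandableCache:
    """Per-share staging cache: files wait here for backup, and space is
    reclaimed by pruning entries whose secondary copies already exist."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.files = {}          # name -> size in bytes
        self.backed_up = set()   # names safe to prune

    def used(self):
        return sum(self.files.values())

    def write(self, name, size):
        # Free space by pruning files already copied to secondary storage.
        for done in list(self.backed_up):
            if self.used() + size <= self.capacity:
                break
            self.files.pop(done, None)
            self.backed_up.discard(done)
        if self.used() + size > self.capacity:
            raise IOError("cache full and nothing left to prune")
        self.files[name] = size

    def mark_backed_up(self, name):
        self.backed_up.add(name)
```

A write that would overflow the cache succeeds once an older, already-backed-up file can be pruned; if only pending files remain, the write fails rather than lose un-backed-up data.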
Heterogeneous indexing and load balancing of backup and indexing resources
Indexing preferences generally associate each data source with a type of indexing technology and/or with an index/catalog and/or with a computing device that hosts the index/catalog for tracking backup data generated from the source data. Indexing preferences govern which index/catalog receives transaction logs for a given storage operation. Thus, indexing destinations are defined granularly and flexibly in reference to the source data. Load balancing without user intervention ensures that load is fairly distributed across the various index/catalogs in the illustrative backup systems by autonomously initiating migration jobs. Criteria for initiating migration jobs are based on past usage and going-forward trends. An illustrative migration job re-associates data sources with a different destination media agent and/or index/catalog, including transferring some or all relevant transaction logs and/or indexing information from the old host to the new host.
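The "past usage and going-forward trends" criteria and the re-association step might be sketched like this. The statistics fields, the 80% threshold, and the linear growth projection are all assumed for illustration.

```python
def needs_migration(index_stats, capacity_threshold=0.8):
    """Flag index/catalog hosts whose current usage or projected growth
    (assumed linear) exceeds their capacity."""
    overloaded = []
    for host, s in index_stats.items():
        projected = s["used"] + s["growth_per_week"] * s["weeks_ahead"]
        if s["used"] / s["capacity"] > capacity_threshold or projected > s["capacity"]:
            overloaded.append(host)
    return overloaded

def migrate(data_sources, from_host, to_host):
    """Re-associate data sources with the new host; in a fuller sketch this
    would also carry transaction logs and indexing information over."""
    for src in data_sources:
        if src["index_host"] == from_host:
            src["index_host"] = to_host
    return data_sources
```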
Server group fetch in database backup
In some examples, a method of performing a backup of a group of relational databases comprises identifying database files to be fetched in the group of relational databases; grouping the identified database files into batches; based on configuration parameters of the identified database files, identifying, among the batches, a sub-set of batches of database files that are eligible to be fetched in parallel for the backup; configuring a single fetch call to a call stack to fetch the sub-set of eligible batches; and determining a push or pull model for the configured single fetch call based at least in part on feedback from a most resource-constrained element in the call stack.
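The batching, parallel-eligibility filtering, and push/pull decision could be sketched as follows. The `parallel` configuration flag, the `free_capacity` feedback field, and the 0.25 saturation threshold are illustrative assumptions.

```python
def make_batches(files, batch_size):
    """Group identified database files into fixed-size batches."""
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]

def parallel_eligible(batches, config):
    """A batch may be fetched in parallel only if every file's
    configuration parameters allow it (assumed `parallel` flag)."""
    return [b for b in batches
            if all(config[f].get("parallel", False) for f in b)]

def choose_model(call_stack_feedback):
    """Pick pull when the most resource-constrained element in the call
    stack is near saturation, otherwise push (assumed threshold)."""
    bottleneck = min(call_stack_feedback, key=lambda e: e["free_capacity"])
    return "pull" if bottleneck["free_capacity"] < 0.25 else "push"
```

With this sketch, a single fetch call would be configured over `parallel_eligible(...)` and dispatched under the model returned by `choose_model(...)`.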
Dynamic application consistent data restoration
Restore operations in containerized environments are disclosed. An ephemeral instance of an application is created and a datastore is mounted to the ephemeral instance. The ephemeral instance is not accessible to users or applications. The backup data is restored to the datastore. Once restored, the datastore is mounted to a production instance and production resumes.
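The ordering of those steps can be sketched with hypothetical orchestration callbacks; `create_instance`, `mount`, and `restore_data` are stand-ins, not any real container API.

```python
def restore_application(backup, create_instance, mount, restore_data):
    """Restore into a hidden ephemeral instance first, then remount the
    restored datastore on a production instance."""
    ephemeral = create_instance(kind="ephemeral", user_accessible=False)
    datastore = mount(ephemeral, new_volume=True)   # fresh datastore, no users attached
    restore_data(backup, datastore)                 # hydrate while nothing can connect
    production = create_instance(kind="production", user_accessible=True)
    mount(production, volume=datastore)             # remount restored data for production
    return production
```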
Globally unique way to identify a resource
A method for providing data protection services to service devices that provide computer implemented services for clients, and that host resources used to provide those services to the clients, includes obtaining a resource discovery request for a service device of the service devices. The method further includes, in response to obtaining the resource discovery request: identifying a resource of a portion of the resources hosted by the service device; obtaining a system identifier for the resource and a natural identifier for the resource; making a determination that the natural identifier matches a second natural identifier associated with a known resource; and in response to the determination, updating a record associated with the known resource based on one or more conditions of the resource.
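The natural-identifier matching could look roughly like the following, where a stable natural identifier (e.g. a serial number) survives changes to the system-assigned identifier. The record fields are assumptions for illustration.

```python
def discover_resource(resource, known_resources):
    """Match on the natural identifier: update the existing record if one
    matches, otherwise register the resource as new."""
    for known in known_resources:
        if known["natural_id"] == resource["natural_id"]:
            # Same underlying resource, even if the system id changed.
            known["system_id"] = resource["system_id"]
            known["state"] = resource.get("state", known.get("state"))
            return known
    known_resources.append(resource)
    return resource
```

Under this sketch, rediscovering a disk whose system identifier was reassigned updates the old record instead of creating a duplicate.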
Data consistency check in distributed system
Scanning and rescanning detect state inconsistencies between data entities in repositories or other components of a distributed computing environment. First, entities are scanned based on a cutoff time T0. Entities for which state comparison is undesired are placed in a skipped entity list. Any inconsistencies found in other entities are reported. Subsequent rescanning then fetches state and attempts to pare down the skipped entity list. Rescanning may be capped. Inconsistencies may be detected without requiring downtime from services that update data entity state, and false reports of inconsistency may be avoided.
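The scan/rescan loop might be sketched as below. The choice to skip entities updated at or after the cutoff, and the fixed rescan cap, are assumptions used to make the flow concrete.

```python
def scan(entities, cutoff, fetch_state):
    """Compare source and replica state for entities last updated before the
    cutoff; entities still being updated are skipped rather than compared,
    which avoids false reports of inconsistency."""
    skipped, inconsistent = [], []
    for entity in entities:
        if entity["updated_at"] >= cutoff:
            skipped.append(entity)            # comparison undesired for now
        elif fetch_state(entity, "source") != fetch_state(entity, "replica"):
            inconsistent.append(entity)
    return skipped, inconsistent

def rescan(skipped, fetch_state, max_rounds=3):
    """Re-fetch state for skipped entities, paring the list down each round;
    the number of rounds is capped to bound the work."""
    for _ in range(max_rounds):
        if not skipped:
            break
        skipped = [e for e in skipped
                   if fetch_state(e, "source") != fetch_state(e, "replica")]
    return skipped                            # still unequal after capped rescans
```

Entities whose state converges during rescanning drop off the skipped list; only those still unequal after the capped rounds remain as candidates for reporting.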