Patent classifications
G06F11/2038
SYSTEMS AND METHODS FOR CONTINUOUS DATA PROTECTION COMPRISING STORAGE OF COMPLETED I/O REQUESTS INTERCEPTED FROM AN I/O STREAM USING TOUCH POINTS
Example embodiments relate generally to systems and methods for continuous data protection (CDP), and more specifically to an input/output (I/O) filtering framework and log management system that seek a near-zero recovery point objective (RPO).
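As a rough illustration of the CDP idea, here is a minimal Python sketch (all names, such as IOJournal and recover_to, are hypothetical, not the patent's implementation): completed writes intercepted from the I/O stream are appended to a log keyed by timestamped touch points, and any point in time can be rebuilt by replaying the log up to the chosen touch point.

import time

class IOJournal:
    """Log of completed write I/Os, each tagged with a touch point (timestamp)."""

    def __init__(self):
        self.entries = []   # (touch_point, offset, data) in completion order

    def record(self, offset, data):
        # Called after an intercepted I/O completes on primary storage.
        self.entries.append((time.time(), offset, data))

    def recover_to(self, touch_point):
        # Replay writes up to the touch point to rebuild the volume state;
        # logging every completed write is what lets the RPO approach zero.
        volume = {}
        for ts, offset, data in self.entries:
            if ts > touch_point:
                break
            volume[offset] = data
        return volume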
SYSTEM AND METHOD FOR STORAGE AWARENESS SERVICE FAILOVER
A method, computer program product, and computing system for determining whether a storage awareness service provider node of a storage system has failed. In response to determining that the storage awareness service provider node has failed, an intermediate storage awareness service may be deployed within the storage system. At least one request may be processed on the storage system via the intermediate storage awareness service.
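A minimal sketch of that failover path, with hypothetical names (is_alive, IntermediateService) standing in for the actual components:

class IntermediateService:
    """Stand-in storage awareness service deployed after the provider fails."""

    def __init__(self, storage_system):
        self.storage_system = storage_system

    def handle(self, request):
        # Answer the request from the storage system's own metadata.
        return self.storage_system.get(request)

def process_request(provider, storage_system, request):
    # Route to the provider node unless it has failed; otherwise deploy
    # the intermediate service within the storage system and use it instead.
    if provider is not None and provider.is_alive():
        return provider.handle(request)
    return IntermediateService(storage_system).handle(request)

# Usage: provider node down, so the intermediate service answers instead.
print(process_request(None, {"capacity": "10 TiB"}, "capacity"))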
Enhancing file indexing of block-level backup copies of virtual machines and/or file systems by populating and tracking a cache storage area and a backup index
An illustrative approach accelerates file indexing operations for block-level backup copies in a data storage management system. A cache storage area is maintained for locally storing and serving key data blocks, reducing the need to retrieve data on demand from the backup copy. File indexing operations populate the cache storage area for speedier retrieval during subsequent live browsing of the same backup copy, and vice versa. The key data blocks cached while file indexing and/or live browsing an earlier backup copy help to pre-fetch corresponding data blocks of later backup copies, producing a beneficial learning cycle. The approach is especially beneficial for cloud and tape backup media, and is available for a variety of data sources and backup copies, including block-level backup copies of virtual machines (VMs) and block-level backup copies of file systems, including those of UNIX-based and Windows-based operating systems.
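A minimal sketch of the caching behavior, with hypothetical names (BlockCache, read_block, prefetch): blocks touched during file indexing or live browsing are kept locally, misses fall back to the slower backup medium, and the learned set of key block IDs seeds pre-fetching against a later backup copy.

class BlockCache:
    """Local cache of key data blocks from a block-level backup copy."""

    def __init__(self, read_from_media):
        self.read_from_media = read_from_media   # slow path, e.g. cloud or tape
        self.blocks = {}

    def read_block(self, block_id):
        if block_id not in self.blocks:                      # cache miss
            self.blocks[block_id] = self.read_from_media(block_id)
        return self.blocks[block_id]                         # fast local serve

    def prefetch(self, read_from_later_copy):
        # Warm a new cache for a later backup copy using the key block IDs
        # learned here (the "learning cycle" between indexing and browsing).
        later = BlockCache(read_from_later_copy)
        for block_id in self.blocks:
            later.read_block(block_id)
        return later

# Usage: the lambda stands in for reads against cloud or tape media.
cache = BlockCache(lambda bid: f"data-{bid}")
cache.read_block(7)   # miss: fetched from media and cached
cache.read_block(7)   # hit: served locally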
Techniques for deploying workloads on nodes in a cloud-computing environment
Described are examples for deploying workloads in a cloud-computing environment. In one aspect, based on a desired number of workloads of a process to be executed in the cloud-computing environment and on one or more failure probabilities, the actual number of workloads of the process to execute to provide a given level of service can be determined and deployed. In another aspect, a standby workload can be executed as a second instance of the process without at least a portion of the separate configuration used by the multiple workloads; on detecting termination of one of the multiple workloads, the standby workload can be configured to execute using the separate configuration of the instance of the process corresponding to the terminated workload.
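For the first aspect, a minimal sketch of how the actual workload count might be derived, assuming independent failures with a single per-workload failure probability (the patent allows one or more probabilities; this simplifies to one):

from math import comb

def workloads_to_deploy(desired, p_fail, service_level):
    """Smallest n such that P(at least `desired` of n workloads survive)
    meets the target service level, assuming independent failures."""
    def p_enough(n):
        p_ok = 1.0 - p_fail
        return sum(comb(n, k) * p_ok**k * p_fail**(n - k)
                   for k in range(desired, n + 1))
    n = desired
    while p_enough(n) < service_level:
        n += 1
    return n

# e.g. 4 desired workloads, 10% failure probability, 99.9% service level -> 8
print(workloads_to_deploy(4, 0.1, 0.999))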
FAULT TOLERANT SYSTEM, SERVER, AND OPERATION METHOD OF FAULT TOLERANT SYSTEM
A first server and a second server use a virtual address to mount a storage synchronous area in a first storage via NFS. The first server obtains a snapshot of the memory content of a virtual system operating as the active system and transmits the snapshot to the second server. The first server also replicates the content of the storage synchronous area in the first storage to a storage synchronous area in a second storage. When a failure occurs in the first server, the second server sets the virtual address for the second storage and uses the virtual address to mount the storage synchronous area in the second storage via NFS. The second server then uses the snapshot received from the first server to execute the application on the virtual system.
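A minimal sketch of the takeover sequence, with the storage, snapshot, and mount modeled as plain dictionaries (all structure here is hypothetical):

def failover_to_standby(second_storage, snapshot, virtual_address):
    """Steps the second server performs once the first server fails."""
    second_storage["virtual_address"] = virtual_address   # claim the address
    sync_area = second_storage["synchronous_area"]        # NFS mount, simplified
    vm = dict(snapshot)                                   # rebuild the virtual system
    vm["mounted"] = sync_area                             # reattach replicated data
    return vm                                             # resumes the application

# Usage: the synchronous area was replicated here before the failure.
vm = failover_to_standby({"synchronous_area": {"app": "state"}},
                         {"memory": "..."}, "10.0.0.5")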
Application backup and management
A data management and storage (DMS) cluster of peer DMS nodes manages data of an application distributed across a set of machines of a compute infrastructure. A DMS node associates a set of machines with the application and generates data fetch jobs for the set of machines for execution by multiple peer DMS nodes. The DMS node determines whether each of the data fetch jobs for the set of machines is ready for execution by the peer DMS nodes. In response to determining that each of the data fetch jobs is ready for execution, the peer DMS nodes execute the data fetch jobs to generate snapshots of the set of machines. The snapshots may be full or incremental snapshots, and collectively they form a snapshot of the application.
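A minimal sketch of that job flow, with peer nodes modeled as fetch functions (all names are hypothetical):

from itertools import cycle

def snapshot_application(machines, peer_nodes):
    """One data fetch job per machine; execute only when every job is ready."""
    jobs = [{"machine": m, "ready": True} for m in machines]
    if not all(job["ready"] for job in jobs):
        return None                       # defer until all jobs are ready
    # Distribute jobs across peer DMS nodes; each returns a machine snapshot,
    # and together the results form the snapshot of the application.
    return [fetch(job["machine"])
            for job, fetch in zip(jobs, cycle(peer_nodes))]

# Usage: two peer nodes snapshotting a three-machine application.
print(snapshot_application(["vm1", "vm2", "vm3"],
                           [lambda m: (m, "node-A"), lambda m: (m, "node-B")]))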
Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations
Snapshot-based disaster recovery (DR) orchestration systems and methods for virtual machine (VM) failover and failback do not require that VMs or their corresponding datastores be actively operating at the DR site before a DR orchestration job is initiated, i.e., before failover. An illustrative data storage management system deploys proprietary components at source data center(s) and at DR site(s). The proprietary components (e.g., storage manager, data agents, media agents, backup nodes, etc.) interoperate with each other and with the source and DR components to ensure that VMs will successfully failover and/or failback. DR orchestration jobs are suitable for testing VM failover scenarios (“clone testing”), for conducting planned VM failovers, and for unplanned VM failovers. DR orchestration jobs also handle failback and integration of DR-generated data into the failback site, including restoring VMs that never failed over to fully re-populate the source/failback site.
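A minimal sketch of the orchestration idea (the three modes mirror the abstract; everything else is hypothetical): the job itself materializes each VM at the DR site from its snapshots, so nothing needs to be running there before the job starts.

def dr_orchestration_job(vms, mode):
    """Failover job that needs no live VMs or datastores at the DR site."""
    assert mode in ("clone-test", "planned-failover", "unplanned-failover")
    results = {}
    for vm in vms:
        results[vm] = {
            "restored_from": f"{vm}-latest-snapshot",  # built on demand from backups
            "isolated": mode == "clone-test",          # clone tests run fenced off
        }
    return results

print(dr_orchestration_job(["web01", "db01"], "clone-test"))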
Managing storage domains, service tiers and failed storage domain
The system detects a failed storage domain in a cluster of servers controlled by a master node to execute applications and store data, where servers are organized into service tiers, corresponding to server performance characteristics, within storage domains, corresponding to server racks. By accessing a database, the system identifies the applications installed on servers in the service tiers of the failed storage domain, along with any affinities those applications have for server types, service tiers, and/or storage domains. Based on the current configuration of the cluster, the system updates the identified affinities for the identified applications. By providing the updated affinities in the cluster database, the system enables the master node to identify first and second sets of replacement servers for the identified applications, corresponding to a server rack and to first and second sets of server performance characteristics, and to install the identified applications on them, enabling the first and second sets of replacement servers to substitute for the failed storage domain and store data.
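A minimal sketch of the affinity update, with the cluster database reduced to plain dictionaries (all field names are hypothetical):

def update_affinities(cluster_db, failed_domain, healthy_domain):
    """Re-point affinities of apps from a failed storage domain (rack)."""
    affected = [app for app in cluster_db["apps"]
                if app["domain"] == failed_domain]
    for app in affected:
        # Keep each app's service tier (performance characteristics) but
        # retarget it at a healthy domain; the master node then uses these
        # updated affinities to pick and populate replacement servers.
        app["affinity"] = {"domain": healthy_domain, "tier": app["tier"]}
    return affected

db = {"apps": [{"name": "db", "domain": "rack-3", "tier": "fast"}]}
print(update_affinities(db, "rack-3", "rack-7"))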
Simple integration of an on-demand compute environment
Disclosed are a system and method of integrating an on-demand compute environment into a local compute environment. The method includes receiving a request from an administrator to integrate an on-demand compute environment into a local compute environment and, in response to the request, automatically integrating local compute environment information with on-demand compute environment information to make available resources from the on-demand compute environment to requesters of resources in the local compute environment such that policies of the local environment are maintained for workload that consumes on-demand compute resources.
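A minimal sketch of the integration step, with the two environments modeled as dictionaries (all keys are hypothetical): on-demand resources become visible to local requesters while local policies continue to govern.

def integrate(local_env, on_demand_env):
    """Make on-demand resources visible locally under local policies."""
    merged = dict(local_env)
    merged["resources"] = local_env["resources"] + on_demand_env["resources"]
    merged["policies"] = local_env["policies"]   # local policies are maintained
    return merged                                # for all workload, on-demand too

local = {"resources": ["node1"], "policies": {"max_cpu_hours": 100}}
print(integrate(local, {"resources": ["cloud-a", "cloud-b"]}))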
VEHICULAR CONTROL SYSTEM
A vehicular control system includes a plurality of electronic control units (ECUs), each providing a respective quantity of computational units representative of the amount of processing power of that ECU. The ECUs operate a vehicle in a nominal autonomous operational mode when the sum of the quantities of computational units exceeds a threshold. While the ECUs operate the vehicle in the nominal autonomous operational mode, and responsive to detecting a failure of one of the ECUs, the system determines whether the sum of the quantities of computational units of the remaining, non-failed ECUs exceeds the threshold. Responsive to the system determining that this sum fails to exceed the threshold, the ECUs switch from operating the vehicle in the nominal autonomous operational mode to operating it in a degraded autonomous operational mode.
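A minimal sketch of the mode decision (names hypothetical): sum the computational units of the healthy ECUs and compare against the threshold.

def choose_mode(ecus, threshold):
    """ecus maps ECU name -> (computational_units, healthy)."""
    available = sum(units for units, healthy in ecus.values() if healthy)
    return "nominal" if available > threshold else "degraded"

# Three ECUs, one failed: 6 + 4 = 10 units does not exceed the threshold of 10,
# so the system switches to the degraded autonomous operational mode.
print(choose_mode({"ecu1": (6, True), "ecu2": (5, False), "ecu3": (4, True)}, 10))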