Patent classifications
G06F11/2058
Self-healing virtualized file server
In one embodiment, a system for managing a virtualization environment comprises a plurality of host machines, one or more virtual disks comprising a plurality of storage devices, and a virtualized file server (VFS) comprising a plurality of file server virtual machines (FSVMs). Each FSVM runs on one of the host machines and conducts I/O transactions with the one or more virtual disks. A virtualized file server self-healing system is configured to identify one or more corrupt units of stored data at one or more levels of a storage hierarchy associated with the storage devices, where the levels comprise one or more of the file level, filesystem level, and storage level. When data corruption is detected, the self-healing system causes each FSVM on which at least a portion of the corrupt unit of stored data is located to recover that unit.
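The detect-then-recover flow described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `FSVM` class, the `recover_unit` method, and the level names are assumptions for the sketch.

```python
from dataclasses import dataclass, field

# Hierarchy levels at which corruption may be identified (from the abstract)
LEVELS = ("file", "filesystem", "storage")

@dataclass
class FSVM:
    """Hypothetical stand-in for a file server virtual machine."""
    name: str
    units: set = field(default_factory=set)   # units of stored data hosted here
    recovered: list = field(default_factory=list)

    def recover_unit(self, unit):
        # In a real system: restore from a replica, snapshot, or parity data
        self.recovered.append(unit)

def self_heal(fsvms, corrupt_units_by_level):
    """For each corrupt unit found at any hierarchy level, cause every FSVM
    hosting at least a portion of that unit to recover it."""
    for level in LEVELS:
        for unit in corrupt_units_by_level.get(level, []):
            for vm in fsvms:
                if unit in vm.units:
                    vm.recover_unit(unit)
```

Note that a unit spanning multiple FSVMs triggers recovery on each of them, matching the "each FSVM on which at least a portion of the unit is located" language.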
Multi-threaded transaction log for primary and restore/intelligence
A unified system provides primary storage and in-line, analytics-based data protection. Data intelligence and analytics gathered on protected data, together with prior analytics, are stored in discovery points. The disclosed system implements: multi-threaded log writes across primary and restore nodes, with write gathering across file systems; nested directories, such as those used for storing virtual machine files, where every subdirectory has an associated file system for snapshot purposes; and on-demand object cloning with background metadata and data migration.
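The write-gathering idea can be illustrated with a toy log that batches records from multiple file systems before committing them. The class name, batch size, and locking scheme are assumptions for the sketch, not details from the disclosure.

```python
import threading

class GatheringLog:
    """Toy transaction log that gathers writes from many file systems
    into batches, flushed as a unit (illustrative only)."""
    def __init__(self, batch_size=4):
        self.batch_size = batch_size
        self.pending = []    # (fs_id, record) tuples awaiting a flush
        self.flushed = []    # committed batches
        self.lock = threading.Lock()

    def append(self, fs_id, record):
        with self.lock:
            self.pending.append((fs_id, record))
            if len(self.pending) >= self.batch_size:
                self._flush_locked()

    def _flush_locked(self):
        # Commit the gathered batch in one log write
        self.flushed.append(list(self.pending))
        self.pending.clear()

def writer(log, fs_id, n):
    """One writer thread per file system, as in multi-threaded log writes."""
    for i in range(n):
        log.append(fs_id, f"rec-{i}")
```

Gathering amortizes the per-commit cost across records from different file systems, which is the benefit such a log structure targets.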
REPLICATION BETWEEN HETEROGENEOUS STORAGE SYSTEMS
Disclosed herein are systems, methods, and processes to perform replication between heterogeneous storage systems. During a backup operation, a source server records information associated with a backup stream; this information includes instructions, namely an include instruction to include existing data and a write instruction to write new data during a replication operation. When a request to perform the replication operation is received, the information is sent to a target server as part of performing the replication operation.
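The two instruction types can be sketched as a recording pass during backup and a replay pass during replication. The tuple encoding and function names are illustrative assumptions; the patent does not specify this format.

```python
def record_instructions(backup_stream, target_has):
    """During backup, emit an 'include' instruction for blocks the target
    already holds and a 'write' instruction (with payload) for new blocks."""
    instrs = []
    for block_id, data in backup_stream:
        if block_id in target_has:
            instrs.append(("include", block_id))
        else:
            instrs.append(("write", block_id, data))
    return instrs

def replicate(instrs, target):
    """Replay the recorded instructions on the target server's store."""
    for instr in instrs:
        if instr[0] == "write":
            _, block_id, data = instr
            target[block_id] = data
        # 'include': existing target data is reused unchanged
    return target
```

Because include instructions carry no payload, only new data crosses the wire, which is what makes replication between heterogeneous systems practical.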
Mirroring device, control method thereof, and storage medium that maintain difference in remaining writable amounts of data
A mirroring device comprises two storage devices, each with an upper limit on the number of data rewrites. Even with such devices, the mirroring device can improve its fault tolerance while preventing either storage device from reaching the end of its lifetime early. The remaining writable amount of data for each storage device is derived from the total amount of data already written to it. When the difference between the two remaining writable amounts is determined to be less than a predetermined value, the storage devices are controlled so that the difference becomes equal to or greater than the predetermined value.
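One plausible reading of the balancing rule is sketched below: when the gap in remaining writable amounts drops below the threshold, extra (non-mirrored) wear is directed to one device so the gap widens again and the devices do not fail simultaneously. The function names, the choice of which device absorbs the wear, and the limit values are assumptions, not the patent's stated control method.

```python
def remaining(limit, total_written):
    """Remaining writable amount, derived from the total already written."""
    return max(limit - total_written, 0)

def extra_wear_target(limit, written_a, written_b, threshold):
    """Return which device ('A' or 'B') should absorb additional writes to
    restore the gap, or None if the gap is already wide enough."""
    rem_a = remaining(limit, written_a)
    rem_b = remaining(limit, written_b)
    if abs(rem_a - rem_b) >= threshold:
        return None
    # Assumption: wearing the device with less headroom widens the gap fastest
    return "A" if rem_a <= rem_b else "B"
```

Keeping the gap at or above the threshold staggers the two devices' end-of-life points, which is the stated fault-tolerance goal.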
GRANULAR CONSISTENCY GROUP REPLICATION
One or more techniques and/or computing devices are provided for granular replication for data protection. For example, a first storage controller may host a first volume. A consistency group, comprising a subset of the files, logical unit numbers, and/or other data of the first volume, is defined through a consistency group configuration. A baseline transfer, using a baseline snapshot of the first volume, creates a replicated consistency group within a second volume hosted by a second storage controller. In this way, an arbitrary level of granularity is used to synchronize/replicate a subset of the first volume to the second volume. If a synchronous replication relationship is specified, then one or more incremental transfers are performed and a synchronous replication engine is implemented. If an asynchronous replication relationship is specified, then snapshots are used to identify delta data of the consistency group for updating the replicated consistency group.
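The baseline transfer and the snapshot-diff update for the asynchronous case can be sketched with a volume modeled as a simple path-to-data mapping. The dict-based "volume", the function names, and the delete handling are illustrative assumptions.

```python
def snapshot(volume):
    """Point-in-time copy of the volume (toy model)."""
    return dict(volume)

def baseline_transfer(volume, group):
    """Create the replicated consistency group from a baseline snapshot,
    copying only the files that belong to the group."""
    snap = snapshot(volume)
    return {path: data for path, data in snap.items() if path in group}

def async_update(prev_snap, volume, group, replica):
    """Asynchronous mode: diff two snapshots to find the group's delta
    and apply it to the replicated consistency group."""
    cur = snapshot(volume)
    for path in group:
        if cur.get(path) != prev_snap.get(path):
            if path in cur:
                replica[path] = cur[path]
            else:
                replica.pop(path, None)   # file was deleted at the source
    return cur   # becomes prev_snap for the next update cycle
```

Because the group is just a set of paths, any subset of the volume can be replicated, which is the "arbitrary level of granularity" the abstract describes.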
DAISY-CHAIN STORAGE SYNCHRONIZATION SYSTEM AND METHOD
A daisy-chain storage synchronization (DSS) system and method is disclosed that permits a daisy-chain of interconnected pass-thru disk drive controllers (PTDDCs), each connected to a SATA local disk drive (LDD) disk storage element (DSE), to support state synchronization within the PTDDCs in the daisy-chain. The PTDDCs within the daisy-chain are configured to individually maintain drive state information (DSI) relating to the LDD as well as chain state information (CSI) relating to the individual PTDDCs within the daisy-chain. This state information may be modified on receipt of out-of-band signaling (OBS) from other PTDDC elements up the daisy-chain as well as from other PTDDC elements down the daisy-chain. CSI is determined in part by conventional SATA OBS state register protocols, as modified by internal state registers (ISRs) in each individual PTDDC daisy-chain element, so as to make the DSS transparent to existing SATA OBS single-disk standard hardware command protocols.
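The propagation of chain state along the daisy-chain can be modeled as messages forwarded node to node in one direction. The `PTDDC` class, the message shape, and the direction encoding are assumptions for the sketch; they do not model the SATA register protocols themselves.

```python
class PTDDC:
    """Toy model of a pass-thru disk drive controller in the chain."""
    def __init__(self, ident):
        self.ident = ident
        self.dsi = {"drive": ident, "online": True}   # drive state info
        self.csi = {}                                  # chain state info
        self.upstream = None
        self.downstream = None

    def on_obs(self, source, state, direction):
        """Record the sender's state in local CSI, then forward the OBS
        message to the next element in the same direction."""
        self.csi[source] = state
        nxt = self.downstream if direction == "down" else self.upstream
        if nxt is not None:
            nxt.on_obs(source, state, direction)

def chain(nodes):
    """Link the controllers into a daisy-chain."""
    for a, b in zip(nodes, nodes[1:]):
        a.downstream, b.upstream = b, a
    return nodes
```

After a message traverses the chain, every element downstream (or upstream) of the sender holds a consistent CSI entry for it, which is the synchronization property the disclosure targets.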
Selective access to executable memory
In an embodiment, a data processing method comprises, in a computer executing a supervisor program: the supervisor program establishing different memory access permissions, comprising any combination of read, write, and execute permissions, for one or more different regions of memory of a first domain; receiving a request from a process to execute a particular memory page of the regions of memory, the particular memory page having a memory access permission set to read-writeable or read-only; throwing an execute fault for the particular memory page; performing one or more responsive actions to restore execution access or content of the particular memory page; and, after performing the one or more responsive actions, setting the memory access permission to execute only.
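The fault-then-restore sequence can be sketched with the supervisor and page table modeled as plain objects. This is an illustration of the control flow only; the class names and the single-character permission encoding are assumptions.

```python
class ExecuteFault(Exception):
    """Raised when a page without execute permission is executed."""

class Supervisor:
    def __init__(self):
        self.perms = {}   # page -> set of permissions, e.g. {"r", "w"}

    def set_perm(self, page, perms):
        self.perms[page] = set(perms)

    def execute(self, page):
        """A process requests execution of a page; fault if 'x' is absent."""
        if "x" not in self.perms.get(page, set()):
            raise ExecuteFault(page)

    def handle_execute_fault(self, page):
        # Responsive actions would verify/restore the page's content here;
        # afterwards the mapping is flipped to execute-only.
        self.set_perm(page, {"x"})
```

Flipping the page to execute-only after the responsive actions means it can no longer be read or written, closing the window in which modified code could run unchecked.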
Data storage with virtual appliances
A data storage system has at least two universal nodes, each having CPU resources, memory resources, network interface resources, and a storage virtualizer. A system controller communicates with all of the nodes and allocates to the storage virtualizer in each universal node a number of storage provider resources that the virtualizer manages. The system controller maintains a map of the dependencies of virtual appliances on storage providers, and each storage virtualizer provides storage to its dependent virtual appliances either locally or through a network protocol (N_IOC, S_IOC) to another universal node. The storage virtualizer manages storage providers, is tolerant of fault conditions, and can migrate from any one universal node to any other universal node.
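The controller's dependency map and the local-versus-network decision can be sketched as follows. Node names, the `path` return values, and the method names are illustrative assumptions, not the patent's interfaces.

```python
class Controller:
    """Toy system controller tracking where providers live and which
    virtual appliances depend on them."""
    def __init__(self):
        self.provider_node = {}   # storage provider -> universal node
        self.appliance_dep = {}   # virtual appliance -> storage provider

    def place(self, provider, node):
        self.provider_node[provider] = node

    def bind(self, appliance, provider):
        self.appliance_dep[appliance] = provider

    def path(self, appliance, requesting_node):
        """Serve locally when the provider lives on the requester's node,
        otherwise via a network protocol to the owning node."""
        node = self.provider_node[self.appliance_dep[appliance]]
        return "local" if node == requesting_node else f"network:{node}"

    def migrate(self, provider, new_node):
        # Storage virtualizer migration: the map is the single source of truth
        self.provider_node[provider] = new_node
```

Because access routing is derived from the map at call time, a migration changes the answer for every dependent appliance without reconfiguring them individually.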
Reporting using data obtained during backup of primary storage
A data storage system can scan one or more information stores of primary storage and analyze the metadata of the files stored there to identify multiple, possibly relevant, secondary copy operations that can be performed on the files. The system can also collect primary storage usage information for each file during the scan and use that information to generate reports on primary storage usage.
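A single metadata scan serving both purposes might look like the sketch below. The selection rules, metadata fields, and report shape are invented for illustration; the disclosure does not specify them.

```python
def scan(files):
    """One pass over file metadata: propose secondary copy operations and
    accumulate usage data for reporting.

    files: list of dicts with 'path', 'size', 'age_days', 'type' (assumed fields).
    """
    operations, usage = [], {}
    for f in files:
        usage[f["path"]] = f["size"]
        # Hypothetical policy rules mapping metadata to copy operations
        if f["age_days"] > 365:
            operations.append(("archive", f["path"]))
        elif f["type"] == "db":
            operations.append(("snapshot", f["path"]))
        else:
            operations.append(("backup", f["path"]))
    report = {"total_bytes": sum(usage.values()), "files": len(usage)}
    return operations, report
```

Reusing the backup-time scan for reporting avoids a second pass over primary storage, which is the efficiency the abstract points at.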
Layered keys for storage volumes
Techniques are described for managing data storage. Users may create data storage volumes that may each be stored by a data storage service. In an embodiment, chunks that differ between related volumes may be encrypted with different encryption keys. One or more of the encryption keys may be deleted in response to a request to delete a volume or a data chunk, rendering the volume and/or the data chunk unusable. Other techniques are described in the drawings, claims, and text of the disclosure.
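The crypto-shredding effect of per-chunk keys can be sketched with a toy store in which XOR stands in for real encryption (an assumption purely for brevity; a real system would use an authenticated cipher). Deleting a key renders only the chunks under that key unreadable.

```python
import os

class KeyedStore:
    """Toy chunk store where chunks that differ between related volumes
    are encrypted under different keys (illustrative only)."""
    def __init__(self):
        self.keys = {}     # key_id -> key bytes
        self.chunks = {}   # chunk_id -> (key_id, ciphertext)

    def put(self, chunk_id, data, key_id):
        key = self.keys.setdefault(key_id, os.urandom(len(data)))
        ct = bytes(a ^ b for a, b in zip(data, key))
        self.chunks[chunk_id] = (key_id, ct)

    def get(self, chunk_id):
        key_id, ct = self.chunks[chunk_id]
        if key_id not in self.keys:
            raise KeyError("key deleted; chunk unrecoverable")
        key = self.keys[key_id]
        return bytes(a ^ b for a, b in zip(ct, key))

    def delete_key(self, key_id):
        # Crypto-shreds every chunk encrypted under this key
        self.keys.pop(key_id, None)
```

Giving a cloned volume's differing chunks their own key means deleting the clone is a single key deletion, with the shared base chunks left intact.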