Patent classifications
G06F11/2058
System and method for rapidly transferring and recovering large data sets
Systems for rapidly transferring and, as needed, recovering large data sets, and methods for making and using the same. In various embodiments, the system advantageously allows data to be transferred in larger sizes, permits data to be easily recovered from multiple regions, and reduces the impact of latency, among other things.
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR PROCESSING DATA
Embodiments that process data are described. For instance, a method includes receiving, at a first disk management device in a storage system, an access request for accessing data in a plurality of disks associated with the storage system. The method further includes determining whether a first access engine for accessing the plurality of disks in the first disk management device is available. The method further includes redirecting the access request to a second disk management device in the storage system if it is determined that the first access engine is unavailable, wherein a second access engine in the second disk management device is available to access the plurality of disks. By means of this method, effective data access can be performed when an access engine of a disk management device is unavailable, thereby providing more stable access and improving the user experience.
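A minimal sketch of the redirection logic described in this abstract, assuming hypothetical AccessEngine and DiskManagementDevice classes (the names, fields, and read interface are illustrative, not taken from the patent):

class AccessEngine:
    def __init__(self, available=True):
        self.available = available

    def read(self, disk_id, block):
        # Stand-in for an actual read across the plurality of disks.
        return f"data from disk {disk_id}, block {block}"


class DiskManagementDevice:
    def __init__(self, name, engine, peer=None):
        self.name = name
        self.engine = engine
        self.peer = peer  # the second disk management device, if any

    def handle_access(self, disk_id, block):
        # Serve the request locally while the first access engine is available.
        if self.engine.available:
            return self.engine.read(disk_id, block)
        # Otherwise redirect the request to the peer device whose access
        # engine can still reach the disks.
        if self.peer is not None and self.peer.engine.available:
            return self.peer.engine.read(disk_id, block)
        raise RuntimeError("no access engine is available")


# First engine is down, so the request is redirected to the second device.
second = DiskManagementDevice("dm2", AccessEngine(available=True))
first = DiskManagementDevice("dm1", AccessEngine(available=False), peer=second)
print(first.handle_access(disk_id=3, block=42))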
Transaction-based storage system and method that uses variable sized objects to store data
Aspects of the innovations herein are consistent with a storage system for storing variable sized objects. According to certain implementations, the storage system may be a transaction-based system that uses variable sized objects to store data, and/or may be implemented using data stores, such as arrays of disks arranged in ranks. In some exemplary implementations, each rank may include multiple stripes, each stripe may be read and written as a convenient unit for maximum performance, and/or a rank manager may be provided to dynamically configure the ranks. In certain implementations, the storage system may include a stripe space table that contains entries describing the amount of space used in each stripe. Further, an object map may provide entries for each object in the storage system describing the location, length, and/or version of the object.
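A minimal data-structure sketch of the stripe space table and object map described above; the field names, stripe size, and first-fit placement are assumptions for illustration, not details from the patent:

from dataclasses import dataclass

STRIPE_SIZE = 1 << 20  # assumed 1 MiB stripe, read/written as one unit


@dataclass
class StripeSpaceEntry:
    used_bytes: int = 0  # amount of space used in this stripe


@dataclass
class ObjectMapEntry:
    stripe: int   # which stripe holds the object
    offset: int   # byte offset within the stripe
    length: int   # variable object length
    version: int  # object version


class VariableObjectStore:
    def __init__(self, num_stripes):
        self.stripe_space = [StripeSpaceEntry() for _ in range(num_stripes)]
        self.object_map = {}  # object id -> ObjectMapEntry

    def put(self, obj_id, data, version=1):
        # First-fit placement: choose a stripe with enough free space.
        for i, entry in enumerate(self.stripe_space):
            if STRIPE_SIZE - entry.used_bytes >= len(data):
                self.object_map[obj_id] = ObjectMapEntry(
                    stripe=i, offset=entry.used_bytes,
                    length=len(data), version=version)
                entry.used_bytes += len(data)
                return
        raise RuntimeError("no stripe has enough free space")


store = VariableObjectStore(num_stripes=4)
store.put("obj-1", b"x" * 4096)
print(store.object_map["obj-1"])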
Persistent memory file system reconciliation
Techniques are provided for persistent memory file system reconciliation. As part of the persistent memory file system reconciliation, high-level file system metadata associated with a persistent memory file system of persistent memory is reconciled. The persistent memory file system is inaccessible to clients until reconciliation of the high-level file system metadata has completed. A first scanner is executed to traverse pages of the persistent memory in order to fix local inconsistencies associated with the pages. A local inconsistency of a first set of metadata or data of a page is fixed using a second set of metadata or data of the page. The first scanner is executed asynchronously in parallel with processing client I/O directed to the persistent memory file system.
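A simplified sketch of the page-scanner idea, assuming each page carries two redundant copies of its metadata plus per-copy checksums (the on-media layout is an assumption for illustration; the abstract does not specify it):

import zlib
from dataclasses import dataclass


@dataclass
class Page:
    primary_meta: bytes    # first set of metadata/data of the page
    secondary_meta: bytes  # second set of metadata/data of the page
    primary_crc: int
    secondary_crc: int


def scan_and_fix(pages):
    """First-scanner pass: repair local inconsistencies page by page."""
    for page in pages:
        primary_ok = zlib.crc32(page.primary_meta) == page.primary_crc
        secondary_ok = zlib.crc32(page.secondary_meta) == page.secondary_crc
        if primary_ok and not secondary_ok:
            # Fix the second set using the first set of the same page.
            page.secondary_meta = page.primary_meta
            page.secondary_crc = page.primary_crc
        elif secondary_ok and not primary_ok:
            # Fix the first set using the second set of the same page.
            page.primary_meta = page.secondary_meta
            page.primary_crc = page.secondary_crc


good = b"inode-table-v2"
pages = [Page(good, b"corrupt", zlib.crc32(good), zlib.crc32(good))]
scan_and_fix(pages)
print(pages[0].secondary_meta)  # restored from the primary copy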
Mirroring data to survive storage device failures
Ensuring resiliency to storage device failures in a storage system, including: determining a number of storage device failures within a particular write group that are to be tolerated by the storage system; for a plurality of datasets stored within the storage system, writing each dataset to at least a predetermined number of storage devices within the particular write group, wherein the predetermined number of storage devices is greater than the number of storage device failures within the particular write group that are to be tolerated by the storage system; and responsive to recovering from a system interruption: determining, for each dataset, a number of readable storage devices that contain a copy of the dataset; and, if the number of readable storage devices that contain a copy of the dataset is not greater than the number of failures that are to be tolerated, writing the dataset to one or more additional storage devices.
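The arithmetic behind this approach is simple: to tolerate f device failures in a write group, each dataset must be written to at least f + 1 devices, and after an interruption any dataset with f or fewer readable copies is written to additional devices. A small sketch under those assumptions (the function names and device-id representation are illustrative):

def copies_needed(failures_tolerated):
    # To survive f failures, at least f + 1 devices must hold a copy.
    return failures_tolerated + 1


def recover(dataset_copies, readable_devices, failures_tolerated, spare_devices):
    """Re-mirror a dataset after a system interruption if too few copies survive.

    dataset_copies: device ids that held the dataset before the interruption
    readable_devices: device ids that are still readable
    """
    surviving = dataset_copies & readable_devices
    if len(surviving) > failures_tolerated:
        return surviving  # enough readable copies remain
    # Not greater than the tolerated failure count: write additional copies.
    missing = copies_needed(failures_tolerated) - len(surviving)
    new_targets = set(sorted(spare_devices - surviving)[:missing])
    return surviving | new_targets


# Tolerate 1 failure: dataset starts on 2 devices, one of which fails.
print(recover({"d1", "d2"}, {"d2", "d3", "d4"}, 1, {"d3", "d4"}))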
Redirecting access requests between access engines of respective disk management devices
Embodiments that process data are described. For instance, a method includes receiving, at a first disk management device in a storage system, an access request for accessing data in a plurality of disks associated with the storage system. The method further includes determining whether a first access engine for accessing the plurality of disks in the first disk management device is available. The method further includes redirecting the access request to a second disk management device in the storage system if it is determined that the first access engine is unavailable, wherein a second access engine in the second disk management device is available to access the plurality of disks. By means of this method, effective data access can be performed when an access engine of a disk management device is unavailable, thereby providing more stable access and improving the user experience.
METHOD AND SYSTEMS FOR PROVIDING BACK UP FOR A DATABASE
There is provided a database system and a method for providing a backup for a database. A database is stored at a primary server. A snapshot of the database is communicated from the primary server to a secondary server, and the snapshot of the database is stored at the secondary server. At least one request to make a change to the database is received at a publisher server, and the publisher server transmits the at least one request to each of a message queue associated with the primary server and a message queue associated with the secondary server. The database at the primary server is updated by processing each requested change.
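A minimal sketch of the publish path described above, with simple in-memory queues standing in for the message queues associated with the primary and secondary servers (the class, queue, and change representation are assumptions for illustration):

from queue import Queue


class PublisherServer:
    def __init__(self, primary_queue, secondary_queue):
        self.queues = (primary_queue, secondary_queue)

    def publish_change(self, change):
        # Each requested change is transmitted to both message queues,
        # so the primary and the secondary apply the same updates.
        for q in self.queues:
            q.put(change)


def apply_changes(db, q):
    # The database is updated by processing each requested change in order.
    while not q.empty():
        key, value = q.get()
        db[key] = value


primary_q, secondary_q = Queue(), Queue()
publisher = PublisherServer(primary_q, secondary_q)
publisher.publish_change(("account:42", {"balance": 100}))

primary_db = {}    # in practice this starts from the stored database
secondary_db = {}  # in practice this starts from the snapshot of the database
apply_changes(primary_db, primary_q)
apply_changes(secondary_db, secondary_q)
print(primary_db == secondary_db)  # True: both replicas converge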
Virtualized file server user views
In one embodiment, a system for managing a virtualization environment includes a plurality of host machines, wherein each of the host machines comprises a hypervisor, one or more user virtual machines (user VMs), and a virtual machine controller; one or more virtual disks comprising a plurality of storage devices; and a virtualized file server (VFS) comprising a plurality of file server virtual machines (FSVMs), wherein each of the FSVMs runs on one of the host machines. The VFS may be configured to receive a request for storage system information from a user and to generate and send a response to the request, wherein the response is customized according to configuration information of the VFS that is specific to the user. The storage system information requested may include a total size of storage available to the user, and the user may have an associated storage quota limit.
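A small sketch of a user-customized response of the kind described above, assuming per-user quota limits are kept in the VFS configuration and that the reported total is capped by the user's quota (both of these details are assumptions; the abstract only states that the response is customized per user):

def storage_info_for(user, vfs_config, total_capacity):
    # The response is customized from VFS configuration specific to the user:
    # the total size reported is limited by the user's storage quota, if any.
    quota = vfs_config["quotas"].get(user)
    visible_total = min(total_capacity, quota) if quota else total_capacity
    return {"user": user, "total_size": visible_total, "quota_limit": quota}


config = {"quotas": {"alice": 500 * 2**30}}  # hypothetical 500 GiB quota
print(storage_info_for("alice", config, total_capacity=10 * 2**40))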
Failure abatement approach for a failed storage unit
A method for execution by a vault management device of a storage network includes determining a failure impact level to vaults of the storage network based on a failed storage unit within the vaults, where the vaults include a first vault that is associated with a first set of storage units and a first decode threshold number, and a second vault that is associated with a second set of storage units and a second decode threshold number, and where the failure impact level is based on the number of non-failed storage units within each of the vaults. The method continues with determining a failure abatement approach based on the failure impact level. The method continues with facilitating the failure abatement approach.
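A sketch of how a failure impact level might be derived, assuming impact is judged by comparing each affected vault's non-failed storage units to its decode threshold number (the abstract gives no exact formula, so the level names and scoring below are illustrative):

def failure_impact_level(vaults, failed_unit):
    """Return the worst per-vault impact caused by one failed storage unit.

    vaults: mapping of vault name -> (set of storage units, decode threshold)
    """
    order = {"none": 0, "degraded": 1, "at-threshold": 2, "data-loss": 3}
    worst = "none"
    for name, (units, decode_threshold) in vaults.items():
        if failed_unit not in units:
            continue
        non_failed = len(units) - 1  # impact is based on non-failed units
        if non_failed > decode_threshold:
            level = "degraded"       # rebuild can proceed at leisure
        elif non_failed == decode_threshold:
            level = "at-threshold"   # urgent abatement needed
        else:
            level = "data-loss"      # below the decode threshold
        if order[level] > order[worst]:
            worst = level
    return worst


vaults = {
    "vault-a": ({"u1", "u2", "u3", "u4", "u5"}, 3),
    "vault-b": ({"u4", "u5", "u6", "u7"}, 3),
}
print(failure_impact_level(vaults, failed_unit="u4"))  # vault-b is at-threshold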
FAULT TOLERANCE USING SHARED MEMORY ARCHITECTURE
Examples provide a fault tolerant virtual machine (VM) using pooled memory. When fault tolerance is enabled for a VM, a primary VM is created on a first host in a server cluster. A secondary VM is created on a second host in the server cluster. Memory for the VMs is maintained on a shared partition in pooled memory. The pooled memory is accessible to all hosts in the cluster. The primary VM has read and write access to the VM memory in the pooled memory. The secondary VM has read-only access to the VM memory. If the second host fails, a new secondary VM is created on another host in the cluster. If the first host fails, the secondary VM becomes the new primary VM and a new secondary VM is created on another host in the cluster.
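A compact sketch of the failover rules described above, with hypothetical host names and a dictionary standing in for the shared pooled-memory partition (all names are illustrative):

class FaultTolerantVM:
    def __init__(self, cluster_hosts, pooled_memory):
        self.hosts = list(cluster_hosts)
        self.memory = pooled_memory         # shared partition, visible to all hosts
        self.primary_host = self.hosts[0]   # primary VM: read/write access
        self.secondary_host = self.hosts[1] # secondary VM: read-only access

    def _spare_host(self):
        used = {self.primary_host, self.secondary_host}
        return next(h for h in self.hosts if h not in used)

    def on_host_failure(self, failed_host):
        self.hosts.remove(failed_host)      # failed host leaves the cluster
        if failed_host == self.secondary_host:
            # Second host fails: recreate the secondary VM on another host.
            self.secondary_host = self._spare_host()
        elif failed_host == self.primary_host:
            # First host fails: promote the secondary to primary, then
            # create a new secondary VM on another host in the cluster.
            self.primary_host = self.secondary_host
            self.secondary_host = self._spare_host()


ft = FaultTolerantVM(["host1", "host2", "host3"], pooled_memory={})
ft.on_host_failure("host1")
print(ft.primary_host, ft.secondary_host)  # host2 host3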