Patent classifications
G06F2201/845
Regaining redundancy in distributed raid arrays using unallocated capacity
A method and system are provided for using spare capacity to regain critical redundancy in storage arrays. The method may include monitoring a Redundant Array of Independent Disks (RAID) array to determine whether one or more redundancy units are at a critical level; a redundancy unit is at a critical level when an additional drive failure would result in loss of data from the redundancy unit. The method may further include, in response to determining that a particular redundancy unit is critical, identifying available regions in the RAID array that are not allocated to user data. The method may further include determining an available region for the particular redundancy unit, where the available region is on a drive of the RAID array that does not contain data of the particular redundancy unit, and storing a critical stripe in the available region.
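As a rough illustration of the placement constraint described in this abstract, the sketch below rebuilds a critical stripe into an unallocated region on a drive that holds none of the stripe's existing data. The Drive, Stripe, and find_region_for_critical_stripe names are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Drive:
    """Hypothetical drive model with unallocated region indices."""
    drive_id: int
    free_regions: set[int] = field(default_factory=set)

@dataclass
class Stripe:
    """A redundancy unit: the drives holding its data/parity."""
    stripe_id: int
    member_drives: set[int]
    failed_members: int = 0

    def is_critical(self, tolerated_failures: int = 1) -> bool:
        # Critical when one more drive failure would lose data.
        return self.failed_members >= tolerated_failures

def find_region_for_critical_stripe(stripe: Stripe, drives: list[Drive]):
    """Pick a spare region on a drive that holds none of the stripe's
    data, so the rebuilt unit regains an independent failure domain."""
    for drive in drives:
        if drive.drive_id in stripe.member_drives:
            continue  # placing here would not add redundancy
        if drive.free_regions:
            region = drive.free_regions.pop()  # claim the spare region
            return drive.drive_id, region
    return None  # no eligible spare capacity

# Example: a stripe on drives {0, 1, 2} with one failed member is
# critical; the rebuilt copy lands on drive 3, which holds none of it.
drives = [Drive(i, free_regions={0, 1}) for i in range(4)]
stripe = Stripe(stripe_id=7, member_drives={0, 1, 2}, failed_members=1)
if stripe.is_critical():
    print(find_region_for_critical_stripe(stripe, drives))  # -> (3, ...)
```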
STORAGE CLUSTER
A method for managing processing power in a storage system is provided. The method includes providing a plurality of blades, each of a first subset having a storage node and storage memory, and each of a second, differing subset having a compute-only node. The method includes distributing authorities across the plurality of blades, to a plurality of nodes including at least one compute-only node, wherein each authority has ownership of a range of user data.
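A minimal sketch of the distribution step, assuming a hash-based mapping from user-data keys to authorities and a round-robin placement of authorities across blades; the node names, NUM_AUTHORITIES, and both function names are illustrative assumptions, not the patent's method.

```python
import hashlib

# Hypothetical blade inventory: storage blades and a compute-only blade.
nodes = [
    {"name": "blade0", "kind": "storage"},
    {"name": "blade1", "kind": "storage"},
    {"name": "blade2", "kind": "compute-only"},
]

NUM_AUTHORITIES = 8  # each authority owns a slice of the user-data key space

def authority_for_key(key: bytes) -> int:
    """Map a user-data key to the authority that owns its range."""
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_AUTHORITIES

def place_authorities(nodes, num_authorities):
    """Round-robin authorities across all blades, so compute-only
    blades also host authorities (ownership, not storage)."""
    return {a: nodes[a % len(nodes)]["name"] for a in range(num_authorities)}

placement = place_authorities(nodes, NUM_AUTHORITIES)
key = b"volume42/offset0"
a = authority_for_key(key)
print(f"key {key!r} -> authority {a} on {placement[a]}")
```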
STORAGE CLUSTER UTILIZING DIFFERING LOAD BALANCERS
A storage system is provided. The storage system includes a first storage cluster having a first plurality of storage nodes coupled together, and a second storage cluster having a second plurality of storage nodes coupled together. The system includes an interconnect coupling the first storage cluster and the second storage cluster, and a first pathway coupling the interconnect to each storage cluster. The system includes a second pathway coupling at least one fabric module within a chassis to each blade within the chassis.
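The topology can be sketched as plain data: the interconnect couples the two clusters (the first pathway), and within each chassis every fabric module links to every blade (the second pathway). The Chassis and System classes and all names below are hypothetical modeling choices, not claim language.

```python
from dataclasses import dataclass

@dataclass
class Chassis:
    """A chassis holds blades and fabric modules; the 'second pathway'
    links each fabric module to every blade in the chassis."""
    blades: list[str]
    fabric_modules: list[str]

    def second_pathway(self):
        return [(fm, b) for fm in self.fabric_modules for b in self.blades]

@dataclass
class System:
    cluster_a: Chassis
    cluster_b: Chassis
    interconnect: str = "interconnect0"

    def first_pathway(self):
        """The 'first pathway' couples the interconnect to each cluster."""
        return [(self.interconnect, "cluster_a"), (self.interconnect, "cluster_b")]

system = System(
    cluster_a=Chassis(blades=["a-blade0", "a-blade1"], fabric_modules=["a-fm0"]),
    cluster_b=Chassis(blades=["b-blade0"], fabric_modules=["b-fm0"]),
)
print(system.first_pathway())
print(system.cluster_a.second_pathway())
```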
Distribution of resources for a storage system
A method for managing processing power in a storage system is provided. The method includes providing a plurality of blades, each of a first subset having a storage node and storage memory, and each of a second, differing subset having a compute-only node. The method includes distributing authorities across the plurality of blades, to a plurality of nodes including at least one compute-only node, wherein each authority has ownership of a range of user data.
Scale out storage platform having active failover
A storage system that has blades and fabric modules connects to a customer legacy network that has a first, active switch and a second, passive switch. A first link aggregation group (LAG) is configured active and includes ports of the first, active switch that connect via links to the first and second fabric modules of the storage system. A second LAG is configured passive and includes ports of the second, passive switch that connect via links to the first and second fabric modules. A multi-chassis link aggregation group (MLAG, MCLAG or MC-LAG) is configured and includes ports of the first and second fabric modules that connect via links to the first and second switches.
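The wiring can be sketched as a small port model: the active switch's LAG carries traffic to both fabric modules, the passive switch's LAG takes over on failure, and the MLAG on the storage side spans both fabric modules. The port names and the forwarding_ports helper below are assumptions for illustration; no vendor switch configuration is implied.

```python
# Hypothetical port/LAG model of the described failover wiring.
lag1 = {"switch": "sw1", "mode": "active",  "ports": ["sw1:p1->fm1", "sw1:p2->fm2"]}
lag2 = {"switch": "sw2", "mode": "passive", "ports": ["sw2:p1->fm1", "sw2:p2->fm2"]}

# The MLAG spans both fabric modules and faces both customer switches.
mlag = {"members": ["fm1:p1->sw1", "fm1:p2->sw2", "fm2:p1->sw1", "fm2:p2->sw2"]}

def forwarding_ports(lags, failed_switches=()):
    """Traffic uses the active LAG; on active-switch failure the passive
    LAG takes over, keeping both fabric modules reachable."""
    for lag in lags:
        if lag["switch"] in failed_switches:
            continue
        active_lags = [g for g in lags if g["mode"] == "active"]
        if lag["mode"] == "active" or all(
            g["switch"] in failed_switches for g in active_lags
        ):
            return lag["ports"]
    return []

print(forwarding_ports([lag1, lag2]))                           # path via sw1
print(forwarding_ports([lag1, lag2], failed_switches=("sw1",)))  # failover to sw2
```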
Storage cluster
A plurality of storage nodes in a single chassis is provided. The plurality of storage nodes in the single chassis is configured to communicate together as a storage cluster. Each of the plurality of storage nodes includes nonvolatile solid-state memory for user data storage. The plurality of storage nodes is configured to distribute the user data and metadata associated with the user data throughout the plurality of storage nodes such that the plurality of storage nodes maintains the ability to read the user data, using erasure coding, despite a loss of two of the plurality of storage nodes. A plurality of compute nodes is included in the single chassis, each of which is configured to communicate with the plurality of storage nodes. A method for accessing user data in a plurality of storage nodes having nonvolatile solid-state memory is also provided.
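The two-node-loss property follows from placing the shards of each erasure-coded stripe on distinct nodes: with k data shards and 2 parity shards of an MDS code (e.g., Reed-Solomon), any k surviving shards suffice to reconstruct the data. The sketch below checks only this placement property; the codec itself is omitted, and K_DATA, M_PARITY, and place_stripe are hypothetical names.

```python
from itertools import combinations

K_DATA, M_PARITY = 4, 2  # stripe: 4 data shards + 2 parity shards

def place_stripe(nodes):
    """Place each of the k+m shards on a distinct storage node so that
    node failures are independent shard failures (hypothetical layout)."""
    shards = K_DATA + M_PARITY
    if len(nodes) < shards:
        raise ValueError("need at least k+m nodes for one shard per node")
    return {f"shard{i}": nodes[i] for i in range(shards)}

nodes = [f"node{i}" for i in range(6)]
layout = place_stripe(nodes)

# Any k of the k+m shards suffice to reconstruct the stripe (MDS property),
# so the data stays readable after any two node failures.
for lost in combinations(nodes, 2):
    surviving = [s for s, n in layout.items() if n not in lost]
    assert len(surviving) >= K_DATA, f"data lost with nodes {lost} down"
print("stripe readable after any two node failures")
```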
DISTRIBUTED COMPUTING UTILIZING A RECOVERY SITE
A recovery site is configured to process a task using a copy of an original file associated with the task. The original file is stored on a production site, and a copy of the original file is stored on a recovery site. The task is determined to be suitable for processing on the recovery site. The original file is determined to match the copy of the original file based on a modification time associated with the original file being earlier than a copy time associated with the copy of the original file. The task is processed on the recovery site using the copy of the original file, and at least one result file is output.
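The freshness check reduces to a timestamp comparison: the copy matches the original only if the original's modification time precedes the copy time. In the sketch below the copy time is taken from the copy's own mtime, which is an assumption (a real replication job might stamp it elsewhere); both function names are hypothetical.

```python
import os

def copy_is_current(original_path: str, copy_path: str) -> bool:
    """The copy matches the original if the original was last modified
    before the copy was taken (assumed here: the copy's own mtime
    records when the replication job wrote it)."""
    original_mtime = os.path.getmtime(original_path)
    copy_time = os.path.getmtime(copy_path)
    return original_mtime < copy_time

def run_on_recovery_site(task, original_path, copy_path):
    """Offload the task to the recovery site only when its input copy
    is known to match the production original."""
    if not copy_is_current(original_path, copy_path):
        raise RuntimeError("copy is stale; process on the production site")
    return task(copy_path)  # at least one result file would be written here
```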
Management system of server system including a plurality of servers
For each of N active servers (N being an integer equal to or larger than 2), a management system performs a full test on at least one of M standby servers (M being an integer equal to or larger than 2), determining whether a failover is executable by actually performing a failover from the active server to the standby server. On at least one standby server different from those on which the full test is performed, the management system performs a simplified test, determining whether the failover is executable without performing the failover from the active server to the standby server. The number of standby servers on which the simplified test is performed is larger than the number of standby servers on which the full test is performed.
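A minimal sketch of the scheduling constraint: per active server, a few standbys get the expensive full test (an actual failover) and the rest get the cheaper simplified test (checks only), with simplified tests always outnumbering full tests. The plan_failover_tests name, its parameters, and the server names are illustrative assumptions.

```python
def plan_failover_tests(active_servers, standby_servers, full_per_active=1):
    """For each active server, assign the full test (actual failover) to a
    few standbys and the simplified test (no failover) to the rest; the
    simplified tests must outnumber the full tests."""
    plan = {}
    for active in active_servers:
        full = standby_servers[:full_per_active]
        simplified = standby_servers[full_per_active:]
        assert len(simplified) > len(full), "need more simplified than full tests"
        plan[active] = {"full": full, "simplified": simplified}
    return plan

plan = plan_failover_tests(
    active_servers=["act1", "act2"],
    standby_servers=["sby1", "sby2", "sby3"],
)
print(plan["act1"])  # {'full': ['sby1'], 'simplified': ['sby2', 'sby3']}
```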
MULTIPROCESSOR SYSTEM AND VEHICLE CONTROL SYSTEM
A multiprocessor system monitors a processor element while keeping costs down. The multiprocessor system 1 includes a bus mechanism with a storage unit 6 configured to store bus access information while a first processor element 2 executes a process to be monitored, a requesting unit 7 configured to request that a second processor element 3 execute a monitoring process after the first processor element 2 has completed the process to be monitored, and a comparing unit 8 configured to compare the bus access information of the first processor element 2 stored in the storage unit 6 with the bus access information produced by the second processor element 3 when it executes the monitoring process. The second processor element 3 executes the monitoring process during its idle time.
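The scheme amounts to trace-and-compare: PE1's bus accesses during the monitored process are logged, PE2 re-runs the same process during idle time, and mismatching traces flag a fault. The sketch below assumes the hypothetical names BusAccess, run_monitored_process, compare_traces, and pe_behaviour; it models the comparing unit in software rather than as the patent's bus hardware.

```python
from collections import namedtuple

BusAccess = namedtuple("BusAccess", ["address", "value", "is_write"])

def run_monitored_process(pe_exec, inputs):
    """Execute the process on a PE, capturing its bus access trace."""
    trace = []
    pe_exec(inputs, trace.append)  # PE reports each BusAccess to the log
    return trace

def compare_traces(reference, monitor):
    """Comparing unit: mismatching traces indicate a fault in one PE."""
    return reference == monitor

def pe_behaviour(inputs, emit):
    # Stand-in for the real workload: write the doubled input to a register.
    emit(BusAccess(address=0x1000, value=inputs * 2, is_write=True))

ref_trace = run_monitored_process(pe_behaviour, 21)  # PE1, monitored run
mon_trace = run_monitored_process(pe_behaviour, 21)  # PE2, idle-time rerun
print("PEs agree:", compare_traces(ref_trace, mon_trace))  # -> True
```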