Patent classifications
G06F3/0629
Distributed RAID rebuild
A technique is disclosed for generating rebuild data of a RAID configuration having one or more failed drives. The RAID configuration includes multiple sets of drives coupled to respective computing nodes, and the computing nodes are coupled together via a network. A lead node directs rebuild activities, communicating with the other node or nodes and directing such node(s) to compute partial rebuild results. The partial rebuild results are based on data of the drives of the RAID configuration coupled to the other node(s). The lead node receives the partial rebuild results over the network and computes complete rebuild data based at least in part on the partial rebuild results.
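The division of labor described above can be sketched as follows for the common XOR-parity (RAID-5-style) case; the function names and the two-node layout are illustrative assumptions, not details from the patent:

```python
from functools import reduce

def partial_rebuild(local_strips):
    """Run on a non-lead node: XOR together the strips of the failed
    stripe that live on this node's drives, yielding a partial result."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                        local_strips))

def complete_rebuild(lead_strips, partial_results):
    """Run on the lead node: fold the lead node's own strips together
    with the partial results received over the network."""
    return partial_rebuild(list(lead_strips) + list(partial_results))

# Example: a 4-drive RAID-5 stripe with one drive lost.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0e"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Node B holds d2 and the parity strip; the lead node holds d0,
# and d1 is the lost drive's strip to be rebuilt.
partial = partial_rebuild([d2, parity])
rebuilt = complete_rebuild([d0], [partial])
assert rebuilt == d1  # d0 ^ d2 ^ parity recovers the lost strip
```

Because XOR is associative, each node can reduce its local strips to a single buffer before sending it, which is what makes the network traffic proportional to the number of nodes rather than the number of drives.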
Key value store using progress verification
Example storage systems and methods provide data storage management using a key data store with progress values. A key data store includes a set of key data entries that each include a key value associated with a storage operation and a timestamp corresponding to a creation time of the key data entry. Storage management processes are executed on the set of key data entries and progress values for the storage management processes are tracked using the timestamps of the key data entries to manage the relative progress of the storage management processes.
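As a rough illustration, the entry-plus-timestamp bookkeeping might look like the following minimal sketch (the class and field names are invented for illustration, and a monotonic counter stands in for wall-clock creation times):

```python
import itertools

class KeyDataStore:
    """Sketch: key entries carry a creation timestamp; each
    storage-management process records a progress value, i.e. the
    timestamp up to which it has already processed entries."""
    def __init__(self):
        self._clock = itertools.count(1)  # stand-in for creation time
        self.entries = []                 # list of (timestamp, key)
        self.progress = {}                # process name -> timestamp

    def put(self, key):
        ts = next(self._clock)
        self.entries.append((ts, key))
        return ts

    def pending(self, process):
        """Entries this process has not yet handled."""
        done = self.progress.get(process, 0)
        return [(ts, k) for ts, k in self.entries if ts > done]

    def advance(self, process, ts):
        self.progress[process] = max(self.progress.get(process, 0), ts)

store = KeyDataStore()
for k in ("obj/a", "obj/b", "obj/c"):
    store.put(k)
store.advance("garbage-collect", 2)      # GC has processed up to ts=2
print(store.pending("garbage-collect"))  # [(3, 'obj/c')]
```

Tracking each process's progress as a timestamp lets several management processes scan the same entry set independently, each resuming from its own high-water mark.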
Efficient management of optimal read levels for flash storage systems
Tuning information associated with a storage device of a plurality of storage devices is received. One or more characteristics associated with the storage device are determined. The tuning information and the one or more characteristics are provided to the plurality of storage devices, wherein providing the tuning information causes a set of the plurality of storage devices to apply the tuning information based on the one or more characteristics.
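A simplified reading of this flow, with invented characteristic names (NAND generation and a program/erase-cycle bucket) standing in for whatever characteristics a real system would use:

```python
def distribute_tuning(devices, source_id):
    """Sketch: tuning information learned on one storage device is
    provided to the fleet together with that device's characteristics;
    each device applies it only if its own characteristics match."""
    source = devices[source_id]
    tuning = source["tuning"]
    chars = {k: source[k] for k in ("nand_generation", "pe_cycle_bucket")}
    applied_to = []
    for dev_id, dev in devices.items():
        if all(dev.get(k) == v for k, v in chars.items()):
            dev["tuning"] = tuning
            applied_to.append(dev_id)
    return applied_to

devices = {
    "d0": {"nand_generation": 5, "pe_cycle_bucket": 2,
           "tuning": {"read_level_offset": -3}},
    "d1": {"nand_generation": 5, "pe_cycle_bucket": 2, "tuning": None},
    "d2": {"nand_generation": 4, "pe_cycle_bucket": 2, "tuning": None},
}
applied = distribute_tuning(devices, "d0")
print(sorted(applied))  # ['d0', 'd1'] -- d2's NAND generation differs
```

The point of gating on characteristics is that read-level tuning discovered on one device is only safe to reuse on devices in a comparable physical state.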
System and method of wear leveling information handling systems of a storage cluster
In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may: receive performance information for a base workload; determine multiple threshold values of multiple storage media layers of each information handling system (IHS) of a storage cluster based at least on the performance information for the base workload and multiple inventory information respectively associated with the multiple storage media layers of each IHS of the storage cluster; receive multiple condition values respectively associated with the multiple storage media layers of an IHS of the storage cluster; determine that a condition value of the multiple condition values associated with a storage media layer of the multiple storage media layers is at or below a threshold value of the multiple threshold values associated with that storage media layer; and reduce a storage workload of a specific IHS of the storage cluster.
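The threshold comparison at the heart of this can be sketched as follows, assuming condition values are remaining-endurance percentages (an assumption; the abstract does not fix the units):

```python
def check_wear(cluster, thresholds):
    """Sketch: compare each IHS's per-layer condition values (here,
    remaining endurance %) against thresholds derived from the base
    workload; flag any IHS with a layer at or below its threshold so
    that IHS's storage workload can be reduced."""
    flagged = []
    for ihs, layers in cluster.items():
        for layer, condition in layers.items():
            if condition <= thresholds[layer]:
                flagged.append(ihs)
                break  # one worn layer is enough to flag this IHS
    return flagged

thresholds = {"nvme-cache": 20.0, "ssd-capacity": 10.0}
cluster = {
    "ihs-1": {"nvme-cache": 35.0, "ssd-capacity": 40.0},
    "ihs-2": {"nvme-cache": 18.0, "ssd-capacity": 55.0},  # cache worn
}
worn = check_wear(cluster, thresholds)
print(worn)  # ['ihs-2']
```

Shifting workload away from the flagged IHS is what levels wear across the cluster rather than within a single system.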
System and method of estimating performance headroom in a storage system
Techniques for estimating performance metrics of standalone or clustered storage systems. The techniques include receiving a request from a storage client for an estimated capacity or capability of a storage system to handle a specified workload pattern within a specified periodicity interval, in which the estimated capacity or capability of the storage system is represented by a headroom metric. The techniques further include, in response to the request from the storage client, obtaining a value of the headroom metric for the specified periodicity interval using a performance model characterized by at least a peak load reserve (PLR) metric and a long-term load reserve (LLR) metric, in which the obtained value of the headroom metric corresponds to the minimum of respective values of at least the PLR metric and the LLR metric. The techniques further include upgrading, scaling-up, and/or scaling-out the storage system based on the value of the headroom metric.
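The minimum-of-reserves rule can be illustrated with assumed concrete definitions of the two metrics (the abstract does not define them this precisely): PLR as the reserve against the interval's peak load, and LLR as the reserve against its long-term mean load.

```python
def headroom(loads, capacity):
    """Sketch with assumed definitions: PLR = fraction of capacity left
    at the interval's peak load, LLR = fraction left at its mean load;
    the headroom metric is the minimum of the two reserves."""
    plr = 1.0 - max(loads) / capacity
    llr = 1.0 - (sum(loads) / len(loads)) / capacity
    return min(plr, llr)

# Load samples (IOPS) over one periodicity interval, capacity 1000 IOPS.
h = headroom([400, 700, 500], 1000.0)
print(round(h, 2))  # 0.3 -- peak reserve binds before the mean reserve
```

Taking the minimum makes the metric conservative: the system only claims as much spare capability as its tightest reserve allows, which is the right signal for deciding whether to upgrade, scale up, or scale out.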
Independent plane architecture in a memory device
A memory device includes a memory array comprising a plurality of memory planes, wherein the plurality of memory planes are arranged in a plurality of independent plane groups, and wherein each of the plurality of independent plane groups comprises one or more of the plurality of memory planes. The memory device further includes a plurality of independent analog driver circuits coupled to the memory array, wherein a respective one of the plurality of independent analog driver circuits is associated with a respective one of the plurality of independent plane groups. The memory device further includes a common analog circuit coupled to the memory array, wherein the common analog circuit is shared by the plurality of independent analog driver circuits and the plurality of independent plane groups. The memory device further includes a plurality of control logic elements, wherein a respective one of the plurality of control logic elements is associated with a respective one of the plurality of independent analog driver circuits and a respective one of the plurality of independent plane groups.
Adjustable data protection scheme using artificial intelligence
Apparatuses and methods can be related to implementing adjustable data protection schemes using artificial intelligence. Implementing an adjustable data protection scheme can include receiving failure data for a plurality of memory devices and receiving an indication of a failure of a stripe of the plurality of memory devices based on the failure data. Based on the failure data and the indication of the failure of the stripe, an AI accelerator can generate a data protection scheme adjustment for the memory devices. The data protection scheme adjustment can be received from the AI accelerator and implemented by the plurality of memory devices.
Tracking storage consumption in a storage array
Attributing consumed storage capacity among entities storing data in a storage array includes: identifying a data object stored in the storage array and shared by a plurality of entities, where the data object occupies an amount of storage capacity of the storage array; and attributing to each entity a fractional portion of the amount of storage capacity occupied by the data object.
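The attribution rule lends itself to a direct sketch; here the fractional portion is an even split across sharers (the simplest reading, though the claim language would also cover weighted splits):

```python
def attribute_capacity(objects):
    """Sketch: each shared data object's occupied capacity is split
    into fractional portions, one attributed to each sharing entity."""
    usage = {}
    for size, entities in objects:
        share = size / len(entities)  # even fractional portion
        for entity in entities:
            usage[entity] = usage.get(entity, 0.0) + share
    return usage

# A 12 GB object shared by three snapshots, plus a 4 GB object
# referenced only by the first snapshot.
objs = [(12.0, ["snap-a", "snap-b", "snap-c"]), (4.0, ["snap-a"])]
usage = attribute_capacity(objs)
print(usage)  # {'snap-a': 8.0, 'snap-b': 4.0, 'snap-c': 4.0}
```

The fractional portions always sum back to the object's actual footprint, so deduplicated or snapshot-shared data is counted exactly once across the array.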
Memory protocol with command priority
The present disclosure includes apparatuses and methods related to a memory protocol with command priority. An example apparatus can execute a command that includes a read identification (RID) number based on a priority assigned to the RID number in a register. The apparatus can be a non-volatile dual in-line memory module (NVDIMM) device.
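A minimal sketch of priority-by-RID execution follows; the queue structure and the lower-value-runs-sooner convention are illustrative assumptions, not details from the protocol:

```python
class CommandQueue:
    """Sketch: a register maps read identification (RID) numbers to
    priorities; queued commands are executed in the priority order
    assigned to their RID."""
    def __init__(self, priority_register):
        self.reg = priority_register  # RID -> priority (lower = sooner)
        self.pending = []

    def submit(self, rid, command):
        self.pending.append((rid, command))

    def drain(self):
        """Return commands in RID-priority order; Python's sort is
        stable, so commands sharing a priority stay FIFO."""
        ordered = sorted(self.pending, key=lambda rc: self.reg[rc[0]])
        self.pending = []
        return [cmd for _, cmd in ordered]

q = CommandQueue({0: 2, 1: 0, 2: 1})  # RID 1 is highest priority
q.submit(0, "read A")
q.submit(2, "read B")
q.submit(1, "read C")
order = q.drain()
print(order)  # ['read C', 'read B', 'read A']
```

Keeping the priorities in a register the host can rewrite is what lets the NVDIMM reorder reads without a new command format: the RID carried by each command indexes into whatever priorities are currently programmed.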
Dynamic scheduling of distributed storage management tasks using predicted system characteristics
Systems and methods for scheduling storage management tasks over predicted user tasks in a distributed storage system. A method commences upon receiving a set of historical stimulus records that characterize management tasks that are run in the storage system. A corresponding set of historical response records comprising system metrics associated with execution of those tasks is also received. A learning model is formed from the stimulus records and the response records and formatted to be used as a predictor. A set of forecasted user tasks is input as new stimulus records to the predictor to determine a set of forecasted system metrics that would result from running the forecasted user tasks. Management tasks are selected so as not to impact the forecasted user tasks. Management tasks can be selected based on non-contentious resource usage between historical management task resource usage and predictions of resource usage by the user tasks.
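The selection step, given predictions already produced by the learned model, might look like this greedy sketch (the slot granularity, normalized load units, and least-loaded-first heuristic are all assumptions for illustration):

```python
def schedule_management_tasks(forecast, mgmt_history, capacity):
    """Sketch: given predicted per-slot resource usage of user tasks
    and each management task's historical usage, place each management
    task in a slot where the combined usage stays within capacity
    (non-contentious); tasks that fit nowhere are deferred."""
    slots = dict(forecast)                 # slot -> predicted user load
    plan, deferred = {}, []
    for task, usage in mgmt_history.items():
        slot = min(slots, key=slots.get)   # least-loaded predicted slot
        if slots[slot] + usage <= capacity:
            plan[task] = slot
            slots[slot] += usage           # reserve the capacity
        else:
            deferred.append(task)
    return plan, deferred

forecast = {"00:00": 0.2, "06:00": 0.7, "12:00": 0.9}  # predicted load
history = {"scrub": 0.5, "rebalance": 0.25, "compact": 0.6}
plan, deferred = schedule_management_tasks(forecast, history, 1.0)
print(plan, deferred)
```

Here "scrub" and "rebalance" fit into the quiet midnight slot while "compact" is deferred, mirroring the abstract's goal of running management work only where the predictor says user tasks will not be impacted.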