
SYSTEMS AND METHODS FOR DESIGNATING STORAGE PROCESSING UNITS AS COMMUNICATION HUBS AND ALLOCATING PROCESSING TASKS IN A STORAGE PROCESSOR ARRAY
20170329640 · 2017-11-16

Systems and methods for designating a storage processing unit as a communication hub in an SSD storage system are provided. The storage system can include a host, storage processing units (SPUs), and a host interface that enables communication between the host and the SPUs. One such method involves receiving a processing task including multiple threads to be performed; determining a baseline configuration for scheduling execution of the threads on the SPUs and a baseline cost function; marking one SPU as a communication hub; rescheduling, if a thread scheduled for execution on any of the other SPUs is decomposable into multiple sub-threads, a first sub-thread for execution on the marked SPU; evaluating a second cost function for performing the processing task, including the first sub-thread rescheduled on the marked SPU, based on the same factors as the baseline cost function; and unmarking the marked SPU if the baseline cost function is less than the second cost function.
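The mark/evaluate/unmark loop above can be sketched as follows. The cost model, the thread decomposition rule, and the way placement is folded into the cost function are all illustrative assumptions, not the patent's actual implementation.

```python
# Hedged sketch: try each SPU as a communication hub, reschedule a
# decomposed sub-thread onto it, and keep the marking only when the
# resulting cost beats the baseline; otherwise the hub stays unmarked.

def schedule_with_hub(threads, num_spus, cost_fn, decompose):
    """Return (hub, cost): the SPU kept as hub (or None) and the cost.

    cost_fn(threads, hub=...) is assumed to model placement, including
    running the first sub-thread on the hub; decompose(t) returns a list
    of sub-threads or None if t is not decomposable.
    """
    baseline = cost_fn(threads, hub=None)
    best_cost, best_hub = baseline, None
    for spu in range(num_spus):
        candidate, moved = [], False
        for t in threads:
            subs = decompose(t)
            if subs and not moved:
                candidate.extend(subs)  # first sub-thread goes to the hub
                moved = True
            else:
                candidate.append(t)
        cost = cost_fn(candidate, hub=spu)
        if cost < best_cost:
            best_cost, best_hub = cost, spu
    # best_hub is None exactly when the baseline cost was lower: unmarked.
    return best_hub, best_cost
```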

INFORMATION PROCESSING SYSTEM AND MEMORY SYSTEM

A memory system includes a memory device including memory chips and a controller. The controller includes first processors configured to perform first processing of network packets in at least one of a network layer and a transport layer of a network protocol, and second processors configured to perform second processing with respect to the memory chips. The controller is configured to extract tag information from a header of a network packet, select one of the first processors associated with a first memory chip that is identified based on the tag information, and control the selected one of the first processors to perform the first processing with respect to the network packet, which causes one of the second processors associated with the first memory chip to perform the second processing based on a payload of the network packet.
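A toy model of the tag-based dispatch described above: the tag extracted from the packet header identifies a memory chip, which in turn selects the first (network-layer) processor and the second (memory-side) processor. The mapping tables, function names, and packet layout are illustrative assumptions.

```python
# Hedged sketch of header-tag routing between the two processor stages.

def dispatch(packet, tag_to_chip, chip_to_net_proc, chip_to_mem_proc):
    tag = packet["header"]["tag"]
    chip = tag_to_chip[tag]            # identify the first memory chip
    net_proc = chip_to_net_proc[chip]  # first processing: network/transport
    mem_proc = chip_to_mem_proc[chip]  # second processing: memory access
    payload = net_proc(packet)         # e.g., strip protocol framing
    return mem_proc(chip, payload)     # act on the packet's payload
```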

Systems and methods for balancing multiple partitions of non-volatile memory
11256436 · 2022-02-22

Systems and methods for balancing multiple partitions of non-volatile memory devices are provided. Embodiments discussed herein execute a balance proportion scheme in connection with an NVM that is partitioned to have multiple partition types. Each partition type has an associated endurance that defines an average number of program/erase (P/E) cycles it can endure before it reaches failure. For example, a first partition type may have a substantially greater endurance than a second partition type. The balance proportion scheme ensures that, even though each partition type has a different associated endurance, all partition types are used proportionally with respect to each other to balance their respective P/E cycles. This way, both partition types will reach the upper limits of their respective endurance levels at approximately the same time.
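One way to realize such a balance proportion scheme is to steer each P/E cycle to the partition that has consumed the smallest fraction of its endurance budget, so all partitions approach end-of-life together. The selection rule and the endurance figures below are illustrative assumptions.

```python
# Hedged sketch: proportional wear across partitions of unequal endurance.

def pick_partition(pe_counts, endurances):
    """Return the index of the partition with the lowest consumed
    fraction of its endurance (P/E cycles used / P/E cycles rated)."""
    fractions = [c / e for c, e in zip(pe_counts, endurances)]
    return fractions.index(min(fractions))
```

Driving this rule in a loop keeps the per-partition cycle counts in roughly the same ratio as the endurances (e.g., 10:1 for endurances of 3000 and 300).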

Storage system

A storage system includes controllers, physical storage devices, and logical storage devices to which storage areas are assigned from the physical storage devices. Each controller is connected to a different physical storage device and to a different host, and can access the physical storage devices and hosts not directly connected to it through another controller. Any one of the controllers holds an ownership to process access requests concerning a logical storage device. At least one controller determines that the ownership should be moved among the controllers based on an index of accesses to the physical storage device that includes the storage area assigned to the logical storage device and an index of accesses to the host that accesses the logical storage device.
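A minimal sketch of the ownership-placement decision, assuming the two access indices are simply summed per controller and the highest combined locality wins; the scoring rule and index values are illustrative assumptions, not the patent's method.

```python
# Hedged sketch: choose the owner by combined device/host access locality.

def best_owner(controllers, device_access, host_access):
    """device_access[c] / host_access[c]: per-controller indices of direct
    (non-forwarded) accesses to the backing device and to the host.
    Ownership moves to the controller with the highest combined index."""
    return max(controllers, key=lambda c: device_access[c] + host_access[c])
```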

Electronic storage system
11237727 · 2022-02-01

Methods and systems for electronic storage are provided. A storage system comprises a plurality of storage system front ends, a plurality of storage system back ends, and a plurality of solid state drive (SSD) agents. Each storage system front end resides on a server of a plurality of servers. Each server of the plurality of servers comprises one or more storage system back ends of the plurality of storage system back ends. Each storage system front end is able to receive I/O requests and relay information associated with the I/O requests to a relevant storage system back end. The relevant storage system back end communicates metadata associated with the I/O request to an SSD via an SSD agent.
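The request path above can be sketched with three small classes: a front end that receives an I/O request and relays it, a back end that hands metadata to the target SSD through its agent. The class names and the routing rule (hashing the volume id) are illustrative assumptions.

```python
# Hedged sketch of the front end -> back end -> SSD agent request path.

class SSDAgent:
    def __init__(self):
        self.metadata_log = []

    def write_metadata(self, meta):
        self.metadata_log.append(meta)  # stand-in for talking to the SSD

class BackEnd:
    def __init__(self, agent):
        self.agent = agent

    def handle(self, request):
        # Communicate metadata associated with the I/O via the SSD agent.
        self.agent.write_metadata({"volume": request["volume"],
                                   "op": request["op"]})
        return "ok"

class FrontEnd:
    def __init__(self, back_ends):
        self.back_ends = back_ends

    def submit(self, request):
        # Relay to the relevant back end; here chosen by hashing the volume.
        idx = hash(request["volume"]) % len(self.back_ends)
        return self.back_ends[idx].handle(request)
```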

MULTI-STORAGE DEVICE LIFECYCLE MANAGEMENT SYSTEM
20210405906 · 2021-12-30

A multi-storage device lifecycle management system includes a server computing system having a plurality of devices and an operating system engine. The operating system engine identifies an estimated first device remaining lifetime for a first device, identifies an estimated second device remaining lifetime for a second device, and determines whether a difference between the estimated first device remaining lifetime and the estimated second device remaining lifetime is less than an estimated multi-device minimum end-of-lifetime difference. If so, the computing system distributes workload operations between the first device and the second device in order to cause the difference between the estimated first device remaining lifetime and the estimated second device remaining lifetime to be greater than or equal to the estimated multi-device minimum end-of-lifetime difference.
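The check described above reduces to: if the two remaining-lifetime estimates are too close, skew the workload toward the longer-lived device so the failure dates pull apart. The split fractions and the fixed skew are illustrative assumptions, not the patent's distribution policy.

```python
# Hedged sketch: stagger device end-of-life by biasing workload share.

def workload_split(life_a, life_b, min_eol_diff, skew=0.25):
    """Return the workload fractions for devices A and B."""
    if abs(life_a - life_b) >= min_eol_diff:
        return 0.5, 0.5  # lifetimes already differ enough; balance evenly
    # Too close: wear the longer-lived device faster so failures stagger.
    if life_a > life_b:
        return 0.5 + skew, 0.5 - skew
    return 0.5 - skew, 0.5 + skew
```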

API and encryption key secrets management system and method

A hosted secrets management transport system and method for managing secrets at one or more offsite locations, facilitating secret flow, secret retrieval, and secret replication. The method includes defining boundaries for two or more sovereignties, each sovereignty having an independent master record and each sovereignty including two or more regions; defining a primary region within the two or more regions; accessing, within the primary region, a master record hardware security module that is a primary source of secrets; defining a second region; accessing, within the second region, a backup record hardware security module where data backups of the secrets from the master record hardware security module are created; and executing live replication from the master record hardware security module to the backup record hardware security module, in which the live replication supports multi-tenant secret management for multiple distinct companies at the same time.
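The topology above can be skeletonized as: a sovereignty boundary holding a master HSM in its primary region and a backup HSM in its second region, with every write mirrored live. The data model, names, and regions are illustrative assumptions; real HSMs are hardware devices, not dictionaries.

```python
# Hedged sketch of sovereignty-scoped, multi-tenant live replication.

class HSM:
    def __init__(self):
        self.secrets = {}  # tenant -> {name: value}; multi-tenant store

    def put(self, tenant, name, value):
        self.secrets.setdefault(tenant, {})[name] = value

class Sovereignty:
    """A boundary with its own independent master record;
    secrets never cross it."""
    def __init__(self, primary_region, backup_region):
        self.regions = (primary_region, backup_region)
        self.master = HSM()  # lives in the primary region
        self.backup = HSM()  # lives in the second region

    def store_secret(self, tenant, name, value):
        self.master.put(tenant, name, value)
        # Live replication: each write is mirrored to the backup HSM.
        self.backup.put(tenant, name, value)
```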

Storage device parameter monitoring for load balancing

Systems and methods for storage systems using storage device monitoring for load balancing are described. Storage devices may be configured for data access through a common data stream, such as the storage devices in a storage node or server. Data operations from the common data stream may be distributed among the storage devices using a load balancing algorithm. Performance parameter values, such as grown bad blocks, program-erase cycles, and temperature, may be received for the storage devices and used to determine variance values for each storage device. Variance values indicating degrading storage devices may be used to reduce the load allocation of data operations to the degrading storage devices.
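A possible shape of the variance check: compute each device's deviation from the fleet mean for a monitored parameter (e.g., grown bad blocks) and cut the load share of devices drifting above it. The z-score threshold and the halving rule are illustrative assumptions.

```python
# Hedged sketch: parameter variance -> reduced load share for outliers.

def load_weights(param_values, threshold=1.0):
    """Map per-device parameter values to normalized load weights.
    Devices whose value exceeds the fleet mean by more than `threshold`
    standard deviations receive half the nominal share."""
    n = len(param_values)
    mean = sum(param_values) / n
    var = sum((v - mean) ** 2 for v in param_values) / n
    std = var ** 0.5
    weights = []
    for v in param_values:
        z = (v - mean) / std if std else 0.0
        weights.append(0.5 if z > threshold else 1.0)  # degrading: half load
    total = sum(weights)
    return [w / total for w in weights]
```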

Reconciliation of data in a distributed system
11360944 · 2022-06-14

Methods and systems are presented for providing data consistency in a distributed data storage system using an eventual consistency model. The distributed data storage system may store data across multiple data servers. To process a request for writing a first data value for a data field, a first data server may generate, for the first data value, a first causality chain representing a data replacement history for the data field leading to the first data value. The first data server may insert the first data value without deleting pre-existing data values from the data field. To process a data read request, multiple data values corresponding to the data field may be retrieved. The first data server may then select one data value based on the causality chains associated with the multiple data values for responding to the data read request.
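A toy version of the causality-chain idea: each stored value carries the set of value ids it causally replaces; on read, a value is dominated (superseded) if its id appears in another value's chain, and only non-dominated values survive. The id scheme and flat-set chain representation are illustrative assumptions, simpler than a full replacement history.

```python
# Hedged sketch: read-time selection among coexisting values by causality.

def resolve(values):
    """values: list of (value_id, data, chain) tuples, where `chain` is
    the set of value ids this write causally replaced. Returns the
    values not superseded by any other value's chain."""
    dominated = set()
    for _vid, _data, chain in values:
        dominated |= chain
    return [(vid, data) for vid, data, _ in values if vid not in dominated]
```

Note that concurrent writes with disjoint chains both survive, which is the expected behavior under an eventual consistency model until a later write subsumes them.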

Data distribution for fast recovery in cluster-based storage systems

The described technology is generally directed towards distributing data fragments and coding fragments of a protection group among storage entities (e.g., nodes or disks) based on affinity levels (e.g., maintained in an affinity matrix) that represent dependency relationships between the storage entities with respect to storing protection groups. The technology operates to distribute a protection group's components such that the affinity level between any pair of storage entities is approximately the same as any other pair. In the event of a storage entity failure, as a result of the affinity-based distribution of the protection group components needed for data recovery, a larger number of the other storage entities can be involved in the data recovery (relative to the number likely involved without affinity-based distribution). This tends to ensure a better load balance and faster data recovery.
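A greedy sketch of affinity-balanced placement, assuming the affinity matrix simply counts protection groups shared by each node pair: each new group starts from the least-loaded node and repeatedly adds the candidate with the smallest total affinity to the nodes already chosen, then records the new pairwise dependencies. The greedy rule and matrix layout are illustrative assumptions, not the patented algorithm.

```python
# Hedged sketch: place a protection group so pairwise affinities stay even.

def place_group(num_fragments, nodes, affinity):
    """affinity[i][j]: protection groups nodes i and j already share.
    Returns the nodes chosen for this group and updates the matrix."""
    # Start from the node with the lowest total affinity to everyone.
    chosen = [min(nodes, key=lambda n: sum(affinity[n]))]
    while len(chosen) < num_fragments:
        candidates = [n for n in nodes if n not in chosen]
        # Prefer the candidate least entangled with the nodes chosen so far.
        nxt = min(candidates,
                  key=lambda n: sum(affinity[n][c] for c in chosen))
        chosen.append(nxt)
    # Record the new dependencies: every chosen pair shares one more group.
    for i in chosen:
        for j in chosen:
            if i != j:
                affinity[i][j] += 1
    return chosen
```

Because each placement raises the affinity of the chosen pairs, later groups are steered toward under-used pairings, which is what spreads recovery work across many entities after a failure.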