G06F3/0635

Methods and systems for managing quality of service in a networked storage environment

Methods and systems for a networked storage system are provided. One method includes assigning a quality of service (QOS) parameter for a storage volume of a networked storage environment having a first storage node and a second storage node, where the QOS parameter is defined by a throughput value that defines a maximum data transfer rate and by a number of input/output (I/O) operations executed within a time period (IOPS); distributing the QOS parameter between the first storage node and the second storage node; determining that throughput credit is available for processing an I/O request for using the storage volume; determining that IOPS credit is available for processing the request by the first storage node; and processing the I/O request when both the throughput credit and the IOPS credit are available.
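The dual-credit admission check described above can be illustrated with a minimal sketch: an I/O request is processed only when both a throughput (bytes) credit and an IOPS credit remain in the current time period. The class name, the fixed-window refill policy, and the method names are illustrative assumptions, not the patent's exact mechanism.

```python
class DualCreditLimiter:
    """Admit an I/O request only when both the throughput credit and the
    IOPS credit are available; both replenish at each time period."""

    def __init__(self, max_bytes_per_period, max_iops_per_period):
        self.max_bytes = max_bytes_per_period
        self.max_iops = max_iops_per_period
        self.reset_period()

    def reset_period(self):
        # Called at the start of each time period to replenish both credits.
        self.byte_credit = self.max_bytes
        self.iops_credit = self.max_iops

    def try_admit(self, request_bytes):
        # Process the request only when BOTH credits suffice.
        if self.byte_credit >= request_bytes and self.iops_credit >= 1:
            self.byte_credit -= request_bytes
            self.iops_credit -= 1
            return True
        return False
```

In a two-node deployment, the distributed QOS parameter would amount to each node running its own limiter over its share of the volume's throughput and IOPS budget.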

Redundant memory access for rows or columns containing faulty memory cells in analog neural memory in deep learning artificial neural network

Numerous embodiments are disclosed for accessing redundant non-volatile memory cells in place of one or more rows or columns containing one or more faulty non-volatile memory cells during a program, erase, read, or neural read operation in an analog neural memory system used in a deep learning artificial neural network.
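The row-replacement idea can be sketched as a remapping table consulted on every access: faulty rows are bound to redundant rows once, and program/erase/read operations are transparently redirected. Function names and the one-to-one spare assignment are assumptions of this sketch, not the disclosed circuit-level mechanism.

```python
def build_row_remap(faulty_rows, redundant_rows):
    """Assign each faulty row index a redundant replacement row,
    assuming enough spare rows exist."""
    if len(redundant_rows) < len(faulty_rows):
        raise ValueError("not enough redundant rows for all faulty rows")
    return dict(zip(sorted(faulty_rows), redundant_rows))

def resolve_row(row, remap):
    # During a program, erase, read, or neural read operation, an access
    # to a faulty row is redirected to its redundant replacement.
    return remap.get(row, row)
```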

Balancing Data Transfer Amongst Paths Between A Host and A Storage System
20230236767 · 2023-07-27

Managing input/output (‘I/O’) queues in a data storage system, including: receiving, by a host that is coupled to a plurality of storage devices via a storage network, a plurality of I/O operations to be serviced by a target storage device; determining, for each of a plurality of paths between the host and the target storage device, a data transfer maximum associated with the path; determining, for one or more of the plurality of paths, a cumulative amount of data to be transferred by I/O operations pending on the path; and selecting a target path for transmitting one or more of the plurality of I/O operations to the target storage device in dependence upon the cumulative amount of data to be transferred by I/O operations pending on the path and the data transfer maximum associated with the path.
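The selection rule above can be sketched as choosing the path with the most remaining headroom, where headroom is the path's data transfer maximum minus the cumulative bytes already pending on it. The dictionary field names are illustrative assumptions.

```python
def select_target_path(paths, io_size):
    """Return the path with the greatest headroom that can still
    accept io_size bytes, or None if no path can."""
    best = None
    best_headroom = -1
    for path in paths:
        # Headroom: data transfer maximum minus bytes pending on the path.
        headroom = path["max_bytes"] - path["pending_bytes"]
        if headroom >= io_size and headroom > best_headroom:
            best, best_headroom = path, headroom
    return best
```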

Credential manager with account selection and resource load-balancing
11714551 · 2023-08-01

The described technology is generally directed towards managing accounts for connecting applications to (e.g., third party) cloud storage providers. Various types of cloud storage providers and different accounts, e.g., corresponding to different usage scenarios with properties such as regions, storage tier levels, costs and so forth, are available to user applications. In one implementation, a user application provides desired account properties to a cloud credential manager via a REST API call to obtain the account information for an account, including credentials, configuration data and the like, returned in a REST API response. The described technology facilitates selection of an account by the cloud credential manager based on matching the specified properties. Load balancing and storage costs can also be factors in the selection, and random selection is also available.
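The property-matching and load-balancing selection can be sketched as follows. The account schema, the `load` map, and the tie-breaking rule (lowest current load, else random) are assumptions of this sketch rather than the patent's API.

```python
import random

def select_account(accounts, desired, load=None):
    """Return an account whose properties match every desired property;
    prefer the least-loaded match when load data is given, otherwise
    pick a match at random. Returns None when nothing matches."""
    matches = [
        a for a in accounts
        if all(a.get("properties", {}).get(k) == v for k, v in desired.items())
    ]
    if not matches:
        return None
    if load:
        # Load balancing: favor the account with the fewest active users.
        return min(matches, key=lambda a: load.get(a["id"], 0))
    return random.choice(matches)
```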

Data flow management in a heterogeneous memory device using a thermal profile

A computer-implemented method, a computer program product, and a computer system for data flow management in a heterogeneous memory device. A media controller redirects traffic from first non-volatile memory (NVM) to second NVM, in response to determining that an instantaneous temperature of the first NVM reaches a first predetermined temperature at which redirecting the traffic is started. The media controller throttles to reduce the traffic to the second NVM, in response to determining that the instantaneous temperature is higher than a second predetermined temperature at which throttling is started. The media controller redirects the traffic back to the first NVM, in response to determining that the instantaneous temperature is not higher than the second predetermined temperature and is lower than a third predetermined temperature at which throttling is ended. The first NVM is thermally sensitive, while the second NVM is thermally tolerant.
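The threshold-driven routing can be sketched as a simple decision function over the instantaneous temperature. The threshold names, their ordering (resume < redirect < throttle), and the simplified hysteresis band are assumptions of this sketch; the abstract's exact resume condition combines its second and third thresholds.

```python
def traffic_action(temp, t_redirect, t_throttle, t_resume):
    """Decide how a media controller routes traffic for a thermally
    sensitive first NVM, given a redirect-start threshold, a
    throttle-start threshold, and a resume threshold below which
    traffic returns to the first NVM."""
    if temp > t_throttle:
        return "throttle_second_nvm"     # reduce traffic sent to the second NVM
    if temp >= t_redirect:
        return "redirect_to_second_nvm"  # steer traffic away from the hot NVM
    if temp < t_resume:
        return "restore_to_first_nvm"    # temperature has recovered
    return "hold"                        # hysteresis band: keep current routing
```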

Dynamic latency management of active-active configurations using multi-pathing software

An apparatus comprises a host device that includes a multi-path input-output (MPIO) driver configured to control delivery of input-output (IO) operations from the host device to first and second storage systems over a plurality of paths through a network. The MPIO driver determines latency values for the paths to the first and second storage systems, retrieves additional information corresponding to the paths and first and second storage systems, generates a first message comprising at least portions of the latency values and additional information, and sends the first message to a multi-pathing management appliance. A second message is received from the multi-pathing management appliance, the second message being generated based on at least a portion of the first message. The MPIO driver selects one or more paths for delivery of given ones of the IO operations based at least in part on at least a portion of the second message.
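A minimal sketch of the path-selection step: the driver ranks paths by measured latency, adjusted by per-path weights carried in the appliance's reply (the "second message"). The field names and the multiplicative weighting scheme are illustrative assumptions.

```python
def choose_paths(path_stats, guidance, k=1):
    """Select up to k path IDs for IO delivery, ranking by measured
    latency scaled by a per-path weight from the multi-pathing
    management appliance (default weight 1.0)."""
    ranked = sorted(
        path_stats,
        key=lambda p: p["latency_ms"] * guidance.get(p["path_id"], 1.0),
    )
    return [p["path_id"] for p in ranked[:k]]
```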

CACHE MEMORY ARCHITECTURE AND MANAGEMENT

Aspects of the present disclosure relate to data cache management. In embodiments, a logical block address (LBA) bucket is established with at least one LBA group. Additionally, at least one LBA group is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by the storage array. Further, the association includes binding the two or more distinctly sized cache slots with the at least one LBA group and mapping the bound distinctly sized cache slots in a searchable data structure. Furthermore, the searchable data structure identifies relationships between slot pointers and key metadata.
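The binding step can be sketched as routing each IO in the workload to the smallest of the distinctly sized slots that fits it, recording the result in a searchable mapping keyed by (LBA group, slot size). The slot sizes in KB and the smallest-fit rule are assumptions of this sketch.

```python
def bind_cache_slots(lba_group, io_sizes, slot_sizes=(64, 128)):
    """Bind an LBA group to two or more distinctly sized cache slots
    based on its IO workload, returning a searchable dict keyed by
    (group, slot_size)."""
    table = {}
    for size in io_sizes:
        # Pick the smallest slot size that fits this IO; oversize IOs
        # fall into the largest slot class.
        slot = next((s for s in slot_sizes if s >= size), slot_sizes[-1])
        table.setdefault((lba_group, slot), []).append(size)
    return table
```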

Selecting paths between a host and a storage system
11561730 · 2023-01-24

Managing input/output (‘I/O’) queues in a data storage system, including: receiving, by a host that is coupled to a plurality of storage devices via a storage network, a plurality of I/O operations to be serviced by a target storage device; determining, for each of a plurality of paths between the host and the target storage device, a data transfer maximum associated with the path; determining, for one or more of the plurality of paths, a cumulative amount of data to be transferred by I/O operations pending on the path; and selecting a target path for transmitting one or more of the plurality of I/O operations to the target storage device in dependence upon the cumulative amount of data to be transferred by I/O operations pending on the path and the data transfer maximum associated with the path.

Congestion Mitigation in a Distributed Storage System

A system comprises a plurality of computing devices that are communicatively coupled via a network and have a file system distributed among them, and comprises one or more file system request buffers residing on one or more of the plurality of computing devices. File system choking management circuitry that resides on one or more of the plurality of computing devices is operable to separately control: a first rate at which a first type of file system requests (e.g., one of data requests, data read requests, data write requests, metadata requests, metadata read requests, and metadata write requests) are fetched from the one or more buffers, and a second rate at which a second type of file system requests (e.g., another of data requests, data read requests, data write requests, metadata requests, metadata read requests, and metadata write requests) are fetched from the one or more buffers.
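The separately controlled fetch rates can be sketched as independent per-type quotas applied each time requests are drained from the buffers. The per-tick quota model and the class name are illustrative assumptions about the choking circuitry's behavior.

```python
class ChokingManager:
    """Fetch buffered file system requests at independently controlled
    rates per request type (e.g., data vs. metadata)."""

    def __init__(self, rates):
        # rates: request type -> maximum requests fetched per tick
        self.rates = rates

    def fetch_tick(self, buffers):
        # Drain each type's buffer up to that type's configured rate,
        # leaving the remainder queued for later ticks.
        fetched = {}
        for req_type, queue in buffers.items():
            n = min(self.rates.get(req_type, 0), len(queue))
            fetched[req_type] = [queue.pop(0) for _ in range(n)]
        return fetched
```

Separating the two rates lets the system throttle, say, metadata traffic during congestion without starving data reads.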

Storage System Based Monitoring and Remediation for Containers
20230229319 · 2023-07-20

A storage system, associated with a container system, may be configured to perform a method that includes: providing, by the storage system to a container operating within a container system, one or more storage services; determining, by the storage system, an interruption to the one or more storage services; and providing, based on the interruption and from the storage system to a container orchestrator for the container system, an alert associated with the container.
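The monitor-and-alert flow can be sketched as a check over the storage services provided to containers, emitting an alert to the orchestrator for each interrupted service. The service record shape, the health flag, and the `send_alert` callback are assumptions of this sketch.

```python
def check_and_alert(services, send_alert):
    """For each interrupted storage service, send the container
    orchestrator an alert naming the affected container; return
    the alerts that were emitted."""
    alerts = []
    for svc in services:
        if not svc["healthy"]:
            alert = {
                "container": svc["container"],
                "service": svc["name"],
                "event": "storage-service-interruption",
            }
            send_alert(alert)  # e.g., a webhook into the orchestrator
            alerts.append(alert)
    return alerts
```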