Patent classifications
G06F3/0646
Modifying storage distribution in a storage system that includes one or more storage devices
Modifying storage distribution in a storage system that includes one or more storage devices, including: detecting, for a storage device among the one or more storage devices, that a storage capacity of the storage device is different from a storage capacity of another storage device of the one or more storage devices, and responsive to detecting that the storage capacity of the storage device is different from the storage capacity of the other storage device of the one or more storage devices, modifying a distribution of shards of data for a data stripe among the one or more storage devices.
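As a rough sketch of the idea (in Python, with illustrative names; nothing here comes from the patent itself), modifying the shard distribution when device capacities differ could mean weighting each device's share of a stripe by its capacity:

```python
# Hypothetical sketch: distribute the shards of a data stripe across
# devices in proportion to each device's capacity, instead of uniformly.
def distribute_shards(num_shards, capacities):
    """Return shards-per-device, weighted by device capacity."""
    total = sum(capacities)
    # Ideal (fractional) share of the stripe for each device.
    shares = [num_shards * c / total for c in capacities]
    counts = [int(s) for s in shares]
    # Hand out shards lost to truncation, largest remainder first.
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - counts[i], reverse=True)
    for i in by_remainder[:num_shards - sum(counts)]:
        counts[i] += 1
    return counts

# A stripe of 10 shards over devices of 1 TB, 1 TB, and 2 TB: the larger
# device receives proportionally more shards.
print(distribute_shards(10, [1, 1, 2]))  # -> [3, 2, 5]
```

When a device's capacity changes (or a mismatched device is detected), recomputing this distribution gives the new placement for the stripe's shards.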
Efficient storage device data move operation based on priority of garbage collection command
Technologies are provided for a storage device data move command. A storage device can be configured to receive a data move (or garbage collection) command and, responsive to receiving the command, move data from one zone of the storage device (or range of storage locations within the storage device) to another zone (or another range of storage locations) within the storage device. The command can comprise a source zone identifier and a target zone identifier. The storage device can read data from a storage zone associated with the source zone identifier and write the data to another storage zone associated with the target zone identifier. The identifiers can include ranges of storage location addresses within the separate storage zones. In at least some embodiments, a host bus adapter can be configured to support the data move (or garbage collection) command for a storage device attached to the host bus adapter.
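A minimal sketch of such a data move command, assuming an in-memory stand-in for the device (class and method names are illustrative, not from any real device interface):

```python
# Hypothetical sketch: a device-internal zone-to-zone move command.
class Zone:
    def __init__(self, zone_id, size):
        self.zone_id = zone_id
        self.data = [None] * size      # storage locations within the zone

class StorageDevice:
    def __init__(self, zones):
        self.zones = {z.zone_id: z for z in zones}

    def data_move(self, src_id, dst_id, src_range, dst_range):
        """Move data wholly inside the device, with no host round-trip.

        src_range/dst_range model the ranges of storage-location
        addresses carried by the command's zone identifiers.
        """
        src, dst = self.zones[src_id], self.zones[dst_id]
        for s, d in zip(src_range, dst_range):
            dst.data[d] = src.data[s]  # read from source, write to target
            src.data[s] = None         # source locations can be reclaimed

dev = StorageDevice([Zone(0, 4), Zone(1, 4)])
dev.zones[0].data[:2] = ["a", "b"]
dev.data_move(src_id=0, dst_id=1, src_range=range(0, 2), dst_range=range(0, 2))
print(dev.zones[1].data)  # -> ['a', 'b', None, None]
```

The point of the command is that the read and write both happen inside the device, so garbage-collection-style relocation does not consume host bandwidth.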
Utilizing Metadata Storage Trees in a Vast Storage Network
A method includes receiving data for storage and encoding the data to produce a plurality of data slices. Metadata is determined for a data slice of the plurality of data slices. The metadata is stored in a metadata storage tree. The metadata storage tree is stored via a first plurality of memory devices of a first memory type. The data slice is stored in a slice storage location in a second plurality of memory devices of a second memory type. The slice storage location is indicated by the metadata. The first memory type has a higher performance level than the second memory type based on a utilization approach.
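A simplified sketch of the two-tier arrangement (Python, illustrative names only): metadata lives in a sorted index standing in for the metadata storage tree on the fast memory type, while the slices themselves go to the slower, bulk tier:

```python
import bisect

# Hypothetical sketch: per-slice metadata in a fast-tier "tree",
# slice data in a slower bulk tier; the metadata records the location.
class MetadataTree:
    """Sorted-key index standing in for the metadata storage tree."""
    def __init__(self):
        self.keys, self.values = [], []

    def put(self, key, meta):
        i = bisect.bisect_left(self.keys, key)
        self.keys.insert(i, key)
        self.values.insert(i, meta)

    def get(self, key):
        return self.values[bisect.bisect_left(self.keys, key)]

fast_tier = MetadataTree()   # first memory type: higher performance
slow_tier = {}               # second memory type: bulk slice storage

def store_slice(name, data):
    location = f"slow/{name}"           # slice storage location
    slow_tier[location] = data
    fast_tier.put(name, {"location": location, "length": len(data)})

store_slice("slice-0", b"encoded-bytes")
meta = fast_tier.get("slice-0")
print(slow_tier[meta["location"]])  # -> b'encoded-bytes'
```

Lookups hit only the fast tier until the slice itself is needed, which is the payoff of keeping the tree on the higher-performance memory type.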
DATA CENTER CLUSTER ARCHITECTURE
A data center cluster includes a plurality of host systems coupled to a network processing device by a Compute Express Link (CXL) switch, where the network processing device includes memory to implement a memory pool for the data center cluster. Requests and responses are communicated within the data center cluster using the memory pool, and the network processing device manages communication within the data center cluster.
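A toy sketch of request/response exchange through a shared memory pool (the pool object below merely stands in for memory exposed by the network processing device behind the CXL switch; all names are illustrative):

```python
# Hypothetical sketch: hosts exchange requests and responses through a
# shared memory pool rather than direct host-to-host messaging.
class MemoryPool:
    def __init__(self):
        self.regions = {}          # pooled memory, keyed by region name

    def write(self, region, payload):
        self.regions[region] = payload

    def read(self, region):
        return self.regions.pop(region)

pool = MemoryPool()

# Host A places a request in the pool; host B consumes it and responds.
pool.write("req:host-b", {"op": "read", "block": 42})
request = pool.read("req:host-b")
pool.write("resp:host-a", {"block": request["block"], "data": b"payload"})
response = pool.read("resp:host-a")
print(response["block"])  # -> 42
```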
Cooperative Storage Architecture
The present disclosure provides an interconnect architecture that enables communications and/or data transmissions among data storage drives in a computing system. The flash translation layer (FTL) in each data storage drive may be operated in a cooperative manner that allows communications and/or data transmissions across memory arrays from each of the data storage drives implemented in the computing system. The direct communications and/or data transmissions among the data storage drives in the computing system may be enabled without deferring back to a host computing device in the computing system. Thus, the computational load on the host computing device is reduced and the flexibility of scaling up the storage appliance in the computing system is increased.
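A bare-bones sketch of the cooperative idea (Python, illustrative names; real drives would do this over the interconnect, not via method calls): each drive keeps its own FTL mapping, and a drive-to-drive transfer bypasses the host entirely:

```python
# Hypothetical sketch: each drive's flash translation layer (FTL) maps
# logical to physical addresses, and drives copy data to a peer
# directly, without routing through the host.
class Drive:
    def __init__(self, name):
        self.name = name
        self.ftl = {}       # logical address -> physical location
        self.flash = {}     # physical location -> data
        self._next = 0

    def write(self, lba, data):
        self.flash[self._next] = data
        self.ftl[lba] = self._next
        self._next += 1

    def read(self, lba):
        return self.flash[self.ftl[lba]]

    def send_to_peer(self, peer, lba):
        # Direct drive-to-drive transfer over the interconnect; the
        # host never sees the data, so its computational load is spared.
        peer.write(lba, self.read(lba))

d1, d2 = Drive("drive-a"), Drive("drive-b")
d1.write(7, b"block")
d1.send_to_peer(d2, 7)
print(d2.read(7))  # -> b'block'
```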
LARGE DATA READ TECHNIQUES
Devices and techniques are disclosed herein for more efficiently exchanging large amounts of data between a host and a storage system. In an example, a large read operation can include receiving a pre-fetch command, a parameter list and a read command at a storage system. In certain examples, the pre-fetch command can provide an indication of the length of the parameter list, and the parameter list can provide location identifiers of the storage system from which the read command can sense the read data.
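The three-step exchange could be sketched as follows (Python stand-in for the storage system; command and method names are illustrative, not from the patent or any real command set):

```python
# Hypothetical sketch of the large-read exchange: a pre-fetch command
# announces the parameter-list length, the parameter list supplies the
# storage locations, and a single read command then senses data from
# all of them.
class StorageSystem:
    def __init__(self, blocks):
        self.blocks = blocks
        self.param_list = None
        self.expected_len = None

    def prefetch(self, param_list_length):
        # Pre-fetch command: indicates how long the parameter list is.
        self.expected_len = param_list_length

    def send_parameter_list(self, locations):
        assert len(locations) == self.expected_len
        self.param_list = locations   # location identifiers to read from

    def read(self):
        # One read command gathers all listed locations in one exchange.
        return [self.blocks[loc] for loc in self.param_list]

storage = StorageSystem({0: b"aa", 5: b"bb", 9: b"cc"})
storage.prefetch(3)
storage.send_parameter_list([0, 5, 9])
print(storage.read())  # -> [b'aa', b'bb', b'cc']
```

Batching scattered locations into one command is what makes the exchange more efficient than issuing one read per location.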
DATA PROTECTION WITH MULTIPLE SITE REPLICATION
Systems and methods for replicating data from a first site to a second site remote from said first site are described. An embodiment includes storing compressed data on a first hard disk appliance, reading said data without decompressing said data, sending said data over a wide-area-network (WAN) in a compressed state, and storing said data on a second hard disk appliance remote from said first hard disk appliance in its compressed state without performing an additional compression operation.
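The compress-once flow can be sketched in a few lines (Python with `zlib` as a stand-in codec; the site dictionaries and function names are illustrative):

```python
import zlib

# Hypothetical sketch: data is compressed once at the first site,
# shipped over the WAN still compressed, and stored at the second site
# as-is -- no decompression or re-compression along the way.
first_site = {}
second_site = {}

def ingest(key, raw):
    first_site[key] = zlib.compress(raw)   # compress once, on ingest

def replicate(key):
    compressed = first_site[key]           # read without decompressing
    wan_payload = compressed               # crosses the WAN compressed
    second_site[key] = wan_payload         # stored compressed, unchanged

ingest("vol1", b"payload" * 100)
replicate("vol1")
print(second_site["vol1"] == first_site["vol1"])            # -> True
print(zlib.decompress(second_site["vol1"]) == b"payload" * 100)  # -> True
```

The savings come from moving the (smaller) compressed form over the WAN and skipping both a decompress at the source and a re-compress at the target.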
Intelligent local management of data stream throttling in secondary-copy operations
Local management of data stream throttling in data movement operations, such as secondary-copy operations in a storage management system, is disclosed. A local throttling manager may interoperate with co-resident data agents and/or a media agent executing on any given local computing device, whether a client computing device or a secondary storage computing device. The local throttling manager may allocate and manage the available bandwidth for various jobs and their constituent data streams—across the data agents and/or media agent. Bandwidth is allocated and re-allocated to data streams used by ongoing jobs, in response to new jobs starting and old jobs completing, without having to pause and restart ongoing jobs to accommodate bandwidth adjustments. The illustrative embodiment also provides local users with a measure of control over data streams—to suspend, pause, and/or resume them—independently from the centralized storage manager that manages the overall storage system.
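A minimal sketch of the local allocation behavior (Python; the even-split policy and all names are assumptions for illustration, not the patent's actual algorithm): bandwidth is re-divided across live streams whenever a job starts or completes, without any stream being paused or restarted:

```python
# Hypothetical sketch: a local throttling manager splits a fixed link
# bandwidth across the data streams of all running jobs, re-allocating
# on every job start/completion without pausing ongoing jobs.
class LocalThrottlingManager:
    def __init__(self, total_mbps):
        self.total_mbps = total_mbps
        self.streams = {}     # stream id -> current allocation (Mb/s)

    def _rebalance(self):
        if self.streams:
            share = self.total_mbps / len(self.streams)
            for sid in self.streams:
                self.streams[sid] = share   # adjusted in place

    def job_started(self, stream_ids):
        for sid in stream_ids:
            self.streams[sid] = 0.0
        self._rebalance()

    def job_completed(self, stream_ids):
        for sid in stream_ids:
            self.streams.pop(sid, None)
        self._rebalance()

mgr = LocalThrottlingManager(total_mbps=1000)
mgr.job_started(["backup-1a", "backup-1b"])    # 2 streams: 500 each
mgr.job_started(["copy-2a", "copy-2b"])        # 4 streams: 250 each
mgr.job_completed(["backup-1a", "backup-1b"])  # back to 500 each
print(mgr.streams)  # -> {'copy-2a': 500.0, 'copy-2b': 500.0}
```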
Dynamic management of effective bandwidth of data storage operations
Intelligent data throttling in data movement operations, such as secondary-copy operations in a storage management system. A local throttling manager may intelligently interoperate with co-resident data agents and/or a media agent executing on any given local computing device, whether a client computing device or a secondary storage computing device. The local throttling manager may allocate and manage the available bandwidth for various jobs and their constituent data streams—across the data agents and/or media agent. Effective bandwidth for the secondary-copy operation may be adjusted based on available bandwidth from the computing device due to increased demand for the bandwidth from other operations.
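The distinctive piece here, adjusting the secondary-copy operation's effective bandwidth as other operations' demand rises and falls, could be reduced to a one-line policy (purely illustrative; the floor value and function name are assumptions):

```python
# Hypothetical sketch: the secondary-copy operation's effective
# bandwidth shrinks when other operations on the same computing device
# demand more of the link, and grows back when that demand subsides.
def effective_bandwidth(link_mbps, other_demand_mbps, floor_mbps=50.0):
    # Keep a minimum floor so the copy operation is never fully starved.
    return max(link_mbps - other_demand_mbps, floor_mbps)

print(effective_bandwidth(1000, 0))    # -> 1000 (idle device: full link)
print(effective_bandwidth(1000, 700))  # -> 300  (other operations busy)
print(effective_bandwidth(1000, 990))  # -> 50.0 (floor applies)
```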
Executing A Machine Learning Model In An Artificial Intelligence Infrastructure
Executing a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: receiving, by a graphical processing unit (‘GPU’) server, a dataset transformed by a storage system that is external to the GPU server; and executing, by the GPU server, one or more machine learning algorithms using the transformed dataset as input.
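The division of labor can be sketched as two plain functions (Python; both functions and the normalization transform are illustrative assumptions, standing in for the storage system and the GPU server respectively):

```python
# Hypothetical sketch: the storage system transforms the dataset before
# the GPU server sees it; the GPU server only executes the model on the
# already-transformed input.
def storage_side_transform(raw_records):
    # Runs on the storage system, external to the GPU server:
    # scale every record by the dataset's peak value.
    peak = max(max(r) for r in raw_records)
    return [[v / peak for v in r] for r in raw_records]

def gpu_execute(model, transformed):
    # Runs on the GPU server: the machine learning algorithm takes the
    # transformed dataset as its input.
    return [model(features) for features in transformed]

raw = [[2, 4], [6, 8]]
dataset = storage_side_transform(raw)   # done by the storage system
scores = gpu_execute(lambda f: sum(f), dataset)
print(scores)  # -> [0.75, 1.75]
```

Offloading the transformation to the storage system keeps the GPU servers busy with model execution rather than data preparation.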