Patent classifications
G06F3/0617
SNAPSHOT SHIPPING TO MULTIPLE CLOUD DESTINATIONS
An apparatus comprises at least one processing device configured to identify a snapshot lineage comprising snapshots of a storage volume, the snapshot lineage comprising (i) a local snapshot lineage stored on a storage system and (ii) cloud snapshot lineages stored on cloud storage external to the storage system. The processing device is further configured to select at least one snapshot to be copied from the local snapshot lineage, to determine at least two of the cloud snapshot lineages as destinations for the selected snapshot, and to generate a snapshot copy job for copying the selected snapshot to the at least two cloud snapshot lineages. The snapshot copy job is processed by reading the data of the selected snapshot from the local snapshot lineage once and writing that data to each of the at least two cloud snapshot lineages.
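The read-once, write-many behavior at the end of the abstract is the core of the copy job. A minimal Python sketch of that fan-out follows; the CloudLineage class, block-map representation, and function names are illustrative stand-ins, not the patent's implementation:

```python
# Toy sketch: copy one snapshot to several cloud lineages,
# reading each block once and writing it N times.
from dataclasses import dataclass, field

@dataclass
class CloudLineage:
    name: str
    blocks: dict = field(default_factory=dict)  # offset -> bytes

    def write_block(self, offset: int, data: bytes) -> None:
        self.blocks[offset] = data

def process_copy_job(local_blocks: dict, destinations: list) -> None:
    """Single-read fan-out: each block of the selected snapshot is read
    from the local lineage once, then written to every destination."""
    for offset, data in local_blocks.items():  # one read per block
        for dest in destinations:              # N writes per block
            dest.write_block(offset, data)

snapshot = {0: b"aaaa", 4096: b"bbbb"}
clouds = [CloudLineage("s3-east"), CloudLineage("azure-west")]
process_copy_job(snapshot, clouds)
assert all(c.blocks == snapshot for c in clouds)
```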
Synchronous Workload Optimization
An illustrative method includes receiving a write request to write payload data to a virtual storage volume; transmitting the write request to a plurality of storage nodes that each store a replica of the virtual storage volume; acknowledging the write request only after a quorum of the storage nodes has stored the payload data in its respective kernel memory; and flushing the payload data stored in each kernel memory to persistent storage only after a threshold number of write requests that have been acknowledged, but not yet flushed, has been reached, the flushing being configured to optimize performance for synchronous workloads.
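A toy Python sketch of the quorum-acknowledge and deferred-flush sequence described above; the node model, quorum arithmetic, and flush threshold are illustrative assumptions, not the claimed implementation:

```python
class ReplicaNode:
    def __init__(self):
        self.kernel_buffer = []   # stands in for kernel memory
        self.persistent = []      # stands in for disk

    def buffer_write(self, payload: bytes) -> bool:
        self.kernel_buffer.append(payload)
        return True               # acknowledged once buffered

    def flush(self) -> None:
        self.persistent.extend(self.kernel_buffer)
        self.kernel_buffer.clear()

class VirtualVolume:
    def __init__(self, nodes, flush_threshold=8):
        self.nodes = nodes
        self.quorum = len(nodes) // 2 + 1   # assumed majority quorum
        self.flush_threshold = flush_threshold
        self.acked_unflushed = 0

    def write(self, payload: bytes) -> bool:
        acks = sum(node.buffer_write(payload) for node in self.nodes)
        if acks < self.quorum:
            return False          # cannot acknowledge without a quorum
        self.acked_unflushed += 1
        # Flush only once enough acknowledged-but-unflushed writes pile
        # up, batching the expensive persistence step.
        if self.acked_unflushed >= self.flush_threshold:
            for node in self.nodes:
                node.flush()
            self.acked_unflushed = 0
        return True
```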
Method and system for processing commands in storage devices to improve quality of service
Operation of a non-volatile memory (NVM) storage module may comprise receiving, from a host memory, a plurality of commands associated with a plurality of priority-based queues. A received command is evaluated in accordance with the priority associated with the queue storing the command and the size of the command. The evaluated command is split into a plurality of sub-commands, each sub-command having a size determined in accordance with the evaluation. A predetermined number of hardware resources is allocated to the evaluated command based on at least the size of each of the sub-commands, thereby enabling processing of the evaluated command using the allocated resources. Quality of service (QoS) for the evaluated command may thus be improved.
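A hedged sketch of the splitting step: sub-command size is derived from the queue's priority, so a high-priority command breaks into many small pieces that can interleave with other traffic instead of monopolizing the data path. The size table below is invented for illustration:

```python
# Assumed mapping from queue priority to sub-command size (bytes).
SUB_COMMAND_SIZE = {"high": 4096, "medium": 16384, "low": 65536}

def split_command(length: int, priority: str) -> list[int]:
    """Split a command of `length` bytes into sub-command sizes chosen
    from the priority of its queue."""
    chunk = SUB_COMMAND_SIZE[priority]
    sizes = []
    remaining = length
    while remaining > 0:
        sizes.append(min(chunk, remaining))
        remaining -= chunk
    return sizes

print(len(split_command(1 << 20, "high")))  # 256 small sub-commands
print(len(split_command(1 << 20, "low")))   # 16 large sub-commands
```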
Technologies for providing shared memory for accelerator sleds
Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
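A minimal sketch of the lookup-and-route step, assuming the map is a simple dictionary from logical address to a (device, physical address) pair; all names here are illustrative:

```python
class MemoryDevice:
    def __init__(self, name):
        self.name = name
        self.cells = {}

    def access(self, physical_addr, value=None):
        if value is None:
            return self.cells.get(physical_addr)   # read
        self.cells[physical_addr] = value          # write

class SledMemoryController:
    def __init__(self):
        self.address_map = {}   # logical addr -> (device, physical addr)

    def map_region(self, logical, device, physical):
        self.address_map[logical] = (device, physical)

    def handle_request(self, logical, value=None):
        # Translate the logical address, then route the request to the
        # memory device that owns the physical address.
        device, physical = self.address_map[logical]
        return device.access(physical, value)

mc = SledMemoryController()
dev = MemoryDevice("dimm-0")
mc.map_region(logical=0x1000, device=dev, physical=0x40)
mc.handle_request(0x1000, value=42)        # write routed to dimm-0
assert mc.handle_request(0x1000) == 42     # read back via the same map
```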
SELECTIVE MULTITHREADED EXECUTION OF MEMORY TRAINING BY CENTRAL PROCESSING UNIT (CPU) SOCKETS
Embodiments described herein are generally directed to selective multithreaded execution of memory training by CPU sockets. In an example, a memory configuration and a current phase of execution of memory training for each of multiple CPU sockets of a computer system are received. Based on the memory configuration and the current phase of execution of each of the CPU sockets, an estimated power usage across all CPU sockets may be determined. Based on the estimated power usage and a power consumption threshold (e.g., PTAM or PA), performance of the current phase of execution of one or more of the CPU sockets may be selectively released for one or more channels of the one or more CPU sockets.
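A toy illustration of the selective-release idea: a socket's current training phase is released only while the summed power estimate stays under the budget, and the rest wait for a later pass. The per-phase wattages and the greedy policy are assumptions, not values from the disclosure:

```python
# Assumed power estimates per memory-training phase (watts).
PHASE_POWER_WATTS = {"read_training": 30.0, "write_training": 45.0, "idle": 5.0}

def select_sockets_to_release(socket_phases: dict, power_budget: float) -> list:
    """Greedily release sockets whose current phase still fits under the
    power budget; deferred sockets are retried on a later pass."""
    released, estimated = [], 0.0
    for socket_id, phase in sorted(socket_phases.items()):
        cost = PHASE_POWER_WATTS[phase]
        if estimated + cost <= power_budget:
            released.append(socket_id)
            estimated += cost
    return released

print(select_sockets_to_release(
    {0: "write_training", 1: "write_training", 2: "read_training"},
    power_budget=100.0))   # -> [0, 1]; socket 2 is deferred
```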
Managing client devices associated with storage nodes in a scale-out storage system
Client devices associated with scale-out storage nodes can be managed based on whether the scale-out storage nodes have backup power supplies. For example, a management node of a scale-out storage system can determine, from among a plurality of storage nodes of the scale-out storage system, that a first storage node is not coupled to a backup power supply and that a second storage node is coupled to the backup power supply. The management node can receive device characteristics describing a type of workload and a configuration for a client device associated with the first storage node. The management node can determine that the client device satisfies a migration policy based on the device characteristics, and can migrate the client device to the second storage node based on the client device satisfying the migration policy.
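A hedged sketch of the policy check and migration, assuming a policy in which write-heavy clients must run on a node backed by a power supply; the fields, workload types, and node names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ClientDevice:
    name: str
    workload_type: str     # e.g. "write-heavy" or "read-only" (assumed)
    storage_node: str

def satisfies_migration_policy(device: ClientDevice) -> bool:
    # Assumed policy: write-heavy clients need a node with backup power
    # so in-flight writes survive a power loss.
    return device.workload_type == "write-heavy"

def maybe_migrate(device: ClientDevice, node_has_ups: dict) -> None:
    if satisfies_migration_policy(device) and not node_has_ups[device.storage_node]:
        target = next(n for n, ups in node_has_ups.items() if ups)
        device.storage_node = target   # migrate to the UPS-backed node

dev = ClientDevice("db-client", "write-heavy", "node-a")
maybe_migrate(dev, {"node-a": False, "node-b": True})
assert dev.storage_node == "node-b"
```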
Converting RAID data between persistent storage types
Converting RAID data between persistent storage types, including: for each portion of a RAID shard of a RAID stripe, writing the portion of the RAID shard to a respective plurality of source solid state drives; detecting that all portions of the RAID shard have been successfully written; and copying the RAID shard from one of the plurality of source solid state drives to a respective target solid state drive among a plurality of target solid state drives, where the RAID shard is copied from a source solid state drive that is different from the source solid state drive from which each other RAID shard of the RAID stripe is copied.
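A toy sketch of the source-selection constraint in the copy step: shard i of a stripe reads from source drive i, so no two shards of the same stripe share a source drive and reads fan out across the source set. Drive names and the pairing scheme are illustrative:

```python
def plan_shard_copies(shards: list, source_ssds: list, target_ssds: list):
    """Pair shard i with source i and target i; with at least as many
    sources as shards, every shard reads from a distinct source SSD."""
    assert len(source_ssds) >= len(shards) and len(target_ssds) >= len(shards)
    return [(shard, source_ssds[i], target_ssds[i])
            for i, shard in enumerate(shards)]

plan = plan_shard_copies(["shard0", "shard1", "shard2"],
                         ["src-a", "src-b", "src-c"],
                         ["tgt-a", "tgt-b", "tgt-c"])
for shard, src, dst in plan:
    print(f"copy {shard}: {src} -> {dst}")
```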
INFORMATION PROCESSING SYSTEM
According to an embodiment, when a storage status of a first storage unit is recognized as indicating a protected state, a control unit writes data to a second storage unit. When a read target address is recorded in a data migration log area, the control unit reads data from the second storage unit; when the read target address is not recorded in the data migration log area, the control unit reads data from the first storage unit.
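A minimal sketch of these routing rules, assuming the data migration log area behaves like a set of addresses whose data has been redirected to the second storage unit:

```python
class Controller:
    def __init__(self, first_unit: dict, second_unit: dict):
        self.first = first_unit      # protected (read-only) unit
        self.second = second_unit    # unit receiving new writes
        self.migration_log = set()   # stands in for the migration log area

    def write(self, addr, data):
        # First unit is protected: write to the second unit and log it.
        self.second[addr] = data
        self.migration_log.add(addr)

    def read(self, addr):
        # Logged addresses were superseded; unlogged addresses still
        # live on the first unit.
        if addr in self.migration_log:
            return self.second[addr]
        return self.first.get(addr)

ctrl = Controller(first_unit={0x10: b"old"}, second_unit={})
ctrl.write(0x20, b"new")
assert ctrl.read(0x20) == b"new"   # served from the second unit
assert ctrl.read(0x10) == b"old"   # not in the log: first unit
```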
RESOURCE NODE INTERFACE PROTOCOL
A distributed storage system includes multiple resource nodes, each having associated storage media. The resource nodes are configured to operate a first protocol between the resource nodes that exchanges availability and performance information for storage elements in the associated storage media. The resource nodes also operate a second protocol that dynamically distributes and redistributes data between the different resource nodes based on the availability and performance information for the storage elements. Relative distances may be identified between the different resource nodes, and the second protocol may weight the availability and performance information based on those relative distances. The second protocol also may identify types of unshared, shared, and concurrent use for different portions of the data and distribute the portions of the data to other resource nodes based on the identified types of use.
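One plausible (and entirely assumed) way to combine the second protocol's inputs: weight each node's advertised availability and performance by its relative distance and place data on the best-scoring node. The formula below is an illustration, not the patented method:

```python
def score_node(availability: float, performance: float, distance: float,
               decay: float = 0.1) -> float:
    """Weight advertised availability/performance by relative distance:
    farther nodes count for less (assumed decay model)."""
    return (availability * performance) / (1.0 + decay * distance)

def choose_destination(nodes: dict) -> str:
    # nodes: name -> (availability, performance, relative distance)
    return max(nodes, key=lambda n: score_node(*nodes[n]))

print(choose_destination({
    "node-near": (0.95, 0.80, 1.0),
    "node-far":  (0.99, 0.90, 20.0),
}))   # the nearby node wins despite slightly lower raw metrics
```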
SYSTEMS, DEVICES, APPARATUS, AND METHODS FOR TRANSPARENTLY INSERTING A VIRTUAL STORAGE LAYER IN A FIBRE CHANNEL BASED STORAGE AREA NETWORK WHILE MAINTAINING CONTINUOUS INPUT/OUTPUT OPERATIONS
A method of transparently inserting a virtual storage layer into a Fibre Channel based storage area network (SAN) while maintaining continuous I/O operations is provided. A device is inserted between a host entity and a first storage device. The device identifies a plurality of first paths between the host entity and the first storage device, and defines a plurality of second paths by defining, for each first path among the plurality of first paths, a corresponding second path between the host entity and a second storage device. The device determines, for each of the plurality of first paths, a respective first state, and establishes, for each second path among the plurality of second paths, a second state based on the first state of the corresponding first path. The device then redirects, via the plurality of second paths, communications directed from the host entity to the first storage device to the second storage device.
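A sketch of the path-mirroring step: for every first path, a corresponding second path is created with the same state, so the host's multipathing software sees an equivalent topology before traffic is redirected. The Path structure and the single virtual-layer target port are simplifying assumptions:

```python
from dataclasses import dataclass

@dataclass
class Path:
    initiator: str    # host port (e.g. a WWPN)
    target: str       # storage port (e.g. a WWPN)
    state: str        # e.g. "active" or "standby"

def mirror_paths(first_paths: list[Path], second_target: str) -> list[Path]:
    """Define a second path per first path, copying its state so the
    inserted layer presents the same path states to the host."""
    return [Path(p.initiator, second_target, p.state) for p in first_paths]

first = [Path("host-p0", "arrayA-p0", "active"),
         Path("host-p1", "arrayA-p1", "standby")]
second = mirror_paths(first, "virt-layer-p0")
for p in second:
    print(p)
```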