Patent classifications
G06F2201/845
PROVISIONING OF PERFORMANCE STATES FOR CENTRAL PROCESSING UNITS (CPUS)
Systems, methods, and apparatuses disclosed herein can operate in different performance states that provide different energy-performance tradeoffs and, in some embodiments, can dynamically switch between these different performance states. These systems, methods, and apparatuses can estimate specific timeframes within which workloads are to be completed. These systems, methods, and apparatuses can identify one or more processes that are being executed to perform the workloads. These systems, methods, and apparatuses can dynamically provision one or more performance states from among these different performance states to execute the one or more processes to complete the workloads within the specific timeframes. These systems, methods, and apparatuses can dynamically provision the one or more performance states for the one or more processes in a manner that optimizes power consumption and/or performance while completing the workloads within the specific timeframes.
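A minimal sketch of the deadline-driven provisioning idea, assuming a small table of performance states with illustrative frequencies and power figures (the state names and numbers are not from the abstract): pick the lowest-power state that still finishes the workload's remaining cycles within its estimated timeframe.

```python
# Hypothetical sketch: choose the least power-hungry performance state
# that completes the remaining work before the deadline. All state names,
# frequencies, and power values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PerfState:
    name: str
    freq_mhz: int     # effective throughput in Mcycles/s
    power_mw: int     # power draw while active

STATES = [
    PerfState("low", 800, 150),
    PerfState("mid", 1600, 450),
    PerfState("high", 2400, 1100),
]

def provision_state(remaining_mcycles: float, deadline_ms: float) -> PerfState:
    """Return the lowest-power state that meets the deadline."""
    for state in sorted(STATES, key=lambda s: s.power_mw):
        finish_ms = remaining_mcycles / state.freq_mhz * 1000.0
        if finish_ms <= deadline_ms:
            return state
    # No state suffices: fall back to the fastest state (best effort).
    return max(STATES, key=lambda s: s.freq_mhz)
```

For example, a workload with 1200 Mcycles remaining and a 1000 ms deadline cannot finish in the "low" state (1500 ms) but can in "mid" (750 ms), so "mid" is provisioned rather than the costlier "high" state.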
Storage cluster
A plurality of storage nodes is provided. The plurality of storage nodes is configured to communicate together as a storage cluster. Each of the plurality of storage nodes includes nonvolatile solid-state memory. The plurality of storage nodes is configured to distribute user data and metadata associated with the user data throughout the plurality of storage nodes such that the plurality of storage nodes maintain the ability to read the user data, using erasure coding, despite a loss of one of the plurality of storage nodes. A chassis enclosing the plurality of storage nodes includes power distribution, a high-speed communication bus and the ability to install one or more storage nodes which may use the power distribution and communication bus in some embodiments. A method for accessing user data in a plurality of storage nodes having nonvolatile solid-state memory is also provided.
MAINTAINING TWO-SITE CONFIGURATION FOR WORKLOAD AVAILABILITY BETWEEN SITES AT UNLIMITED DISTANCES FOR PRODUCTS AND SERVICES
A system for maintaining a two-site configuration for continuous availability over long distances may include a first computing site configured to execute a first instance associated with a priority workload, the first instance being designated as an active instance; a second computing site configured to execute a second instance of the priority workload, the second instance being designated as a standby instance; a software replication module configured to replicate a unit of work data associated with the priority workload from a first data object associated with the active instance to a second data object associated with the standby instance; and a hardware replication module configured to replicate an image from a first storage volume to a copy on a second storage volume, wherein the first storage volume is associated with the first computing site, and the second storage volume is associated with a third computing site.
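The two replication paths described above can be sketched side by side: a software module ships unit-of-work records from the active instance's data object to the standby's, while a hardware-style module mirrors a whole volume image to a third site. All class and record names here are illustrative assumptions, not the patent's terminology.

```python
# Hypothetical sketch of the two replication modules in the abstract.

class SoftwareReplicator:
    """Replicates unit-of-work data object-to-object (active -> standby)."""
    def replicate(self, active_obj: list, standby_obj: list) -> None:
        # Ship only the units of work the standby has not yet applied.
        standby_obj.extend(active_obj[len(standby_obj):])

class HardwareReplicator:
    """Replicates a whole storage volume image to a copy at another site."""
    def replicate(self, volume: bytes) -> bytes:
        return bytes(volume)  # full-image mirror

active = ["txn-1", "txn-2", "txn-3"]
standby = ["txn-1"]
SoftwareReplicator().replicate(active, standby)

site1_volume = b"\x00\x01\x02"
site3_copy = HardwareReplicator().replicate(site1_volume)
```

The point of the split is that the fine-grained software path keeps the standby instance current for failover, while the coarse-grained hardware path protects the underlying volumes at a third site.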
Method and apparatus for reducing read latency
A method and an apparatus for reducing read latency are provided. The method includes: when one or more flash chips corresponding to a read command are in a busy state, marking the data read from the one or more busy flash chips as erroneous data; obtaining reconstructed correct data according to the erroneous data and data read from other flash chips; and reporting the correct data. By using the present invention, data read from a busy flash chip is treated as erroneous, and reconstructed correct data is obtained from the erroneous data and data read from other flash chips. In this way, when a flash chip is in a busy state, the read operation is not blocked by an erase operation or a write operation, thereby effectively reducing latency and improving the performance of the storage system.
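A sketch of the reconstruction step, assuming a RAID-like stripe with XOR parity across flash chips (the abstract does not specify the coding scheme): when one chip is busy erasing or writing, its data is treated as erroneous and rebuilt from the remaining chips plus parity instead of waiting for the chip to become ready.

```python
# Illustrative sketch: reconstruct the data of a busy flash chip from the
# other chips in the stripe plus an XOR parity chunk, avoiding the wait.

def xor_all(chunks):
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def read_stripe(chips, parity, busy_index):
    """Return data for every chip, reconstructing the busy chip's data."""
    # Data "read" from the busy chip is discarded as erroneous; the true
    # data is rebuilt from the available chips and the parity chunk.
    available = [c for i, c in enumerate(chips) if i != busy_index]
    reconstructed = xor_all(available + [parity])
    result = list(chips)
    result[busy_index] = reconstructed
    return result

chips = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_all(chips)
# Chip 2 is mid-erase; serve the read without blocking on it.
stripe = read_stripe(chips, parity, 2)
```

The read command completes at the latency of the idle chips rather than the duration of the erase or program operation, which is typically orders of magnitude longer than a flash page read.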
Multi stream deduplicated backup of collaboration server data
Techniques to back up collaboration server data are disclosed. An indication to begin backup of a collaboration server dataset is received. An associated directory is walked in a prescribed order to divide the dataset into a prescribed number of approximately equal-sized subsets. A separate subset-specific thread is used to back up the subsets in parallel. In some embodiments in which the collaboration data is stored in multiple volumes, a volume-based approach is used to back up the volumes in parallel, e.g., one volume per thread. In some embodiments, transaction logs are backed up in parallel with volumes of collaboration data.
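The subset-per-thread approach can be sketched as follows, assuming the dataset is a flat list of files with known sizes; the greedy balancing heuristic and the per-thread backup call are stand-ins, since the abstract only specifies a prescribed walk order and approximately equal-sized subsets.

```python
# Hypothetical sketch: divide a dataset into n roughly equal-sized subsets
# and back each subset up on its own thread in parallel.

from concurrent.futures import ThreadPoolExecutor

def divide(files, n):
    """Greedily assign (name, size) files to n approximately equal subsets."""
    subsets = [[] for _ in range(n)]
    totals = [0] * n
    for name, size in sorted(files, key=lambda f: -f[1]):
        i = totals.index(min(totals))   # lightest subset so far
        subsets[i].append(name)
        totals[i] += size
    return subsets

def backup_subset(subset):
    # Placeholder for the real per-thread backup of one subset.
    return len(subset)

files = [("a", 50), ("b", 40), ("c", 30), ("d", 20), ("e", 10)]
subsets = divide(files, 2)
with ThreadPoolExecutor(max_workers=2) as pool:
    counts = list(pool.map(backup_subset, subsets))
```

Balancing by size rather than file count keeps the threads finishing at roughly the same time, which is what makes the parallel streams worthwhile.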
Distributed protocol endpoint services for data storage systems
A system is provided. The system includes a data storage system and a client device communicatively coupled to the data storage system. The client device includes a processing device to receive a data request directed to the data storage system, translate the data request to a backend protocol of the data storage system, and retrieve one or more portions of data from the data storage system based on the translated data request. In some embodiments, the processing device is a data processing unit of the client device dedicated to executing a protocol endpoint of the data storage system.
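The translate-then-retrieve flow can be sketched with an invented byte-offset request mapped onto a block-oriented backend protocol; the request shapes, the 512-byte block size, and the backend call are all illustrative assumptions, not the patent's protocol.

```python
# Hypothetical sketch of a client-side protocol endpoint: translate a
# generic read request into a backend block command, then fetch the data.

BLOCK = 512  # assumed backend block size

def translate(request: dict) -> dict:
    """Map a byte-offset read request onto a block-protocol command."""
    return {
        "op": "READ_BLOCKS",
        "lba": request["offset"] // BLOCK,
        "count": (request["length"] + BLOCK - 1) // BLOCK,
    }

def backend_read(cmd: dict, device: bytes) -> bytes:
    """Stand-in for issuing the translated command to the storage system."""
    start = cmd["lba"] * BLOCK
    return device[start:start + cmd["count"] * BLOCK]

device = bytes(range(256)) * 8          # a tiny 2 KiB "storage system"
cmd = translate({"offset": 1024, "length": 100})
data = backend_read(cmd, device)
```

Running this translation on a dedicated data processing unit, as the abstract suggests, offloads protocol work from the client's main CPU.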
Dataset image creation
An application may store data to a dataset comprising a plurality of volumes stored on a plurality of storage systems. The application may request a dataset image of the dataset, the dataset image comprising a volume image of each volume of the dataset. A dataset image manager operates with a plurality of volume image managers in parallel to produce the dataset image, each volume image manager executing on a storage system. The plurality of volume image managers respond by performing requested operations and sending responses to the dataset image manager in parallel. Each volume image manager on a storage system may manage and produce a volume image for each volume of the dataset stored to the storage system. If a volume image for any volume of the dataset fails, or a timeout period expires, a cleanup procedure is performed to delete any successful volume images.
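The all-or-nothing semantics of the dataset image, with parallel per-volume snapshots and cleanup on any failure or timeout, can be sketched as follows; the snapshot call and volume records are illustrative stand-ins for the volume image managers.

```python
# Hypothetical sketch: snapshot every volume of a dataset in parallel; if
# any volume image fails or times out, delete the successful ones so the
# dataset image is never left partially complete.

from concurrent.futures import ThreadPoolExecutor

def snapshot(volume):
    # Stand-in for a volume image manager producing one volume image.
    if volume.get("fail"):
        raise RuntimeError(f"image failed for {volume['name']}")
    return {"volume": volume["name"], "image": f"img-{volume['name']}"}

def create_dataset_image(volumes, timeout_s=5.0):
    images, failed = [], False
    with ThreadPoolExecutor(max_workers=len(volumes)) as pool:
        futures = [pool.submit(snapshot, v) for v in volumes]
        for fut in futures:
            try:
                images.append(fut.result(timeout=timeout_s))
            except Exception:
                failed = True
    if failed:
        images.clear()   # cleanup: discard any successful volume images
        return None
    return images

ok = create_dataset_image([{"name": "v1"}, {"name": "v2"}])
bad = create_dataset_image([{"name": "v1"}, {"name": "v2", "fail": True}])
```

The cleanup step matters because a dataset image containing only some of its volumes would be inconsistent and unusable for restore.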
Data processing network for performing reliable data processing
A data processing network is for performing a plurality of successive data processing steps in a redundant and validated manner. The data processing steps are each used to generate output data from input data. At least some output data from a first data processing step are at the same time input data of a further data processing step. At least a first data processing module and a second data processing module are provided for performing each data processing step. The data processing network includes a comparator module. The first data processing module and the second data processing module are configured to perform the data processing steps, optionally in a first working mode with parallel operation, or in a second working mode with an upstream data processing module and a downstream data processing module.
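The redundant-and-validated pipeline can be sketched minimally: each step runs on two modules, and a comparator checks that their outputs agree before the result becomes the input of the next step. The step functions here are invented placeholders; the abstract does not specify the processing itself.

```python
# Illustrative sketch: run each data processing step on two modules in
# parallel (first working mode) and validate the outputs with a comparator
# before feeding them to the next step.

def compare(a, b):
    """Comparator module: accept the result only if both modules agree."""
    if a != b:
        raise ValueError("redundant modules disagree; result invalid")
    return a

def run_pipeline(steps, data):
    """steps: list of (module_a, module_b) pairs executed redundantly."""
    for module_a, module_b in steps:
        data = compare(module_a(data), module_b(data))
    return data

double = lambda x: x * 2
plus_one = lambda x: x + 1
steps = [(double, double), (plus_one, plus_one)]
result = run_pipeline(steps, 5)
```

In the second working mode described in the abstract, the two modules would instead be chained upstream/downstream rather than compared in parallel; the sketch above covers only the parallel mode.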
UTILIZING DATA PROCESSING UNITS TO OPTIMIZE PERFORMANCE OF AN ARTIFICIAL INTELLIGENCE STORAGE SYSTEM
A mapping of portions of a dataset that designates one or more managed flash storage devices of a storage system that store corresponding portions of the dataset is transmitted by one or more storage controllers to one or more data processing units (DPUs). A request for accessing a portion of the dataset stored at a particular managed flash storage device indicated by the mapping of the portions of the dataset is received from a DPU of the one or more DPUs.
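A hedged sketch of the controller-to-DPU mapping: the storage controller publishes which managed flash device holds each dataset portion, and a DPU consults that map to direct its access request straight at the right device. The round-robin placement and all identifiers are invented for illustration.

```python
# Hypothetical sketch of the dataset-portion mapping the storage
# controllers transmit to the DPUs, and a DPU using it to form a request.

def build_mapping(portions, devices):
    """Assign dataset portions to managed flash devices (round-robin here)."""
    return {p: devices[i % len(devices)] for i, p in enumerate(portions)}

def dpu_request(mapping, portion):
    """A DPU targets the specific device that holds the wanted portion."""
    return {"portion": portion, "device": mapping[portion]}

mapping = build_mapping(["p0", "p1", "p2", "p3"], ["flash-a", "flash-b"])
req = dpu_request(mapping, "p2")
```

With the mapping in hand, the DPU can address the particular managed flash device directly instead of routing every access through a controller lookup.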
Storage cluster utilizing differing load balancers
A storage system is provided. The storage system includes a first storage cluster, the first storage cluster having a first plurality of storage nodes coupled together and a second storage cluster, the second storage cluster having a second plurality of storage nodes coupled together. The system includes an interconnect coupling the first storage cluster and the second storage cluster and a first pathway coupling the interconnect to each storage cluster. The system includes a second pathway, the second pathway coupling at least one fabric module within a chassis to each blade within the chassis.