Patent classifications: G06F2212/263
Writing Data Using References To Previously Stored Data
A system and method comprising: receiving a request to write data stored at a first range of a first volume to a second range of a second volume, where first metadata for the first range of the first volume is associated with a range of physical addresses where the data is stored in the storage system; and responsive to receiving the request: creating second metadata for the second range of the second volume, wherein the second metadata is associated with the range of physical addresses where the data is stored in the storage system; and associating the second volume with the second metadata.
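The abstract above can be illustrated with a minimal sketch. All names here (`Volume`, `write_by_reference`, the range-keyed metadata dictionary) are hypothetical stand-ins, not the patent's actual data structures; the point is that the "write" creates second metadata referring to the same physical addresses rather than copying data.

```python
# Hypothetical sketch: a volume maps logical ranges to physical extents;
# a "write via reference" copies only metadata, never the data blocks.

class Volume:
    def __init__(self, name):
        self.name = name
        # logical range (start, length) -> physical range (start, length)
        self.metadata = {}

    def map_range(self, logical_start, length, physical_start):
        self.metadata[(logical_start, length)] = (physical_start, length)

def write_by_reference(src, src_start, length, dst, dst_start):
    # First metadata: look up the physical range backing the source range.
    physical = src.metadata[(src_start, length)]
    # Second metadata: associate the destination range with the same
    # physical addresses -- no user data moves in the storage system.
    dst.metadata[(dst_start, length)] = physical
```

After the call, both volumes' metadata resolve to the same physical range, which is what makes the copy lightweight.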
Local cached data coherency in host devices using remote direct memory access
A first host device establishes connectivity to a logical storage device of a storage system. The first host device obtains from the storage system host connectivity information identifying at least a second host device that has also established connectivity to the logical storage device, caches one or more extents of the logical storage device in a memory of the first host device, and maintains local cache metadata in the first host device regarding the one or more extents of the logical storage device cached in the memory of the first host device. In conjunction with processing of a write operation of the first host device involving at least one of the one or more cached extents of the logical storage device, the first host device invalidates corresponding entries in the local cache metadata of the first host device and in local cache metadata maintained in the second host device.
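A rough sketch of the invalidation step described above, with the RDMA write to the peer's cache metadata simulated as an ordinary method call; the `Host` class and the valid/invalid flags are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch: each host keeps local cache metadata for extents
# of a shared logical storage device; a write invalidates the matching
# entries locally and on every known peer host (via RDMA in the patent).

class Host:
    def __init__(self, name):
        self.name = name
        self.cache_metadata = {}  # extent id -> "valid" / "invalid"
        self.peers = []           # hosts learned from connectivity info

    def cache_extent(self, extent):
        self.cache_metadata[extent] = "valid"

    def write(self, extent):
        # Invalidate the corresponding entry in local cache metadata ...
        if extent in self.cache_metadata:
            self.cache_metadata[extent] = "invalid"
        # ... and in the local cache metadata maintained in each peer.
        for peer in self.peers:
            if extent in peer.cache_metadata:
                peer.cache_metadata[extent] = "invalid"
```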
Data placement in write cache architecture supporting read heat data separation
A computer-implemented method, according to one approach, includes: determining a current read heat value of each logical page which corresponds to write requests that have accumulated in a destage buffer. Each of the write requests is assigned to a respective write queue based on the current read heat value of each logical page which corresponds to the write requests. Moreover, each of the write queues corresponds to a different page stripe which includes physical pages, the physical pages included in each of the respective page stripes being of a same type. Other systems, methods, and computer program products are described in additional approaches.
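The queue-assignment step might look like the following sketch. The bucket thresholds and the dictionary-of-queues layout are assumptions for illustration; the patent only requires that requests be separated by read heat, with each queue feeding a page stripe of same-type physical pages.

```python
def heat_bucket(read_heat, thresholds=(10, 100)):
    # Hypothetical bucketing: cold / warm / hot by current read heat.
    for i, t in enumerate(thresholds):
        if read_heat < t:
            return i
    return len(thresholds)

def assign_to_queues(destage_buffer, read_heat_of):
    # One write queue per heat bucket; each queue destages to its own
    # page stripe whose physical pages are of the same type.
    queues = {}
    for logical_page in destage_buffer:
        bucket = heat_bucket(read_heat_of[logical_page])
        queues.setdefault(bucket, []).append(logical_page)
    return queues
```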
Storage system including storage nodes to determine cache allocations to implement cache control
A storage system with improved performance includes a plurality of storage nodes that communicate via a network. Each of the plurality of storage nodes includes one or more controllers. At least one of the controllers specifies at least two controllers that allocate a cache sub-area where write data is stored, based on which controller receives the write data from a host and which controller processes the write data, and the cache sub-area is allocated in the specified controllers.
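The allocation decision described above can be sketched minimally; the `Controller` class and `allocate_for_write` helper are hypothetical, showing only that the cache sub-area is allocated in both the receiving and the processing controller (which may coincide).

```python
# Hypothetical sketch: allocate a cache sub-area for write data in the
# controller that receives the write and the controller that processes it.

class Controller:
    def __init__(self, name):
        self.name = name
        self.cache_subareas = []  # write ids with a sub-area allocated here

def allocate_for_write(receiving, processing, write_id):
    # Specify the controllers based on receive/process roles, then
    # allocate a cache sub-area in each specified controller.
    for ctrl in {receiving, processing}:
        ctrl.cache_subareas.append(write_id)
```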
Distributed columnar data set retrieval
An apparatus includes a processor to: instantiate data buffers of a queue, reading threads, and provision threads; within each reading thread, use an identifier provided in a data buffer of the queue to retrieve the corresponding data set part and part metadata from storage device(s), and store both within the data buffer; operate the queue as a first-in-first-out (FIFO) buffer; within each provision thread, retrieve a row group from among multiple row groups and corresponding metadata from within the data buffer, use information in the metadata to decompress at least one column, and provide the data values of the row group to the requesting device or an application routine; and in response to each instance of storage of a data set part within a data buffer of the queue, analyze the availability of storage space and/or of processing resources to determine whether to dynamically adjust the quantity of reading threads.
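A stripped-down sketch of the reading-thread / FIFO queue / provision-thread pipeline, with one thread of each kind and a scale factor standing in for column decompression; the function names and the shape of the storage mapping are assumptions, and the dynamic thread-count adjustment is omitted.

```python
import queue
import threading

def reading_thread(ids, storage, buf_queue):
    # Use each identifier to retrieve the data set part and its part
    # metadata from storage, and place both into a data buffer on the queue.
    for part_id in ids:
        part, meta = storage[part_id]
        buf_queue.put({"id": part_id, "part": part, "meta": meta})
    buf_queue.put(None)  # sentinel: no more parts

def provision_thread(buf_queue, out):
    # Retrieve row groups from each buffer, "decompress" column values
    # using the metadata, and provide them to the requester.
    while True:
        buf = buf_queue.get()
        if buf is None:
            break
        for row_group in buf["part"]:
            # Decompression stand-in: metadata carries a scale factor.
            out.extend(v * buf["meta"]["scale"] for v in row_group)

def retrieve(storage, ids):
    buf_queue = queue.Queue()  # operated as a FIFO buffer
    out = []
    r = threading.Thread(target=reading_thread, args=(ids, storage, buf_queue))
    p = threading.Thread(target=provision_thread, args=(buf_queue, out))
    r.start(); p.start(); r.join(); p.join()
    return out
```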
Cascading PID controller for metadata page eviction
In a storage system that implements metadata paging, the page free pool is replenished in the background to reduce foreground evictions and associated latency on page-in. A two-level page eviction controller with cascaded proportional, integral, derivative (PID) controllers optimizes the size of the free page pool and optimizes the rate at which pages are freed in the background. By optimizing these two parameters the page eviction controller dynamically maximizes used pages (minimizing free pages) to increase the metadata cache hit ratio. Optimizing the parameters also reduces the chances of foreground page evictions, thereby reducing IO latency, during both steady state and burst page-in requests.
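The cascade can be sketched with textbook PID loops: the outer loop tracks the free-pool size target and its output becomes the setpoint for the inner loop, which drives the background eviction rate. Gains, setpoints, and function names here are illustrative assumptions, not the patent's tuning.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Standard discrete PID: proportional + integral + derivative terms.
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def cascaded_eviction_rate(outer, inner, target_free, free_pages, evict_rate):
    # Outer loop: regulate the size of the free page pool.
    rate_setpoint = outer.update(target_free - free_pages)
    # Inner loop: regulate the background page-eviction rate toward the
    # setpoint produced by the outer loop.
    return inner.update(rate_setpoint - evict_rate)
```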
Lightweight Copying Of Data Using Metadata References
A system and method comprising: receiving a request to write data stored at a first range of a first volume to a second range of a second volume, where first metadata for the first range of the first volume is associated with a range of physical addresses where the data is stored in the storage system; and responsive to receiving the request: creating second metadata for the second range of the second volume, wherein the second metadata is associated with the range of physical addresses where the data is stored in the storage system; and associating the second volume with the second metadata.
Content distribution network supporting popularity-based caching
A content delivery network may provide content items to requesting devices using a popularity-based distribution hierarchy. A central analysis system may determine popularity data for a content item stored in a first caching device. The central analysis system may determine that a change in the popularity data is beyond a threshold value. The central analysis system may then transmit an instruction to move the content item from the first caching device to a second caching device in a different tier of caching devices than the first caching device. The central analysis system may update a content index to indicate that the content item has been moved to the second caching device. A user device may be redirected to request the content item directly from the second caching device.
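The central analysis system's decision might be sketched as below; the threshold comparison, index dictionary, and device names are hypothetical. The content index update is what lets a user device be redirected to the new caching device.

```python
def maybe_move(content_item, popularity_change, threshold, index, target_device):
    # If the change in popularity data is beyond the threshold, instruct
    # a move to a caching device in a different tier and update the
    # content index so requests are redirected to the new device.
    if abs(popularity_change) > threshold:
        index[content_item] = target_device
        return True
    return False
```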
Zero copy method that can span multiple address spaces for data path applications
A system and method for transferring data between a user space buffer in the address space of a user space process running on a virtual machine and a storage system are described. The user space buffer is represented as a file with a file descriptor. In the method, a file system proxy receives a request for I/O read or write from the user space process without copying data to be transferred. The file system proxy then sends the request to a file system server without copying data to be transferred. The file system server then requests that the storage system perform the requested I/O directly between the storage system and the user space buffer, the only transfer of data being between the storage system and the user space buffer.
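A toy model of the zero-copy path: the proxy and server hand off only the request (buffer descriptor, operation, length), and the single data transfer happens between "storage" and the user-space buffer. The descriptor table and bytearray stand-ins are assumptions for illustration.

```python
# Hypothetical sketch: only the descriptor travels through the proxy and
# server; data moves exactly once, between storage and the user buffer.

def fs_proxy(request, server_handle):
    # Forward the I/O request unchanged -- no data is copied at this hop.
    return server_handle(request)

def make_fs_server(storage, buffers):
    def handle(request):
        buf = buffers[request["fd"]]  # resolve the file descriptor
        n = request["len"]
        if request["op"] == "read":
            # Storage writes directly into the user-space buffer.
            buf[:n] = storage[:n]
        else:
            # Storage reads directly from the user-space buffer.
            storage[:n] = buf[:n]
        return n
    return handle
```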
Real-time analysis for dynamic storage
One or more techniques and/or systems are provided for dynamically provisioning logical storage pools of storage devices for applications. For example, a logical storage pool, of one or more storage devices, may be constructed based upon a service level agreement for an application (e.g., an acceptable latency, an expected throughput, etc.). Real-time performance statistics of the logical storage pool may be collected and evaluated against the service level agreement to determine whether a storage device does not satisfy the service level agreement. For example, a latency of a storage device within the logical storage pool may increase over time as log files and/or other data of the application increase. Accordingly, a new logical storage pool may be automatically and dynamically defined and provisioned for the application to replace the logical storage pool. The new logical storage pool may comprise storage devices expected to satisfy the service level agreement.
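The evaluate-and-replace loop above reduces to a small sketch; the latency-only SLA, the stats dictionary, and the candidate list are simplifying assumptions (a real SLA would also cover throughput and other metrics).

```python
def check_and_reprovision(pool, latency_stats_ms, sla, candidates):
    # Evaluate real-time statistics against the service level agreement.
    violating = [d for d in pool if latency_stats_ms[d] > sla["max_latency_ms"]]
    if not violating:
        return pool  # current pool still satisfies the SLA
    # Define a replacement pool from devices expected to satisfy the SLA.
    return [d for d in candidates if latency_stats_ms[d] <= sla["max_latency_ms"]]
```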