Patent classifications
G06F2212/224
Data Prefetching Method, Computing Node, and Storage System
A data prefetching method includes a computing node obtaining information about accessing a storage node by a first application in a preset time period. The computing node determines information about prefetch data based on the access information. The computing node determines, based on the information about the prefetch data, a cache node for prefetching the prefetch data, and generates a prefetch request for prefetching the prefetch data. The computing node sends the prefetch request to the cache node. The cache node performs a prefetching operation on the prefetch data in response to the prefetch request.
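The flow above can be sketched in miniature. This is an illustrative model only: the class names, the next-block prediction heuristic, and the request format are assumptions, not details from the patent.

```python
# Hypothetical sketch of the prefetch flow: a computing node records access
# information, derives prefetch data from it, and sends a prefetch request
# to a cache node. All names and the prediction rule are illustrative.

class CacheNode:
    def __init__(self):
        self.cache = {}

    def handle_prefetch(self, request):
        # Perform the prefetching operation for each block in the request.
        for block in request["blocks"]:
            self.cache[block] = f"data-for-{block}"

class ComputingNode:
    def __init__(self, cache_node):
        self.cache_node = cache_node
        self.access_log = []          # accesses observed in the preset time period

    def record_access(self, app, block):
        self.access_log.append((app, block))

    def prefetch_for(self, app):
        # Determine prefetch data from the access information; here we
        # naively predict the block following each accessed block.
        accessed = [b for a, b in self.access_log if a == app]
        blocks = sorted({b + 1 for b in accessed})
        request = {"app": app, "blocks": blocks}   # generate the prefetch request
        self.cache_node.handle_prefetch(request)   # send it to the cache node
        return request

node = CacheNode()
computing = ComputingNode(node)
computing.record_access("app1", 10)
computing.record_access("app1", 11)
req = computing.prefetch_for("app1")
```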
GATEWAY FOR CLOUD-BASED SECURE STORAGE
The systems and methods disclosed herein transparently provide an improved scalable cloud-based dynamically adjustable or configurable storage volume. In one aspect, a gateway provides a dynamically or configurably adjustable storage volume, including a local cache. The storage volume may be transparently adjusted for the amount of data that needs to be stored using available local or cloud-based storage. The gateway may use caching techniques and block clustering to reduce access latency relative to existing gateway systems, while providing scalable off-premises storage.
Dynamic result set caching with a database accelerator
According to one embodiment of the present invention, a system for processing a database query stores one or more result sets for one or more first database queries in a data store. The system receives a second database query and compares the second database query to the one or more first database queries to determine presence of a corresponding result set in the data store for the second database query. The system provides the corresponding result set from the data store for the second database query based on the comparison. Embodiments of the present invention further include a method and computer program product for processing a database query in substantially the same manner described above.
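A minimal sketch of this result-set caching scheme follows. The matching criterion here (whitespace and case normalization) is an assumption for illustration; the patent does not specify how queries are compared.

```python
# Illustrative result-set cache: stored result sets for earlier queries are
# served when a later query matches. Normalization rule is an assumption.

class ResultSetCache:
    def __init__(self, database):
        self.database = database      # fallback query executor
        self.store = {}               # result sets for first database queries

    def _key(self, query):
        # Compare queries under simple case/whitespace normalization.
        return " ".join(query.lower().split())

    def execute(self, query):
        key = self._key(query)
        if key in self.store:                 # comparison found a match:
            return self.store[key]            # provide the cached result set
        result = self.database(query)         # otherwise run the query
        self.store[key] = result
        return result

calls = []
def fake_db(query):
    calls.append(query)
    return [("row", 1)]

cache = ResultSetCache(fake_db)
r1 = cache.execute("SELECT * FROM t")
r2 = cache.execute("select *  from T")   # matches the first query after normalization
```

The second query never reaches the database; only the first call hits `fake_db`.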
Sense operation flags in a memory device
In a memory device, odd bit lines of a flag memory cell array are connected with a short circuit to a dynamic data cache. Even bit lines of the flag memory cell array are disconnected from the dynamic data cache. When an even page of a main memory cell array is read, the odd flag memory cells, comprising flag data, are read at the same time so that it can be determined whether the odd page of the main memory cell array has been programmed. If the flag data indicates that the odd page has not been programmed, threshold voltage windows can be adjusted to determine the states of the sensed even memory cell page.
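The decision logic can be illustrated in software, with the caveat that this is a behavioral sketch of hardware: the voltage values, threshold windows, and data representation are all invented for illustration.

```python
# Behavioral sketch: odd flag cells are read along with an even page, and
# the flag data decides whether threshold-voltage windows are adjusted
# before sensing. All numbers here are invented.

def read_even_page(main_even_page, odd_flag_cells):
    # The odd flag cells are sensed together with the even page (they
    # share a path to the dynamic data cache in the patent's arrangement).
    odd_page_programmed = all(odd_flag_cells)
    if odd_page_programmed:
        thresholds = (1.0, 3.0)       # normal threshold-voltage window
    else:
        thresholds = (0.5, 2.0)       # adjusted window for an unprogrammed odd page
    lo, hi = thresholds
    # Sense each even-page cell against the chosen window.
    return ["1" if lo <= v <= hi else "0" for v in main_even_page]

# Flag data indicates the odd page has not been programmed:
bits = read_even_page([0.6, 2.5], odd_flag_cells=[0])
```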
METHOD AND STORAGE ARRAY FOR PROCESSING A WRITE DATA REQUEST
According to a write data request processing method and a storage array provided in the embodiments of the present invention, a controller is connected to a cache device via a switching device, an input/output manager is connected to the controller via the switching device, and the input/output manager is connected to a cache device via the switching device. The controller obtains a cache address from the cache device for to-be-written data according to the write data request, the controller sends an identifier of the cache device and the cache address to the input/output manager via the switching device, and the input/output manager writes the to-be-written data to the cache address via the switching device.
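The write path can be sketched as below. The switching device is modeled only implicitly (as the object references connecting the components), and all class and method names are illustrative rather than taken from the patent.

```python
# Hedged sketch of the write path: the controller obtains a cache address
# from the cache device, then passes the device identifier and address to
# the input/output manager, which performs the write. Names are invented.

class CacheDevice:
    def __init__(self, dev_id):
        self.dev_id = dev_id
        self.memory = {}
        self.next_addr = 0

    def allocate(self, size):
        # Hand out the next free cache address for the to-be-written data.
        addr = self.next_addr
        self.next_addr += size
        return addr

class Controller:
    def __init__(self, cache_device):
        self.cache_device = cache_device

    def handle_write_request(self, request):
        # Obtain a cache address for the to-be-written data, then return
        # the cache device identifier and address for the I/O manager.
        addr = self.cache_device.allocate(len(request["data"]))
        return (self.cache_device.dev_id, addr)

class IOManager:
    def __init__(self, devices):
        self.devices = devices        # cache devices reachable via the switch

    def write(self, dev_id, addr, data):
        self.devices[dev_id].memory[addr] = data

device = CacheDevice("cache0")
controller = Controller(device)
io_mgr = IOManager({"cache0": device})

request = {"data": b"hello"}
dev_id, addr = controller.handle_write_request(request)
io_mgr.write(dev_id, addr, request["data"])
```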
Considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage
Provided are a computer program product, system, and method for considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage. One of a plurality of densities for one of a plurality of groups of tracks is incremented in response to determining at least one of that the group is not ready to destage and that one of the tracks in the group in the cache transitions to being ready to destage. A determination is made of a group frequency indicating a frequency at which tracks in the group are modified. At least one of the density and the group frequency is used for each of the groups to determine whether to destage the group. The tracks in the group in the cache are destaged to the storage in response to determining to destage the group.
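A toy version of this selection policy is sketched below. The thresholds and the exact combination rule (destage dense, infrequently modified groups) are assumptions; the patent only states that density and group frequency are used in the decision.

```python
# Minimal sketch: each group of tracks carries a density counter and a
# modification count (its frequency), and both feed the destage decision.
# Threshold values and the combination rule are invented for illustration.

DENSITY_THRESHOLD = 3
FREQUENCY_THRESHOLD = 5

class TrackGroup:
    def __init__(self):
        self.density = 0
        self.modifications = 0

    def on_not_ready(self):
        # Density is incremented when the group is found not ready to
        # destage (or a track transitions to being ready to destage).
        self.density += 1

    def on_track_modified(self):
        # Contributes to the group frequency.
        self.modifications += 1

    def should_destage(self):
        # Prefer dense groups whose tracks are not being modified often.
        return (self.density >= DENSITY_THRESHOLD
                and self.modifications < FREQUENCY_THRESHOLD)

group = TrackGroup()
for _ in range(3):
    group.on_not_ready()
group.on_track_modified()
```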
Dynamic cache management in storage devices
Technologies are provided for dynamically changing a size of a cache region of a storage device. A storage device controller writes data to the cache region of the storage device using a particular storage format. The storage device controller then migrates the cached data to a storage region of the device, where the data is written using a different storage format. A dynamic cache manager monitors input and output activity for the storage device and dynamically adjusts a size of the cache region to adapt to changes in the input and/or output activity. The dynamic cache manager can also adjust a size of the storage region. The storage device controller can automatically detect that the storage device has dynamic cache support and configure the storage device by creating the cache region and the storage region on the device.
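The resizing behavior can be sketched as a simple feedback rule. The activity thresholds, growth factors, and bounds below are assumptions for illustration; the patent does not specify a particular adjustment policy.

```python
# Sketch of a dynamic cache manager that grows the cache region under
# heavy write activity and shrinks it when activity falls. The storage
# region is whatever the cache region does not occupy. Numbers invented.

class DynamicCacheManager:
    def __init__(self, total_bytes, cache_bytes):
        self.total_bytes = total_bytes
        self.cache_bytes = cache_bytes    # fast cache region (one storage format)

    @property
    def storage_bytes(self):
        # Denser storage region (a different storage format) gets the rest.
        return self.total_bytes - self.cache_bytes

    def observe_io(self, writes_per_sec):
        # Grow the cache when write pressure is high, shrink when idle,
        # keeping it between 10% and 50% of the device.
        if writes_per_sec > 1000:
            self.cache_bytes = min(self.cache_bytes * 2, self.total_bytes // 2)
        elif writes_per_sec < 100:
            self.cache_bytes = max(self.cache_bytes // 2, self.total_bytes // 10)

mgr = DynamicCacheManager(total_bytes=1000, cache_bytes=100)
mgr.observe_io(writes_per_sec=2000)   # heavy writes: cache region doubles
grown = mgr.cache_bytes
mgr.observe_io(writes_per_sec=50)     # idle: cache region shrinks back
```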
Considering a density of tracks to destage in groups of tracks to select groups of tracks to destage
Provided are a computer program product, system, and method for considering a density of tracks to destage in groups of tracks to select groups of tracks to destage. Groups of tracks in the cache are scanned to determine whether they are ready to destage. A determination is made as to whether the tracks in one of the groups are ready to destage in response to scanning the tracks in the group. A density for the group is increased in response to determining that the group is not ready to destage. The group is destaged in response to determining that the density of the group exceeds a density threshold.
Disk cache allocation
Implementations disclosed herein provide a method comprising determining a workload on a disk cache with a storage device controller, determining a state of a free pool of the disk cache, receiving a data write request to the disk cache, segregating the free pool of the disk cache into a plurality of allocation units, allocating the plurality of allocation units out of order, as compared to a physical arrangement order of the allocation units in the disk cache, based on the workload, and storing data in the plurality of allocation units.
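The segregation and out-of-order allocation steps can be sketched as follows. The ordering heuristic (reversed physical order for a random workload) is invented purely to show allocation order diverging from physical order; the patent does not prescribe it.

```python
# Sketch: the free pool is segregated into fixed-size allocation units,
# and units are handed out in a workload-dependent order rather than
# their physical arrangement order. The heuristic below is invented.

def segregate_free_pool(free_pool_bytes, unit_bytes):
    # Split the free pool into allocation units identified by physical position.
    return list(range(free_pool_bytes // unit_bytes))

def allocate_units(units, workload, count):
    # Sequential workloads keep physical order; other workloads allocate
    # out of order (here: reversed) relative to the physical arrangement.
    order = units if workload == "sequential" else list(reversed(units))
    return order[:count]

units = segregate_free_pool(free_pool_bytes=64, unit_bytes=16)   # 4 units
seq = allocate_units(units, "sequential", 2)
rnd = allocate_units(units, "random", 2)
```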
Method and apparatus for storing data in a storage system that includes a final level cache (FLC)
A storage system includes a final level cache (FLC) module coupled to a storage medium. The storage medium includes a bulk storage portion having a higher data density than a cache storage portion. The cache storage portion is configured as an FLC cache accessed by the FLC module prior to accessing the bulk storage portion. The FLC module receives a request for data from a processor coupled to one or more levels of cache that are separate from the FLC cache. The processor generates the request if the data is not cached in the one or more levels of cache. The FLC module determines whether the data requested is cached in the FLC cache, retrieves the data from the FLC cache if the data is cached in the FLC cache, and retrieves the data from the bulk storage portion if the data is not cached in the FLC cache.
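The FLC read path reduces to a check-cache-then-bulk lookup, sketched below. Class and attribute names are illustrative; fill-on-miss is an assumption consistent with, but not stated by, the abstract.

```python
# Sketch of the FLC lookup path: the FLC module checks the cache storage
# portion first and falls back to the denser bulk portion on a miss.
# Names are invented, and filling the cache on a miss is an assumption.

class FLCModule:
    def __init__(self, bulk):
        self.flc_cache = {}           # cache storage portion (faster, lower density)
        self.bulk = bulk              # bulk storage portion (higher data density)

    def read(self, address):
        # Requests only reach the FLC after the processor's own cache
        # levels (separate from the FLC cache) have missed.
        if address in self.flc_cache:
            return self.flc_cache[address]        # data cached in the FLC cache
        data = self.bulk[address]                 # miss: retrieve from bulk storage
        self.flc_cache[address] = data            # keep it for future requests
        return data

bulk = {0x10: b"block"}
flc = FLCModule(bulk)
first = flc.read(0x10)    # miss, served from the bulk storage portion
second = flc.read(0x10)   # hit, served from the FLC cache
```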